LP 0836
September 2018
Note: Before using this information and the product it supports, read the information in “Notices” on
page 797.
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Do you have the latest version? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Summary of changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
September 2018. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Chapter 1. Overview of the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 systems . . . . 1
1.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3 Models. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4 Compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.5 Hardware. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.5.1 Control enclosure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.5.2 Lenovo Storage V3700 V2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.5.3 Lenovo Storage V3700 V2 XP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.5.4 Lenovo Storage V5030 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.5.5 Expansion enclosure. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.5.6 Host interface cards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.5.7 Disk drive types. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.6 Terms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.6.1 Hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.6.2 Node canister . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.6.3 I/O groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.6.4 Clustered system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.6.5 RAID . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.6.6 Managed disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.6.7 Quorum disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.6.8 Storage pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.6.9 Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
1.6.10 iSCSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
1.6.11 Serial-attached SCSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
1.6.12 Fibre Channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
1.7 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
1.7.1 Mirrored volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
1.7.2 Thin Provisioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
1.7.3 Real-time Compression. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
1.7.4 Easy Tier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
1.7.5 Storage Migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
1.7.6 FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
1.7.7 Remote Copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
1.7.8 IP replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
1.7.9 External virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
1.7.10 Encryption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
1.8 Problem management and support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
1.8.1 Support assistance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
1.8.2 Event notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
1.8.3 SNMP traps. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.6.4 Host mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
3.6.5 Volumes by host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
3.7 Copy services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
3.7.1 FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
3.7.2 Consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
3.7.3 FlashCopy mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
3.7.4 Remote copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
3.7.5 Partnerships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
3.8 Access menu. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
3.8.1 Users. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
3.8.2 Audit Log option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
3.9 Settings menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
3.9.1 Notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
3.9.2 Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
3.9.3 Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
3.9.4 System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
3.9.5 Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
3.9.6 GUI preferences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
5.5 Creating hosts by using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
5.5.1 Creating Fibre Channel hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
5.5.2 Configuring the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 for FC
connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
5.5.3 Creating iSCSI hosts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
5.5.4 Configuring the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 for iSCSI host
connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
5.5.5 Creating SAS hosts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
5.6 Host Clusters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
5.6.1 Creating a host cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
5.6.2 Adding a member to a host cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
5.6.3 Listing a host cluster member . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
5.6.4 Assigning a volume to a Host Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
5.6.5 Remove volume mapping from a host cluster. . . . . . . . . . . . . . . . . . . . . . . . . . . 255
5.6.6 Removing a host cluster member . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
5.6.7 Removing a host cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
5.6.8 I/O throttling for hosts and Host Clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
5.7 Proactive Host Failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
7.3.3 Overview of the storage migration wizard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
7.3.4 Storage migration wizard tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
9.2.14 Downloading Easy Tier I/O measurements. . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
9.2.15 Easy Tier I/O Measurement through the command-line interface. . . . . . . . . . . 427
9.2.16 IBM Storage Tier Advisor Tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
9.2.17 Processing heat log files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
9.3 Thin provisioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
9.3.1 Configuring a thin provisioned volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433
9.3.2 Performance considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
9.3.3 Limitations of virtual capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
9.4 Real-time Compression Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
9.4.1 Common use cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
9.4.2 Real-time Compression concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
9.4.3 Random Access Compression Engine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
9.4.4 Random Access Compression Engine in stack . . . . . . . . . . . . . . . . . . . . . . . . . 444
9.4.5 Data write flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444
9.4.6 Data read flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
9.4.7 Compression of existing data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
9.4.8 Configuring compressed volumes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 446
9.4.9 Comprestimator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 448
10.4.4 Single-click backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 496
10.4.5 Creating a FlashCopy Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 498
10.4.6 Creating FlashCopy mappings in a Consistency Group . . . . . . . . . . . . . . . . . . 499
10.4.7 Showing related volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 504
10.4.8 Moving a FlashCopy mapping to a Consistency Group . . . . . . . . . . . . . . . . . . 505
10.4.9 Removing a FlashCopy mapping from a Consistency Group . . . . . . . . . . . . . . 506
10.4.10 Modifying a FlashCopy mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507
10.4.11 Renaming FlashCopy mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 509
10.4.12 Renaming Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 509
10.4.13 Deleting FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 510
10.4.14 Deleting FlashCopy Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
10.4.15 Starting FlashCopy process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512
10.4.16 Stopping FlashCopy process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512
10.5 Volume mirroring and migration options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 513
10.6 Native IP replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 515
10.6.1 Native IP replication technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 515
10.6.2 Lenovo Storage V series System Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 516
10.6.3 IP partnership limitations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 518
10.6.4 VLAN support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
10.6.5 IP partnership and terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
10.6.6 States of IP partnership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521
10.6.7 Remote copy groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 522
10.7 Remote Copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 523
10.7.1 Multiple Lenovo Storage V3700 V2, V3700 V2 XP and V5030 systems mirroring . . . 523
10.7.2 Importance of write ordering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 526
10.7.3 Remote copy intercluster communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 528
10.7.4 Metro Mirror overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529
10.7.5 Synchronous remote copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 530
10.7.6 Metro Mirror features. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 531
10.7.7 Metro Mirror attributes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 531
10.7.8 Practical use of Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 532
10.7.9 Global Mirror Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 533
10.7.10 Asynchronous remote copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 533
10.7.11 Global Mirror features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535
10.7.12 Using Change Volumes with Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . 537
10.7.13 Distribution of work among nodes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 539
10.7.14 Background copy performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 540
10.7.15 Thin-provisioned background copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 540
10.7.16 Methods of synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 540
10.7.17 Practical use of Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 541
10.7.18 Valid combinations of FlashCopy, Metro Mirror, and Global Mirror . . . . . . . . 541
10.7.19 Remote Copy configuration limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 541
10.7.20 Remote Copy states and events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 542
10.8 Consistency protection for Remote and Global mirror . . . . . . . . . . . . . . . . . . . . . . . 549
10.9 Remote Copy commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 551
10.9.1 Remote Copy process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 552
10.9.2 Listing available system partners . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 552
10.9.3 Changing the system parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 553
10.9.4 System partnership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 554
10.9.5 Creating a Metro Mirror/Global Mirror consistency group . . . . . . . . . . . . . . . . . 555
10.9.6 Creating a Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . 556
10.9.7 Changing Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . 556
10.9.8 Changing Metro Mirror/Global Mirror consistency group . . . . . . . . . . . . . . . . . 556
10.9.9 Starting Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . 556
10.9.10 Stopping Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . 557
10.9.11 Starting Metro Mirror/Global Mirror consistency group . . . . . . . . . . . . . . . . . . 557
10.9.12 Stopping Metro Mirror/Global Mirror consistency group . . . . . . . . . . . . . . . . . 558
10.9.13 Deleting Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . 558
10.9.14 Deleting Metro Mirror/Global Mirror consistency group. . . . . . . . . . . . . . . . . . 558
10.9.15 Reversing Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . 559
10.9.16 Reversing Metro Mirror/Global Mirror consistency group . . . . . . . . . . . . . . . . 559
10.10 Managing Remote Copy using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 559
10.10.1 Creating Fibre Channel partnership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 560
10.10.2 Creating stand-alone remote copy relationships. . . . . . . . . . . . . . . . . . . . . . . 562
10.10.3 Creating Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 570
10.10.4 Renaming Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 577
10.10.5 Renaming remote copy relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 578
10.10.6 Moving stand-alone remote copy relationship to Consistency Group . . . . . . . 579
10.10.7 Removing remote copy relationship from Consistency Group . . . . . . . . . . . . 580
10.10.8 Starting remote copy relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 581
10.10.9 Starting remote copy Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 582
10.10.10 Switching copy direction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
10.10.11 Switching the copy direction for a Consistency Group . . . . . . . . . . . . . . . . . 584
10.10.12 Stopping a remote copy relationship. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 585
10.10.13 Stopping Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 587
10.10.14 Deleting stand-alone remote copy relationships . . . . . . . . . . . . . . . . . . . . . . 588
10.10.15 Deleting Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 589
10.11 Troubleshooting remote copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 590
10.11.1 1920 error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 590
10.11.2 1720 error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 592
10.12 HyperSwap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 593
10.12.1 Introduction to HyperSwap volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 594
10.12.2 Failure scenarios. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 600
10.12.3 Current HyperSwap limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 604
12.3 Configuration backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645
12.3.1 Generating a manual configuration backup by using the CLI . . . . . . . . . . . . . . 646
12.3.2 Downloading a configuration backup by using the GUI . . . . . . . . . . . . . . . . . . 646
12.4 System update . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 650
12.4.1 Updating node canister software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 650
12.4.2 Updating the drive firmware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 663
12.5 Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 666
12.5.1 Email notifications and Call Home . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 667
12.6 Audit log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 670
12.7 Event log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 671
12.7.1 Managing the event log. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 672
12.7.2 Alert handling and recommended actions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 676
12.8 Support Assistance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 679
12.8.1 Configuring support assistance. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 680
12.8.2 Set up Support Assistant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 680
12.8.3 Disable Support Assistance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 689
12.9 Collecting support information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 689
12.9.1 Collecting support information by using the GUI. . . . . . . . . . . . . . . . . . . . . . . . 689
12.9.2 Automatic upload of Support Packages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 689
12.9.3 Manual upload of Support Packages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 693
12.9.4 Collecting support information by using the SAT . . . . . . . . . . . . . . . . . . . . . . . 697
12.10 Powering off the system and shutting down the infrastructure . . . . . . . . . . . . . . . . 699
12.10.1 Powering off . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 699
12.10.2 Shutting down and starting up the infrastructure. . . . . . . . . . . . . . . . . . . . . . . 703
13.8.4 Encrypted MDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 749
13.8.5 Encrypted volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 751
13.8.6 Restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 753
13.9 Rekeying an encryption-enabled system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 753
13.9.1 Rekeying using a key server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 754
13.9.2 Rekeying using USB flash drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 756
13.10 Migrating between key providers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 758
13.11 Disabling encryption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 759
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 797
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 798
Preface
Organizations of all sizes face the challenge of managing massive volumes of increasingly
valuable data. But storing this data can be costly, and extracting value from the data is
becoming more difficult. IT organizations have limited resources but must stay responsive to
dynamic environments and act quickly to consolidate, simplify, and optimize their IT
infrastructures. The Lenovo® Storage V3700 V2, V3700 V2 XP, and V5030 systems provide a
smarter solution that is affordable, easy to use, and self-optimizing, which enables
organizations to overcome these storage challenges.
These storage systems deliver efficient, entry-level configurations that are designed to meet
the needs of small and midsize businesses. Designed to provide organizations with the ability
to consolidate and share data at an affordable price, the Lenovo Storage V3700 V2, V3700 V2 XP,
and V5030 offer advanced software capabilities that are found in more expensive systems.
This book is intended for pre-sales and post-sales technical support professionals and
storage administrators. It applies to the Lenovo Storage V3700 V2, V3700 V2 XP and V5030
with IBM Spectrum Virtualize V8.1.
Comments welcome
Your comments are important to us!
We want our books to be as helpful as possible. Send us your comments about this book in
one of the following ways:
Use the online feedback form found at the web page for this document:
http://lenovopress.com/lp0836
Send your comments in an email to:
comments@lenovopress.com
Summary of changes
This section describes the changes made in this update and in previous updates. These
updates might also include minor corrections and editorial changes that are not identified.
September 2018
Covers IBM Spectrum Virtualize V8.1
Updated screenshots and descriptions of the user interfaces
New and updated information on encryption
New information on storage migration
The three Lenovo Storage models, V3700 V2, V3700 V2 XP, and V5030, offer a range of performance,
scalability, and functional capabilities. Table 1-1 shows a summary of the features of these
models.
Table 1-1 Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 models
Lenovo V3700 V2 Lenovo V3700 V2 XP Lenovo V5030
CPU cores 2 2 6
Cache 16 GB Up to 32 GB Up to 64 GB
Supported expansion enclosures 10 10 20
Compression No No Yes
Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 feature the following benefits:
Enterprise technology available to entry and midrange storage
Expert administrators are not required
Easy client setup and service
Simple integration into the server environment
Ability to grow the system incrementally as storage capacity and performance needs
change
The Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 address the block storage
requirements of small and midsize organizations. The Lenovo Storage V3700 V2, V3700 V2
XP, and V5030 consist of one 2U control enclosure and, optionally, up to ten 2U expansion
enclosures on the Lenovo Storage V3700 V2 and Lenovo Storage V3700 V2 XP systems and
up to twenty 2U expansion enclosures on the Lenovo Storage V5030 systems. The control
and expansion enclosures are connected by serial-attached Small Computer Systems Interface
(SCSI) (SAS) cables and make up one system that is called an I/O group.
With the Lenovo Storage V5030 systems, two I/O groups can be connected to form a cluster,
providing a maximum of two control enclosures and 40 expansion enclosures. With the High
Density expansion drawers, you can attach up to 16 expansion enclosures to a cluster.
The control and expansion enclosures are available in the following form factors, and they
can be intermixed within an I/O group:
12 x 3.5-inch (8.89-centimeter) drives in a 2U unit
24 x 2.5-inch (6.35-centimeter) drives in a 2U unit
Two canisters are in each enclosure. Control enclosures contain two node canisters, and
expansion enclosures contain two expansion canisters.
The Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 support up to 1008 x 2.5 inch or
504 x 3.5 inch drives or a combination of both drive form factors for the internal storage in a
two-I/O group Lenovo Storage V5030 cluster.
SAS, Nearline (NL)-SAS, and solid-state drive (SSD) types are supported.
The Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 are designed to accommodate the
most common storage network technologies to enable easy implementation and
management. They can be attached to hosts through a Fibre Channel (FC) SAN fabric, an
Internet Small Computer System Interface (iSCSI) infrastructure, or SAS. Hosts can be
attached directly or through a network.
For more information, see this web page:
http://datacentersupport.lenovo.com/tw/en/products/storage/lenovo-storage/v5030/6536/documentation
The Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 are virtualized storage solutions
that group their internal drives into RAID arrays, which are called managed disks (MDisks).
MDisks can also be created on the Lenovo Storage V5030 systems by importing logical unit
numbers (LUNs) from external FC SAN-attached storage. These MDisks are then grouped
into storage pools. Volumes are created from these storage pools and provisioned out to
hosts.
Storage pools are normally created with MDisks of the same drive type and drive capacity.
Volumes can be moved non-disruptively between storage pools with differing performance
characteristics. For example, a volume can be moved from a storage pool that is made up
of NL-SAS drives to a storage pool that is made up of SAS drives to improve performance.
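A volume migration of this kind can also be driven from the CLI. The following is a minimal, hedged sketch that uses the migratevdisk and lsmigrate commands; the pool and volume names are examples only, and parameters can vary by code level:

  # Move the volume vol_nlsas01 non-disruptively into the pool that is built from SAS drives
  migratevdisk -mdiskgrp SAS_Pool -vdisk vol_nlsas01

  # Check the progress of the running migration
  lsmigrate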
The Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 systems also provide several
configuration options to simplify the implementation process. They also provide configuration
presets and automated wizards that are called Directed Maintenance Procedures (DMP) to
help resolve any events that might occur.
Included with the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 systems is a simple
and easy-to-use graphical user interface (GUI) that allows storage to be deployed quickly and
efficiently. The GUI runs on any supported browser. The management GUI contains a series
of preestablished configuration options that are called presets that use commonly used
settings to quickly configure objects on the system. Presets are available for creating volumes
and IBM FlashCopy mappings and for setting up a RAID configuration.
You can also use the command-line interface (CLI) to set up or control the system.
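As a brief, hedged illustration of the CLI (the object names and sizes are examples only, and the exact parameters can vary by code level), a storage pool and a volume can be created with commands such as the following:

  # Create a storage pool with an extent size of 1024 MB
  mkmdiskgrp -name Pool0 -ext 1024

  # Create a 100 GB volume in that pool, owned by I/O group 0
  mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb -name vol01

  # List the volumes to confirm the result
  lsvdisk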
1.2 Terminology
The Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 systems use terminology that is
consistent with the entire IBM Storwize for Lenovo family. The terms are defined in Table 1-2.
More terms can be found in Appendix B, “Terminology” on page 779.
Table 1-2 Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 terminology
Chain: Each control enclosure has either one or two chains, which are used to connect expansion enclosures and provide redundant connections to the internal drives.
Control enclosure: A hardware unit that includes a chassis, node canisters, drives, and power sources.
Data migration: The Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 can migrate data from existing external storage to their internal volumes.
Distributed RAID (DRAID): No dedicated spare drives are in an array. The spare capacity is distributed across the array, which allows faster rebuild of a failed drive.
Drive: The Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 support a range of hard disk drives (HDDs) and flash drives.
Expansion canister: A hardware unit that includes the SAS interface hardware that enables the control enclosure hardware to use the drives of the expansion enclosure. Each expansion enclosure has two expansion canisters.
Expansion enclosure: A hardware unit that includes expansion canisters, drives, and power supply units.
External storage: MDisks that are SCSI logical units (LUs) that are presented by storage systems that are attached to and managed by the clustered system.
Fibre Channel port: A connection through which hosts access the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030.
Host mapping: The process of controlling which hosts can access specific volumes within a Lenovo Storage V3700 V2, V3700 V2 XP, and V5030.
Internal storage: Array MDisks and drives that are held in enclosures that are part of the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030.
iSCSI (Internet Small Computer System Interface): An Internet Protocol (IP)-based storage networking standard for linking data storage facilities.
Managed disk (MDisk): A component of a storage pool that is managed by a clustered system. An MDisk is part of a RAID array of internal storage or a SCSI LU for external storage. An MDisk is not visible to a host system on the SAN.
Node canister: A hardware unit that includes the node hardware, fabric and service interfaces, SAS expansion ports, and battery. Each control enclosure contains two node canisters.
PHY: A single SAS lane. Four PHYs are in each SAS cable.
Power supply unit (PSU): Each enclosure has two power supply units.
Quorum disk: A disk that contains a reserved area that is used exclusively for cluster management. The quorum disk is accessed when it is necessary to determine which half of the cluster continues to read and write data.
Serial-attached SCSI (SAS) ports: Connections for expansion enclosures and for direct attachment of hosts to the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030.
Thin provisioning or thin provisioned: The ability to define a storage unit (full system, storage pool, or volume) with a logical capacity size that is larger than the physical capacity that is assigned to that storage unit.
Traditional RAID (TRAID): Traditional RAID uses the standard RAID levels.
1.3 Models
The Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 platform consists of different
models. Each model type supports a different set of features, as shown in Table 1-3.
Table 1-3 IBM Storwize V5000 for Lenovo and Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 feature comparison
Feature   IBM Storwize V5000 for Lenovo   Lenovo V3700 V2   Lenovo V3700 V2 XP   Lenovo V5030
Cache     16 GB                           16 GB             16 GB or 32 GB       32 GB or 64 GB
More information: For more information about the features, benefits, and specifications of
Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 models, see the Lenovo Press
product guides:
https://lenovopress.com/lp0497-lenovo-storage-v3700-v2-and-v3700-v2-xp
https://lenovopress.com/lp0498-lenovo-storage-v5030
The Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 models are described in Table 1-4.
All control enclosures have two node canisters. F models are expansion enclosures.
Table 1-4 Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 models
Model Description Cache Drive Slots
One-year warranty
6535-HC1 Lenovo V3700 V2 large form factor (LFF) Control Enclosure 16 GB 12 x 3.5-inch
6535-HC4 Lenovo V3700 V2 small form factor (SFF) Control Enclosure 16 GB 24 x 2.5-inch
6536-HC6 IBM Storwize V5030F All-Flash Array Control Enclosure 64 GB 24 x 2.5-inch
The Lenovo Storage V5030 systems can be added to an existing IBM Storwize V5000 for
Lenovo cluster to form a two-I/O group configuration. This configuration can be used as a
migration mechanism to upgrade from the IBM Storwize V5000 for Lenovo to the Lenovo
Storage V3700 V2, V3700 V2 XP, and V5030. The IBM Storwize V5000 for Lenovo models
are described in Table 1-5 for completeness.
Table 1-5 IBM Storwize V5000 for Lenovo models
Model      Cache    Drive slots
6194-12C   16 GB    12 x 3.5-inch
6194-24C   16 GB    24 x 2.5-inch
Figure 1-1 shows the front view of the LFF (12 x 3.5-inch) enclosures.
Figure 1-1 Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 front view for 6535/6536 LFF (12 x 3.5-inch) enclosures
The drives are positioned in four columns of three horizontally mounted drive assemblies.
The drive slots are numbered 1 - 12, starting at the upper left and moving left to right, top to
bottom.
Figure 1-2 shows the front view of the SFF (24 x 2.5-inch) enclosures.
Figure 1-2 Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 front view for 6535/6536 SFF (24 x 2.5-inch) enclosure
The drives are positioned in one row of 24 vertically mounted drive assemblies. The drive
slots are numbered 1 - 24, starting from the left. A vertical center drive bay molding is
between slots 12 and 13.
1.4 Compatibility
The Lenovo Storage V5030 system can be added into existing IBM Storwize V5000 for
Lenovo clustered systems. All systems within a cluster must use the same version of IBM
Storwize V5000 for Lenovo software, which is version 7.6.1 or later.
Restriction: The Lenovo Storage V3700 V2 and V3700 V2 XP are not compatible with
the IBM Storwize V5000 for Lenovo systems because they are not able to join an existing I/O group.
A single Lenovo Storage V5030 control enclosure can be added to a single IBM Storwize
V5000 for Lenovo cluster to bring the total number of I/O groups to two. They can be
clustered by using either Fibre Channel or Fibre Channel over Ethernet (FCoE). The possible
I/O group configuration options for all IBM Storwize V5000 for Lenovo and Lenovo V3700 V2,
V3700 V2 XP and V5030 models are shown in Table 1-6.
1.5 Hardware
The Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 solution is a modular storage
system that is built on a common enclosure platform that is shared by the control enclosures
and expansion enclosures.
Figure 1-3 shows an overview of hardware components of the Lenovo Storage V3700 V2,
V3700 V2 XP, and V5030 solution.
Figure 1-3 Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 hardware components
Figure 1-4 shows the control enclosure rear view of a Lenovo Storage V3700 V2, V3700 V2
XP, and V5030 enclosure.
In Figure 1-4 on page 9, you can see two power supply slots at the bottom of the enclosure.
The power supplies are identical and exchangeable. Two canister slots are at the top of the
chassis.
In Figure 1-5, you can see the rear view of a Lenovo Storage V3700 V2, V3700 V2 XP, and
V5030 expansion enclosure.
Figure 1-5 Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 expansion enclosure rear view
You can see that the only difference between the control enclosure and the expansion
enclosure is the canister. The canisters of the expansion enclosure have only two SAS ports.
For more information about the expansion enclosure, see 1.5.5, “Expansion enclosure” on
page 14.
The two node canisters act as a single processing unit and form an I/O group that is attached
to the SAN fabric, an iSCSI infrastructure, or that is directly attached to hosts through FC or
SAS. The pair of nodes is responsible for serving I/O to a volume. The two nodes provide a
highly available fault-tolerant controller so that if one node fails, the surviving node
automatically takes over. Nodes are deployed in pairs that are called I/O groups.
One node is designated as the configuration node, but each node in the control enclosure
holds a copy of the control enclosure state information.
The Lenovo Storage V3700 V2 and Lenovo Storage V3700 V2 XP support a single I/O group.
The Lenovo Storage V5030 supports two I/O groups in a clustered system.
The terms node canister and node are used interchangeably throughout this book.
The battery is used if power is lost. The Lenovo Storage V3700 V2, V3700 V2 XP, and V5030
systems use this battery to power the canister while the cache data is written to the internal
system flash. This memory dump is called a fire hose memory dump.
Note: The batteries of the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 are able to
process two fire hose memory dumps in a row. After that, you cannot power up the system
immediately; you must wait until the batteries are charged above a level that allows them
to run the next fire hose memory dump.
After the system is up again, this data is loaded back to the cache for destaging to the disks.
1.5.2 Lenovo Storage V3700 V2
Figure 1-6 shows a single Lenovo Storage V3700 V2 node canister.
Each Lenovo Storage V3700 V2 node canister contains the following hardware:
Battery
Memory: 8 GB
One 12 Gbps SAS port
Two 10/100/1000 Mbps Ethernet ports
One USB 2.0 port that is used to gather system information
System flash
Host interface card (HIC) slot (different options are possible)
Figure 1-6 shows the following features that are provided by the Lenovo Storage V3700 V2
node canister:
Two 10/100/1000 Mbps Ethernet ports. Port 1 must be used for management, and port 2
can optionally be used for management. Port 2 serves as a technician port (as denoted by
the white box with “T” in it) for system initialization and service.
Note: All three models use a technician port to perform initial setup. The
implementation of the technician port varies between models: On Lenovo Storage
V3700 V2 and V3700 V2 XP, the second 1GbE port (labelled T) is initially enabled as a
technician port. After cluster creation, this port is disabled and can then be used for I/O
and/or management. On Lenovo Storage V5030 the onboard 1GbE port (labelled T) is
permanently enabled as a technician port. Connecting the technician port to the LAN
will disable the port. The Lenovo Storage V3700 V2 and V3700 V2 XP technician port
can be re-enabled after initial setup.
Both ports can be used for iSCSI traffic and IP replication. For more information, see
Chapter 5, “Host configuration” on page 189 and Chapter 10, “Copy services” on
page 451.
One USB port for gathering system information.
System initialization: Unlike the IBM Storwize V5000 for Lenovo, you must perform
the system initialization of the Lenovo Storage V3700 V2 by using the technician port instead of
the USB port.
One 12 Gbps serial-attached SCSI (SAS 3.0) port to connect to the optional expansion
enclosures. The Lenovo Storage V3700 V2 supports up to 10 expansion enclosures.
Important: The canister SAS port on the Lenovo Storage V3700 V2 does not support
SAS host attachment. The Lenovo Storage V3700 V2 supports SAS hosts by using an
optional host interface card. See 1.5.6, “Host interface cards” on page 15.
Do not use the port that is marked with a wrench. This port is a service port only.
Figure 1-7 shows the following features that are provided by the Lenovo Storage V3700 V2
XP node canister:
Two 10/100/1000 Mbps Ethernet ports. Port 1 must be used for management, and port 2
can optionally be used for management. Port 2 serves as a technician port (as denoted by
the white box with “T” in it) for system initialization and service.
Note: All three models use a technician port to perform initial setup. The
implementation of the technician port varies between models: On Lenovo Storage
V3700 V2 and V3700 V2 XP, the second 1GbE port (labelled T) is initially enabled as a
technician port. After cluster creation, this port is disabled and can then be used for I/O
and/or management. On Lenovo Storage V5030 the onboard 1GbE port (labelled T) is
permanently enabled as a technician port. Connecting the technician port to the LAN
will disable the port. The Lenovo Storage V3700 V2 and V3700 V2 XP technician port
can be re-enabled after initial setup.
Both ports can be used for iSCSI traffic and IP replication. For more information, see
Chapter 5, “Host configuration” on page 189 and Chapter 10, “Copy services” on
page 451
One USB port for gathering system information.
System initialization: Unlike the IBM Storwize V5000 for Lenovo, you must perform
the system initialization of the Lenovo Storage V3700 V2 XP by using the technician
port instead of the USB port.
Three 12 Gbps serial-attached SCSI (SAS 3.0) ports. The ports are numbered 1 - 3 from
left to right. Port 1 is used to connect to the optional expansion enclosures. Ports 2 and 3
can be used to connect directly to SAS hosts. (Both 6G and 12G hosts are supported.)
The Lenovo Storage V3700 V2 XP supports up to 10 expansion enclosures.
Service port: Do not use the port that is marked with a wrench. This port is a service
port only.
Figure 1-8 shows the following features that are provided by the Lenovo Storage V5030 node
canister:
One Ethernet technician port (as denoted by the white box with “T” in it). This port can be
used for system initialization and service only. For more information, see Chapter 1,
“Overview of the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 systems” on
page 1. It cannot be used for anything else.
Two 1/10 Gbps Ethernet ports. These ports are Copper 10GBASE-T with RJ45
connectors. Port 1 must be used for management. Port 2 can optionally be used for
management. Both ports can be used for iSCSI traffic and IP replication. For more
information, see Chapter 5, “Host configuration” on page 189 and Chapter 10, “Copy
services” on page 451.
Important: The 1/10 Gbps Ethernet ports do not support speeds less than 1 Gbps (100
Mbps is not supported).
Ensure that you use the correct port connectors. The Lenovo Storage V5030 canister
10 Gbps connectors appear the same as the 1 Gbps connectors on the other IBM
Storwize V5000 for Lenovo models. These RJ45 connectors differ from the optical small
form-factor pluggable (SFP+) connectors on the optional 10 Gbps HIC. When you plan
to implement the Lenovo Storage V5030, ensure that any network switches provide the
correct connector type.
System initialization: Unlike the IBM Storwize V5000 for Lenovo, you must perform
the system initialization of the Lenovo Storage V5030 by using the technician port
instead of the USB port.
Two 12 Gbps serial-attached SCSI (SAS 3.0) ports. The ports are numbered 1 and 2 from
left to right to connect to the optional expansion enclosures. The Lenovo Storage V5030
supports up to 20 expansion enclosures. Ten expansion enclosures can be connected to
each port.
Important: The canister SAS ports on the Lenovo Storage V5030 do not support SAS
host attachment. The Lenovo Storage V5030 supports SAS hosts by using an HIC. See
1.5.6, “Host interface cards” on page 15.
Do not use the port that is marked with a wrench. This port is a service port only.
Figure 1-9 Expansion enclosure of the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030
The expansion enclosure power supplies are the same as the control enclosure power
supplies. A single power lead connector is on each power supply unit.
Each expansion canister provides two SAS interfaces that are used to connect to the control
enclosure and any further optional expansion enclosures. The ports are numbered 1 on the
left and 2 on the right. SAS port 1 is the IN port, and SAS port 2 is the OUT port.
The use of SAS connector 1 is mandatory because the expansion enclosure must be
attached to a control enclosure or another expansion enclosure further up in the chain. SAS
connector 2 is optional because it is used to attach to further expansion enclosures down the
chain.
The Lenovo Storage V3700 V2 and Lenovo Storage V3700 V2 XP support a single chain of
up to 10 expansion enclosures that attach to the control enclosure. The Lenovo Storage
V5030 supports up to 40 expansion enclosures in a configuration that consists of two control
enclosures, which are each attached to 20 expansion enclosures in two separate chains.
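To confirm how many control and expansion enclosures the system detects, the lsenclosure CLI view can be used. This is a minimal sketch; the enclosure ID is an example, and the exact output columns depend on the configuration and code level:

  # Summary view of all enclosures that are part of the system
  lsenclosure

  # Detailed view of one enclosure, including its canisters and SAS connectivity
  lsenclosure 2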
Table 1-7 shows the maximum number of supported expansion enclosures and the drive
limits for each model.
Each port includes two LEDs to show the status. The first LED indicates the link status and
the second LED indicates the fault status.
For more information about LED and ports, see this web page:
http://systemx.lenovofiles.com/help/topic/com.lenovo.storage.v5030.8.1.0.doc/v3700_system_leds.html
Restriction: The Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 expansion
enclosures can be used with a Lenovo Storage V3700 V2, V3700 V2 XP, and V5030
control enclosure only. The IBM Storwize V5000 for Lenovo expansion enclosures cannot
be used with a Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 control enclosure.
Additional host connectivity options are available through an optional adapter card. Table 1-8
shows the available configurations for a single control enclosure.
Table 1-8 Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 configurations available

Lenovo Storage V5030:
1 Gb Ethernet (iSCSI): 8 ports (with optional adapter card)
10 Gb Ethernet Copper 10GBASE-T (iSCSI): 4 ports (standard)
12 Gb SAS: 8 ports (with optional adapter card)
16 Gb FC: 8 ports (with optional adapter card)
10 Gb Ethernet Optical SFP+ (iSCSI/FCoE): 8 ports (with optional adapter card)

Lenovo Storage V3700 V2:
1 Gb Ethernet (iSCSI): 4 ports (standard); additional 8 ports (with optional adapter card)
10 Gb Ethernet Copper 10GBASE-T (iSCSI): N/A
12 Gb SAS: 8 ports (with optional adapter card)
16 Gb FC: 8 ports (with optional adapter card)
10 Gb Ethernet Optical SFP+ (iSCSI/FCoE): 8 ports (with optional adapter card)
Optional adapter cards: Only one pair of identical adapter cards is allowed for each
control enclosure.
Table 1-9 shows the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 disk drive types,
disk revolutions per minute (RPMs), and sizes that are available at the time of writing.
Table 1-9 Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 disk drive types
Drive type RPM Size
2.5-inch form factor Flash Drive N/A 400 GB, 800 GB, 1.6 TB, and 3.2 TB
2.5-inch form factor Read Intensive (RI) Flash Drive N/A 1.92 TB, 3.84 TB, and 7.68 TB
2.5-inch form factor SAS 10,000 900 GB, 1.2 TB, and 1.8 TB
2.5-inch form factor SAS 15,000 300 GB, 600 GB and 900 GB
3.5-inch form factor SAS 10,000 900 GB, 1.2 TB, and 1.8 TBa
3.5-inch form factor SAS 15,000 300 GB, 600 GB and 900 GBa
1.6 Terms
In this section, we introduce the terms that are used for the Lenovo Storage V3700 V2, V3700
V2 XP, and V5030 throughout this book.
1.6.1 Hosts
A host system is a server that is connected to Lenovo Storage V3700 V2, V3700 V2 XP, and
V5030 through a Fibre Channel connection, an iSCSI connection, or an SAS connection.
Hosts are defined on Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 by identifying their
WWPNs for Fibre Channel and SAS hosts. The iSCSI hosts are identified by using their
iSCSI names. The iSCSI names can be iSCSI qualified names (IQNs) or extended unique
identifiers (EUIs). For more information, see Chapter 5, “Host configuration” on page 189.
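As a hedged sketch (the host names, WWPN, IQN, and volume name are examples only), FC and iSCSI host objects can be defined from the CLI as follows:

  # Define a Fibre Channel host by its WWPN
  mkhost -name fc_host01 -fcwwpn 2100000E1E30B0A8

  # Define an iSCSI host by its IQN
  mkhost -name iscsi_host01 -iscsiname iqn.1994-05.com.redhat:server01

  # Map an existing volume to the Fibre Channel host
  mkvdiskhostmap -host fc_host01 vol01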
One of the nodes within the system, which is known as the configuration node, manages
configuration activity for the clustered system. If this node fails, the system nominates the
other node to become the configuration node.
When a host server performs I/O to one of its volumes, all of the I/O for that volume is
directed to the I/O group where the volume was defined. Under normal conditions, these I/Os
are always processed by the same node within that I/O group.
Both nodes of the I/O group act as preferred nodes for their own specific subset of the total
number of volumes that the I/O group presents to the host servers (a maximum of 2,048
volumes for each host). However, both nodes also act as a failover node for the partner node
within the I/O group. Therefore, a node takes over the I/O workload from its partner node
(if required) without affecting the server’s application.
In a Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 environment (which uses
active-active architecture), the I/O handling for a volume can be managed by both nodes of
the I/O group. The I/O groups must be connected to the SAN so that all hosts can access all
nodes. The hosts must use multipath device drivers to handle this capability.
Up to 256 host server objects can be defined in a one-I/O group system, or 512 host server
objects can be defined in a two-I/O group system. More information about I/O groups is in Chapter 6,
“Volume configuration” on page 269.
Important: The active/active architecture provides the ability to process I/Os on both
controller nodes and allows the application to continue to run smoothly, even if the server
has only one access route or path to the storage controller. This type of architecture
eliminates the path/LUN thrashing that is typical of an active/passive architecture.
A process exists to back up the system configuration data onto disk so that the clustered
system can be restored in a disaster. This method does not back up application data. Only
the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 system configuration information is
backed up.
System configuration backup: After the system configuration is backed up, save the
backup data onto your local hard disk (or at least outside of the SAN). If you are
unable to access the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030, you do not
have access to the backup data if it is on the SAN. Perform this configuration backup after
each configuration change to be safe.
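The following is a minimal sketch of such a backup from the CLI; the cluster IP address is a placeholder, and the generated file names include the node panel name:

  # Generate the configuration backup files in the /dumps directory of the configuration node
  svcconfig backup

  # Copy the backup XML file to a management workstation, for example with scp
  scp superuser@<cluster_ip>:/dumps/svc.config.backup.xml_* .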
The system can be configured by using the Lenovo Storage V3700 V2, V3700 V2 XP, and
V5030 management software (GUI), CLI, or USB key.
1.6.5 RAID
The Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 contain several internal drive
objects, but these drives cannot be directly added to the storage pools. Drives need to be
included in a Redundant Array of Independent Disks (RAID) to provide protection against the
failure of individual drives.
These drives are referred to as members of the array. Each array has a RAID level. RAID
levels provide various degrees of redundancy and performance. The maximum number of
members in the array varies based on the RAID level.
Traditional RAID (TRAID) has the concept of hot spare drives. When an array member drive
fails, the system automatically replaces the failed member with a hot spare drive and rebuilds
the array to restore its redundancy. Candidate and spare drives can be manually exchanged
with array members.
Apart from traditional disk arrays, IBM Spectrum Virtualize software V7.6 introduced Distributed RAID. Distributed RAID improves the recovery time for failed disk drives in an array by distributing spare capacity across the member drives, rather than dedicating a whole spare drive for replacement.
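The following CLI sketch shows one way to create a traditional array and a distributed array in an existing pool. The drive IDs, drive class, drive count, and pool name (Pool0) are assumptions for illustration only:
   # Traditional RAID 5 array that is built from four candidate drives
   mkarray -level raid5 -drive 0:1:2:3 Pool0
   # Distributed RAID 6 array across eight drives of drive class 0 with one rebuild area
   mkdistributedarray -level raid6 -driveclass 0 -drivecount 8 -stripewidth 7 -rebuildareas 1 Pool0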
Traditional and distributed RAID arrays are described in detail in Chapter 4, “Storage pools” on page 139.
An MDisk is invisible to a host system on the storage area network because it is internal to
the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 systems.
The clustered system automatically forms the quorum disk by taking a small amount of space
from an MDisk. It allocates space from up to three different MDisks for redundancy, although
only one quorum disk is active.
If the environment has multiple storage systems, allocate the quorum disks on different storage systems to avoid losing all of the quorum disks because of the failure of a single storage system. You can manage the quorum disks by using the CLI.
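For example, the quorum assignments can be listed and moved by using the CLI. The MDisk ID and quorum index in this sketch are assumptions for illustration only:
   # List the quorum disk candidates and the active quorum disk
   lsquorum
   # Move quorum index 2 onto MDisk 5 (for example, an MDisk on a different storage system)
   chquorum -mdisk 5 2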
IP quorum base support provides an alternative for IBM Storwize V5000 for Lenovo HyperSwap implementations. Instead of Fibre Channel storage on a third site, the IP network is used for communication between the IP quorum application and the node canisters in the system to cope with tie-break situations if the inter-site link fails. The IP quorum application is a Java application that runs on a host at the third site. The IP quorum application enables the use of a lower-cost IP network-attached host as a quorum disk for simplified implementation and operation.
Note: IP Quorum allows the user to replace a third-site Fibre Channel-attached quorum disk with an IP Quorum application. The Java application runs on a Linux host and is used to resolve split-brain situations. Quorum disks are still required in sites 1 and 2 for cookie crumb and metadata. The application can also be used with clusters in a standard topology configuration, but the primary use case is a customer with a cluster that is split over two sites (stretched or HyperSwap). You need Java to run the IP quorum application. Your network must provide less than 80 ms round-trip latency, all nodes need a service IP address, and all service IP addresses must be pingable from the quorum host.
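As a sketch only, the IP quorum application is typically generated on the system and then started on the third-site host. The command and the Java invocation that follow are assumptions that can vary by code level:
   # On the system, generate the IP quorum application (placed in the dumps directory)
   mkquorumapp
   # Copy ip_quorum.jar to the third-site Linux host and start it there
   java -jar ip_quorum.jar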
MDisks can be added to a storage pool at any time to increase the capacity of the pool. An MDisk can belong to only one storage pool. For more information, see Chapter 4, “Storage pools” on page 139.
Each MDisk in the storage pool is divided into a number of extents. The size of the extent is selected by the administrator when the storage pool is created and cannot be changed later. The extent size ranges from 16 MB to 8 GB.
Default extent size: The GUI of Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 has
a default extent size value of 1024 MB when you define a new storage pool.
The extent size directly affects the maximum volume size and storage capacity of the
clustered system.
A system can manage 2^22 (4,194,304) extents. For example, with a 16 MB extent size, the
system can manage up to 16 MB x 4,194,304 = 64 TB of storage.
The effect of extent size on the maximum volume and cluster size is shown in Table 1-10. For example, a 16 MB extent size allows a maximum volume capacity of 2048 GB (2 TB) and a maximum storage capacity of 64 TB.
Use the same extent size for all storage pools in a clustered system. This rule is a
prerequisite if you want to migrate a volume between two storage pools. If the storage pool
extent sizes are not the same, you must use volume mirroring to copy volumes between
storage pools, as described in Chapter 4, “Storage pools” on page 139.
You can set a threshold warning for a storage pool that automatically issues a warning alert
when the used capacity of the storage pool exceeds the set limit.
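The following CLI sketch creates a pool with the default 1024 MB extent size, adds an MDisk, and sets a capacity warning threshold. The pool name, MDisk name, and threshold value are examples only:
   # Create a storage pool with a 1024 MB extent size
   mkmdiskgrp -name Pool0 -ext 1024
   # Add an MDisk to the pool
   addmdisk -mdisk mdisk3 Pool0
   # Issue a warning alert when 80% of the pool capacity is used
   chmdiskgrp -warning 80% Pool0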
Child pools can be created by using the management GUI, CLI, or IBM Spectrum Control
when you create VMware vSphere virtual volumes. For more information about child pools,
see Chapter 4, “Storage pools” on page 139.
Multi-tiered storage pool
A multi-tiered storage pool has a mix of MDisks with more than one type of disk, for example,
a storage pool that contains a mix of generic_hdd and generic_ssd MDisks.
Unlike a single-tiered storage pool, a multi-tiered storage pool contains MDisks with different characteristics. MDisks with similar characteristics then form the tiers within the pool. However, each tier needs MDisks of the same size that provide the same number of extents.
A multi-tiered storage pool is used to enable automatic migration of extents between disk tiers
by using the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 Easy Tier function, as
described in Chapter 9, “Advanced features for storage efficiency” on page 403.
This functionality can help improve the performance of host volumes on the IBM Storwize
V5000 for Lenovo.
1.6.9 Volumes
A volume is a logical disk that is presented to a host system by the clustered system. In our
virtualized environment, the host system has a volume that is mapped to it by Lenovo Storage
V3700 V2, V3700 V2 XP, and V5030. Lenovo Storage V3700 V2, V3700 V2 XP, and V5030
translates this volume into a number of extents, which are allocated across MDisks. The
advantage with storage virtualization is that the host is decoupled from the underlying
storage, so the virtualization appliance can move around the extents without affecting the
host system.
The host system cannot directly access the underlying MDisks in the same manner as it can
access RAID arrays in a traditional storage environment.
The following types of volumes are available:
Striped
A striped volume is allocated one extent in turn from each MDisk in the storage pool. This
process continues until the space that is required for the volume is satisfied.
It is also possible to supply a list of MDisks to use.
Figure 1-10 shows how a striped volume is allocated, assuming that 10 extents are
required.
Sequential
A sequential volume is a volume in which the extents are allocated one after the other
from one MDisk to the next MDisk, as shown in Figure 1-11.
Image mode
Image mode volumes are special volumes that have a direct relationship with one MDisk.
They are used to migrate existing data into and out of the clustered system to or from
external FC SAN-attached storage.
When the image mode volume is created, a direct mapping is made between extents that
are on the MDisk and the extents that are on the volume. The logical block address (LBA)
x on the MDisk is the same as the LBA x on the volume, which ensures that the data on
the MDisk is preserved as it is brought into the clustered system, as shown in Figure 1-12.
Certain virtualization functions are not available for image mode volumes, so it is often useful
to migrate the volume into a new storage pool. After it is migrated, the MDisk becomes a
managed MDisk.
If you want to migrate data from an existing storage subsystem, use the storage migration
wizard, which guides you through the process.
If you add an MDisk that contains data to a storage pool, any data on the MDisk is lost. If you are presenting externally virtualized LUNs that contain data to a Lenovo Storage V3700 V2, V3700 V2 XP, and V5030, import them as image mode volumes to ensure data integrity, or use the migration wizard.
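The following mkvdisk sketches illustrate the three volume types. The pool, MDisk, and volume names, the sizes, and the I/O group are assumptions for illustration only:
   # Striped volume of 100 GB in Pool0, owned by I/O group 0
   mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb -vtype striped -name vol_striped01
   # Sequential volume of 100 GB
   mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb -vtype seq -name vol_seq01
   # Image mode volume that preserves the existing data on mdisk7
   mkvdisk -mdiskgrp Pool_IMG -iogrp 0 -vtype image -mdisk mdisk7 -name vol_image01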
1.6.10 iSCSI
iSCSI is an alternative method of attaching hosts to the Lenovo Storage V3700 V2, V3700 V2
XP, and V5030. The iSCSI function is a software function that is provided by the Lenovo
Storage V3700 V2, V3700 V2 XP, and V5030 code, not hardware.
In the simplest terms, iSCSI allows the transport of SCSI commands and data over an
Internet Protocol network that is based on IP routers and Ethernet switches. iSCSI is a
block-level protocol that encapsulates SCSI commands into TCP/IP packets and uses an
existing IP network instead of requiring FC host bus adapters (HBAs) and a SAN fabric
infrastructure.
An iSCSI address specifies the iSCSI name of an iSCSI node and a location of that node. The
address consists of a host name or IP address, a TCP port number (for the target), and the
iSCSI name of the node. An iSCSI node can have any number of addresses, which can
change at any time, particularly if they are assigned by way of Dynamic Host Configuration
Protocol (DHCP). An IBM Storwize V5000 for Lenovo node represents an iSCSI node and provides statically allocated IP addresses.
Each iSCSI node, that is, an initiator or target, has a unique IQN, which can have a size of up
to 255 bytes. The IQN is formed according to the rules that were adopted for Internet nodes.
The IQNs can be abbreviated by using a descriptive name, which is known as an alias. An
alias can be assigned to an initiator or a target.
For more information about configuring iSCSI, see Chapter 4, “Storage pools” on page 139.
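A minimal CLI sketch for iSCSI attachment follows. The IP addresses, the node and port numbers, and the initiator IQN are examples only:
   # Assign an iSCSI IP address to Ethernet port 1 of node 1
   cfgportip -node 1 -ip 10.0.1.10 -mask 255.255.255.0 -gw 10.0.1.1 1
   # Define an iSCSI host object by its IQN
   mkhost -name linuxhost01 -iscsiname iqn.1994-05.com.redhat:rhelhost01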
1.7 Features
In this section, we describe the features of the Lenovo Storage V3700 V2, V3700 V2 XP, and
V5030. Different models offer a different range of features. See Table 1-3 on page 5 for a
comparison.
When a host system issues a write to a mirrored volume, Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 write the data to both copies. When a host system issues a read to a mirrored volume, Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 request it from the primary copy. If one of the mirrored copies is temporarily unavailable, Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 automatically use the alternative copy without any outage for the host system. When the mirrored volume copy is repaired, Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 synchronize the data again.
A mirrored volume can be converted into a non-mirrored volume by deleting one copy or by
splitting away one copy to create a non-mirrored volume.
The use of mirrored volumes can also assist with migrating volumes between storage pools
that have different extent sizes. Mirrored volumes can also provide a mechanism to migrate
fully allocated volumes to thin-provisioned or compressed volumes without any host outages.
The Volume Mirroring feature is included as part of the base software, and no license is
required.
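As a sketch, a second copy can be added to an existing volume from the CLI and later split away. The pool and volume names are examples only:
   # Add a second copy of vol01 in a different pool (the copies synchronize in the background)
   addvdiskcopy -mdiskgrp Pool1 vol01
   # Optionally, split copy 1 away as a new, independent volume
   splitvdiskcopy -copy 1 -name vol01_split vol01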
The real capacity determines the quantity of MDisk extents that are allocated for the volume.
The virtual capacity is the capacity of the volume that is reported to Lenovo Storage V3700
V2, V3700 V2 XP, and V5030 and to the host servers.
The real capacity is used to store the user data and the metadata for the thin-provisioned
volume. The real capacity can be specified as an absolute value or a percentage of the virtual
capacity.
The Thin Provisioning feature can be used on its own to create over-allocated volumes, or it
can be used with FlashCopy. Thin-provisioned volumes can be used with the mirrored volume
feature, also.
A thin-provisioned volume can be configured to auto expand, which causes the Lenovo
Storage V3700 V2, V3700 V2 XP, and V5030 to automatically expand the real capacity of a
thin-provisioned volume as it gets used. This feature prevents the volume from going offline.
Auto expand attempts to maintain a fixed amount of unused real capacity on the volume. This
amount is known as the contingency capacity. When the thin-provisioned volume is created, the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 initially allocate only 2% of the virtual capacity in real physical storage. The contingency capacity and auto expand features seek to preserve this 2% of free space as the volume grows.
If the user modifies the real capacity, the contingency capacity is reset to be the difference
between the used capacity and real capacity. In this way, the autoexpand feature does not
cause the real capacity to grow much beyond the virtual capacity.
A volume that is created with a zero contingency capacity goes offline when it must expand. A volume with a non-zero contingency capacity stays online until the contingency capacity is used up.
To support the auto expansion of thin-provisioned volumes, the volumes themselves have a configurable warning capacity. When the used capacity of the volume exceeds the warning capacity, a warning is logged. For example, if a warning of 80% is specified, the warning is logged when 80% of the virtual capacity is used and only 20% remains free. This approach is similar to the capacity warning that is available on storage pools.
The Thin Provisioning feature is included as part of the base software, and no license is
required.
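For example, a thin-provisioned volume with a 2% initial real capacity, auto expand, and an 80% warning threshold might be created as follows. The pool name, volume name, and size are examples only:
   # 500 GB of virtual capacity, 2% real capacity, auto expand, and a warning at 80% used
   mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 500 -unit gb -rsize 2% -autoexpand -warning 80% -name thinvol01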
Software-only compression: The use of RtC on the Lenovo Storage V5030 requires
dedicated CPU resources from the node canisters. If more performance is required for
deploying RtC, consider purchasing the IBM Storwize V7000for Lenovo system. The IBM
Storwize V7000 for Lenovo system uses dedicated hardware options for compression
acceleration.
The Lenovo Storage V5030 model has the additional memory upgrade (32 GB for each node canister). When the first compressed volume is created, 4 of the 6 CPU cores are allocated to RtC. Of the 32 GB of memory on each node canister, roughly 9 - 10 GB is allocated to RtC. There are no hardware compression accelerators as in the IBM Storwize V7000 for Lenovo. The actual LZ4 compression is done by the CPUs, as was the case with the IBM Storwize V7000 for Lenovo. Table 1-11 on page 28 shows how the cores are used with RtC.
Table 1-11   Cores usage with RtC (columns: Compression Disabled and Compression Enabled)
The faster CPU with more cores, the extra memory, and the hyper-threading capability of the Lenovo Storage V5030, together with improvements to the RtC software, result in good performance for the smaller customer configurations that are common to the market segment that this product is intended to serve. The feature is licensed per enclosure. Real-time Compression is not available on the Lenovo V3700 V2 or V3700 V2 XP models.
The Easy Tier function can be turned on or turned off at the storage pool and volume level.
You can demonstrate the potential benefit of Easy Tier in your environment before you install
Flash drives by using the IBM Storage Advisor Tool. For more information about Easy Tier,
see Chapter 9, “Advanced features for storage efficiency” on page 403.
The Storage Migration feature is included in the base software, and no license is required.
1.7.6 FlashCopy
The FlashCopy feature copies a source volume on to a target volume. The original contents of the target volume are lost. After the copy operation starts, the target volume has the contents of the source volume as it existed at a single point in time. Although the copy operation completes in the background, the resulting data at the target appears as though the copy was made instantaneously. FlashCopy is sometimes described as an instance of a time-zero (T0) copy or point-in-time (PiT) copy technology.
FlashCopy can be performed on multiple source and target volumes. FlashCopy permits the
management operations to be coordinated so that a common single point in time is chosen
for copying target volumes from their respective source volumes.
Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 also permit multiple target volumes to be created by FlashCopy from the same source volume. This capability can be used to create images from separate points in time for the source volume, and to create multiple images from a source volume at a common point in time. Source and target volumes can be any volume type (generic, thin-provisioned, or compressed).
Reverse FlashCopy enables target volumes to become restore points for the source volume
without breaking the FlashCopy relationship and without waiting for the original copy
operation to complete. Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 support multiple
targets and multiple rollback points.
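A minimal FlashCopy sketch from the CLI follows. The volume and mapping names and the copy rate are examples only:
   # Create a FlashCopy mapping from vol01 to vol01_fc with a background copy rate of 50
   mkfcmap -source vol01 -target vol01_fc -copyrate 50 -name fcmap01
   # Prepare and start the mapping; the point-in-time image is usable immediately
   startfcmap -prep fcmap01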
For more information about FlashCopy copy services, see Chapter 10, “Copy services” on
page 451.
With the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030, Metro Mirror and Global Mirror are the IBM branded terms for the synchronous Remote Copy and asynchronous Remote Copy functions.
By using the Metro Mirror and Global Mirror copy services features, you can set up a
relationship between two volumes so that updates that are made by an application to one
volume are mirrored on the other volume. The volumes can be in the same system or on two
different systems.
For both Metro Mirror and Global Mirror copy types, one volume is designated as the primary
and the other volume is designated as the secondary. Host applications write data to the
primary volume, and updates to the primary volume are copied to the secondary volume.
Normally, host applications do not perform I/O operations to the secondary volume.
The Metro Mirror feature provides a synchronous copy process. When a host writes to the
primary volume, it does not receive confirmation of I/O completion until the write operation
completes for the copy on the primary and secondary volumes. This design ensures that the
secondary volume is always up-to-date with the primary volume if a failover operation must
be performed.
The Global Mirror feature provides an asynchronous copy process. When a host writes to the
primary volume, confirmation of I/O completion is received before the write operation
completes for the copy on the secondary volume. If a failover operation is performed, the
application must recover and apply any updates that were not committed to the secondary
volume. If I/O operations on the primary volume are paused for a brief time, the secondary
volume can become an exact match of the primary volume.
Global Mirror can operate with or without cycling. When it is operating without cycling, write
operations are applied to the secondary volume as soon as possible after they are applied to
the primary volume. The secondary volume is less than 1 second behind the primary volume,
which minimizes the amount of data that must be recovered in a failover. However, this
approach requires that a high-bandwidth link is provisioned between the two sites.
When Global Mirror operates with cycling mode, changes are tracked and where needed
copied to intermediate change volumes. Changes are transmitted to the secondary site
periodically. The secondary volumes are much further behind the primary volume, and more
data must be recovered in a failover. Because the data transfer can be smoothed over a
longer time period, lower bandwidth is required to provide an effective solution.
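As a sketch, after a partnership exists between the two systems, a relationship can be created and started from the CLI. The volume, relationship, and remote system names are examples only; add the -global flag to create a Global Mirror relationship instead of Metro Mirror:
   # Synchronous (Metro Mirror) relationship between local vol01 and remote vol01_dr
   mkrcrelationship -master vol01 -aux vol01_dr -cluster remote_system -name rcrel01
   # Start copying from the primary to the secondary volume
   startrcrelationship rcrel01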
For more information about the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 copy
services, see Chapter 10, “Copy services” on page 451.
1.7.8 IP replication
IP replication enables the use of lower-cost Ethernet connections for remote mirroring. The
capability is available as a chargeable option on all IBM Storwize for Lenovo and Lenovo
Storage V series family systems.
The function is transparent to servers and applications in the same way that traditional Fibre
Channel-based mirroring is transparent. All remote mirroring modes (Metro Mirror, Global
Mirror, and Global Mirror with Change Volumes) are supported.
Configuration of the system is straightforward. The IBM Storwize for Lenovo and Lenovo
Storage V series systems normally find each other in the network, and they can be selected
from the GUI.
IP connections that are used for replication can have long latency (the time to transmit a signal from one end to the other), which can be caused by distance or by many “hops” between switches and other appliances in the network. Traditional replication solutions transmit data, wait for a response, and then transmit more data, which can result in network utilization as low as 20% (based on IBM measurements). This scenario gets worse as the latency increases.
Bridgeworks SANSlide technology that is integrated with the IBM Storwize for Lenovo and Lenovo Storage V series families requires no separate appliances, no additional cost, and no configuration steps. It uses artificial intelligence (AI) technology to transmit multiple data streams in parallel, adjusting automatically to changing network environments and workloads. SANSlide improves network bandwidth utilization up to 3x so clients can deploy a less costly
network infrastructure or take advantage of faster data transfer to speed up replication cycles,
improve remote data currency, and enjoy faster recovery.
IP replication can be configured to use any of the available 1 GbE or 10 GbE Ethernet ports
(apart from the technician port) on the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030.
See Table 1-8 on page 15 for port configuration options.
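For illustration, an IP partnership to a remote system might be created as follows. The remote cluster IP address, bandwidth, and background copy rate are examples only:
   # Create an IPv4 replication partnership with 100 Mbps of link bandwidth
   mkippartnership -type ipv4 -clusterip 10.1.1.10 -linkbandwidthmbits 100 -backgroundcopyrate 50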
1.7.10 Encryption
Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 provide optional encryption of data-at-rest functionality, which protects against the potential exposure of sensitive user data and user metadata that is stored on discarded, lost, or stolen storage devices. Encryption can be enabled and configured only on the Lenovo V3700 V2 XP and Lenovo Storage V5030 enclosures that support encryption. The Lenovo V3700 V2 does not offer encryption functionality. Encryption is a licensed feature that requires a license key to enable it before it can be used.
Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 can use SNMP traps, syslog messages, and Call Home email to notify you and the Lenovo Support Center when significant events are detected. Any combination of these notification methods can be used simultaneously.
You can configure Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 to send different
types of notification to specific recipients and choose the alerts that are important to you.
When you configure Call Home to the Lenovo Support Center, all events are sent through
email only.
You can use the Management Information Base (MIB) file for SNMP to configure a network
management program to receive SNMP messages that are sent by the Lenovo Storage
V3700 V2, V3700 V2 XP, and V5030. This file can be used with SNMP messages from all
versions of Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 software.
To send email, you must configure at least one SMTP server. You can specify as many as five
other SMTP servers for backup purposes. The SMTP server must accept the relaying of
email from the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 clustered system IP
address. You can then use the management GUI or the CLI to configure the email settings,
including contact information and email recipients. Set the reply address to a valid email
address. Send a test email to check that all connections and infrastructure are set up
correctly. You can disable the Call Home function at any time by using the management GUI
or the CLI.
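A minimal CLI sketch for the email and Call Home setup that is described above follows. The SMTP server address, contact details, and recipient address are examples only:
   # Define the SMTP server that relays email from the system
   mkemailserver -ip 10.0.0.25 -port 25
   # Set the reply address and contact information
   chemail -reply storageadmin@example.com -contact "Storage Admin" -primary 5550100 -location "Building 22, first floor"
   # Add a local recipient and send a test email
   mkemailuser -address storageadmin@example.com -usertype local
   testemail -all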
1.9 More information resources
For more information about Lenovo Storage V3700 V2, V3700 V2 XP, and V5030, see the
following web pages:
The Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 home page:
http://systemx.lenovofiles.com/help/topic/com.lenovo.storage.v3700.doc/lenovo_v
series.html
Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 Information Center:
http://systemx.lenovofiles.com/help/topic/com.lenovo.storage.common.nav.doc/ove
rview_storage_vseries.html
The Online Information Center also includes a Learning and Tutorial section where you can
obtain videos that describe the use and implementation of the Lenovo Storage V3700 V2,
V3700 V2 XP, and V5030.
An appropriate 19-inch rack must be available. Depending on the number of enclosures to
install, more than one might be required. Each enclosure measures 2 U. A single Lenovo
Storage V3700 V2 or Lenovo Storage V3700 V2 XP control enclosure supports up to 10
expansion enclosures. A single Lenovo Storage V5030 control enclosure supports up to
20 expansion enclosures.
Redundant power outlets must be in the rack for each of the two power cords that are
required for each enclosure to be installed. Several power outlets are required, depending
on the number of enclosures to be installed. The power cords conform to the IEC320
C13/C14 standards.
A minimum of four Fibre Channel ports that are attached to redundant fabrics are
required. For dual I/O group systems, a minimum of eight Fibre Channel ports are
required.
Fibre Channel ports: Fibre Channel (FC) ports are required only if you are using FC
hosts or clustered systems that are arranged as two I/O groups. You can use the
Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 with Ethernet-only cabling for
Internet Small Computer System Interface (iSCSI) hosts or use serial-attached SCSI
(SAS) cabling for hosts that are directly attached.
For the Lenovo Storage V3700 V2 XP system, up to two hosts can be directly connected
by using SAS ports 2 and 3 on each node canister, with SFF-8644 mini SAS HD cabling.
You must have a minimum of two Ethernet ports on the LAN, with four preferred for more
redundancy or iSCSI host access.
You must have a minimum of two Ethernet cable drops, with four preferred for more
redundancy or iSCSI host access. If you have two I/O groups, you must have a minimum
of four Ethernet cable drops. Ethernet port 1 on each node canister must be connected to
the LAN, with port two as optional.
LAN connectivity: Port 1 on each node canister must be connected to the same
physical local area network (LAN) or be configured in the same virtual LAN (VLAN) and
be on the same subnet or set of subnets.
Technician port: On the Lenovo Storage V3700 V2 and V3700 V2 XP models, Port 2
is the technician port, which is used for system initialization and service. Port 2 must
not be connected to the LAN until the system initialization or service is complete.
The 10 Gb Ethernet (copper) ports of a Lenovo Storage V5030 system require a Category
6A shielded cable that is terminated with an 8P8C modular connector (RJ45 compatible
connector) to function at 10 Gb.
Verify that the default IP addresses that are configured on Ethernet port 1 on each of the
node canisters (192.168.70.121 on node 1 and 192.168.70.122 on node 2) do not conflict
with existing IP addresses on the LAN. The default mask that is used with these IP
addresses is 255.255.255.0, and the default gateway address that is used is
192.168.70.1.
You need a minimum of three IPv4 or IPv6 IP addresses for systems that are arranged as one I/O group and a minimum of five if you have two I/O groups. One is for the clustered system and is used by the administrator for management, and one is needed for each node canister for service access, as required.
A minimum of one and up to eight IPv4 or IPv6 addresses are needed if iSCSI-attached
hosts access volumes from the Lenovo Storage V Series.
At least two 0.6-meter (1.96 feet), 1.5-meter (4.9 feet), or 3-meter (9.8 feet) 12 Gb
mini-SAS cables are required for each expansion enclosure. The length of the cables
depends on the physical rack location of the expansion enclosure relative to the control
enclosures or other expansion enclosures.
Lenovo Storage V5030
The Lenovo Storage V5030 supports up to 20 expansion enclosures per I/O group in two
SAS chains of 10. Up to 40 expansion enclosures can be supported in a two I/O group
configuration. To install the cables, complete the following steps:
1. By using the supplied SAS cables, connect the control enclosure to the first expansion enclosure by using the first chain:
a. Connect SAS port 1 of the left node canister in the control enclosure to SAS port 1 of
the left expansion canister in the first expansion enclosure.
b. Connect SAS port 1 of the right node canister in the control enclosure to SAS port 1 of
the right expansion canister in the first expansion enclosure.
2. To connect a second expansion enclosure, use the supplied SAS cables to connect the control enclosure to the second expansion enclosure by using the second chain:
a. Connect SAS port 2 of the left node canister in the control enclosure to SAS port 1 of
the left expansion canister in the second expansion enclosure.
b. Connect SAS port 2 of the right node canister in the control enclosure to SAS port 1 of
the right expansion canister in the second expansion enclosure.
3. To connect additional expansion enclosures, alternate connecting them between chain
one and chain two to keep the configuration balanced:
a. Connect SAS port 2 of the left canister in the previous expansion enclosure to SAS
port 1 of the left expansion canister in the next expansion enclosure.
b. Connect SAS port 2 of the right canister in the previous expansion enclosure to SAS
port 1 of the right expansion canister in the next expansion enclosure.
4. Repeat the steps until all expansion enclosures are connected.
The Lenovo Storage V3700 V2 and V3700 V2 XP models support a single I/O group only and
can migrate from external storage controllers only. The Lenovo Storage V5030 supports up to
two I/O groups that form a cluster over the FC fabric. The Lenovo Storage V5030 also
supports full virtualization of external storage controllers.
The advised SAN configuration consists of a minimum of two fabrics that encompass all host
ports and any ports on external storage systems that are to be virtualized by the Lenovo
Storage V3700 V2, V3700 V2 XP, and V5030. The Lenovo Storage V3700 V2, V3700 V2 XP,
and V5030 ports must have the same number of cables that are connected, and they must be
evenly split between the two fabrics to provide redundancy if one of the fabrics goes offline
(planned or unplanned).
Zoning must be implemented after the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030,
hosts, and optional external storage systems are connected to the SAN fabrics. To enable the
node canisters to communicate with each other in band, create a zone with only the Lenovo
Storage V3700 V2, V3700 V2 XP, and V5030 WWPNs (two from each node canister) on each
of the two fabrics.
If an external storage system is to be virtualized, create a zone in each fabric with the Lenovo
Storage V3700 V2, V3700 V2 XP, and V5030 worldwide port names (WWPNs) (two from
each node canister) with up to a maximum of eight WWPNs from the external storage
system. Assume that every host has a Fibre Channel connection to each fabric. Create a
zone with the host WWPN and one WWPN from each node canister in the Lenovo Storage
V3700 V2, V3700 V2 XP, and V5030 systems in each fabric.
Important: It is critical that only one initiator host bus adapter (HBA) is in any zone.
For load balancing between the node ports on the Lenovo Storage V3700 V2, V3700 V2 XP,
and V5030, alternate the host Fibre Channel ports between the ports of the Lenovo Storage
V3700 V2, V3700 V2 XP, and V5030.
A maximum of eight paths through the SAN are allowed from each host to the Lenovo
Storage V3700 V2, V3700 V2 XP, and V5030. Hosts where this number is exceeded are not
supported. The restriction limits the number of paths that the multipathing driver must resolve.
With correct zoning in a dual-fabric SAN, a host with only two HBAs does not exceed this limit.
Maximum ports or WWPNs: The Lenovo Storage V3700 V2, V3700 V2 XP, and V5030
support a maximum of 16 ports or WWPNs from a virtualized external storage system.
Figure 2-4 shows how to cable devices to the SAN. Optionally, ports 3 and 4 can be
connected to SAN fabrics to provide additional redundancy and throughput. Refer to this
example as the zoning is described.
Figure 2-4 Lenovo Storage V3700 V2, V3700 V2 XP and V5030 SAN cabling and zoning diagram
Similar zones must be created for all other hosts with volumes on the Lenovo Storage V3700
V2, V3700 V2 XP, and V5030 I/O groups.
Verify the interoperability with which the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030
connects to SAN switches or directors by following the requirements that are provided at
these web pages:
Lenovo Storage V3700 V2 and V3700 V2 XP
http://datacentersupport.lenovo.com/tw/en/products/storage/lenovo-storage/v3700
v2/6535/documentation
Lenovo Storage V5030
http://datacentersupport.lenovo.com/tw/en/products/storage/lenovo-storage/v5030
/6536/documentation
Ensure that the switches or directors are at the firmware levels that are supported by the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030.
Important: The Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 port login maximum
that is listed in the restriction document must not be exceeded. The document is available
at this web page:
http://systemx.lenovofiles.com/help/topic/com.lenovo.storage.v3700.doc/svc_webg
etstartovr_21pax3.html
Connectivity issues: If any connectivity issues occur between the Lenovo Storage V3700
V2, V3700 V2 XP, and V5030 ports and the Brocade SAN switches or directors at 8 Gbps,
see this web page for the correct setting of the fillword port config parameter in the
Brocade operating system:
https://ibm.biz/Bdrb4g
2.3 FC direct-attach planning
The Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 can be used with a direct-attach
Fibre Channel host configuration. The advised configuration for direct attachment is at least
one Fibre Channel cable from the host that is connected to each node of the Lenovo Storage
V3700 V2, V3700 V2 XP, and V5030 to provide redundancy if one of the nodes goes offline,
as shown in Figure 2-5.
Figure 2-5 Lenovo Storage V3700 V2, V3700 V2 XP and V5030 FC direct-attach host configuration
Figure 2-6 Lenovo Storage V5030 FC direct-attach host configuration to I/O groups
Verify direct-attach interoperability with the Lenovo Storage V3700 V2, V3700 V2 XP, and
V5030 and the supported server operating systems by following the requirements that are
provided at these sites:
http://datacentersupport.lenovo.com/tw/en/products/storage/lenovo-storage/v3700v2/
6535/documentation
http://datacentersupport.lenovo.com/tw/en/products/storage/lenovo-storage/v5030/65
36/documentation
2.4 SAS direct-attach planning
The Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 allow SAS host attachment by
using an optional SAS card that must be installed in both node canisters. In addition, the
Lenovo V3700 V2 XP has two onboard SAS ports for host attachment. The SAS expansion
ports cannot be used for host attachment. Figure 2-7, Figure 2-8, and Figure 2-9 show the SAS ports that can be used for host attachment, highlighted in yellow, for each of the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 models.
Inserting cables: You can insert the cables upside down despite the keyway. Ensure that
the blue tag on the SAS connector is underneath when you insert the cables.
Use Ethernet port 1 to access the management graphical user interface (GUI), the service
assistant GUI for the node canister, and iSCSI host attachment. Port 2 can be used for the
management GUI and iSCSI host attachment.
Each node canister in a control enclosure connects over an Ethernet cable from Ethernet
port 1 of the canister to an enabled port on your Ethernet switch or router. Optionally, you can
attach an Ethernet cable from Ethernet port 2 on the canister to your Ethernet network.
Configuring IP addresses: No issue exists with the configuration of multiple IPv4 or IPv6
addresses on an Ethernet port or with the use of the same Ethernet port for management
and iSCSI access.
However, you cannot use the same IP address for management and iSCSI host use.
Table 2-1 shows possible IP configuration options of the Ethernet ports on the Lenovo
Storage V3700 V2,V3700 V2 XP, and V5030 systems.
Table 2-1   Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 IP address configuration options per node canister
Management node canister 1: IPv4/6 management address on Ethernet port 1; IPv4/6 management address on Ethernet port 2
Partner node canister 2: IPv4/6 service address on Ethernet port 1; IPv4/6 iSCSI address on Ethernet port 2
Technician port: On the Lenovo Storage V3700 V2 and V3700 V2 XP models, port 2
serves as the technician port, which is used for system initialization and service. Port 2
must not be connected to the LAN until the system initialization or service is complete.
2.5.1 Management IP address considerations
Because Ethernet port 1 from each node canister must connect to the LAN, a single
management IP address for the clustered system is configured as part of the initial setup of
the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 systems.
The management IP address is associated with one of the node canisters in the clustered
system and that node then becomes the configuration node. If this node goes offline (planned
or unplanned), the management IP address fails over to the other node’s Ethernet port 1.
For more clustered system management redundancy, you need to connect Ethernet port 2 on
each of the node canisters to the LAN, which allows for a backup management IP address to
be configured for access, if necessary.
Figure 2-10 shows a logical view of the Ethernet ports that are available for the configuration
of the one or two management IP addresses. These IP addresses are for the clustered
system and associated with only one node, which is then considered the configuration node.
If two connections per host are used, multipath software is also required on the host. If an
iSCSI host is deployed, it also requires multipath software. All node canisters must be
configured and connected to the network so that any iSCSI hosts see at least two paths to
volumes. Multipath software is required to manage these paths.
Various operating systems are supported by the Lenovo Storage V3700 V2, V3700 V2 XP,
and V5030. For more information about various configurations supported, check the Lenovo
interoperability matrix web page at the following address:
https://datacentersupport.lenovo.com/tw/en/products/storage/lenovo-storage/v3700v2
/6535/documentation
https://datacentersupport.lenovo.com/tw/en/products/storage/lenovo-storage/v5030/6
536/documentation
2.7 Miscellaneous configuration planning
During the initial setup of the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 systems,
the installation wizard asks for various information that needs to be available during the
installation process. Several of these fields are mandatory to complete the initial
configuration.
Collect the information in the following checklist before the initial setup is performed. The date
and time can be manually entered, but to keep the clock synchronized, use a Network Time
Protocol (NTP) service:
Document the LAN NTP server IP address that is used for the synchronization of devices.
To send alerts to storage administrators and to set up Call Home to Lenovo for service and
support, you need the following information:
Name of the primary storage administrator for Lenovo to contact, if necessary.
Email address of the storage administrator for Lenovo to contact, if necessary.
Phone number of the storage administrator for Lenovo to contact, if necessary.
Physical location of the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 systems
for Lenovo service (for example, Building 22, first floor).
Simple Mail Transfer Protocol (SMTP) or email server address to direct alerts to and
from the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030.
For the Call Home service to work, the Lenovo Storage V3700 V2, V3700 V2 XP, and
V5030 systems must have access to an SMTP server on the LAN that can forward
emails to the default Lenovo service address.
Email address of local administrators that must be notified of alerts.
IP address of Simple Network Management Protocol (SNMP) server to direct alerts to,
if required (for example, operations or Help desk).
After the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 initial configuration, you might
want to add more users who can manage the system. You can create as many users as you
need, but the following roles generally are configured for users:
Security Admin
Administrator
CopyOperator
Service
Monitor
The user in the Security Admin role can perform any function on the Lenovo Storage V3700
V2, V3700 V2 XP, and V5030.
The user in the Administrator role can perform any function on the Lenovo Storage V3700 V2,
V3700 V2 XP, and V5030 systems, except manage users.
User creation: The Security Admin role is the only role that has the create users function.
Limit this role to as few users as possible.
The user in the CopyOperator role can view anything in the system, but the user can
configure and manage only the copy functions of the FlashCopy capabilities.
The user in the Monitor role can view object and system configuration information but cannot
configure, manage, or modify any system resource.
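For example, additional users can be created from the CLI with the appropriate role (user group). The user names and password that follow are examples only:
   # Create an administrator who can perform any function except manage users
   mkuser -name admin01 -usergrp Administrator -password Passw0rd
   # Create a monitoring-only user
   mkuser -name monitor01 -usergrp Monitor -password Passw0rd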
The management GUI allows for troubleshooting and management tasks, such as checking the status of the storage server components, updating the firmware, and managing the storage server.
The GUI also offers advanced functions, such as FlashCopy, Volume Mirroring, Remote Mirroring, and Easy Tier. A command-line interface (CLI) for the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 systems is also available.
This section describes system management by using the GUI and CLI.
Supported web browsers: Follow this link to find more information about supported
browsers and to check the latest supported levels:
http://systemx.lenovofiles.com/help/topic/com.lenovo.storage.v3700.doc/svc_webb
rowserreqmts_3rdhu7.html
Complete the following steps to open the management GUI from any web browser:
1. Browse to one of the following locations:
– http(s)://host name of your cluster/
– http(s)://cluster IP address of your cluster/
(An example is https://192.168.70.120.)
2. Use the password that you created during system setup to authenticate with the superuser
or any additional accounts that you created. The default user name and password for the
management GUI is shown:
– User name: superuser
– Password: passw0rd
Note: The 0 character in the password is the number zero, not the letter O.
For more information, see Chapter 3, “Graphical user interface overview” on page 75.
After you complete the initial configuration that is described in 2.10, “Initial configuration” on
page 56, the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 Systems overview window
opens, as shown in Figure 2-12.
You can set up the initial Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 systems by
using the process and tools that are described in 2.9, “First-time setup” on page 52.
Before you set up the initial Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 systems,
ensure that the system is powered on.
Power on: See the following information to check the power status of the system:
http://systemx.lenovofiles.com/help/topic/com.lenovo.storage.v3700.doc/v3700_sy
stem_leds.html
Set up the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 systems by using the
technician Ethernet port:
1. Configure an Ethernet port on the personal computer to enable the Dynamic Host
Configuration Protocol (DHCP) configuration of its IP address and Domain Name System
(DNS) settings.
If you do not use DHCP, you must manually configure the personal computer. Specify the
static IPv4 address 192.168.0.2, subnet mask 255.255.255.0, gateway 192.168.0.1, and
DNS 192.168.0.1.
2. Locate the Ethernet port that is labeled T on the rear of the node canister.
On the Lenovo Storage V3700 V2 and V3700 V2 XP systems, the second Ethernet port is also used as the technician port, as shown in Figure 2-13 and Figure 2-14.
The Lenovo Storage V5030 system uses a dedicated technician port, which is shown in
Figure 2-15.
3. Connect an Ethernet cable between the port of the personal computer that is configured in
step 2 and the technician port. After the connection is made, the system automatically
5. If you experience a problem when you try to connect due to a change in system states,
wait 5 - 10 seconds and try again.
6. Click Next, as shown in Figure 2-17.
7. Choose the first option to set up the node as a new system and click Next to continue to
the window that is shown in Figure 2-18.
8. Complete all of the fields with the networking details for managing the system and click
Next. When the task completes, as shown in Figure 2-19, click Close.
Note: The Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 GUI show the CLI as
you go through the configuration steps.
9. The system takes approximately 10 minutes to reboot and reconfigure the Web Server as
shown in Figure 2-20 on page 56. After this time, click Next to proceed to the final step.
10.After you complete the initialization process, disconnect the cable between the personal
computer and the technician port as instructed in Figure 2-21. Reestablish the connection
to the customer network and click Next to be redirected to the management address that
you provided to configure the system initially.
If you completed the initial setup, that wizard automatically redirects you to the Lenovo
Storage V3700 V2, V3700 V2 XP, and V5030 GUI. Otherwise, complete the following steps to
complete the initial configuration process:
1. Start the service configuration wizard by using a web browser on a workstation and point it
to the system management IP address that was defined in Figure 2-18 on page 55.
2. Enter a new secure password twice for the superuser user as shown in Figure 2-22 and
click Log in.
3. Verify the prerequisites in the Welcome window as shown in Figure 2-23 and click Next.
4. Accept the license agreement after reading it carefully as shown in Figure 2-24 on
page 58 and click Next.
5. Change the password for superuser from the default as shown in Figure 2-25, then click
Apply and Next.
6. You will see a message that the password was successfully changed, as shown in Figure 2-26.
7. In the System Name window, enter the system name as shown in Figure 2-27 and click
Apply and Next .
Note: Use the chsystem command to modify the attributes of the clustered system. This
command can be used any time after a system is created.
8. In the next window, the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 GUI provide
help and guidance about additional licenses that are required for certain system functions.
A license must be purchased for each enclosure that is attached to, or externally managed
by, the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030. For each of the functions,
enter the number of enclosures as shown in Figure 2-28 and click Apply and Next.
Encryption license: The encryption feature that is available on the Lenovo Storage V3700 V2 XP and Lenovo Storage V5030 systems uses a special licensing system that differs from the licensing system for the other features. Encryption requires a license key that can be activated in step 10.
9. Two options are available for configuring the date and time. Select the required method
and enter the date and time manually or specify a network address for a Network Time
Protocol (NTP) server. After this selection, the Apply and Next option becomes active, as
shown in Figure 2-29. Click Apply and Next.
10.If you purchased an Encryption License for a Lenovo V3700 V2 XP or Lenovo Storage
V5030 system, select Yes as shown in Figure 2-30. One license is required for each
control enclosure. Therefore, in a V5030 configuration with two I/O groups, two license
keys are required.
11. The easiest way to activate the encryption license is to highlight each enclosure that you
want to activate the license for and choose Actions → Activate License Automatically
and enter the authorization code that came with the purchase agreement for encryption.
This action retrieves and applies a license key from ibm.com as shown in Figure 2-31.
12.If automatic activation cannot be performed, for example, if the Lenovo Storage V3700 V2,
V3700 V2 XP, and V5030 systems are behind a firewall that prevents it from accessing the
internet, choose Actions → Activate License Manually. Follow these steps:
a. Go to this web page:
https://www.ibm.com/storage/dsfa
b. Select Storwize. Enter the machine type (6535 or 6536), serial number, and machine
signature of the system. You can obtain this information by clicking Need Help.
c. Enter the authorization codes that were sent with your purchase agreement for the
encryption function.
14.After entering the system location, click Next to set up the contact person for the system
as shown in Figure 2-33, then click Apply and Next.
15.You can configure your system to send email reports to field support if an issue is detected
that requires hardware replacement. This function is called Call Home. When this email is
received, field support automatically opens a problem report and contacts you to verify
whether replacements parts are required.
Call Home: When Call Home is configured, the Lenovo Storage V3700 V2, V3700 V2
XP, and V5030 automatically create a Support Contact with one of the following email
addresses, depending on the country or region of installation:
US, Canada, Latin America, and Caribbean Islands: callhome1@de.ibm.com
All other countries or regions: callhome0@de.ibm.com
The Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 can use Simple Network
Management Protocol (SNMP) traps, syslog messages, and Call Home email to notify you
and the Field Support Center when significant events are detected. Any combination of
these notification methods can be used simultaneously.
To set up Call Home, you need the location details of the Lenovo Storage V3700 V2,
V3700 V2 XP, and V5030, Storage Administrator details, and at least one valid SMTP
server IP address as shown in Figure 2-34.
Note: If you do not want to configure Call Home now, you can defer it using the
check-box in the GUI and come back to it later via Settings → Notifications.
In our setup, we chose to set up the support assistance later as it is covered extensively in
Chapter 12, “RAS, monitoring, and troubleshooting” on page 625.
17.The Summary window for the contact details, system location, email server, Call Home,
and email notification options is shown in Figure 2-36 on page 65.
Figure 2-36 Setup wizard: Summary
18.After you click Finish, the web browser is redirected to the landing page of management
GUI as shown in Figure 2-37.
Note: Adding a second I/O group (by adding a second control enclosure) is supported only on the Lenovo Storage V5030 model.
After the hardware is installed, cabled, zoned, and powered on, a second control enclosure is
visible from the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 GUI, as shown in
Figure 2-38.
Complete the following steps to use the management GUI to configure the new enclosure:
1. In the main window, click Actions in the upper-left corner and select Add Enclosures.
Alternatively, you can click the available control enclosure as shown in Figure 2-39.
2. If the control enclosure is configured correctly, the new control enclosure is identified in the
next window, as shown in Figure 2-40.
5. When the new enclosure is added, the storage that is provided by the internal drives is
available to use as shown in Figure 2-42.
6. After the wizard completes the addition of the new control enclosure, the Lenovo Storage
V3700 V2, V3700 V2 XP, and V5030 show the management GUI that contains two I/O
groups, as shown in Figure 2-43.
Figure 2-43 Lenovo Storage V Series GUI with two I/O groups
3. Select the expansion enclosure and click Actions → Identify to turn on the identify LEDs
of the new enclosure, if required. Otherwise, click Next.
4. The new expansion enclosure is added to the system as shown in Figure 2-46. Click
Finish to complete the operation.
5. After the expansion enclosure is added, the Lenovo Storage V3700 V2, V3700 V2 XP, and
V5030 show the management GUI that contains two enclosures, as shown in Figure 2-47.
Figure 2-47 Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 GUI with two enclosures in a
single I/O group
The management IP and service IP addresses can be changed within the GUI as shown in
Chapter 3, “Graphical user interface overview” on page 75.
IBM Service Assistant (SA) Tool is a web-based GUI that is used to service individual node
canisters, primarily when a node has a fault and it is in a service state. A node cannot be
active as part of a clustered system while the node is in a service state. The SA Tool is
available even when the management GUI is not accessible. The following information and
tasks are included:
Status information about the connections and the node canister
Basic configuration information, such as configuring IP addresses
Service tasks, such as restarting the Common Information Model object manager
(CIMOM) and updating the worldwide node name (WWNN)
Details about node error codes and hints about how to fix the node error
Important: Service Assistant Tool can be accessed by using the superuser account only.
You must access Service Assistant Tool under the direction of Lenovo Support only.
The Service Assistant GUI is available by using a service assistant IP address on each node. The SA GUI is accessed through the cluster IP addresses by appending service to the cluster management URL. If the system is down, the only other method of communicating with the node canisters is through the SA IP address directly. Each node can have a single service IP address, which is configured on Ethernet port 1.
To open the SA GUI, enter one of the following URLs into any web browser:
http(s)://cluster IP address of your cluster/service
http(s)://service IP address of a node/service
When you access SA by using the <cluster address>/service, the configuration node
canister SA GUI login window opens, as shown in Figure 2-48.
The SA interface can be used to view status and run service actions on other nodes, in addition to the node where the user is connected.
After you are logged in, you see the Service Assistant Tool Home window, as shown in
Figure 2-49.
The current canister node is displayed in the upper-left corner of the GUI. As shown in
Figure 2-49, the current canister node is node 1. To change the canister, select the relevant
node in the Change Node section of the window. You see that the details in the upper-left
corner change to reflect the new canister.
The SA GUI provides access to service procedures and shows the status of the node
canisters. We advise that you perform these procedures only if you are directed to use them
by IBM Support.
For more information about how to use the SA Tool, see this web page:
https://ibm.biz/BdjSJq
JavaScript: You must enable JavaScript in your browser. For Mozilla Firefox, JavaScript is
enabled by default and requires no additional configuration. For more information about
configuring your web browser, go to this web page:
https://ibm.biz/BdjS9Z
We suggest that each user has an account that is not shared with anyone else. The default user accounts should either be disabled for remote access, or their passwords should be changed from the defaults and be known only to the system owner or kept secured for emergency purposes only. This approach helps to identify the personnel who work on the device and to track all of the important changes in the systems. The superuser account must be used for initial configuration only.
2. After a successful login, the System pane displays the Dashboard with all relevant details of your system, as shown in Figure 3-2.
The System pane is an important user interface. In the remaining chapters, we do not explain
how to access it each time.
The System pane consists of three main areas: the dynamic view, the system menu, and the performance meter.
Figure 3-4 Settings menu
The middle of the window shows a component model of the existing configuration. Hovering the mouse cursor over a component or one of its parts highlights that part and displays a pop-up description that identifies important parameters and functions of this element. To see the rear of a component, you can dynamically rotate it, as in a typical 360° view. Right-clicking a component or one of its parts opens a menu with actions, which are normally available from the dynamic menu on the left or from the Actions button in the upper-left corner. Figure 3-5 shows the component model.
The bottom of the window shows the performance indicator. It tells you how your machine is performing right now; the information covers only the external workload for attached hosts. Figure 3-6 shows the performance indicator. Internal performance is shown on the System statistics page, as in Figure 3-7 on page 80.
The upper-right area provides more information about the health of your system through the Events button, as shown in Figure 3-8.
Figure 3-9 Suggested Tasks
The bottom of the window shows five status indicators. Clicking any of them provides more detailed information about the existing configuration, situation, or status of the solution. Click any of these function icons to expand or minimize them as required, or to switch between different types of information, for example, virtual or allocated storage. In an error or warning situation, these indicators are extended by the status alerts icon in the upper-right corner. See Figure 3-8 on page 80.
3.1.3 Navigation
Navigating in the management tool is simple. Click one of the eight function icons to display a submenu of options, move the cursor to an option, and select it. Figure 3-11 on page 82 shows how to access, for example, the Pools option.
Similarly, if you want to select multiple items that are not in sequential order, click the first
item, press and hold the Ctrl key, and click the other items that you need (Figure 3-13).
Another option for selecting volumes is selecting by mask. To select all volumes that have
“V5000” in their name, click Filter, type V5000, and press Enter. All volumes with “V5000” in
their name display. After you filter the volumes, you can easily select all of the displayed
volumes or a subset by using the Ctrl key or Shift key technique that was explained
previously (Figure 3-14).
Help
Another useful interface feature is the integrated help function. You can access help for
certain fields and objects by moving the mouse cursor over the question mark (?) icon
(Figure 3-16) next to the field. Panel-specific help is available by clicking Need Help or by
using the Help link in the upper-right corner of the GUI.
3.2 Overview pane
The welcome pane of the GUI changed from the well-known former Overview pane to the
new System pane, as shown in Figure 3-17. Clicking Overview in the upper-right corner of
the System pane opens a modified Overview pane with similar functionality as in previous
versions of the software.
See 3.1.2, “System pane layout” on page 77 to understand the structure of the pane and how to navigate to various system components and manage them more efficiently and quickly.
The option that was known as System Details is integrated into the device overview on the
general System pane, which is available after login or when you click System from the
Monitoring menu. The details are shown in 3.3.2, “System details” on page 89.
3.3.1 System overview
The System option on the Monitoring menu provides a general overview of your Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 systems, including a depiction of all devices in the rack that are directly connected to it. See Figure 3-19.
When you hover a mouse pointer over a specific component in an enclosure, a pop-up
window indicates the details of disk drives in the unit. See Figure 3-20 for the details of Drive
0 in an enclosure.
By right-clicking and selecting Properties, you see detailed technical parameters, such as capacity, interface speed, rotation speed, and the drive status (online or offline). Click View more details on the Properties frame, as shown in Figure 3-21.
In an environment with multiple Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 systems, you can easily direct onsite personnel or a technician to the correct device by enabling the identification LED on the front panel. First, right-click the enclosure or drive that you want to identify. Then, click Identify in the pop-up window that is shown in Figure 3-23 and wait for confirmation from the technician that the device in the data center was identified correctly.
Alternatively, you can use the command-line interface (CLI) to obtain the same results. Type
the following sequence of commands:
svctask chenclosure -identify yes 1 (or just chenclosure -identify yes 1)
svctask chenclosure -identify no 1 (or just chenclosure -identify no 1)
You can use the same CLI to obtain results for a specific controller or drive.
Each system that is shown in the dynamic system view in the middle of the System pane can be rotated by 180° to see its rear side. Click the rotation arrow in the lower-right corner of the device, as illustrated in Figure 3-24. In case of a malfunction, the arrow appears in red and the affected areas are marked in yellow. Hover over a marked area with your mouse to get more detailed information.
The output is shown in Figure 3-26 on page 90. By using this menu, you can also power off
the machine.
Additionally, from the System pane, you can get an overview (View) of the hardware, the available ports and status for Fibre Channel (FC) and serial-attached SCSI (SAS) ports, and the drives (Figure 3-27).
By selecting, for example, Fibre Channel Ports, you can see the list and status of the available FC ports with their speed and worldwide port names (WWPNs), as shown in Figure 3-28.
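The same port information can also be listed from the CLI. The following lines are a hedged sketch; the exact output columns depend on the firmware level:
lsportfc (list all FC ports with their status, speed, and WWPN)
lsportfc 1 (show the detailed view of a single port ID)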
3.3.3 Events
The Events option, which is selected from the Monitoring menu (Figure 3-18 on page 86), tracks all informational, warning, and error messages that occur in the system. You can apply various filters to sort the entries or export them to an external comma-separated values (CSV) file that is created from the information in the Events list. Figure 3-29 shows the display after you click Events from the Monitoring menu.
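The event log can also be read from the CLI. A hedged sketch; the parameters shown here are illustrative:
lseventlog (list recent events)
lseventlog -order severity (list events ordered by severity)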
3.3.4 Performance
The Performance pane reports the general system statistics that relate to processor (CPU)
use, host and internal interfaces, volumes, and MDisks. You can switch between MBps or
IOPS, or even navigate to the statistics at the node level. The Performance pane might be
useful when you compare the performance of each node in the system if problems exist after
a node failover occurs. See Figure 3-30.
Figure 3-30 Performance statistics of the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 systems
The performance statistics in the GUI show, by default, the last 5 minutes of data. To see the
details of each sample, click the graph and select the time stamp, as shown in Figure 3-31.
As mentioned before, the previous charts represent 5 minutes of the data stream. For
in-depth storage monitoring and performance statistics of your system with historical data,
use the IBM SmartCloud Virtual Storage Center.
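Recent performance samples are also available from the CLI. A hedged sketch, assuming the statistic name cpu_pc exists at your firmware level:
lssystemstats (show the most recent sample of each system-level statistic)
lssystemstats -history cpu_pc (show the recent history of the processor utilization statistic)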
3.3.5 Background Task
Use the Background Tasks page to view and manage current tasks that are running on the
system. The Background Tasks page displays all long running tasks that are currently in
progress on the system. Tasks, such as volume synchronization, array initialization, and
volume formatting, can take some time to complete. The Background Tasks page displays
these tasks and their progress. After the task completes, the task is automatically deleted
from the display. If a task fails with an error, select Monitoring → Events to determine the
problem. Figure 3-32 shows an example of a FlashCopy operation.
* The total capacity values assume that all of the storage pools in the system use the same extent size.
Volumes are created from the extents that are available in the pool. You can add MDisks to a
storage pool at any time, either to increase the number of extents that are available for new
volume copies or to expand existing volume copies.
Place the cursor over the Pools function icon and click to display the Pools menu options
(Figure 3-33).
External Storage Shows all pools and their volumes that are created from the systems
that connect to the Lenovo Storage V3700 V2, V3700 V2 XP, and
V5030 externally and that are integrated in the system repository. It
does not show any internal pools or volumes. This type of storage is
also called external virtualization.
MDisks by Pools Provides the list of all managed disks (MDisks) that are either
internally or externally connected and associated with one of the
defined pools. It also lists all unassigned MDisks separately.
System Migration Offers the migration wizard to import data from image-mode MDisks to a specific pool. It is useful when you migrate data from old external storage to the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 without disruption to the hosts.
3.4.1 Pools
If you plan to add storage to an existing pool, use the main Pools view. Right-click an existing
pool (or create a pool in advance and add the storage) and select Add Storage as shown in
Figure 3-34.
In the next window, you can select either Internal or Internal Custom. Internal Custom gives you the chance to select the RAID type, the drive class, and the use of spare drives. See Figure 3-35 on page 96.
Figure 3-36 shows the window if you select Internal. Choose the default (Quick) settings to
create a new MDisk for the pool.
Figure 3-37 on page 97 shows the window that is displayed if you select the Internal Custom storage panel.
Figure 3-37 Internal Custom storage selection
You can choose from the available drive classes (depending on the installed drives) and
RAID sets.
A number of administration tasks benefit from being able to define and work with a part of a
pool. For example, the system supports VMware vSphere Virtual Volumes, sometimes
referred to as VVols, that are used in VMware vCenter and VASA applications. Before a child
pool can be used for Virtual Volumes for these applications, the system must be enabled for
Virtual Volumes.
A child pool is an object that is similar to a storage pool, and a child pool can be used
interchangeably with a storage pool. A child pool supports volume copy and migration.
However, limitations and restrictions exist for child pools:
The maximum capacity cannot exceed the parent pool’s size.
The capacity can be allocated at creation (thick) or flexible (thin).
You must always specify the parent storage pool. The child pool does not own any
MDisks.
Child pools can also be created by using the GUI.
The maximum number of child pools in one parent pool is 127.
You are restricted to migrating image-mode volumes to a child pool.
Volume extents cannot be migrated out of the child pool.
You cannot shrink the capacity of a child pool below its real (used) capacity.
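A child pool can also be created from the CLI with the mkmdiskgrp command by specifying the parent pool and a capacity. The following line is a hedged sketch; the pool names and size are hypothetical:
mkmdiskgrp -name ChildPool0 -parentmdiskgrp Pool0 -size 100 -unit gb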
You can view the list of child pools from the Pools menu option by clicking the plus sign (+) of
a parent pool, as shown in Figure 3-41.
In addition, you can choose a different icon (Figure 3-42 on page 100) that represents this
pool.
To change the icon, use the pen icon, as shown in Figure 3-43.
When the pools are defined and the volumes are assigned, the pool shows one of the
following operational states:
Online The storage pool is online and available. All of the MDisks in the
storage pool are available.
Degraded path One or more nodes in the clustered system cannot access all of the
MDisks in the pool. A degraded path state is most likely the result of
the incorrect configuration of either the storage system or the FC
fabric. However, hardware failures in the storage system, FC fabric, or
node can also be a contributing factor to this state.
Degraded ports One or more 1220 errors were logged against the MDisks in the
storage pool. The 1220 error indicates that the remote FC port was
excluded from the MDisk. This error might cause reduced
performance on the storage system and usually indicates a hardware
problem with the storage system.
To fix this problem, you must resolve any hardware problems on the
storage system and fix the 1220 errors in the event log. To resolve
these errors in the log, select Troubleshooting → Recommended
Actions in the management GUI.
This action displays a list of unfixed errors that are in the event log. For
these unfixed errors, select the error name to begin a guided
maintenance procedure to resolve the errors. Errors are listed in
descending order with the highest priority error listed first. Resolve the
highest priority errors first.
Offline The storage pool is offline and unavailable. No nodes in the system
can access the MDisks. The most likely cause is that one or more
MDisks are offline or excluded.
Important: In this view, volumes from child pools are shown the same way as volumes
from standard pools. The relationships between the child and parent pools are not visible.
Click Actions in the table header, or right-click a specific drive, to take the drive offline, show its properties, update the firmware of a single drive (or of all drives under Actions), or mark the drive as Unused, Candidate, or Spare.
If you search for external storage and you are not in the replication layer, the GUI displays the warning that is shown in Figure 3-45 on page 102.
Clicking the External Storage option opens the window that is shown in Figure 3-46. It provides the list of all externally connected disk systems (both storage area network (SAN)-attached and iSCSI-attached) to the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030. The system also supports iSCSI connections to systems that are used as external storage. Unlike Fibre Channel connections, you need to manually configure the connections between the source system and these target external storage systems. Direct attachment between the system and external storage systems is not supported; Ethernet switches are required between the system and the external storage. To avoid a single point of failure, a dual-switch configuration is recommended. For full redundancy, a minimum of two paths between each initiator node and target node must be configured, with each path on a separate switch. In addition, extra paths can be configured to increase throughput if both initiator and target nodes support more ports. The system supports a maximum of four paths per node.
When the new external storage system is zoned correctly to the Lenovo Storage V3700 V2,
V3700 V2 XP, and V5030, run the Discover storage procedure either from the Actions menu
in the table header or by right-clicking any of the existing MDisks in the list (Figure 3-46).
A new storage controller (external storage system) is listed automatically when the SAN
zoning is configured, but typically without detecting disk drives (Figure 3-47 on page 103).
Figure 3-47 Automatically detected new external storage system
By right-clicking a newly detected storage system, you can rename the controller’s default
name, in our case, the IBM System Storage FlashSystem 840, to reflect the real type of the
storage device. We suggest that you use a simple naming convention, which in our case is
FlashSystem 840 (Figure 3-49).
After the new external storage system is named correctly, detect all disks that are configured
on that external storage, in our case, System Storage FlashSystem 840. You can also
discover storage from the CLI by using the svctask detectmdisk command (or simply detectmdisk).
Figure 3-50 on page 104 shows details about detected managed disks.
All newly discovered disks initially appear in unmanaged mode. You must assign them to a specific pool before you can use them.
Important: The MDisks are not physical disk drives, but storage arrays that are configured
on external systems.
If you add a managed disk that contains existing data to a managed disk group, you lose
the data that it contains. The image mode is the only mode that preserves its data.
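From the CLI, an unmanaged MDisk can be assigned to an existing pool with the addmdisk command. A hedged sketch; the MDisk and pool names are hypothetical:
svctask addmdisk -mdisk mdisk5 Pool0 (or just addmdisk -mdisk mdisk5 Pool0)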
Figure 3-51 List of managed disks that are sorted within pools
All disks that are not yet assigned to any pool are listed in the Unassigned MDisks section.
This section is always at the top of the list, even if you sort the list by pool name (clicking the
Name header of the table). Right-click a specific disk to open a window where you can assign
selected unmanaged disks to the pool.
From the same pane, you can define a new storage pool by clicking Create Pool in the
upper-left corner of the table (highlighted in Figure 3-51). The wizard window opens and you
need to specify pool parameters, such as Pool Name, Extent Size, and Warning Threshold.
You can directly select Unmanaged MDisks that you want to include in the pool, or skip this
task and add MDisks later.
Note: All sort functions in the header of the table apply to MDisks within pools. You cannot
sort volumes based on specific criteria across all pools.
To migrate existing data, use the storage migration wizard to guide you through the
procedure. This wizard is available by selecting Pools → System Migration as shown in
Figure 3-52 on page 106.
The migration of external volumes to the Lenovo Storage V3700 V2, V3700 V2 XP, and
V5030 systems is one of the key benefits and features of external storage virtualization that
are provided by this product. Therefore, we dedicate a whole chapter to this topic. See
Chapter 7, “Storage migration” on page 323 for detailed steps of the migration process.
Administrators can migrate data from the external storage system to the system by using iSCSI connections, serial-attached SCSI connections, or Fibre Channel or Fibre Channel over Ethernet connections. To use Fibre Channel connections, the system must
have the optional Fibre Channel host interface adapter installed.
Note: Before migrating storage, ensure that all host operations are stopped, all the
appropriate changes are made to the environment based on the connection type, and the
storage that is being migrated is configured to use the device.
At any time, you can pause the running migration processes or create a new one. No license
for External Virtualization is required to migrate from old storage to your new Lenovo Storage
V3700 V2, V3700 V2 XP, and V5030.
HyperSwap HyperSwap volumes create copies of a volume at two separate sites for systems that are configured with a HyperSwap topology. Data that is written to a HyperSwap volume is automatically sent to both copies so that either site can provide access to the volume if the other site becomes unavailable. HyperSwap volumes are supported on Lenovo storage systems (for example, the Lenovo Storage V5030 system) that contain more than one I/O group.
Custom Custom volumes are created based on user-defined customization rather than taking the standard default settings for each of the options under quick volume creation.
Thin-provisioned When you create a volume, you can designate it as thin-provisioned. A
thin-provisioned volume has a virtual capacity and a real capacity.
Virtual capacity is the volume storage capacity that is available to a
host. Real capacity is the storage capacity that is allocated to a
volume copy from a storage pool.
In a fully allocated volume, the virtual capacity and real capacity are
the same. In a thin-provisioned volume, the virtual capacity can be
much larger than the real capacity.
Compressed This volume is a special type of volume where data is compressed and
thin-provisioned at the same time. Any compressed volume is a
thin-provisioned volume by default, and no option is available to
change this characteristic. Data within the compressed volume is
compressed as it is written to disk. This design saves additional space
on the storage drive so that you can store more data within the same
storage system.
Change volumes Change volumes are used in Global Mirror relationships where the cycling mode is set to Multiple. Change volumes can also be used between HyperSwap volume copies, and in other relationship types, to automatically maintain a consistent image of a secondary volume when a relationship is being resynchronized. Change volumes create periodic point-in-time copies of the source volumes and replicate them to the secondary site. Using change volumes lowers bandwidth requirements by only addressing the average throughput and not the peak.
Each volume copy is created from a set of extents in a storage pool. By using volume
mirroring, a volume can have two physical copies. Each volume copy can belong to a
different storage pool, and each copy has the same virtual capacity as the volume. In the
management GUI, an asterisk (*) indicates the primary copy of the mirrored volume. The
primary copy indicates the preferred volume for read requests.
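A second copy can also be added to an existing volume from the CLI with the addvdiskcopy command. A hedged sketch; the volume name and target pool are hypothetical:
svctask addvdiskcopy -mdiskgrp Pool1 vol01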
Select the Volumes function icon to display the Volumes menu options (Figure 3-53 on
page 108).
The wizard opens and the list of volume options is displayed (Figure 3-55 on page 109).
Figure 3-55 Create Volumes wizard
The description of each type of volume and the procedures for how to effectively create these
volumes are described in Chapter 6, “Volume configuration” on page 269.
In addition to the volume creation, other direct volume functions are available:
Mapping and unmapping volumes to hosts
Renaming, shrinking, or expanding existing volumes
Modify Mirror Synchronisation Rate
Space savings → Estimate compression savings
Migrating to a different pool
Defining a volume copy
All of these tasks are available when you select a specific volume and click Actions
(Figure 3-56 on page 110). Not all options are shown.
When you move a volume to another I/O group (a different Lenovo Storage V5030 system),
be sure that the correct host zoning is in place. The target host must have access to both
systems: source and target. This function is only available on the Lenovo Storage V5030.
Figure 3-57 Listing volumes by host
3.6.1 Hosts
This option provides an overview about all hosts that are connected (zoned) to the system,
detected, and configured to be ready for storage allocation. This overview shows the
following information about the hosts:
The name of the host as defined in the management GUI
The type of the host
Its access status
The number of ports that is used for host mapping
Whether host mapping is active or not
From the same pane, you can create a new host, rename a host, delete a host, or modify a
host mapping. The output of the menu selection is shown in Figure 3-59.
For example, when you click Add Host in a pane header, a wizard opens where you define
either a Fibre Channel host or an iSCSI host (Figure 3-60).
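Hosts can also be defined from the CLI with the mkhost command. The following lines are a hedged sketch; the host names, WWPN, and iSCSI qualified name are hypothetical:
svctask mkhost -name ESX01 -fcwwpn 2100000E1E30B0A8 (Fibre Channel host)
svctask mkhost -name LINUX01 -iscsiname iqn.1994-05.com.redhat:linux01 (iSCSI host)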
To rename multiple hosts in a single step, mark all hosts that you want by using the Ctrl or
Shift key, right-click, and then from the opened menu, select Rename. The window that is
shown in Figure 3-61 opens.
Many of the actions that are described are available from different menus. For example, you
can select Volumes and its option Volumes by Hosts, where you can also rename hosts.
This flexibility is one of the advantages of the enhanced, redesigned management GUI.
The systems use internal protocols to manage access to the volumes and ensure consistency
of the data. Host objects that represent hosts can be grouped in a host cluster and share
access to volumes. New volumes can also be mapped to a host cluster, which simultaneously
maps that volume to all hosts that are defined in the host cluster. Each host cluster is
identified by a unique name and ID, the names of the individual host objects within the cluster,
and the status of the cluster. A host cluster can contain up to 128 hosts. However, a host can
be a member of only one host cluster. The management GUI displays the status of each host
cluster. A host cluster can have one of the following statuses:
Online All hosts in the host cluster are online.
Host degraded All hosts in the host cluster are either online or degraded.
Host cluster degraded At least one host is offline and at least one host is either online or
degraded.
Offline All hosts in the host cluster are offline (or the host cluster does not contain any
hosts).
By default, hosts within a host cluster inherit all shared volume mappings from that host cluster, as if those volumes were mapped to each host in the host cluster individually. Hosts in a host cluster can also have their own private volume mappings that are not shared with other hosts in the host cluster.
With shared mapping, volumes are mapped on a host cluster basis. The volumes are shared by all of the hosts in the host cluster, if there are no Small Computer System Interface (SCSI) LUN conflicts among the hosts. Volumes that contain data that is needed by other hosts are examples of a shared mapping. If a SCSI LUN conflict occurs, a shared mapping is not created. SCSI LUN conflicts can occur if multiple volumes are mapped with the same SCSI LUN ID or if the same volume is mapped to multiple SCSI LUN IDs. The system does not allow a volume to be mapped more than once to the same host.
With private mapping, individual volumes are directly mapped to individual hosts. These volumes are not shared with any other hosts in the host cluster. A host can maintain the private mapping of some volumes and share other volumes with hosts in the host cluster. The SAN boot volume for a host would typically be a private mapping.
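Host clusters can also be managed from the CLI, assuming your firmware level includes the host cluster commands. The following lines are a hedged sketch; the object names are hypothetical:
svctask mkhostcluster -name ESX_CLUSTER
svctask addhostclustermember -host ESX01 ESX_CLUSTER
svctask mkvolumehostclustermap -hostcluster ESX_CLUSTER vol01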
This overview shows hosts with active, inactive, or degraded ports. You can delete or add a
port, or modify its characteristics. Also, in this pane, you can create a new host or rename the
existing one.
To perform any of the tasks that are shown in Figure 3-64 on page 116, click Actions and
select a menu item.
To delete multiple ports, select them by using the Ctrl key or Shift key and click Delete.
From this window, you can view the host properties. Or, you can obtain the list of mapped
volumes or work with port definitions. Right-click the specific host and select Properties
(Host) from the opened menu. A window similar to the one in Figure 3-66 on page 117 opens.
Figure 3-66 Host properties
With details enabled, you can modify the host name, host type, I/O group assignment, or iSCSI Challenge Handshake Authentication Protocol (CHAP) secret by clicking Edit and then Save, as shown in Figure 3-66.
Figure 3-67 on page 118 shows the Copy Services menu functions.
In this section, we briefly describe how to navigate in the Copy Services menu.
3.7.1 FlashCopy
IBM FlashCopy is a function that you use to create a point-in-time copy of one of your volumes. This function might be helpful when you back up data or test applications. These copies can be cascaded one on another, read from, written to, and even reversed.
If you need to create a FlashCopy of an additional volume, right-click the volume and the list
of available functions displays. You can perform several tasks, such as initiate a new
snapshot, and clone or back up a volume.
Clicking the volume name opens the window that is shown in Figure 3-69. You can click the
tabs at the top of the window to display additional information, such as the hosts that the
volume or FlashCopy volume is mapped to and its dependent MDisks.
Click Consistency Group to open the window that is shown in Figure 3-70 on page 120.
FlashCopy relationships can be placed into a consistency group. You can also use start and
stop commands against the FlashCopy consistency group from this window by right-clicking
the relationship.
When any FlashCopy consistency group is available, either empty or with existing
relationships, you can move an existing relationship to that group. Right-click a relationship
and select Move to Consistency Group as shown in Figure 3-71.
Other actions on the same menu include Remove from Consistency Group, Start (resume) or
Stop that FlashCopy operation, Rename Mapping (rename a target volume or FlashCopy
mapping), and Delete Mapping.
From the menu, select the appropriate group (in our case, the only one available) and confirm
the selection (Figure 3-72 on page 121).
Figure 3-72 Assigning the consistency group
The result of the operation is similar to the result that is shown in Figure 3-73.
In a single mapping, the source and target cannot be the same volume. A mapping is
triggered at the point in time when the copy is required. The mapping can optionally be given
a name and assigned to a consistency group. These groups of mappings can be triggered at
the same time, enabling multiple volumes to be copied at the same time, which creates a
consistent copy of multiple disks. A consistent copy of multiple disks is required for database
products in which the database and log files are on separate disks.
If a consistency group (ID or Name) is not specified, the mapping is assigned to the default
group 0, which is a special group that cannot be started as a whole. Mappings in this group
can be started only on an individual basis.
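The same objects can be created from the CLI. A hedged sketch; the consistency group name, volume names, and copy rate are hypothetical:
svctask mkfcconsistgrp -name FCCG1
svctask mkfcmap -source vol01 -target vol01_copy -consistgrp FCCG1 -copyrate 50
svctask startfcconsistgrp -prep FCCG1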
An example of the wizard for FlashCopy mapping creation is shown in Figure 3-75 on
page 123. Select source and target volumes from the wizard.
Figure 3-75 Selecting volumes for FlashCopy mappings
You can select the Snapshot (copy-on-write), Clone (replica of the volume with no effect on the original volume), or Backup (data recovery) type of relationship. When a type is selected, you can specify whether you also want to add the mapping to a consistency group.
The menu provides the options to create Metro Mirror, Global Mirror, or Global Mirror with Changed Volumes relationships.
3.7.5 Partnerships
Click Partnerships to open the window that is shown in Figure 3-77. You can use this
window to set up a new partnership, or delete an existing partnership for remote mirroring
with another Lenovo Storage V3700 V2, V3700 V2 XP, or V5030 system. To create a
partnership, click Create Partnership. A new window displays. When you select the
partnership type, for example, Fibre Channel, the window expands to a more detailed view,
as shown in Figure 3-77.
Clicking an existing partnership opens a window, as shown in Figure 3-78 on page 125. From
this window, you can also set the background copy rate. This rate specifies the bandwidth, in
Mbps, that is used by the background copy process between the clusters (Figure 3-78). In our case, we configured the partnership on one side only. You can see this in the State row, which shows Partially Configured: Local, an indication that the partnership was configured on only one side. If you see this message, go to the second system and configure the Create Partnership settings there, too.
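The partnership can also be created from the CLI on each of the two systems, assuming a firmware level that provides the mkfcpartnership command. A hedged sketch; the remote system name and link bandwidth are hypothetical:
svctask mkfcpartnership -linkbandwidthmbits 1024 remote_system_name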
Click Create User to open the pane that is shown in Figure 3-81. Use this pane to specify the name and password of the user, and to load the SSH key (if an SSH key was generated). An SSH key is not required for CLI access; you can choose to use either an SSH key or a password for CLI authentication.
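Users can also be created from the CLI with the mkuser command. A hedged sketch; the user name, user group, and password are hypothetical:
svctask mkuser -name operator1 -usergrp Monitor -password Passw0rd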
3.8.2 Audit Log option
Click Audit Log to open the window that is shown in Figure 3-82. The cluster maintains an
audit log of successfully run commands and displays the users that performed particular
actions at certain times.
You can filter audit log records by date or within a specific time frame (Figure 3-83).
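The audit log can also be read from the CLI. A hedged sketch; the entry count is arbitrary:
catauditlog -first 10 (list the 10 most recent audit log entries)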
3.9.1 Notifications
It is important to correct any issues that are reported by your system as soon as possible. Configure your system to send automatic notifications when a new event is reported. To avoid having to monitor the management GUI for new events, select the types of events that you want to be notified about; for example, restrict notifications to events that require immediate action.
You can use email, Simple Network Management Protocol (SNMP), or syslog types of
notifications. If your system is within warranty, or if you use a hardware maintenance
agreement, configure your Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 systems to
send email events to Lenovo directly if an issue that requires hardware replacement is
detected. This mechanism is called Call Home.
When an event is received, Lenovo automatically opens a problem report. If appropriate,
Lenovo contacts you to verify whether replacement parts are required. The configuration
window for e-mail notifications is shown in Figure 3-85.
The procedure for how to enable e-mail notifications is described in Chapter 12, “RAS,
monitoring, and troubleshooting” on page 625.
3.9.2 Network
Click Network to open the window that is shown in Figure 3-86. You can update the network
configuration, set up iSCSI definitions, and view information about the Fibre Channel
connections.
When you click Fibre Channel Connectivity (Figure 3-87 on page 130), useful information is
displayed. In this example, we click ‘All nodes, storage systems, and hosts’ from the menu
and then select Show Results to display the details. Other options that are available from the
menu include displaying Fibre Channel details for a host, clusters, nodes, or storage systems.
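The same connectivity information is available from the CLI with the lsfabric command. A hedged sketch; the host name is hypothetical:
lsfabric (show all Fibre Channel logins)
lsfabric -host ESX01 (show the logins for a specific host)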
3.9.3 Security
The different security features are described below.
Remote authentication
With remote authentication and directory services, a user can authenticate to the controller firmware without the need for a local account. Therefore, when you log on, you authenticate with your domain user ID and password rather than a locally created user ID and password.
The access pane to configure remote authentication is shown in Figure 3-88 on page 131.
Figure 3-88 Configuring remote authentication
The detailed steps to configure remote logon are described at the following web pages:
http://ibm.biz/Bd4Cr5
https://ibm.biz/BdjSLS
Encryption
On the panel that is shown in Figure 3-89, you can enable or disable the encryption function on a Lenovo Storage V5030. The panel shows that no USB drives that contain encryption keys were detected. USB drives are no longer needed if you have an external key management server. Figure 3-89 shows four available external key management servers. If no external server is available, you need the USB keys to be able to encrypt and decrypt your data at power on.
To use a signed certificate, first generate and download a request for a certificate that is
based on the values that are specified on the Secure Communication page. Submit this
request to the certificate authority to receive a signed certificate and then install it by using the
Secure Communication page. Before you create a request for either type of certificate,
ensure that your current browser does not restrict the type of keys that are used for
certificates. Certain browsers limit the use of specific key types for security and compatibility.
Figure 3-90 shows the details about the security certificates.
The Update Certificate panel opens, as shown in Figure 3-91 on page 133.
Figure 3-91 Update Certificate panel
3.9.4 System
The System menu provides the following options:
Set the system date and time
Manage licenses
Upgrade System
Virtual Volumes (VVols)
IP Quorum
I/O Groups
DNS
The Date and Time window opens (Figure 3-92 on page 134) when you select Date and
Time from the System menu. You can add a Network Time Protocol (NTP) server if available.
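An NTP server can also be configured from the CLI. A hedged sketch; the IP address is hypothetical:
svctask chsystem -ntpip 192.168.1.50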
You can also update the license information for specific features, as shown in Figure 3-93.
To upgrade your controller firmware, use the procedure that is described in Chapter 12, “RAS,
monitoring, and troubleshooting” on page 625.
Virtual Volumes (VVols) are VMware vSphere volumes that are used in VMware vCenter and VASA applications. VVol support is a feature that was introduced in controller firmware 7.6. With this functionality, users can create volumes on your system directly from a VMware vCenter server.
On the VVOL page, you can enable or disable the functionality, as shown in Figure 3-94 on
page 135. Before you can enable VVol, you must set up an NTP server. See the Date and
Time settings to set up the NTP server.
Figure 3-94 Activating VVol
In some HyperSwap configurations, IP quorum applications can be used at the third site as an alternative to third-site quorum disks. No Fibre Channel connectivity at the third site is required to use an IP quorum application as the quorum device. The IP quorum application is a Java application that runs on a host at the third site. The IP network is used for communication between the IP quorum application and the node canisters in the system. If you currently have a third-site quorum disk, you must remove the third site before you use an IP quorum application. The round-trip time limitation of 80 milliseconds for IP quorum still applies. Figure 3-95 shows where you can download the Java application.
For ports within an I/O group, you can enable virtualization of Fibre Channel ports that are used for host I/O operations. With N_Port ID virtualization (NPIV), the Fibre Channel port consists of both a physical port and a virtual port. When port virtualization is enabled, ports do not come up until they are ready to handle I/O, which improves host behavior around node unpends. In addition, path failures due to an offline node are masked from hosts.
The Domain Name System (DNS) translates IP addresses to host names. You can create, delete, or change domain name servers, which manage names of resources that are located on external networks.
You can have up to two DNS servers that are configured on the system. To configure DNS for
the system, enter a valid IP address and name for each server. Both IPv4 and IPv6 address
formats are supported. Figure 3-97 shows the DNS setup Window.
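DNS servers can also be configured from the CLI, assuming a firmware level that provides the mkdnsserver command. A hedged sketch; the server name and IP address are hypothetical:
svctask mkdnsserver -name dns1 -ip 192.168.1.10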
3.9.5 Support
Support assistance enables support personnel to access the system to complete
troubleshooting and maintenance tasks. You can configure either local support assistance,
where support personnel visit your site to fix problems with the system, or remote support
assistance. Both local and remote support assistance use secure connections to protect data exchange between the support center and the system. To enable support assistance, you need to configure an email server. More access controls can be added by the system administrator. Figure 3-98 shows the Support assistance panel.
If support assistance is configured on your system, you can either automatically or manually
upload new support packages to the support center to help analyze and resolve errors on the
system. You can select individual logs to either download to review or send directly to the
support center for analysis.
For more information, see Chapter 12, “RAS, monitoring, and troubleshooting” on page 625.
Chapter 4. Storage pools
Storage pools can be configured through the Initial Setup wizard when the system is first installed, as described in Chapter 2, “Initial configuration” on page 35. They can also be configured after the initial setup through the management GUI, which provides a set of presets to help you configure different Redundant Array of Independent Disks (RAID) types.
The recommended configuration presets configure all drives into RAID arrays based on drive class and protect them with the correct number of spare drives. Alternatively, you can configure the storage to your own requirements. Selections include the drive class, the number of drives to configure, whether to configure spare drives, and optimization for performance or capacity.
The Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 storage systems provide an
Internal Storage window for managing all internal drives. The Internal Storage window can be
accessed by opening the System window, clicking the Pools option and then clicking Internal
Storage, as shown in Figure 4-1.
Figure 4-2 Internal Storage window
The right side of the Internal Storage window lists the selected type of internal disk drives. By
default, the following information is listed:
Logical drive ID
Drive capacity
Current type of use (unused, candidate, member, spare, or failed)
Status (online, offline, and degraded)
MDisk name that the drive is a member of
Enclosure ID that the drive is installed in
Slot ID of the enclosure in which the drive is installed
The default sort order is by enclosure ID. This default can be changed to any other column by left-clicking the column header. To toggle between ascending and descending sort order, left-click the column header again. By hovering over a header name, such as Drive ID, you display a brief description of the items within that column.
Additional columns can be included by right-clicking the gray header bar of the table, which
opens the selection panel, as shown in Figure 4-3 on page 142. To restore the default column
options, select Restore Default View.
The overall internal storage capacity allocation indicator is shown in the upper-right corner.
The Total Capacity shows the overall capacity of the internal storage that is installed in the
Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 storage systems. The MDisk Capacity
shows the internal storage capacity that is assigned to the MDisks. The Spare Capacity
shows the internal storage capacity that is used for hot spare disks.
The percentage bar that is shown in Figure 4-4 indicates how much capacity is allocated.
Figure 4-5 Internal drive actions menu
Depending on the status of the selected drive, the following actions are available.
Take Offline
The internal drives can be taken offline if a problem on the drive is identified. A confirmation window opens, as shown in Figure 4-6. The default selection is to take a drive offline only if a spare drive is available, which is strongly recommended because it avoids redundancy loss in the array. Click OK to take the drive offline.
If the drive fails (as shown in Figure 4-7 on page 144), the MDisk (from which the failed drive
is a member) remains online and a hot spare is automatically reassigned.
If sufficient spare drives are not available and a drive must be taken offline, the second option
for no redundancy (Take the drive offline even if redundancy is lost on the array)
must be selected. This option results in a degraded storage pool due to the degraded MDisk,
as shown in Figure 4-8.
The Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 storage systems prevent the drive from being taken offline if doing so can result in data loss. A drive cannot be taken offline (as shown in Figure 4-9) if no suitable spare drives are available and, based on the RAID level of the MDisk, insufficient redundancy would remain. Click Close to return to the Internal Storage panel.
Figure 4-9 Internal drive offline not allowed because of insufficient redundancy
Example 4-1 shows how to use the chdrive command-line interface (CLI) command to set
the drive to failed.
Example 4-1 The use of the chdrive command to set the drive to failed
chdrive -use failed driveID
chdrive -use failed -allowdegraded driveID
Mark as
The internal drives in the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 storage
systems can be assigned to the following usage roles by right-clicking the drives and
selecting the Mark as option, as shown in Figure 4-10:
Unused: The drive is not in use, and it cannot be used as a spare.
Candidate: The drive is available for use in an array.
Spare: The drive can be used as a hot spare, if required.
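The same role changes can be made from the CLI with the chdrive command. A hedged sketch; the drive ID is hypothetical:
chdrive -use candidate 3
chdrive -use spare 3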
Identify
Use the Identify action to turn on the LED light so that you can easily identify a drive that must
be replaced or that you want to physically troubleshoot. The panel that is shown in
Figure 4-12 appears when the LED is on. Click Turn LED off when you are finished to turn
the drive LED off and return to the Internal Storage panel.
Example 4-2 shows how to use the chenclosureslot command to turn on and turn off the
drive LED.
Example 4-2 The use of the chenclosureslot command to turn on and turn off the drive LED
chenclosureslot -identify yes/no -slot slot enclosureID
Upgrade
From this option, you can easily upgrade the drive firmware. You can use the GUI to upgrade
individual drives or upgrade all drives for which updates are available. For more information
about upgrading drive firmware, see 12.4.2, “Updating the drive firmware” on page 663 and
this web page:
http://systemx.lenovofiles.com/help/topic/com.lenovo.storage.v5030.8.1.0.doc/lenovo_vseries.html
Dependent Volumes
Clicking Dependent Volumes shows the volumes that depend on the selected drive.
Volumes depend on a drive only when their underlying MDisks are in a degraded or
inaccessible state and when the removal of more hardware causes the volume to go offline.
This condition is true for any RAID 0 MDisk because it has no redundancy, or if the
associated MDisk is already degraded.
Use the Dependent Volumes option before you perform any drive maintenance to determine
which volumes are affected.
Important: A lack of listed dependent volumes does not imply that no volumes are using
the drive.
Figure 4-13 shows an example in which no dependent volumes are detected for a specific drive. If a dependent volume is identified, it is listed within this panel. When volumes are listed as dependent, you can also check volume savings and throttles by selecting the volume and clicking Actions. Click Close to return to the Internal Storage panel.
Example 4-3 shows how to view dependent volumes for a specific drive by using the CLI.
Example 4-3 Command to view dependent virtual disks (VDisks) for a specific drive
lsdependentvdisks -drive driveID
Properties
Clicking Properties in the Actions menu or double-clicking the drive provides the vital product
data (VPD) and the configuration information, as shown in Figure 4-14 on page 148. The
Show Details option in the bottom-left of the Properties panel was selected to show more
information.
If the Show Details option is not selected, the technical information section is reduced, as
shown in Figure 4-15.
A tab for the Drive Slot is available in the Properties panel (as shown in Figure 4-16 on page 149) to obtain specific information about the slot of the selected drive. The Show Details option also applies to this tab; if you do not select it, the Fault LED information disappears from the panel. Click Close to return to the Internal Storage panel.
Example 4-4 shows how to use the lsdrive command to display the configuration
information and drive VPD.
Example 4-4 The use of the lsdrive command to display configuration information and drive VPD
IBM_Storwize:ITSO V5000:superuser>lsdrive 1
id 1
status online
error_sequence_number
use member
UID 5000cca05b1d97b0
tech_type sas_hdd
capacity 278.9GB
block_size 512
vendor_id IBM-E050
product_id HUC156030CSS20
FRU_part_number 01AC594
FRU_identity 11S00D5385YXXX0TGJ8J4P
RPM 15000
firmware_level J2G9
FPGA_level
mdisk_id 0
mdisk_name MDisk_01
member_id 5
enclosure_id 1
slot_id 1
node_id
node_name
Customize Columns
Click Customize Columns in the Actions menu to add or remove several columns that are
available in the Internal Storage window.
To restore the default column options, select Restore Default View, as shown in Figure 4-17.
Figure 4-18 on page 151 provides an overview of how storage pools, MDisks, and volumes are related. The numbers in the figure represent the following components:
Hosts (1)
Volumes (5)
Pools (4)
External MDisks (0)
Arrays (2)
This panel is available by browsing to Monitoring → System and clicking Overview on the
upper-right corner of the panel. You can also identify the name of each resource by hovering
over the elements on the Overview window.
The Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 organize storage into pools to ease storage management and make it more efficient. All MDisks in a pool are split into extents of the same size, and volumes are created from these available extents. The extent size is a property of the storage pool; when an MDisk is added to a pool, it is split into extents whose size is determined by the pool to which the MDisk was added.
Storage pools can be further divided into sub-containers named as child pools. Child pools
inherit the properties of the parent pool and can also be used to provision volumes.
Storage pools are managed either through the Pools panel or the MDisks by Pool panel. Both
panels allow you to execute the same actions; however, actions on child pools can be
performed only through the Pools panel. To access the Pools panel browse to Pools →
Pools, as shown in Figure 4-19.
To create a new storage pool, you can use one of the following alternatives:
Navigate to Pools → Pools and click Create, as shown in Figure 4-20.
Navigate to Pools → MDisks by Pools and click Create Pool, as shown in Figure 4-21.
Both alternatives open the dialog box that is shown in Figure 4-22 on page 153.
Figure 4-22 Create Pool dialog box
Note: If encryption is enabled, you can additionally select whether the storage pool is
encrypted. The encryption setting of a storage pool is selected at creation time and cannot
be changed later. By default, if encryption is enabled, encryption is selected.
If advanced pool settings are enabled, you can additionally select an extent size at the time of
the pool creation, as shown in Figure 4-23.
Note: Every storage pool created through the GUI has a default extent size of 1 GB. The
size of the extent is selected at creation time and cannot be changed later. If you want to
specify a different extent size at the time of the pool creation, browse to Settings → GUI
Preferences and select Advanced pool settings.
In the Create Pool dialog box, enter the pool name and click Create. The new pool is created
and is included in the list of storage pools with zero bytes, as shown in Figure 4-24 on
page 154.
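The same pool can be created from the CLI with the mkmdiskgrp command. A hedged sketch; the pool name, extent size (in MB), and warning threshold are hypothetical:
svctask mkmdiskgrp -name Pool0 -ext 1024 -warning 80%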
Figure 4-25 shows the list of available actions for storage pools being accessed through the
Pools panel.
Rename
Selecting Rename at any time allows you to modify the name of a storage pool, as shown in Figure 4-26. Enter the new name and click Rename.
Modify threshold
The storage pool threshold refers to the percentage of storage capacity that must be in use
for a warning event to be generated. The threshold is especially useful when using
thin-provisioned volumes that are configured to expand automatically. The threshold can be
modified by selecting Modify Threshold and entering the new value, as shown in
Figure 4-27. The default threshold is 80%. Warnings can be disabled by setting the threshold
to 0%.
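Both the pool name and the warning threshold can also be changed from the CLI with the chmdiskgrp command. A hedged sketch; the pool names and threshold value are hypothetical:
svctask chmdiskgrp -name Pool0_new Pool0
svctask chmdiskgrp -warning 85% Pool0_new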
Add storage
Selecting Add Storage starts the wizard to assign storage to the pool. For a detailed
description of this wizard, see 4.3.1, “Assigning managed disks to storage pools” on
page 163.
Edit Throttle
You can create, modify, and remove throttles for pools by using the management GUI or the
command-line interface. Throttling is a mechanism to control the amount of resources that
are used when the system is processing I/Os on a specific pool. If a throttle is defined, the
system either processes the I/O, or delays the processing of the I/O to free resources for
more critical I/O.
There are two parameters that can be defined through the Edit Throttle option:
Bandwidth limit defines the maximum amount of bandwidth the pool can process before
the system delays I/O processing for this pool.
IOPS limit defines the maximum I/O operations per second that the pool can process before the system delays I/O processing for this pool.
If the pool does not have throttle settings configured, selecting Edit Throttle displays a dialog
box with blank fields as shown in Figure 4-28 on page 156. Define the limits and click Create.
For a pool that already has defined throttle settings, selecting Edit Throttle displays a different
dialog box, in which the current bandwidth and IOPS limits will be displayed, as shown in
Figure 4-29. You can either change or remove the current bandwidth and IOPS limits by
modifying the values and clicking Save or clicking Remove to disable a limitation.
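Throttles can also be managed from the CLI. The following sketch assumes the mkthrottle and lsthrottle commands for pool throttles that are available in recent IBM Spectrum Virtualize releases, a pool named Pool0, and a bandwidth limit in MBps; verify the exact syntax for your code level before using it:
mkthrottle -type mdiskgrp -mdiskgrp Pool0 -bandwidth 200 -iops 10000
lsthrottle
Removing a limit again with rmthrottle (using the throttle name or ID reported by lsthrottle) corresponds to clicking Remove in the dialog box.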
By default, when the View All Throttles panel is opened through the Pools window, it displays throttle information related to pools. However, through the same panel you can select different object categories, as shown in Figure 4-31 on page 157. Selecting a different category displays the throttle information for that specific selection.
Figure 4-31 Selecting specific throttle information
Delete
Pools can be deleted through the GUI only if no volumes are assigned to the pool. If the pool contains any volumes, the option is not available. Selecting Delete immediately deletes the pool without additional confirmation.
Through the CLI, you can delete a pool and all of its contents by using the -force parameter.
However, all volumes and host mappings are deleted and you cannot recover them.
Important: After you delete the pool through the CLI, all data that is stored in the pool is
lost except for the image mode MDisks. The image mode MDisk volume definition is
deleted, but the data on the imported MDisk remains untouched.
After deleting a pool, all of the managed or image mode MDisks in the pool return to the
unmanaged status.
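As a CLI sketch (the pool name is illustrative), the forced deletion described above is a single command; use it with extreme care, because all volumes in the pool are deleted with it:
rmmdiskgrp -force Pool0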
Properties
Selecting Properties displays information about the storage pool as shown in Figure 4-32.
Additional information is available by clicking View more details and by hovering over the
elements on the window, as shown in Figure 4-33 on page 158. Click Close to return to the
Pools panel.
Customize columns
Selecting Customize Columns in the Actions menu allows you to include additional
information fields in the Pools panel as shown in Figure 4-34.
4.2.3 Child storage pools
A child storage pool is a storage pool created within another storage pool. The storage pool in which the child storage pool is created is called the parent storage pool.
Unlike a parent pool, a child pool does not contain MDisks; its capacity is provided exclusively
by the parent pool in the form of extents. The capacity of a child pool is set at creation time,
but can be nondisruptively modified later. The capacity must be a multiple of the parent pool
extent size and must be smaller than the free capacity of the parent pool.
Child pools are useful when the capacity allocated to a specific set of volumes must be
controlled.
Child pools inherit most properties from their parent pools and these cannot be changed. The
inherited properties include:
Extent size
Easy Tier setting
Encryption setting, but only if the parent pool is encrypted
Enter the name and the capacity of the child pool and click Create, as shown in Figure 4-36
on page 160.
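A child pool can also be created from the CLI with mkmdiskgrp by specifying the parent pool and a size. A minimal sketch, assuming a parent pool named Pool0 and a 200 GB child pool:
mkmdiskgrp -parentmdiskgrp Pool0 -size 200 -unit gb -name Pool0_child0
As noted previously, the size must be a multiple of the parent pool extent size and smaller than the free capacity of the parent pool.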
Note: You cannot create an encrypted child pool from an unencrypted parent pool if the parent pool contains any unencrypted array, or an MDisk that is not self-encrypting, and there are nodes in the system that do not support software encryption (for example, nodes that do not have the encryption license enabled).
An encrypted child pool created from an unencrypted parent pool reports as unencrypted if
the parent pool contains any unencrypted arrays. Remove these arrays to ensure that the
child pool is fully encrypted.
After the child pool is created, it is listed in the Pools panel under its parent pool, as shown in Figure 4-37. Toggle the arrow to the left of the storage pool name to either show or hide the child pools.
Actions on child storage pools
All actions supported for parent storage pools are supported for child storage pools, with the
exception of Add Storage. Child pools additionally support the Resize action.
To select an action, right-click the child storage pool, as shown in Figure 4-38. Alternatively, select the child storage pool and click Actions.
Resize
Selecting Resize allows you to increase or decrease the capacity of the child storage pool, as
shown in Figure 4-39. Enter the new pool capacity and click Resize.
Note: You cannot shrink a child pool below its real capacity. Thus, the new size of a child
pool needs to be larger than the capacity used by its volumes.
Delete
Deleting a child pool is similar to deleting a parent pool. As with a parent pool, the Delete action is disabled if the child pool contains volumes. After a child pool is deleted, the extents that it occupied return to the parent pool as free capacity.
Arrays are created from internal storage using RAID technology to provide redundancy and
increased performance. The system supports two types of RAID: traditional RAID and
distributed RAID. Arrays are assigned to storage pools at creation time and cannot be moved
between storage pools. It is not possible to have an array that does not belong to any storage
pool.
MDisks are managed through the MDisks by Pools panel. To access the MDisks by Pools
panel browse to Pools → MDisks by Pools, as shown in Figure 4-40 on page 163.
Figure 4-40 MDisks by Pools panel
The panel lists all the MDisks available in the system under the storage pool to which they
belong.
Arrays are created and assigned to a storage pool at the same time.
To assign MDisks to a storage pool navigate to Pools → MDisks by Pools and choose one
of the following options:
Option 1: Select Add Storage on the right side of the storage pool, as shown in
Figure 4-41 on page 164. The Add Storage button is shown only when the pool has no
capacity assigned or when the pool capacity usage is over the warning threshold.
Option 2: Right-click the pool and select Add Storage, as shown in Figure 4-42.
Alternatively, select the pool and click Actions.
Option 3: Select Assign under a specific drive class or external storage controller, as
shown in Figure 4-43 on page 165.
Figure 4-43 Add storage: option 3
Both options 1 and 2 start the configuration wizard shown in Figure 4-44.
Option 3 starts the quick internal wizard for the selected drive class only, as shown in
Figure 4-45.
This configuration combines two drive classes, belonging to two different tiers of storage
(Nearline and Enterprise). This is the default option and takes advantage of the Easy Tier
functionality. However, this can be adjusted by setting the number of drives of different
classes to zero as shown in Figure 4-47 on page 167.
Note: If a drive class is not compatible with the drives being assigned, that drive class cannot be selected.
Figure 4-47 Quick configuration wizard with a zeroed storage class
If you are adding storage to a pool with storage already assigned, the existing storage is also
taken into consideration, with some properties being inherited from existing arrays for a given
drive class. Drive classes incompatible with the classes already in the pool are disabled as
well.
When you are satisfied with the presented configuration, click Assign, as shown in Figure 4-48 on page 168. The array MDisks are then created and initialized in the background.
Tip: Use the advanced configuration only when the suggested quick configuration does not fit your business requirements.
Figure 4-49 on page 169 shows an example with 6 drives ready to be configured as RAID 6.
Click Summary to see the list of MDisks arrays to be created. To return to the default
settings, select the refresh button next to the pool capacity and to create and assign the
arrays, click Assign.
Figure 4-49 Advanced internal custom configuration
When multiple physical disks are set up to use the RAID technology, they are in a RAID
array. The Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 provide multiple, traditional
RAID levels:
RAID 0
RAID 1
RAID 5
RAID 6
RAID 10
RAID technology can provide better performance for data access, high availability for the
data, or a combination. RAID levels define a trade-off between high availability, performance,
and cost.
As physical disk capacities increase, rebuild times for traditional RAID arrays grow accordingly, so the RAID concept must also take disk rebuild time into account.
Distributed RAID (DRAID) addresses those points and it is available for the Lenovo Storage
V3700 V2, V3700 V2 XP, and V5030 in two types:
Distributed RAID 5 (DRAID 5)
Distributed RAID 6 (DRAID 6)
Distributed RAID reduces the recovery time and the probability of a second failure during
rebuild. Just like traditional RAID, a distributed RAID 5 array can lose one physical drive and
survive. If another drive fails in the same array before the bad drive is recovered, the MDisk
and the storage pool go offline as they are supposed to. So, distributed RAID does not
change the general RAID behavior.
Note: Although Traditional RAID is still supported and is the default choice in the GUI, the
recommendation is to use Distributed RAID 6 whenever possible.
Figure 4-50 on page 171 shows an example of a distributed RAID 6 array with 10 disks. The physical disk drives are divided into multiple packs. The reserved spare capacity (which is marked in yellow) is equivalent to two spare drives, but this capacity is distributed across all physical disk drives. The data is distributed across a single row. For simplification, not all packs are shown in Figure 4-50 on page 171.
Figure 4-50 Distributed RAID 6
Figure 4-51 on page 172 shows a single drive failure in the distributed RAID 6 (DRAID 6)
environment. Physical disk 3 failed and the RAID 6 algorithm is using the spare capacity for a
single spare drive in each pack for rebuild (which is marked in green). All disk drives are
involved in the rebuild process, which significantly reduces the rebuild time. For simplification,
not all packs are shown in Figure 4-51 on page 172.
The usage of multiple drives improves the rebuild process, which is up to 10 times faster than
traditional RAID. This speed is even more important when you use large drives.
The conversion from traditional RAID to distributed RAID is possible by using volume
mirroring or volume migration. Mixing traditional RAID and distributed RAID in the same
storage pool is also possible.
Example
The same number of disks can be configured by using traditional or distributed RAID. In our
example, we use 6 disk drives and assign those disks as RAID 6 to a single pool.
Figure 4-52 shows the setup for a traditional RAID 6 environment. The pool consists of one
MDisk, with 5 disk drives. The spare drive is not listed in this summary.
Figure 4-53 shows the setup for a distributed RAID 6 environment. The pool consists of a
single MDisk with 6 disk drives. The spare drives are included in this summary.
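For reference, both configurations can also be created from the CLI. This is only a sketch: the drive IDs, the drive class ID, and the pool name are illustrative, and the values follow the 6-drive example above.
Traditional RAID 6 (a five-drive array plus one hot spare):
mkarray -level raid6 -drive 0:1:2:3:4 Pool0
chdrive -use spare 5
Distributed RAID 6 (all six drives, with one distributed rebuild area):
mkdistributedarray -level raid6 -driveclass 0 -drivecount 6 -stripewidth 5 -rebuildareas 1 Pool0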
4.3.4 RAID configuration presets
RAID configuration presets are used to configure internal drives. They are based on the
advised values for the RAID level and drive class. Each preset has a specific goal for the
number of drives per array and the number of spare drives to maintain redundancy.
For the best performance with solid-state drives (SSDs), arrays with the same number of drives are recommended, which is the same design principle as for traditional RAID arrays.
Table 4-1 describes the presets that are used for Flash drives for the Lenovo Storage V3700
V2, V3700 V2 XP, and V5030 storage systems.
Flash RAID instances: In all Flash RAID instances, drives in the array are balanced
across enclosure chains, if possible.
To choose an action, select the array and click Actions, as shown in Figure 4-54 on page 175.
Alternatively, right-click the array.
Figure 4-54 Available actions on arrays
Swap drive
Selecting Swap Drive allows the user to replace a drive in an array with another drive. The
other drive needs to have a use of Candidate or Spare. This action can be used to replace a
drive that is expected to fail soon.
Figure 4-55 shows the dialog box that opens. Select the member drive to be replaced and the
replacement drive.
After defining the disk to be removed and the disk to be added, click Swap as shown in
Figure 4-56.
Set spare goal
Figure 4-57 shows the dialog box that opens when you select Set Spare Goal. Define the number of required spare drives and click Save.
Set rebuild areas goal
Figure 4-58 shows the dialog box that opens when you select Set Rebuild Areas Goal. Define the number of rebuild areas (expressed as the equivalent number of spare drives) and click Save.
Delete
Selecting Delete removes the array from the storage pool and deletes it.
Remember: An array does not exist outside of a storage pool. Therefore an array cannot
be removed from the pool without being deleted.
If there are no volumes using extents from the array the deletion command runs immediately
without additional confirmation. If there are volumes using extents from the array, you are
prompted to confirm the action as shown in Figure 4-59 on page 177. Click Yes to migrate the
volumes or No to cancel the deletion process.
Figure 4-59 MDisk deletion confirmation panel
Confirming the action starts the migration of the volumes to extents from other MDisks that
remain in the pool; after the action completes the array is removed from the storage pool and
deleted.
Note: Ensure that you have enough available capacity remaining in the storage pool to
allocate the data being migrated from the removed array, otherwise the command fails.
Drives
Selecting Drives shows information about the drives that are included in the array, as shown
in Figure 4-60.
Figure 4-60 Panel showing the drives that are members of an MDisk
To choose an action, right-click the external MDisk, as shown in Figure 4-61 on page 178.
Alternatively, select the external MDisk and click Actions.
Assign
This action is available only for unmanaged MDisks. Selecting Assign opens the dialog box
shown in Figure 4-62. This action acts only on the selected MDisk or MDisks.
Important: If you need to preserve existing data on an unmanaged MDisk, do not assign it to a storage pool, because this action deletes the data on the MDisk. Use Import instead.
Modify tier
Selecting Modify Tier allows the user to modify the tier to which the external MDisk is
assigned, as shown in Figure 4-63 on page 179. This setting is adjustable because the
system cannot detect the tiers associated with external storage automatically. Enterprise
Disk (Tier 2) is the option selected by default.
Figure 4-63 Modifying external MDisk tier
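The tier can also be changed from the CLI with the chmdisk command. A minimal sketch, assuming an external MDisk named mdisk5 that should be treated as enterprise-class storage (valid tier names in V8.1 include tier0_flash, tier1_flash, tier_enterprise, and tier_nearline):
chmdisk -tier tier_enterprise mdisk5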
Modify encryption
This option is available only when encryption is enabled. Selecting Modify Encryption allows the user to modify the encryption setting for the MDisk, as shown in Figure 4-64.
For example, if the external MDisk is already encrypted by the external storage system,
change the encryption state of the MDisk to Externally encrypted. This stops the system
from encrypting the MDisk again if the MDisk is part of an encrypted storage pool.
Import
This action is available only for unmanaged MDisks. Importing an unmanaged MDisk allows
the user to preserve the data on the MDisk, either by migrating the data to a new volume or
by keeping the data on the external system.
Selecting Import allows you to choose one of the following migration methods:
Import to temporary pool as image-mode volume does not migrate data from the
source MDisk. It creates an image-mode volume that has a direct block-for-block
translation of the MDisk. The existing data is preserved on the external storage system,
but it is also accessible from the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030
systems.
If this method is selected the image-mode volume is created in a temporary migration pool
and presented through the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030. Choose
the extent size of the temporary pool and click Import, as shown in Figure 4-65 on
page 180.
The MDisk is imported and listed as an image-mode MDisk in the temporary migration
pool, as shown in Figure 4-66.
Figure 4-67 Corresponding image-mode volume
The image-mode volume can then be mapped to the original host. The data is still
physically present on the physical disk of the original external storage controller system
and no automatic migration process is currently running. If needed, the image-mode
volume can be migrated manually to another storage pool using volume migration or
volume mirroring later.
Migrate to an existing pool starts by creating an image-mode volume, as in the first method. However, it then migrates the data from the image-mode volume onto another
volume in the selected storage pool. After the migration process completes the
image-mode volume and temporary migration pool are deleted.
If this method is selected, choose the storage pool to hold the new volume and click
Import, as shown in Figure 4-68. Only pools with sufficient free extent capacity are listed.
The data migration begins automatically after the MDisk is imported successfully as an
image-mode volume. You can check the migration progress by navigating to Pools →
System Migration, as shown in Figure 4-69 on page 182.
After the migration completes, the volume is available in the chosen destination pool, as
shown in Figure 4-70. This volume is no longer an image-mode volume; it is a normal
striped volume.
At this point all data has been migrated from the source MDisk and the MDisk is no longer
in image mode, as shown in Figure 4-71 on page 183. The MDisk can be removed from
the temporary pool and used as a regular MDisk to host volumes.
Figure 4-71
Alternatively, import and migration of external MDisks to another pool can be done by
selecting Pools → System Migration. Migration and the system migration wizard are
described in more detail in Chapter 7, “Storage migration” on page 323.
Include
The system can exclude an MDisk with multiple I/O failures or persistent connection errors
from its storage pool to ensure these errors do not interfere with data access. If an MDisk has
been automatically excluded, run the fix procedures to resolve any connection and I/O failure
errors. Drives used by the excluded MDisk with multiple errors might require replacing or
reseating.
After the problems have been fixed, select Include to add the excluded MDisk back into the
storage pool.
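The CLI equivalent is the includemdisk command; the MDisk name below is illustrative:
includemdisk mdisk5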
Remove
In some cases, you might want to remove external MDisks from storage pools to reorganize your storage allocation. Selecting Remove removes the MDisk from the storage pool. After the MDisk is removed, it goes back to the unmanaged state. If there are no volumes in the storage pool to which this MDisk is allocated, the command runs immediately without additional confirmation.
If there are volumes in the pool, you are prompted to confirm the action, as shown in
Figure 4-72. Click Yes to migrate the volumes or No to cancel the deletion process.
Confirming the action starts the migration of the volumes to extents from other MDisks that
remain in the pool; when the action completes the MDisk is removed from the storage pool
and returns to unmanaged.
Important: The MDisk being removed must remain accessible to the system while all data
is copied to other MDisks in the same storage pool. If the MDisk is unmapped before the
migration finishes all volumes in the storage pool go offline and remain in this state until the
removed MDisk is connected again.
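From the CLI, the same removal, including the forced migration of extents that are still in use, can be sketched as follows (the MDisk and pool names are illustrative):
rmmdisk -mdisk mdisk5 -force Pool0
Without -force, the command fails if any extents on the MDisk are still used by volumes.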
Discover storage
The Discover storage option in the upper left of the MDisks by Pools window is useful if
external storage controllers are in your environment. (For more information, see Chapter 11,
“External storage virtualization” on page 607). The Discover storage action starts a rescan of
the Fibre Channel network. It discovers any new MDisks that were mapped to the Lenovo
Storage V3700 V2, V3700 V2 XP, and V5030 storage systems and rebalances MDisk access
across the available controller device ports.
This action also detects any loss of controller port availability and updates the Lenovo
Storage V3700 V2, V3700 V2 XP, and V5030 configuration to reflect any changes.
When external storage controllers are added to the Lenovo Storage V3700 V2, V3700 V2 XP,
and V5030 environment, the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030
automatically discover the controllers, and the logical unit numbers (LUNs) that are presented
by those controllers are listed as unmanaged MDisks.
However, if you attached new storage and the Lenovo Storage V3700 V2, V3700 V2 XP, and
V5030 did not detect it, you might need to use the Discover storage option before the system
detects the new LUNs. If the configuration of the external controllers is modified afterward, the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 might be unaware of these configuration changes. Use the Discover storage action again to rescan the Fibre Channel network and update the list of unmanaged MDisks.
Figure 4-73 Discover storage action
Note: The Discover storage action is asynchronous. Although the task appears to be
finished, it might still be running in the background.
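The CLI equivalent of the Discover storage action is the detectmdisk command, which takes no parameters:
detectmdisk
Like the GUI action, detectmdisk returns immediately while discovery continues in the background; the lsdiscoverystatus command shows whether the discovery has completed.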
Rename
MDisks can be renamed by selecting the MDisk and clicking Rename from the Actions menu.
Enter the new name of your MDisk (as shown in Figure 4-74) and click Rename.
Properties
The Properties action for an MDisk shows the information that you need to identify it. In the
MDisks by Pools window, select the MDisk and click Properties from the Actions menu.
Alternatively, right-click the MDisk and select Properties. For additional information related to
the selected MDisk, click View more details as shown in Figure 4-76.
Logical unit numbers (LUNs) that are presented by external storage systems to Lenovo
Storage V3700 V2, V3700 V2 XP, and V5030 are discovered as unmanaged MDisks. Initially,
the MDisk is not a member of any storage pools, which means that it is not used by the
Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 storage systems.
To learn more about external storage, see Chapter 11, “External storage virtualization” on
page 607.
The Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 support the following host
attachment protocols:
16 Gb FC or 10 Gb iSCSI/FC over Ethernet (FCoE) as an optional host interface
12 Gb SAS (standard host interface)
1 Gb or 10 Gb iSCSI (standard host interface, depending on the model)
In this chapter, we assume that your hosts are connected to your FC, SAS, or Internet
Protocol (IP) network and you completed the steps that are described in Chapter 2, “Initial
configuration” on page 35. Follow basic zoning recommendations to ensure that each host
has at least two network adapters, that each adapter is on a separate network (or at minimum
in a separate zone), and that each adapter is connected to both canisters. This setup ensures
four paths for failover and failback.
For SAS attachment, ensure that each host has at least one SAS host bus adapter (HBA)
connection to each Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 canister. Further
information about configuring SAS attached hosts is provided in 2.4, “SAS direct-attach
planning” on page 45.
Before you map the newly created volumes to the host of your choice, some preparation goes a long way toward ease of use and reliability. Several steps are required on a host in
preparation for mapping new Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 volumes
to the host. Use the Lenovo Storage interoperability matrix to check the code levels that are
supported to attach your host to your storage. The Lenovo interoperability matrix is a web tool
that checks the interoperation of host, storage, switches, and multipathing drivers. The
interoperability matrix can be obtained at the following addresses:
https://datacentersupport.lenovo.com/tw/en/products/storage/lenovo-storage/v3700v2/6535/documentation
https://datacentersupport.lenovo.com/tw/en/products/storage/lenovo-storage/v5030/6536/documentation
This chapter focuses on Windows and VMware. If you want to attach any other hosts, for
example, IBM AIX, Linux, or an Apple system, you can find the required information at the
following address:
http://systemx.lenovofiles.com/mobile/help/topic/com.lenovo.storage.v3700.doc/lenovo_vseries.html
The Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 support 16 Gb Fibre Channel ports
and support Fibre Channel direct attachment on all 16 Gb ports.
Note: Be careful about the maximum length of the Fibre Channel links in this configuration.
5.3.1 Windows 2008 R2 and 2012 R2: Preparing for Fibre Channel attachment
Complete the following steps to prepare a Windows 2008 R2 or Windows 2012 R2 host to
connect to an Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 by using Fibre Channel:
1. Ensure that the current O/S service pack and test fixes are applied to your server.
2. Use the current firmware and driver levels on your host system.
3. Install a host bus adapter (HBA) or HBAs on the Windows server by using the current
basic input/output system (BIOS) and drivers.
4. Connect the FC host adapter ports to the switches, or use direct connections.
5. Configure the switches (zoning).
6. Configure the HBA for hosts that run Windows.
7. Set the Windows time-out value.
8. Install the multipath module.
You can obtain the current supported levels by navigating to the following address:
https://ibm.biz/BdHKW8
Host Adapter BIOS: Disabled (unless the host is configured for storage area network
(SAN) Boot)
Queue depth: 4
After you install the device driver and firmware, you must configure the HBAs. To perform this
task, either use the QLogic QConverge Console (QCC) command-line interface (CLI)
software or restart into the HBA BIOS, load the adapter defaults, and set the following values:
Host Adapter BIOS: Disabled (unless the host is configured for SAN Boot)
Adapter Hard Loop ID: Disabled
Connection Options: 1 (only point to point)
Logical unit numbers (LUNs) Per Target: 0
Port Down Retry Count: 15
After you install the device driver and firmware, you must configure the HBAs. To perform this
task, either use the Emulex HBAnyware software or restart into the HBA BIOS, load the
defaults, and set topology to 1 (10F_Port Fabric).
Lenovo Subsystem Device Driver DSM (SDDDSM) is the Lenovo multipath I/O solution that is
based on Microsoft MPIO technology. It is a device-specific module that supports Lenovo
storage devices on Windows hosts. The intent of MPIO is to provide better integration of a
multipath storage solution with the operating system, and it supports the use of multipath in
the SAN infrastructure during the startup process for SAN Boot hosts.
To ensure correct multipathing with the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030,
SDDDSM must be installed on Windows hosts. To install SDDDSM, complete the following
steps:
1. Go to the following SDDDSM download matrix to determine the correct level of SDDDSM
to install for Windows 2008 R2 or Windows 2012 R2, and then download the package:
http://systemx.lenovofiles.com/help/topic/com.lenovo.storage.v3700.doc/svc_w2kmultipathovr_21osvf.html
2. Extract the package to your hard disk drive, and run setup.exe to install SDDDSM. A
command prompt window opens (Figure 5-2). Confirm the installation by entering Yes.
3. During the setup, SDDDSM also determines whether an older version is installed and
prompts you to upgrade to the current version.
4. After the setup completes, you are prompted to restart the system. Confirm this restart by
typing Yes and pressing Enter (Figure 5-3).
You successfully installed SDDDSM. To check whether SDDDSM is installed correctly, see
the following sections about Windows 2008 R2 and Windows 2012 R2.
Windows 2008 R2
To check the installed driver version, complete the following steps:
1. Select Start → All Programs → Subsystem Device Driver DSM → Subsystem Device
Driver DSM.
2. A command prompt opens. Run datapath query version to determine the version that is
installed (Example 5-1) for this Windows 2008 R2 host.
3. The worldwide port names (WWPNs) of the FC HBA are required to correctly zone
switches and configure host attachment on the Lenovo Storage V3700 V2, V3700 V2 XP,
and V5030. You can obtain the WWPNs by using vendor tools, the HBA BIOS, the native Windows command line, or SDDDSM. To use SDDDSM, run datapath query wwpn (Example 5-2) and note the WWPNs of your host because you need them later.
If you need more detailed information about SDDDSM, see the Multipath Subsystem Device Driver User's Guide, GC52-1309, which is available from the IBM support website.
Windows 2012 R2
To check the installed driver version, complete the following steps:
1. First, select the Windows Start icon in the lower-left corner. See Figure 5-4.
2. Expand the view by clicking the down arrow that is shown in Figure 5-5.
Figure 5-5 Expand view to see all programs that are installed
3. Search for the section Subsystem Device Driver DSM. See Figure 5-6.
Figure 5-6 Subsystem Device Driver DSM in the all programs menu
5. A command prompt opens. Run datapath query version to determine the version that is
installed. See Figure 5-8.
7. The WWPNs of the FC HBA are required to correctly zone switches and configure host
attachment on the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030. You can obtain
the WWPNs by using vendor tools, the HBA BIOS, the native Windows command line, or
SDDDSM. To use SDDDSM, run datapath query wwpn (Example 5-4) and document the WWPNs of your host because you need them later.
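If SDDDSM is not yet installed, the WWPNs can usually also be read with Windows PowerShell on Windows Server 2012 R2. This is a hedged alternative to the SDDDSM method, not part of the original procedure:
Get-InitiatorPort | Where-Object {$_.ConnectionType -eq 'Fibre Channel'} | Select-Object NodeAddress, PortAddress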
The Windows host was prepared to connect to the Lenovo Storage V3700 V2, V3700 V2 XP,
and V5030, and you know the WWPNs of the host. The next step is to configure a host object
for the WWPNs by using the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 GUI. This
task is explained in 5.5.1, “Creating Fibre Channel hosts” on page 228.
SAN Boot hosts are beyond the intended scope of this book. For more information about that
topic, search for SAN Boot on the following web page:
http://systemx.lenovofiles.com/help/topic/com.lenovo.storage.v3700.doc/lenovo_vseries.html
Windows 2003: The examples focus on Windows 2008 R2 and Windows 2012, but the
procedure for Windows 2003 is similar. If you use Windows 2003, do not forget to install
Microsoft hotfix 908980. If you do not install it before you perform this procedure, preferred
pathing is not available. You can download this hotfix from the following address:
http://support.microsoft.com/kb/908980
5.3.2 Windows 2008 R2 and Windows 2012 R2: Preparing for iSCSI attachment
This section details iSCSI attachment.
https://datacentersupport.lenovo.com/us/en/products/storage/lenovo-storage/v5030/6536/documentation
Install the driver by using Windows Device Manager or vendor tools. Also, check and update
the firmware level of the HBA by using the manufacturer’s provided tools. Always check the
readme file to see whether any Windows registry parameters must be set for the HBA driver.
Important: For converged network adapters (CNAs), which can support both FC and
iSCSI, it is important to ensure that the Ethernet networking driver is installed in addition to
the FCoE driver. You are required to install the Ethernet networking driver and the FCoE
driver before you configure iSCSI.
If you use a hardware iSCSI HBA, refer to the manufacturer’s documentation and the Lenovo
Storage V3700 V2, V3700 V2 XP, and V5030 Lenovo Information Center for further details
and the latest information about the hardware and host OS configuration. The following
section describes how to configure iSCSI by using the software initiator.
In Windows 2008 R2 and 2012, the Microsoft iSCSI software initiator is preinstalled.
Complete the following steps:
1. Enter iscsi in the search field of the Windows 2008 R2 Start window (Figure 5-9) and
click iSCSI Initiator.
2. For Windows 2012 R2, go to the all programs menu and enter iSCSI in the search field at
the top of the window. See Figure 5-10.
5. You can change the initiator name, or enable advanced authentication, but these actions
are out of the scope of our basic setup. By default, iSCSI authentication is not enabled.
More details are available at the Lenovo Information Center for the Lenovo Storage V3700
V2, V3700 V2 XP, and V5030 at the following address:
http://systemx.lenovofiles.com/help/topic/com.lenovo.storage.v3700.doc/lenovo_vseries.html
6. From the Configuration tab, you can change the initiator name, enable CHAP
authentication, and more. However, these tasks are beyond the scope of our basic setup.
CHAP authentication is disabled, by default. For more information, see the Microsoft
iSCSI Initiator Step-by-Step Guide:
http://technet.microsoft.com/en-us/library/ee338476%28v=ws.10%29.aspx
Configuring Ethernet ports
We suggest that you use separate dedicated ports for host management and iSCSI. In this
case, we need to configure IPs on the iSCSI ports in the same subnet and virtual LAN (VLAN)
as the external storage that we want to attach to.
To configure Ethernet port IPs on Windows 2008 R2 and 2012 R2, complete the following
steps:
1. Go to Control Panel → Network and Internet → Network and Sharing Center. The
window that is shown in Figure 5-13 opens.
In this case, two networks are visible to the system. We use the first network to connect to
the server. It consists of a single dedicated Ethernet port for management. The second
network is our iSCSI network. It consists of two dedicated Ethernet ports for iSCSI. We
suggest that you use at least two dedicated ports for failover purposes.
4. If you use IPv6, select Internet Protocol Version 6 (TCP/IPv6) and click Properties.
Otherwise, select Internet Protocol Version 4 (TCP/IPv4) and click Properties to
configure an IPv4 address.
5. For IPv4, the window that is shown in Figure 5-16 opens. To manually set the IP, select
“Use the following address” and enter an IP address, subnet mask, and gateway. Set the
DNS server address, if required. Click OK to confirm.
6. Repeat the previous steps for each port that you want to configure for iSCSI attachment.
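If you prefer to script the port configuration, the same static IPv4 settings can be applied from an elevated command prompt. This is only a sketch; the interface name and addresses are placeholders for your environment:
netsh interface ipv4 set address name="iSCSI Port 1" static 10.10.10.11 255.255.255.0
Add the default gateway as a further positional value on the same command if one is required for the iSCSI network.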
Important: Subsystem Device Driver DSM (SDDDSM) is not supported for iSCSI
attachment. Do not follow the steps to install SDDDSM that you follow to install FC or SAS.
These are the basic steps to prepare a Windows 2008 R2 or Windows 2012 R2 host for iSCSI attachment. To configure the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 for iSCSI connections, see 5.5.3, “Creating iSCSI hosts” on page 236.
https://datacentersupport.lenovo.com/us/en/products/storage/lenovo-storage/v5030/6536/documentation
Install the driver by using Windows Device Manager or vendor tools. Also, check and update
the firmware level of the HBA by using the manufacturer’s provided tools. Always check the
readme file to see whether any Windows registry parameters must be set for the HBA driver.
You can obtain the host WWPNs through vendor tools or the HBA BIOS. However, the
easiest way is to connect the SAS cables to the ports on the Lenovo Storage V3700 V2,
V3700 V2 XP, and V5030, log on to the Storwize CLI through Secure Shell (SSH), and run
svcinfo lsssasportcandidate, as shown in Example 5-5.
We described the basic steps to prepare a Windows 2008 R2 and 2012 R2 host for SAS
attachment. For information about configuring SAS attachment on the Lenovo Storage V3700
V2, V3700 V2 XP, and V5030 side, see 5.5.5, “Creating SAS hosts” on page 243.
https://datacentersupport.lenovo.com/us/en/products/storage/lenovo-storage/v3700v2/6535/documentation
https://datacentersupport.lenovo.com/us/en/products/storage/lenovo-storage/v5030/6536/documentation
2. After you complete your ESXi installation, connect to your ESXi server by using the
vSphere web client and go to the Configuration tab.
3. Click Storage Adapters, and scroll down to your FC HBAs (Figure 5-17). Document the
WWPNs of the installed adapters for later use.
The Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 are active/active storage devices. The suggested multipathing policy is Round Robin. Round Robin performs static load
balancing for I/O. If you do not want the I/O balanced over all available paths, the Fixed policy
is supported also. This policy setting can be selected for every volume.
Set this policy after you attach the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030
LUNs to the ESXi host. For more information, see Chapter 6, “Volume configuration” on
page 269. If you use an older version of VMware ESX (up to version 3.5), Fixed is the
suggested policy setting.
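The path selection policy can also be set per device from the ESXi command line. The following is a sketch only; the device identifier is a placeholder that must be replaced with the naa ID of the mapped volume:
esxcli storage nmp device list
esxcli storage nmp device set --device naa.600507680c8xxxxxxxxxxxxxxxxxxx --psp VMW_PSP_RR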
MRU selects the first working path, which is discovered at system start time. If this path
becomes unavailable, the ESXi/ESX host switches to an alternative path and continues to
use the new path while it is available. This policy is the default policy for LUNs that are
presented from an Active/Passive array. ESXi/ESX does not return to the previous path if, or
when, it returns. It remains on the working path until it, for any reason, fails.
Note: Beginning with VMware ESXi version 5.5, certain new features can be accessed
through the vSphere Web Client only. However, we do not demonstrate any of these
features. All of the following examples continue to focus on the use of the desktop client.
Connect to the ESXi server (or VMware vCenter) by using the VMware vSphere Client and
browse to the Configuration tab. Click Storage Adapters to see the HBA WWPNs, as
shown in Figure 5-18.
After all of these steps are completed, the ESXi host is prepared to connect to the Lenovo
Storage V3700 V2, V3700 V2 XP, and V5030. Go to 5.5.1, “Creating Fibre Channel hosts” on
page 228 to create the ESXi FC host in the Lenovo Storage V3700 V2, V3700 V2 XP, and
V5030 GUI.
Important: For converged network adapters (CNAs) that support both FC and iSCSI, it is
important to ensure that the Ethernet networking driver is installed in addition to the FCoE
driver. The Ethernet networking driver and the FCoE driver are required for the
configuration of iSCSI.
For more information about the hardware and host OS configuration, if you use a hardware
iSCSI HBA, see the manufacturer’s documentation and the Lenovo Storage V3700 V2,
V3700 V2 XP, and V5030 Lenovo Information Center. The following section describes how to
configure iSCSI by using the software initiator.
Complete the following steps to prepare a VMware ESXi host to connect to a Lenovo Storage
V3700 V2, V3700 V2 XP, and V5030 by using iSCSI:
1. Ensure that the current firmware levels are applied on your host system.
2. Install VMware ESXi and load additional drivers if required.
3. Connect the ESXi server to your network. You need to use separate network interfaces for
iSCSI traffic.
4. Configure your network to fulfill your security and performance requirements.
The iSCSI initiator is installed by default on your ESXi server, but you must enable it. To
enable it, complete the following steps:
1. Connect to your ESXi server by using the vSphere Client. Go to Manage, and select
Networking (Figure 5-19).
4. Select one or more network interfaces that you want to use for iSCSI traffic and click Next
(Figure 5-22).
9. Check whether an iSCSI software adapter is available. Select Storage Adapters on the
Manage tab. See Figure 5-26.
10.Click the plus sign (+) to add an iSCSI software adapter. See Figure 5-27.
11. The Add Software iSCSI Adapter window opens. See Figure 5-28.
12.Click OK. A message displays that prompts you to configure the adapter after it is added.
14.Select Storage Adapters and scroll to the iSCSI Software Adapter (Figure 5-30).
Highlight it and you see the Adapter Details in the lower part of the window.
15.The iSCSI Software Adapter Properties window opens. Figure 5-31 shows that the initiator
is enabled by default. To change this setting, click Disable.
16.The VMware ESX iSCSI initiator is successfully enabled (Figure 5-32). Document the
initiator name for later use.
For iSCSI, extra configuration is required in the VMkernel port properties to enable path
failover. Each VMkernel port must map to one physical adapter port, which is not the default
setting. Complete the following steps:
1. Browse to the Configuration tab and select Networking. Click Properties next to the
vSwitch that you configured for iSCSI to open the window that is shown in Figure 5-33.
Figure 5-33 View the vSwitch properties with listed VMkernel ports
3. Click the NIC Teaming tab. Select Override switch failover order and ensure that each
port is tied to one physical adapter port, as shown in Figure 5-35.
Figure 5-35 Configuring a VMkernel port to bind to a single physical adapter port
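The same binding can also be done from the ESXi shell with esxcli. This is a hedged sketch; the software iSCSI adapter name (vmhba33) and the VMkernel interfaces (vmk1 and vmk2) are placeholders for your environment:
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2
esxcli iscsi networkportal list --adapter vmhba33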
These basic steps are required to prepare a VMware ESXi host for iSCSI attachment. For
information about configuring iSCSI attachment on the Lenovo Storage V3700 V2, V3700 V2
XP, and V5030, see 5.5.3, “Creating iSCSI hosts” on page 236.
For more information about configuring iSCSI attachment on the VMware ESXi side, the
following white paper, which was published by VMware, is a useful resource:
http://ibm.biz/Bd4ND6
https://datacentersupport.lenovo.com/tw/en/products/storage/lenovo-storage/v5030/6536/documentation
The host WWPNs are not directly available through VMware vSphere Client. However, you
can obtain them by using vendor tools or the HBA BIOS. The method that is described in
5.3.3, “Windows 2012 R2: Preparing for SAS attachment” on page 204 also works.
These basic steps are required to prepare a VMware ESXi host for SAS attachment. For
information about configuring SAS attachment on the Lenovo Storage V3700 V2, V3700 V2
XP, and V5030 side, see 5.5.5, “Creating SAS hosts” on page 243.
For more information and guidance about attaching storage with VMware ESXi, the following
document, which was published by VMware, is a useful resource:
http://ibm.biz/Bd4ND5
Traditionally, should one node fail or be removed for some reason, the paths that are
presented for volumes from that node would go offline, and it is up to the native OS
multipathing software to fail over from using both sets of WWPNs to just those that remain
online. While this is exactly what multipathing software is designed to do, occasionally it can
be problematic, particularly if paths are not seen as coming back online for some reason.
Starting with V7.7.0, the Lenovo storage V-series system can be enabled for NPIV mode. When
NPIV mode is enabled on the Lenovo storage V-series system, ports do not come up until
they are ready to service I/O, which improves host behavior around node unpends. In
addition, path failures due to an offline node are masked from host multipathing.
When NPIV is enabled on Lenovo storage V-series system nodes, each physical WWPN
reports up to three virtual WWPNs, as shown in Table 5-1.
Primary NPIV port: This is the WWPN that communicates with backend storage, and it might be used for node-to-node traffic (local or remote).
Primary host attach port: This is the WWPN that communicates with hosts. It is a target port only, and because it is the primary port, it represents this local node’s WWNN.
Failover host attach port: This is a standby WWPN that communicates with hosts and is brought online on this node only if the partner node in this I/O group goes offline. It is the same as the primary host attach WWPN on the partner node.
Figure 5-36 depicts the three WWPNs associated with a Lenovo storage V-series port when NPIV is enabled.
Figure 5-36 Allocation of NPIV virtual WWPN ports per physical port
The failover host attach port (in pink) is not active at this time. Figure 5-37 on page 220
shows what happens when the second node fails. Subsequent to the node failure, the failover
host attach ports on the remaining node are active and have taken on the WWPN of the failed
node’s primary host attach port.
Note: Figure 5-37 on page 220 shows only two ports per node in detail, but the same
applies for all physical ports.
From V7.7.0 onwards, this all happens automatically when NPIV is enabled at a system level. At this time, the failover happens automatically only between the two nodes in an I/O group. A transitional mode is also available for compatibility with earlier versions during the transition period.
The process for enabling NPIV on a new Lenovo storage V-series system is slightly different than on an existing system. For more information, see the Lenovo Information Center:
http://systemx.lenovofiles.com/help/topic/com.lenovo.storage.v5030.8.1.0.doc/svc_icconfiguringnpiv.html
Note: NPIV is only supported for Fibre Channel protocol. It is not supported for FCoE
protocol or iSCSI.
5.4.2 Enabling NPIV on a new system
For V7.7.0 and later, a new system should have NPIV enabled by default. If NPIV is not enabled by default and it is wanted, it can be enabled on a new system by completing the following steps:
1. Run the lsiogrp command to list the I/O groups present in a system, as shown in
Example 5-6.
2. Run the lsiogrp command to view the status of N_Port ID Virtualization (NPIV), as shown
in Example 5-7.
3. If the resulting output is fctargetportmode enabled, as shown in Example 5-7, then NPIV
is enabled.
4. The virtual WWPNs can be listed using the lstargetportfc command, as shown in
Example 5-8 on page 222.
5. At this point you can configure zones for hosts using the primary host attach ports (virtual
WWPNs) of the Lenovo storage V-series ports, as shown in bold in the output of
Example 5-8.
6. If the status of fctargetportmode is disabled, run the chiogrp command to get into
transitional mode for NPIV, as shown in Example 5-9.
7. The transitional mode can be verified using the lsiogrp command, as shown in
Example 5-10.
The tail of the lsiogrp output in Example 5-10 confirms the change:
site_id
site_name
fctargetportmode transitional
compression_total_memory 2047.9MB
8. In transitional mode, host I/O is permitted on primary ports and primary host attach
ports (virtual WWPN), as shown in Example 5-11 under the host_io_permitted column.
At this point, you can configure zones for hosts by using the primary host attach ports (virtual
WWPNs) of the Lenovo Storage V-series ports, as shown in bold in the output of
Example 5-8 on page 222.
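As a compact sketch of the commands involved (assuming I/O group 0; the full output is shown in the referenced examples):
lsiogrp 0
chiogrp -fctargetportmode transitional 0
chiogrp -fctargetportmode enabled 0
lstargetportfc
The lsiogrp detailed view includes the fctargetportmode field, and lstargetportfc shows which WWPNs are virtualized and permit host I/O.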
5.4.3 Enabling NPIV on an existing system
Enabling N_Port ID Virtualization (NPIV) on an existing system requires that you complete the following steps after meeting the prerequisites:
1. Audit your SAN fabric layout and zoning rules, because NPIV has stricter requirements.
Ensure that equivalent ports are on the same fabric and in the same zone. For more
information, see the topic about zoning considerations for N_PortID Virtualization in
Lenovo Information Center:
http://systemx.lenovofiles.com/help/topic/com.lenovo.storage.v5030.8.1.0.doc/svc_icconfiguringnpiv.html
2. Check the path count between your hosts and the Lenovo storage V series system to
ensure that the number of paths is half of the usual supported maximum. For more
information, see the topic about zoning considerations for N_Port ID Virtualization in
Lenovo Information Center:
http://systemx.lenovofiles.com/help/topic/com.lenovo.storage.v5030.8.1.0.doc/svc_icconfiguringnpiv.html
3. Run the lstargetportfc command to note down the primary host attach WWPNs (virtual
WWPNs), as shown in bold in Example 5-14.
Example 5-14 Using the lstargetportfc command to get primary host WWPNs (virtual WWPNs)
IBM_2145:ITSO:superuser>lstargetportfc
id WWPN WWNN port_id owning_node_id current_node_id nportid host_io_permitted
virtualized
1 50050768021000EF 50050768020000EF 1 1 1 010200 yes no
2 50050768029900EF 50050768020000EF 1 1 000000 no yes
3 50050768022000EF 50050768020000EF 2 1 1 020200 yes no
4 5005076802A900EF 50050768020000EF 2 1 000000 no yes
5 50050768023000EF 50050768020000EF 3 1 1 0A83C0 yes no
6 5005076802B900EF 50050768020000EF 3 1 000000 no yes
7 50050768024000EF 50050768020000EF 4 1 1 011400 yes no
8 5005076802C900EF 50050768020000EF 4 1 000000 no yes
33 50050768021000F0 50050768020000F0 1 2 2 010300 yes no
34 50050768029900F0 50050768020000F0 1 2 000000 no yes
35 50050768022000F0 50050768020000F0 2 2 2 020300 yes no
36 5005076802A900F0 50050768020000F0 2 2 000000 no yes
37 50050768023000F0 50050768020000F0 3 2 2 011500 yes no
38 5005076802B900F0 50050768020000F0 3 2 000000 no yes
39 50050768024000F0 50050768020000F0 4 2 2 0A82C0 yes no
40 5005076802C900F0 50050768020000F0 4 2 000000 no yes
4. Include the primary host attach ports (virtual WWPNs) to your host zones.
5. Enable transitional mode for NPIV on Lenovo storage V-series system (Example 5-15).
6. Ensure that the primary host attach WWPNs (virtual WWPNs) now allows host traffic, as
shown in bold in Example 5-16.
Example 5-16 Host attach WWPNs (virtual WWPNs) permitting host traffic
IBM_2145:ITSO:superuser>lstargetportfc
id WWPN WWNN port_id owning_node_id current_node_id nportid host_io_permitted
virtualized
1 50050768021000EF 50050768020000EF 1 1 1 010200 yes no
2 50050768029900EF 50050768020000EF 1 1 1 010201 yes yes
3 50050768022000EF 50050768020000EF 2 1 1 020200 yes no
4 5005076802A900EF 50050768020000EF 2 1 1 020201 yes yes
5 50050768023000EF 50050768020000EF 3 1 1 0A83C0 yes no
6 5005076802B900EF 50050768020000EF 3 1 1 0A83C1 yes yes
7 50050768024000EF 50050768020000EF 4 1 1 011400 yes no
8 5005076802C900EF 50050768020000EF 4 1 1 011401 yes yes
33 50050768021000F0 50050768020000F0 1 2 2 010300 yes no
34 50050768029900F0 50050768020000F0 1 2 2 010301 yes yes
35 50050768022000F0 50050768020000F0 2 2 2 020300 yes no
36 5005076802A900F0 50050768020000F0 2 2 2 020301 yes yes
37 50050768023000F0 50050768020000F0 3 2 2 011500 yes no
38 5005076802B900F0 50050768020000F0 3 2 2 011501 yes yes
39 50050768024000F0 50050768020000F0 4 2 2 0A82C0 yes no
40 5005076802C900F0 50050768020000F0 4 2 2 0A82C1 yes yes
7. Ensure that the hosts are using the NPIV ports for host I/O.
8. After a minimum of 15 minutes has passed since entering transitional mode, change
the system to enabled mode by entering the command, as shown in Example 5-17 on
page 226.
Now NPIV has been enabled on the Lenovo storage V-series systems, and hosts should also
be using the virtualized WWPNs for I/O. At this point, the host zones can be amended
appropriately to use primary host attach port WWPNs (virtual WWPNs) only.
2. To create a host, click Add Host to start the wizard (Figure 5-39).
3. If you want to create a Fibre Channel host, continue with 5.5.1, “Creating Fibre Channel
hosts” on page 228. To create iSCSI hosts, go to 5.5.3, “Creating iSCSI hosts” on
page 236. To create SAS hosts, go to 5.5.5, “Creating SAS hosts” on page 243.
4. After you click Add Host, the host selection menu opens, as shown in Figure 5-40.
2. Enter a host name and click the Host port drop-down list to get a list of all known WWPNs
(Figure 5-42).
The Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 have the host port WWPNs
available if you prepared the hosts as described in 5.3, “Preparing the host operating
system” on page 191. If they do not appear in the list, scan for new disks in your operating
system and click Rescan in the configuration wizard. If they still do not appear, check your
SAN zoning, correct it, and repeat the scanning.
Note: You can enter WWPNs manually. However, if these WWPNs are not visible to the
Lenovo Storage V3700 V2, V3700 V2 XP, and V5030, the host object appears as offline
and it is unusable for I/O operations until the ports are visible.
4. If you want to add additional ports to your Host, click the plus sign (+).
5. Add all ports that belong to the host (Figure 5-44).
Creating offline hosts: If you want to create hosts that are offline or not connected
currently, you can enter the WWPNs manually. Type them into the Host port (WWPN)
field and add them to the list. See Figure 5-45.
7. You can set the I/O Groups that your host can access. The host objects must belong to
the same I/O groups as the volumes that you want to map. Otherwise, these volumes are
not visible to the host. See Figure 5-47.
Note: The Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 support a maximum of four nodes for each system, arranged as two I/O groups per cluster. Due to the host object limit per I/O group, for maximum host connectivity, it is best to create hosts that use single I/O groups.
9. Click Add Host, and the wizard creates the host (Figure 5-49).
11. Repeat steps 1 to 10 for all of your Fibre Channel hosts. After you add all of the Fibre
Channel hosts, create volumes and map them to the created hosts. See Chapter 6,
“Volume configuration” on page 269.
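Host objects can also be created from the CLI with the mkhost command. The following is a minimal sketch with illustrative WWPNs and access to I/O group 0 only; -force allows the host to be created even if the WWPNs are not currently logged in to the system:
mkhost -name WinHost01 -fcwwpn 2100000E1E30A0E8:2100000E1E30A0E9 -iogrp 0 -force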
5.5.2 Configuring the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 for
FC connectivity
You can configure the FC ports on the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030
for use for certain connections only. This capability is referred to as port masking. In a system
with multiple I/O groups and remote partnerships, port masking is a useful tool for ensuring
peak performance and availability.
In all cases, host I/O is still permitted, so the None option can be used to exclusively reserve
a port for host I/O.
A limit of 16 logins exists per node from another node before an error is logged. A
combination of port masking and SAN zoning can help you manage logins and provide
optimum I/O performance with local, remote, and host traffic.
Figure 5-51 Opening the network settings view
2. Select Fibre Channel Ports and the Fibre Channel Ports configuration view displays, as
shown in Figure 5-52.
3. To configure a port, right-click the port and select Modify Connection. The window that is
shown in Figure 5-53 opens.
In this example, we select None to restrict traffic on this port to host I/O only. Click Modify
to confirm the selection.
Note: This action configures Port 1 for all nodes. You cannot configure FC ports on a
per-node basis.
Figure 5-54 Viewing FC connections between nodes, storage systems, and hosts
2. Enter a descriptive host name, and type the iSCSI initiator name into the iSCSI port field,
as shown in Figure 5-56 on page 237. If you want to add several initiator names to one
host, repeat this step by clicking the plus sign (+).
3. If you are connecting an HP-UX or TPGS host, select Advanced (Figure 5-55 on
page 236) and select the correct host type (Figure 5-56). Click Add.
4. You can set the I/O groups that your host can access.
Important: The host objects must belong to the same I/O groups as the volumes that
you want to map. Otherwise, the host cannot see these volumes. For more information,
see Chapter 6, “Volume configuration” on page 269.
The Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 support a maximum of four
nodes per system. These nodes are arranged as two I/O groups per cluster. Due to the
host object limit per I/O group, for maximum host connectivity, it is best to create hosts
that use single I/O groups.
6. Repeat these steps for every iSCSI host that you want to create. Figure 5-58 shows all of
the hosts after you create two Fibre Channel hosts and one iSCSI host.
Note: iSCSI hosts might show a Degraded status until the volumes are mapped. This
limitation relates to the implementation of iSCSI in the Lenovo Storage V3700 V2, V3700
V2 XP, and V5030. This status is not necessarily a problem with network connectivity or
the host configuration.
The iSCSI host is now configured on the Lenovo Storage V3700 V2, V3700 V2 XP, and
V5030. To provide connectivity, the iSCSI Ethernet ports must also be configured.
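An iSCSI host object can also be created from the CLI with the mkhost command; the host name and initiator IQN below are hypothetical examples:
mkhost -name ITSO_ISCSI_HOST1 -iscsiname iqn.1994-05.com.redhat:rhelhost01 -iogrp 0
CHAP authentication, if used, can be set afterward with the chhost -chapsecret command.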
5.5.4 Configuring the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 for
iSCSI host connectivity
Complete the following steps to enable iSCSI connectivity:
1. Switch to the configuration Overview window and select Network (Figure 5-59 on
page 239).
Figure 5-59 Configuration: Network
2. Select iSCSI and the iSCSI Configuration window opens (Figure 5-60).
4. The Configuration window (Figure 5-60 on page 239) shows an overview of all of the
iSCSI settings for the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030. You can
configure the iSCSI alias, Internet Storage Name Service (iSNS) addresses, Challenge
Handshake Authentication Protocol (CHAP), and the iSCSI IP address, which we also edit
in the basic setup.
5. Click Ethernet Ports to enter the iSCSI IP address (Figure 5-62). Repeat this step for
each port that you want to use for iSCSI traffic.
Note: We advise that you reserve at least one port for the management IP address.
Typically, reserve Port 1 for the management IP address and configure Port 2 for iSCSI.
In our example, we use Port 1 due to limited cabling.
6. After you enter the IP address for each port, click Modify to enable the configuration.
7. After the changes are successfully applied, click Close (Figure 5-63).
8. Under Actions, you can check the hosts that are enabled for iSCSI. See Figure 5-64.
11. Repeat the previous steps for all ports that need to be configured.
12. You can also configure iSCSI aliases, an iSNS name, and CHAP authentication. These
options are located in the iSCSI Configuration view. To access this view, click iSCSI in the
Network settings view, as shown in Figure 5-67.
The Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 are now configured and ready for
iSCSI use. Document the initiator names of your storage canisters (Figure 5-60 on page 239)
because you need them later. To create volumes and map them to a host, go to Chapter 6,
“Volume configuration” on page 269.
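The iSCSI port IP addresses can also be set from the CLI with the cfgportip command; the node names, addresses, and port ID below are hypothetical examples:
cfgportip -node node1 -ip 192.168.70.121 -mask 255.255.255.0 -gw 192.168.70.1 2
cfgportip -node node2 -ip 192.168.70.122 -mask 255.255.255.0 -gw 192.168.70.1 2
Repeat the command for each node canister and for each Ethernet port that you want to use for iSCSI traffic.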
5.5.5 Creating SAS hosts
To create a SAS host, complete the following steps:
1. From the main screen, follow the path Hosts → Hosts → Add Host, and the host
configuration wizard opens, as shown in Figure 5-68.
2. Enter a descriptive host name and click SAS under the Host Connections option, as
shown in Figure 5-69.
3. Click the Host port (WWPN) drop-down list and select the desired WWPNs that belong to
the host, as shown in Figure 5-70 on page 244.
4. If you previously prepared a SAS host, as described in 5.3, “Preparing the host operating
system” on page 191, the WWPNs that you recorded in this section appear. If they do not
appear in the list, verify that you completed all of the required steps and check your
cabling. Then, click Rescan in the configuration wizard. Ensure that the ends of the SAS
cables are aligned correctly.
Note: You can enter WWPNs manually. However, if these WWPNs are not visible to the
Lenovo Storage V3700 V2, V3700 V2 XP, and V5030, the host object appears as offline
and it is unusable for I/O operations until the ports are visible.
5. Under the Optional Fields section, you can set the host type, the I/O groups that your host
can access, and the host cluster to which the host belongs, if one is defined.
Important: Host objects must belong to the same I/O groups as the volumes that you
want to map. Otherwise, the volumes are not visible to the host.
The Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 support a maximum of four
nodes per system. These nodes are arranged as two I/O groups per cluster. Due to the
host object limit per I/O group, for maximum host connectivity, it is best to create hosts
that use single I/O groups.
You can choose the host type. If you use HP/UX, OpenVMS, or TPGS, you must select the
corresponding host type. Otherwise, the default option (Generic) is acceptable.
6. Click Add Host and the wizard completes, as shown in Figure 5-71.
7. Click Close to return to the host view, which now lists your newly created host object, as
shown in Figure 5-72.
Figure 5-72 The hosts view lists the newly created host object
After all of the host objects are created, see Chapter 6, “Volume configuration” on page 269
to create volumes and map them to the hosts.
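A SAS host object can also be created from the CLI by using the mkhost command with the -saswwpn parameter; the host name and WWPN below are hypothetical examples:
mkhost -name ITSO_SAS_HOST1 -saswwpn 500062B200556140 -iogrp 0
Multiple SAS WWPNs can be supplied as a colon-separated list.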
Volume mappings can either be shared or private. Shared mappings are volume mappings
that are shared among all the hosts that are in a host cluster. When a host cluster is created,
any common volume mappings become shared among all the hosts within the host cluster. If
a mapping is not common, it remains a private mapping for that host only. Private mappings
are mappings that are associated with an individual host.
A host cluster allows a user to create a group of hosts to form a cluster, which is treated as
one single entity, thus allowing multiple hosts to have access to the same set of volumes.
Volumes that are mapped to that host cluster are assigned to all members of the host cluster
with the same SCSI ID.
By defining a host cluster, the user can map one or more volumes to the host cluster object.
As a result, the volume or set of volumes is, in turn, assigned to each individual host object.
A host cluster is made up of individual hosts, and volumes can also be assigned to individual
hosts that make up the cluster. Even though a host is part of a host cluster, volumes can still
be assigned to a particular host in a non-shared manner. A policy can be devised that
pre-assigns a standard set of SCSI IDs for volumes to be assigned to the host cluster, and
another set of SCSI IDs to be used for individual assignments to hosts.
Note: For example, SCSI IDs 0 - 100 can be used for individual host assignments, and
SCSI IDs higher than 100 can be used for the host cluster. By employing such a policy,
some volumes are kept unshared and others can be shared. For example, the boot volume
of each host can be kept private, while data and application volumes can be shared.
A typical use case is to define a host cluster that contains all of the WWPNs belonging to the
hosts that participate in a host operating system-based cluster, such as IBM PowerHA or
Microsoft Cluster Server (MSCS).
This section describes the following host cluster operations using the GUI:
Creating a host cluster
Adding a member to the host cluster
Listing host cluster members
Assigning a volume to the host cluster
Unmapping a volume from the host cluster
Removing a host cluster member
Removing the host cluster
Note: From V8.1.0 onwards, the various host cluster operations can be performed by
using the GUI in addition to the CLI.
Note: Host clusters enable you to create individual hosts and add them to a host cluster.
Care must be taken to make sure that no loss of access occurs when transitioning to host
clusters.
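For reference, these operations also have CLI equivalents. The following sketch uses hypothetical host, host cluster, and volume names; the exact options for keeping or removing mappings on the removal commands should be verified against the CLI help for your code level:
mkhostcluster -name ITSO_HC (create the host cluster)
addhostclustermember -host ITSO_HOST1 ITSO_HC (add a member host)
lshostclustermember ITSO_HC (list the members)
mkvolumehostclustermap -hostcluster ITSO_HC ITSO_VOL1 (map a shared volume)
rmvolumehostclustermap -hostcluster ITSO_HC ITSO_VOL1 (unmap a shared volume)
rmhostclustermember -host ITSO_HOST1 -removemappings ITSO_HC (remove a member)
rmhostcluster -removemappings ITSO_HC (delete the host cluster)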
Figure 5-73 Host Clusters
3. Provide the name of the Host Cluster that you want to create as shown in Figure 5-75 on
page 248.
4. At this point, you can either select the hosts that will be part of the host cluster or defer
that action. In this example, we defer adding hosts to the host cluster definition because
that task is covered in 5.6.2, “Adding a member to a host cluster” on page 249.
5. Click Next. The host cluster creation completion window opens, as shown in Figure 5-76.
6. Click Close. The host cluster that you created appears in the list, as shown in
Figure 5-77.
Note: A host cluster can be offline either because it has no members or because all of its
members are offline. In our example, it is offline because we have not yet added any members.
2. Click Add Host and a selection window opens, as shown in Figure 5-79.
4. Click Next. A summary of the hosts to be added is shown, as depicted in Figure 5-81.
5. Click Add Hosts. The hosts are added to the host cluster definition, as shown in
Figure 5-82 on page 251.
Figure 5-82 Hosts added to host cluster definition
6. Click Close. The host cluster definition now shows an online status because hosts were
added to the host cluster, as shown in Figure 5-83.
2. Click View Hosts to list the host cluster members, as shown in Figure 5-85 on page 252.
3. Click Close to return to the Host Clusters pane, as shown in Figure 5-86.
2. Right-click the desired host cluster and select Modify Shared Volume Mappings, as
shown in Figure 5-88 on page 253.
Figure 5-88 Modify Shared Volume Mapping for Host Cluster
3. A window opens that lists the volumes that are currently mapped to the selected host
cluster, as shown in Figure 5-89.
Note: If no volumes are mapped to the selected host cluster, the list is empty.
4. Click Add Volume Mappings. A window opens that lists the volumes that you can select
to assign to the host cluster, as shown in Figure 5-90 on page 254.
5. Select the volumes to be mapped to the host cluster, as shown in Figure 5-91.
6. You can let the system assign the SCSI LUN IDs automatically or assign them manually.
In this example, we let the system assign the SCSI LUN IDs.
7. Click Next. A summary window with the list of volumes to be mapped is shown, as in
Figure 5-92.
8. Click Map Volumes. The selected volumes are mapped to the host cluster, as shown in
Figure 5-93.
3. A window opens that lists the volumes that are mapped to the host cluster, as shown in
Figure 5-96.
4. Select the desired volume to be unmapped, as shown in Figure 5-97 on page 257.
Figure 5-97 Selecting the volume to be unmapped
5. Click Remove Volume Mappings. A summary window that lists the hosts is shown, as
in Figure 5-98.
Note: At this point, you can select any host from the list to keep the private mapping
between the selected host and the volume.
6. Click Next. A window appears, as shown in Figure 5-99 on page 258.
7. Click Remove Volumes. A window with a Task Completed message is shown, as in
Figure 5-100.
2. Right-click the desired host cluster and select Remove Hosts, as shown in
Figure 5-102 on page 259.
Figure 5-102 Removing hosts from host cluster
3. A window opens with the list of hosts that are members of the host cluster, as shown in
Figure 5-103.
4. Select the host member that needs to be removed, as shown in Figure 5-104 on page 260.
5. Select the appropriate radio button to indicate whether you want to keep the mappings
after the host member is removed from the host cluster or to remove those mappings. In
our example, we chose Remove Mappings, as shown in Figure 5-105.
Figure 5-105 Mapping selection during removal of host member from host cluster
Note: Select Keep Mappings to retain all of the shared mappings in the host cluster as
private mappings for the selected hosts. Select Remove Mappings to remove all of the
shared mappings if the host or hosts being removed no longer require access to
these volumes.
6. Click Next. A window opens that confirms that the selected host member will be removed
from the host cluster, as shown in Figure 5-106 on page 261.
Figure 5-106 Confirmation window
Figure 5-107 Host member removal task from host cluster completed
8. Click Close.
2. Right-click the desired host cluster and select Delete Host Cluster, as shown in
Figure 5-109 on page 262.
3. A dialog box appears that asks you to confirm the deletion of the host cluster object, along
with your selection for the mappings, as shown in Figure 5-110.
4. Select the desired mapping option by using the radio buttons. In our example, we chose
Remove Mappings, as shown in Figure 5-111.
Warning: Select Remove Mappings only if the host or hosts that are part of the host
cluster no longer require access to the shared volumes. Exercise this option with
caution; otherwise, the server or servers that are part of the host cluster lose access to
all of the shared volumes.
5. Click Delete Host Cluster. A window indicates that the task completed successfully, as
shown in Figure 5-112 on page 263.
Figure 5-112 Host Cluster deletion task completed
6. Click Close.
In Spectrum Virtualize, by default, no I/O throttling rate is set. However, I/O throttling can be
set at any of the following levels:
Host
Host clusters
Volume
MDisk group
When I/O throttling is set, the I/O rate is limited by queuing I/Os if it exceeds the preset limits.
I/O throttling does not guarantee minimum performance. Internal I/Os, such as FlashCopy,
Metro Mirror, and intra-cluster traffic, are not throttled.
The following list illustrates some of the scenarios where I/O throttling can be beneficial:
An aggressive host that hogs the bandwidth of the Spectrum Virtualize system can be
limited by a throttle. For example, a data mining server can be allowed fewer I/Os than an
application server.
Restrict a group of hosts by their throttles. For example, department A gets more
bandwidth than department B.
Each volume can have a throttle defined. For example, a backup volume can have less
bandwidth than a production volume.
In this section, we illustrate the process of setting I/O throttle on already defined hosts and
host clusters.
2. Right-click the desired host and select Edit Throttle, as shown in Figure 5-114.
3. Enter the desired type of I/O throttle, either IOPS or bandwidth. In our example, we set up
an IOPS throttle by entering the IOPS limit, as shown in Figure 5-115 on page 265.
Figure 5-115 Setting IOPS throttle
Note: While defining a throttle for a host, you can define the throttle in terms of either
IOPS or bandwidth, but not both at the same time. If you want a host throttle for both
IOPS and bandwidth, you must define them one after the other.
4. Click Create. A window opens to indicate that the task of setting up the throttle has
completed, as shown in Figure 5-116.
5. Click Close.
2. Right-click the desired host cluster and select Edit Throttle, as shown in Figure 5-118.
3. Enter the desired type of I/O throttle, either IOPS or bandwidth. In our example, we set up
an IOPS throttle by entering the IOPS limit, as shown in Figure 5-119.
4. Click Create. A window opens to indicate that the task of setting up the throttle completed,
as shown in Figure 5-120.
5. Click Close.
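Host and host cluster throttles can also be defined from the CLI with the mkthrottle command; the object names and limits below are hypothetical examples:
mkthrottle -type host -host ITSO_HOST1 -iops 10000
mkthrottle -type hostcluster -hostcluster ITSO_HC -bandwidth 200
The -iops value sets the IOPS limit and the -bandwidth value is in MBps; defined throttles can be listed with lsthrottle.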
Take the following considerations into account when defining an I/O throttle on hosts or
host clusters:
An I/O throttle cannot be defined for a host if it is part of a host cluster that already has an
I/O throttle defined at the host cluster level.
If the Host Cluster does not have an I/O throttle defined, its member hosts can have their
individual I/O throttles defined.
The mdiskgrp (storage pool) throttles for child pool and parent pool work independently.
If a volume has multiple copies, throttling is applied to the mdiskgrp (storage pool) that
serves the primary copy. Throttling is not applied to the secondary pool for mirrored
volumes and stretched cluster implementations.
A host cannot be added to a host cluster if both of them have individual throttles defined.
A seeding host that is used for creating a host cluster cannot have a host throttle defined
for it.
As a secondary issue, hosts often attempt to restart sending I/O to the preferred node
canister as soon as the ports come online, but the node canister might still be unpending.
During this time, the node canister queues all incoming I/O until it has its configuration data,
and then the volumes start coming online.
To minimize these issues, the Proactive Host Failover feature was added in V7.8.1. With
Proactive Host Failover, the host multipath driver is notified of a node canister removal or
reboot during the planned maintenance procedures of the Lenovo Storage V3700 V2,
V3700 V2 XP, and V5030.
Because of this notification, the host uses the partner node canister for I/O, so I/O does not
need to be timed out and retried.
Note: Proactive Host Failover is an internal feature of Lenovo storage V-series software.
There are no changes to the CLI or GUI for this feature.
The following points are worth noting in regard to Proactive Host Failover:
When a Lenovo storage V-series system knows that a node canister is about to go down, it
raises unit attentions to trigger host path failovers to the surviving node canisters.
This behavior requires the host to track the preferred paths, which usually requires ALUA support.
Chapter 6. Volume configuration
The first part of this chapter provides a brief overview of Lenovo Storage V3700 V2, V3700
V2 XP, and V5030 volumes, the classes of volumes available, and the topologies that they
are associated with. It also provides an overview of advanced customization available.
The second part describes how to create volumes by using the GUI’s Quick and Advanced
volume creation menus, and shows you how to map these volumes to defined hosts.
The third part provides an introduction to the new volume manipulation commands, which are
designed to facilitate the creation and administration of volumes used for IBM HyperSwap
topology.
Note: Advanced host and volume administration, such as volume migration and creating
volume copies, is described in Chapter 10, “Copy services” on page 451.
Note: A managed disk (MDisk) is a logical unit of physical storage. MDisks are either
Redundant Arrays of Independent Disks (RAID) from internal storage, or external physical
disks that are presented as a single logical disk on the SAN. Each MDisk is divided into
several extents, which are numbered, from 0, sequentially from the start to the end of the
MDisk. The extent size is a property of the storage pools that the MDisks are added to.
Volumes have two major modes: Managed mode and image mode. Managed mode volumes
have two policies: The sequential policy and the striped policy. Policies define how the
extents of a volume are allocated from a storage pool.
The type attribute of a volume defines the allocation of extents that make up the volume copy:
A striped volume contains a volume copy that has one extent allocated in turn from each
MDisk that is in the storage pool. This is the default option, but you can also supply a list of
MDisks to use as the stripe set as shown in Figure 6-1 on page 271.
Attention: By default, striped volume copies are striped across all MDisks in the
storage pool. If some of the MDisks are smaller than others, the extents on the smaller
MDisks are used up before the larger MDisks run out of extents. Manually specifying
the stripe set in this case might result in the volume copy not being created.
If you are unsure if there is sufficient free space to create a striped volume copy, select
one of the following options:
Check the free space on each MDisk in the storage pool by using the lsfreeextents
command.
Let the system automatically create the volume copy by not supplying a specific
stripe set.
A sequential volume contains a volume copy that has extents that are allocated
sequentially on one MDisk.
Image-mode volumes are a special type of volume that have a direct relationship with one
MDisk.
An image mode MDisk is mapped to one, and only one, image mode volume.
The volume capacity that is specified must be equal to the size of the image mode MDisk.
When you create an image mode volume, the specified MDisk must be in unmanaged mode
and must not be a member of a storage pool. The MDisk is made a member of the specified
storage pool (Storage Pool_IMG_xxx) as a result of creating the image mode volume.
The reverse process is also supported, in which a managed mode volume can be migrated to
an image mode volume. If a volume is migrated to another MDisk, it is represented as being
in managed mode during the migration, and is only represented as an image mode volume
after it reaches the state where it is a straight-through mapping.
It is a preferred practice to put image mode MDisks in a dedicated storage pool and use a
special name for it (for example, Storage Pool_IMG_xxx). The extent size that is chosen for
this specific storage pool must be the same as the extent size into which you plan to migrate
the data. All of the copy services functions can be applied to image mode disks. See
Figure 6-2.
Figure 6-3 on page 273 shows this mapping. It also shows a volume that consists of several
extents that are shown as V0 - V7. Each of these extents is mapped to an extent on one of
the MDisks: A, B, or C. The mapping table stores the details of this indirection.
Several of the MDisk extents are unused. No volume extent maps to them. These unused
extents are available for use in creating volumes, migration, expansion, and so on.
Figure 6-3 Simple view of block virtualization
The allocation of a specific number of extents from a specific set of MDisks is performed by
the following algorithm:
If the set of MDisks from which to allocate extents contains more than one MDisk, extents
are allocated from MDisks in a round-robin fashion.
If an MDisk has no free extents when its turn arrives, its turn is missed and the round-robin
moves to the next MDisk in the set that has a free extent.
When a volume is created, the first MDisk from which to allocate an extent is chosen in a
pseudo-random way rather than by choosing the next disk in a round-robin fashion. The
pseudo-random algorithm avoids the situation where the striping effect that is inherent in a
round-robin algorithm places the first extent for many volumes on the same MDisk.
Placing the first extent of several volumes on the same MDisk can lead to poor performance
for workloads that place a large I/O load on the first extent of each volume, or that create
multiple sequential streams.
Having cache-disabled volumes makes it possible to use the native copy services in the
underlying RAID array controller for MDisks (LUNs) that are used as the image mode
volumes. Using Copy Services rather than the underlying disk controller copy services gives
better results.
Note: Having cache-disabled volumes makes it possible to use the native copy
services in the underlying RAID array controller for MDisks (LUNs) that are used as
image mode volumes. Consult Lenovo Support before turning off the cache for volumes
in production environment to avoid any performance degradation.
The two copies of the volume often are allocated from separate storage pools or by using
image-mode copies. The volume can participate in FlashCopy and remote copy relationships.
It is serviced by an I/O Group, and has a preferred node.
Each copy is not a separate object and cannot be created or manipulated except in the
context of the volume. Copies are identified through the configuration interface with a copy ID
of their parent volume. This copy ID can be 0 or 1.
This feature provides a point-in-time copy function that is achieved by “splitting” a copy from
the volume. However, the mirrored volume feature does not address other forms of mirroring
that are based on remote copy, which is sometimes called IBM HyperSwap, that mirrors
volumes across I/O Groups or clustered systems. It is also not intended to manage mirroring
or remote copy functions in back-end controllers.
Figure 6-4 Volume mirroring overview
A second copy can be added to a volume with a single copy or removed from a volume with
two copies. Checks prevent the accidental removal of the only remaining copy of a volume. A
newly created, unformatted volume with two copies initially has the two copies in an
out-of-synchronization state. The primary copy is defined as “fresh” and the secondary copy
is defined as “stale.”
The synchronization process updates the secondary copy until it is fully synchronized. This
update is done at the default synchronization rate or at a rate that is defined when the volume
is created or modified. The synchronization status for mirrored volumes is recorded on the
quorum disk.
If a two-copy mirrored volume is created with the format parameter, both copies are
formatted in parallel, and the volume comes online when both operations are complete with
the copies in sync.
If mirrored volumes are expanded or shrunk, all of their copies are also expanded or shrunk.
If it is known that MDisk space (which is used for creating copies) is already formatted or if the
user does not require read stability, a no synchronization option can be selected that
declares the copies as synchronized (even when they are not).
To minimize the time that is required to resynchronize a copy that is out of sync, only the
256 kibibyte (KiB) grains that were written to since the synchronization was lost are copied.
This approach is known as an incremental synchronization. Only the changed grains must be
copied to restore synchronization.
Important: An unmirrored volume can be migrated from one location to another by adding
a second copy to the wanted destination, waiting for the two copies to synchronize, and
then removing the original copy 0. This operation can be stopped at any time. The two
copies can be in separate storage pools with separate extent sizes.
Placing the primary copy on a high-performance controller maximizes the read performance
of the volume.
Figure 6-5 Data flow for write I/O processing in a mirrored volume
As shown in Figure 6-5, all the writes are sent by the host to the preferred node for each
volume (1). Then, the data is mirrored to the cache of the partner node in the I/O Group (2),
and acknowledgment of the write operation is sent to the host (3). The preferred node then
destages the written data to the two volume copies (4).
A volume with copies can be checked to see whether all of the copies are identical or
consistent. If a medium error is encountered while it is reading from one copy, it is repaired by
using data from the other copy. This consistency check is performed asynchronously with
host I/O.
Important: Mirrored volumes can be taken offline if no quorum disk is available. This
behavior occurs because the synchronization status for mirrored volumes is recorded on
the quorum disk.
Mirrored volumes use bitmap space at a rate of 1 bit per 256 KiB grain, which translates to
1 MiB of bitmap space supporting 2 TiB of mirrored volumes. The default allocation of bitmap
space is 20 MiB, which supports 40 TiB of mirrored volumes. If all 512 MiB of variable bitmap
space is allocated to mirrored volumes, 1 PiB of mirrored volumes can be supported.
Table 6-1 on page 277 shows you the bitmap space default configuration.
Table 6-1 Bitmap space default configuration
The table columns are: Copy service; Minimum allocated bitmap space; Default allocated
bitmap space; Maximum allocated bitmap space; and Minimum functionality when using the
default values.
The sum of all bitmap memory allocation for all functions except FlashCopy must not exceed
552 MiB.
Therefore, the real capacity determines the quantity of MDisk extents that is initially allocated
to the volume. The virtual capacity is the capacity of the volume that is reported to all other
components (for example, FlashCopy, cache, and remote copy), and to the host servers.
The real capacity is used to store the user data and the metadata for the thin-provisioned
volume. The real capacity can be specified as an absolute value, or as a percentage of the
virtual capacity.
Thin-provisioned volumes can be used as volumes that are assigned to the host, by
FlashCopy to implement thin-provisioned FlashCopy targets, and with the mirrored volumes
feature.
The grain size is defined when the volume is created. The grain size can be 32 KiB, 64 KiB,
128 KiB, or 256 KiB. The default grain size is 256 KiB, which is the preferred option. If you
select 32 KiB for the grain size, the volume size cannot exceed 260 TiB. The grain size
cannot be changed after the thin-provisioned volume is created. Generally, smaller grain
sizes save space, but they require more metadata access, which can adversely affect
performance.
If you are not going to use the thin-provisioned volume as a FlashCopy source or target
volume, use 256 KiB to maximize performance. If you are going to use the thin-provisioned
volume as a FlashCopy source or target volume, specify the same grain size for the volume
and for the FlashCopy function.
Thin-provisioned volumes store user data and metadata. Each grain of data requires
metadata to be stored. Therefore, the I/O rates that are obtained from thin-provisioned
volumes are less than the I/O rates that are obtained from fully allocated volumes.
The metadata storage used is never greater than 0.1% of the user data. The resource usage
is independent of the virtual capacity of the volume. If you are using the thin-provisioned
volume directly with a host system, use a small grain size.
The real capacity of a thin volume can be changed if the volume is not in image mode.
Increasing the real capacity enables a larger amount of data and metadata to be stored on
the volume. Thin-provisioned volumes use the real capacity that is provided in ascending
order as new data is written to the volume. If the user initially assigns too much real capacity
to the volume, the real capacity can be reduced to free storage for other uses.
The contingency capacity is initially set to the real capacity that is assigned when the volume
is created. If the user modifies the real capacity, the contingency capacity is reset to be the
difference between the used capacity and real capacity.
A volume that is created without the autoexpand feature, and therefore has a zero
contingency capacity, goes offline when the real capacity is used and it must expand.
Autoexpand does not cause the real capacity to grow much beyond the virtual capacity. The
real capacity can be manually expanded to more than the maximum that is required by the
current virtual capacity, and the contingency capacity is recalculated.
To support the auto expansion of thin-provisioned volumes, the storage pools from which they
are allocated have a configurable capacity warning. When the used capacity of the pool
exceeds the warning capacity, a warning event is logged. For example, if a warning of 80% is
specified, the event is logged when 20% of the free capacity remains.
Note: Of the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 systems, only the
Lenovo Storage V5030 model supports compression.
Note: Of the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 systems, the HyperSwap
topology is supported only on the Lenovo Storage V5030 system.
Virtual Volumes (VVols): The controller firmware V7.6.0 release also introduces Virtual
Volumes. These volumes are available in a system configuration that supports VMware
vSphere Virtual Volumes. These volumes allow VMware vCenter to manage system
objects, such as volumes and pools. The Spectrum Virtualize system administrators can
create volume objects of this class, and assign ownership to VMware administrators to
simplify management.
Note: From V7.4.0 onwards, it is possible to prevent accidental deletion of volumes, if they
have recently performed any I/O operations. This feature is called Volume protection, and it
prevents active volumes or host mappings from being deleted inadvertently. This is done
by using a global system setting. For more information, see Lenovo Information Center:
http://systemx.lenovofiles.com/help/topic/com.lenovo.storage.v5030.8.1.0.doc/svc_volprotection.html
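Volume protection is controlled by a global system setting. A minimal CLI sketch, assuming the chsystem volume protection parameters (verify the exact parameter names and protection period for your code level):
chsystem -vdiskprotectionenabled yes -vdiskprotectiontime 60
With this setting, a volume that received I/O within the protection period cannot be deleted inadvertently.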
Figure 6-7 Volumes
3. In the right pane, a list of existing volumes, if any, is displayed, as shown in Figure 6-9 on
page 282.
5. Depending on the topology of the system, the Create Volumes button provides different
options. For standard topology, Create Volumes opens a window with options to create a
Basic, Mirrored, or Custom volume, as shown in Figure 6-11.
For HyperSwap topology, the Create Volumes option opens a window with options to create a
Basic, HyperSwap, or Custom volume, as shown in Figure 6-12.
Clicking any of the three choices in the Create Volumes window opens a drop-down window
where the volume details can be entered. The example in Figure 6-13 on page 284 uses
a Basic volume to demonstrate this view.
Notes:
A Basic volume is a volume whose data is striped across all available managed disks
(MDisks) in one storage pool.
A Mirrored volume is a volume with two physical copies, where each volume copy can
belong to a different storage pool.
A Custom volume, in the context of this menu, is either a Basic or Mirrored volume with
customization of the default parameters.
Volume Creation also provides, using the Capacity Savings parameter, the ability to
change the default provisioning of a Basic or Mirrored Volume to Thin-provisioned or
Compressed.
6.3 Creating volumes using the Volume Creation
This section focuses on using the Volume Creation operation to create Basic and Mirrored
volumes in a system with standard topology. It also covers creating host-to-volume mapping.
As previously stated, Volume Creation is available on four different volume classes:
Basic
Mirrored
Custom
HyperSwap
Note: The ability to create HyperSwap volumes using the GUI simplifies creation and
configuration. This simplification is enhanced by the GUI using the mkvolume command.
Create a Basic volume by clicking the Basic icon as shown in Figure 6-14 on page 286. This
action opens an additional input window where you can define the following information:
Pool: The pool in which the volume is created (drop-down)
Quantity: The number of volumes to be created (numeric up/down)
Capacity: Size of the volume in units (drop-down)
Capacity Savings (drop-down):
– None
– Thin-provisioned
– Compressed
Name: Name of the volume (cannot start with a numeric)
I/O group
The Basic volume creation process is shown in Figure 6-14 on page 286.
An appropriate naming convention is recommended for volumes for easy identification of their
association with the host or host cluster. At a minimum, it should contain the name of the pool
or some tag that identifies the underlying storage subsystem. It can also contain the host
name that the volume is mapped to, or perhaps the content of the volume, for example, the
name of the application to be installed.
When all of the characteristics of the Basic volume have been defined, it can be created by
selecting one of the following options:
Create
Create and Map to Host
Note: The plus sign (+) icon, highlighted in green in Figure 6-14, can be used to create
more volumes in the same instance of the volume creation wizard.
In this example, the Create option was selected (the volume-to-host mapping can be
performed later). At the end of the volume creation, the following confirmation window
appears, as shown in Figure 6-15 on page 287.
Figure 6-15 Create Volume Task Completion window: Success
Success is also indicated by the state of the Basic volume being reported as formatting in the
Volumes pane as shown in Figure 6-16.
Notes:
Fully allocated volumes are automatically formatted through the quick initialization
process after the volume is created. This process makes fully allocated volumes
available for use immediately.
Quick initialization requires a small amount of I/O to complete, and limits the number of
volumes that can be initialized at the same time. Some volume actions, such as moving,
expanding, shrinking, or adding a volume copy, are disabled when the specified volume
is initializing. Those actions are available after the initialization process completes.
The quick initialization process can be disabled in circumstances where it is not
necessary. For example, if the volume is the target of a Copy Services function, the
Copy Services operation formats the volume.The quick initialization process can also
be disabled for performance testing, so that the measurements of the raw system
capabilities can take place without waiting for the process to complete.
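Because the GUI uses the mkvdisk command for standard-topology volumes, a Basic volume like the one in this example can also be created directly from the CLI; the pool, size, and name below are hypothetical:
mkvdisk -mdiskgrp Pool0 -size 100 -unit gb -name ITSO_VOL1 -iogrp 0
The volume is striped across all MDisks in Pool0 and is fully allocated unless thin provisioning or compression parameters are added.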
Normally this is the primary copy (as indicated in the management GUI by an asterisk (*)). If
one of the mirrored volume copies is temporarily unavailable (for example, because the
storage system that provides the pool is unavailable), the volume remains accessible to
servers. The system remembers which areas of the volume are written and resynchronizes
these areas when both copies are available.
Note: Volume mirroring is not a true disaster recovery (DR) solution, because both copies
are accessed by the same node pair and addressable by only a single cluster, but it can
improve availability.
Generally, keep mirrored volumes on a separate set of physical disks (Pools). Leave the I/O
group option at its default setting of Automatic (see Figure 6-17 on page 289).
Figure 6-17 Mirrored Volume creation
Note: When creating a Mirrored volume by using this menu, you are not required to
specify the Mirrored Sync rate. It defaults to 2 MBps. Customization of this
synchronization rate can be done by using the Custom option.
Figure 6-19 Volume Creation with Capacity Saving option set to Compressed
6.4 Mapping a volume to the host
After a volume is created, it can be mapped to a host:
1. From the Volumes menu, highlight the volume that you want to create a mapping for and
then select Actions from the menu bar.
Tip: An alternative way of opening the Actions menu is to highlight (select) a volume
and use the right mouse button.
2. From the Actions menu, select the Map to Host or Host Cluster option as shown in
Figure 6-20.
3. This action opens a Create Mapping window. In this window, indicate whether the volume
needs to be mapped to a host or a host cluster, select the desired host or host cluster, and
indicate whether you want the SCSI ID assigned by the system or assigned by yourself, as
shown in Figure 6-21 on page 292.
4. Click Next. A window opens that lists the volumes that are already mapped to that host
along with the new volume to be mapped (highlighted with a blue rectangle), as shown in
Figure 6-22.
5. Click Map Volumes and the Modify Mappings window displays the command details, and
then a Task completed message as shown in Figure 6-23.
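The same mapping can be created from the CLI with the mkvdiskhostmap command; the host name, volume name, and SCSI ID below are hypothetical examples:
mkvdiskhostmap -host ITSO_HOST1 -scsi 0 ITSO_VOL1
If the -scsi parameter is omitted, the system assigns the next available SCSI ID for that host.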
Work through these options to customize your Custom volume as wanted, and then commit
these changes by using Create as shown in Figure 6-24 on page 294.
Figure 6-25 Volume Location for thin-provisioned volume
2. Next, in the Volume Details subsection you can input the Quantity, Capacity (virtual),
Capacity Savings (choose Thin-provisioned from the drop-down menu), and Name of
the volume being created as shown in Figure 6-26.
The Thin Provisioning options are as follows (defaults are displayed in parentheses):
– Real capacity: (2%). Specify the size of the real capacity space used during creation.
– Automatically Expand: (Enabled). This option enables the automatic expansion of
real capacity, if more capacity is to be allocated.
– Warning threshold: (Enabled). Enter a threshold for receiving capacity alerts.
– Grain Size: (256 kibibytes (KiB)). Specify the grain size for real capacity. This option
describes the size of the chunk of storage to be added to used capacity. For example,
when the host writes 1 megabyte (MB) of new data, the capacity is increased by
adding four chunks of 256 KiB each.
Important: If you do not use the autoexpand feature, the volume will go offline after
reaching its real capacity.
The default grain size is 256 KiB. The optimum choice of grain size depends on volume
use type. For more information, see the “Performance Problem When Using EasyTier
With Thin Provisioned Volumes” topic:
http://systemx.lenovofiles.com/help/topic/com.lenovo.storage.v5030.8.1.0.doc/svc_spaceefficentvdisks_3r7ayd.html
If you are not going to use the thin-provisioned volume as a FlashCopy source or
target volume, use 256 KiB to maximize performance.
If you are going to use the thin-provisioned volume as a FlashCopy source or target
volume, specify the same grain size for the volume and for the FlashCopy function.
4. Next, in the General subsection, set the caching mode to Enabled, Read-only, or
Disabled, as shown in Figure 6-28. Also enter a unit device identifier (UDID) if this volume
is going to be mapped to an OpenVMS host.
5. Click Create to define the volume as shown in Figure 6-29.
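A thin-provisioned volume with these settings can also be created from the CLI; the pool, sizes, and name below are hypothetical:
mkvdisk -mdiskgrp Pool0 -size 100 -unit gb -name ITSO_THIN1 -iogrp 0 -rsize 2% -autoexpand -grainsize 256 -warning 80%
Here -rsize sets the initial real capacity, -autoexpand enables automatic expansion, -grainsize is specified in KiB, and -warning sets the capacity alert threshold.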
2. Next, in the Volume Details subsection, you can enter the Quantity, Capacity (virtual),
Capacity Savings (choose Compressed from the drop-down menu), and Name of the
volume being created, as shown in Figure 6-31.
3. Next, in the Compressed subsection, enter the real capacity either as a percentage of the
virtual capacity or in GiB, the expansion criteria, and the warning threshold, that is, the
percentage of the virtual capacity at which you receive a warning, as shown in Figure 6-32
on page 299.
Figure 6-32 Compressed details
4. Next, in the General subsection, set the caching mode to Enabled, Read-only, or
Disabled, as shown in Figure 6-33. Also enter a unit device identifier (UDID) if this volume
is going to be mapped to an OpenVMS host.
The progress of formatting and synchronization of a newly created Mirrored Volume can be
checked from the Running Tasks menu. This menu reports the progress of all currently
running tasks, including Volume Format and Volume Synchronization (Figure 6-36).
The Mirror Sync rate can be changed from the default setting under the Volume Location
subsection of the Create Volume window. This option sets the priority of copy
synchronization progress, enabling a preferential rate to be set for more important mirrored
volumes.
The summary shows you the capacity information and the allocated space. You can
customize the thin-provision settings or the mirror synchronization rate. After you create the
volume, the task completed window opens as shown in Figure 6-37.
The initial synchronization of thin-mirrored volumes is fast when a small amount of real and
virtual capacity is used.
When the HyperSwap topology is configured, the GUI uses the mkvolume command to create
volumes instead of the traditional mkvdisk command. This section describes the mkvolume
command that is used for the HyperSwap topology and how the GUI uses it. The GUI
continues to use mkvdisk when all other classes of volumes are created.
HyperSwap volumes are a new type of HA volumes that are supported by controller firmware.
They are built off two existing technologies:
Metro Mirror
Volume Mirroring
The GUI simplifies the complexity of HyperSwap volume creation by only presenting the
volume class of HyperSwap as a Volume Creation option after HyperSwap topology has
been configured.
In the following example, HyperSwap topology has been configured and the Volume
Creation window is being used to define a HyperSwap Volume as shown in Figure 6-39 on
page 303.
The capacity and name characteristics are defined as for a Basic volume (highlighted in blue
in the example) and the mirroring characteristics are defined by the Site parameters
(highlighted in red).
Figure 6-39 HyperSwap Volume creation with Summary of actions
The drop-downs help during creation, and the Summary (lower left of the creation window)
indicates the actions that are carried out when the Create option is selected. As shown in
Figure 6-39, a single volume is created, with volume copies in site1 and site2. This volume
is in an active-active (Metro Mirror) relationship with extra resilience provided by two change
volumes.
The command that is issued to create this volume is shown in Figure 6-40 on page 304, and
can be summarized as follows:
svctask mkvolume -name <name_of_volume> -pool <X:Y> -size <Size_of_volume> -unit
<units>
In addition, the lsvdisk command and GUI functionality are available. The lsvdisk command
now includes volume_id, volume_name, and function fields to easily identify the individual
VDisks that make up a HyperSwap volume. These views are “rolled up” in the GUI to provide
views that reflect the client’s view of the HyperSwap volume and its site-dependent copies, as
opposed to the “low-level” VDisks and VDisk change volumes.
As shown in the Figure 6-41, Volumes → Volumes shows the HyperSwap Volume
ITSO_HS_VOL with an expanded view opened by using the twisty (V) to reveal two volume
copies: ITSO_HS_VOL (site1) (Master VDisk) and ITSO_HS_VOL (site2) (Auxiliary VDisk). We
do not show the VDisk-Change-Volumes.
Likewise, the status of the HyperSwap volume is reported at a “parent” level. If one of the
copies is not synchronized or is offline, the HyperSwap volume reflects this state, as shown in
Figure 6-42 on page 305.
Figure 6-42 Parent volume reflects state of copy volume
6.7.1 Mapping newly created volumes to the host using the wizard
This section continues with mapping the volume that was created in 6.3, “Creating volumes
using the Volume Creation” on page 285. We assume that you followed that procedure and
are on the Volumes pane showing the list of volumes, as shown in Figure 6-43.
Figure 6-44 Select Map to Host or Host Cluster
2. Select a host or a host cluster to which the new volume should be attached as shown in
Figure 6-45.
Note: At this point, you can let the system assign a SCSI ID or choose to assign it manually
by using the Self Assign radio button. In this example, we let the system assign the SCSI ID.
3. Click Next. A summary window shows the volume to be mapped along with the volumes
that are already mapped to the host, as shown in Figure 6-46 on page 308.
4. Click Map Volumes. A window indicates that the volume mapping task completed, as
shown in Figure 6-47.
5. After the task completes, the wizard returns to the Volumes window.
6. You can verify the host mapping by clicking on Hosts from the main panel and selecting
Hosts as shown in Figure 6-48 on page 309.
Figure 6-48 Hosts from main panel
7. Right-click the host to which the volume was mapped and select Modify Volume
Mappings, as shown in Figure 6-49.
8. A window opens that shows the volumes that are currently mapped to the selected host,
as shown in Figure 6-50 on page 310.
The host is now able to access the volumes and store data on them. See 6.8, “Migrating a
volume to another storage pool” on page 310 for information about discovering the volumes
on the host and making additional host settings, if required.
Multiple volumes can also be created in preparation for discovering them later, and their
mappings can be customized afterward.
The migration process itself is a low priority process that does not affect the performance of
the Lenovo storage V-series systems. However, it moves one extent after another to the new
storage pool, so the performance of the volume is affected by the performance of the new
storage pool after the migration process.
2. Right-click the desired volume and select Migrate to Another Pool, as shown in
Figure 6-52.
3. The Migrate Volume Copy window opens. If your volume consists of more than one copy,
select the copy (from the menu shown in Figure 6-53) that you want to migrate to another
storage pool. If the selected volume consists of one copy, this selection menu is not
available.
4. Select the new target storage pool as shown in Figure 6-54 on page 312.
5. Click Migrate and the volume copy migration starts as shown in Figure 6-55. Click Close
to return to the Volumes pane.
Depending on the size of the volume, the migration process takes some time, but you can
monitor the status of the migration in the running tasks bar at the bottom of the GUI as
shown in Figure 6-56.
After the migration is completed, the volume is shown in the new storage pool. Figure 6-57
shows that it was moved from Pool0 to Pool2.
The volume copy has now been migrated without any host or application downtime to the new
storage pool. It is also possible to migrate both volume copies to other pools online.
Another way to migrate volumes to another pool is by performing the migration using the
volume copies, as described in 6.9, “Migrating volumes using the volume copy feature” on
page 313.
Note: Migrating a volume between storage pools with different extent sizes is not
supported. If you need to migrate a volume to a storage pool with a different extent size,
use volume copy features instead.
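The same online migration can be started from the CLI with the migratevdisk command; the volume and pool names below are hypothetical examples:
migratevdisk -vdisk ITSO_VOL1 -mdiskgrp Pool2
If the volume has more than one copy, the -copy parameter selects which copy to migrate; progress can be monitored with lsmigrate.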
2. Select the desired pool into which a new copy will be created, as shown in Figure 6-59.
3. A confirmation window opens, as shown in Figure 6-60.
4. Wait until the copies are synchronized. Then, change the role of the copies and make the
new copy the primary copy, as shown in Figure 6-61.
Figure 6-61 Making the new copy in a different storage pool as primary
6. Split or delete the old copy from the volume as shown in Figure 6-63.
7. Ensure that the new copy is in the target storage pool as shown in Figure 6-64 on
page 317.
Figure 6-64 Verifying the new copy in the target storage pool
This migration process requires more user interaction, but it offers some benefits, for
example, if you migrate a volume from a tier 1 storage pool to a lower performance tier 2
storage pool. In step 1, you create the copy on the tier 2 pool. All reads are still performed in
the tier 1 pool to the primary copy. After the synchronization, all writes are destaged to both
pools, but the reads are still only done from the primary copy.
Now you can switch the role of the copies online (step 3), and test the performance of the
new pool. When you are done testing your lower performance pool, you can split or delete the
old copy in tier 1, or switch back to tier 1 in seconds if the tier 2 pool does not meet your
performance requirements.
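From the CLI, this volume copy approach maps to the following command sequence; the volume name, pool, and copy IDs below are hypothetical examples:
addvdiskcopy -mdiskgrp Pool2 ITSO_VOL1 (add a copy in the target pool)
lsvdisksyncprogress ITSO_VOL1 (check the synchronization progress)
chvdisk -primary 1 ITSO_VOL1 (make the new copy the primary)
rmvdiskcopy -copy 0 ITSO_VOL1 (remove the original copy)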
At the other extreme, a streaming video application generally issues a small amount of I/O,
but it transfers large amounts of data. In contrast to the database example, setting an I/O
governing throttle that is based on IOPS does not achieve much, so it is better to use an
MBps throttle.
An I/O governing rate of 0 does not mean that zero IOPS (or MBps) can be achieved. It
means that no throttle is set.
Note:
I/O governing does not affect FlashCopy and data migration I/O rates.
I/O governing on a Metro Mirror and Global Mirror secondary volume does not affect the
rate of data copy from the primary volume.
Figure 6-65 Edit Throttle
2. A window opens where you can set the throttle in terms of IOPS, bandwidth (MBps), or
both. In our example, we set the throttle on IOPS, as shown in Figure 6-66.
3. Click Create and a task completed window opens, as shown in Figure 6-67 on
page 320.
2. A window opens where you can remove the throttle that was defined. In our example,
we remove the IOPS throttle by clicking Remove, as shown in Figure 6-69.
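Throttles can also be listed and removed from the CLI; the throttle name below is a hypothetical example taken from lsthrottle output:
lsthrottle (list all defined throttles and their IDs)
rmthrottle throttle0 (remove a throttle by ID or name)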
Data migration from other storage systems to the Lenovo Storage V3700 V2, V3700 V2 XP,
and V5030 consolidates storage and enables the benefits of the Lenovo Storage V3700 V2,
V3700 V2 XP, and V5030 functionality across all of the volumes, such as the intuitive GUI,
internal virtualization, thin provisioning, and FlashCopy.
There are multiple reasons to use the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030
migration features:
To redistribute workload within a clustered system across the disk subsystem
To move workload onto newly installed storage
To move workload off old or failing storage, ahead of decommissioning it
To migrate data from an older disk subsystem to Lenovo Storage V3700 V2, V3700 V2 XP,
and V5030
To migrate data from one disk subsystem to another disk subsystem
Command-line interface (CLI): For more information about the command-line interface
setup, see Appendix A, “CLI setup and SAN Boot” on page 761.
Manually migrating data: For more information about migrating data manually, see
Chapter 11, “External storage virtualization” on page 607.
To ensure interoperation and compatibility across all the elements that connect to the storage area network (SAN) fabric, check the proposed configuration with the Lenovo interoperability matrix. The interoperability matrix can confirm whether a solution is supported and provide recommendations for hardware and software levels. The interoperability matrix validates the components within a single storage solution. To confirm interoperation between multiple storage solutions, request separate validation for each of them.
https://download.lenovo.com/storage/lenovo_storage_v5030_8_1_x.xls
https://download.lenovo.com/storage/lenovo_storage_v3700v2_8_1_x.xls
7.3 Storage migration wizard
The Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 storage migration wizard simplifies the migration task. The wizard features intuitive panels that guide users through the entire process.
The difference between the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 models is that the Lenovo Storage V3700 V2 and Lenovo Storage V3700 V2 XP models can perform data migration only; external storage controllers cannot be virtualized on them. The Lenovo Storage V5030 can both migrate data and externally virtualize storage from external storage controllers. For more information about external virtualization, see Chapter 11, “External storage virtualization” on page 607.
Fibre Channel migration to a Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 requires the purchase of a pair of the optional 16 Gb FC adapter cards.
SAS migration on the Lenovo Storage V3700 V2 and Lenovo Storage V5030 systems requires the purchase of a pair of optional SAS adapter cards. SAS migration on the Lenovo Storage V3700 V2 XP can be performed without an adapter card by using the onboard SAS host attachment ports.
Table 7-1 Comparison of Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 models for storage migration
SAS device adapter (DA) migration:
- Lenovo Storage V3700 V2: With 12 Gb SAS cards
- Lenovo Storage V3700 V2 XP: Yes (onboard ports)
- Lenovo Storage V5030: With 12 Gb SAS cards
Important: If you are migrating volumes from another IBM Storwize for Lenovo product, be aware that the target Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 system needs to be configured at the replication layer for the source to discover the target system as a host. The default layer setting for the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 is storage.
Ensure that the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 systems are
configured as a replication layer system. Enter the following command:
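Based on the standard IBM Spectrum Virtualize CLI, the command is expected to be the following (verify it against your code level):
chsystem -layer replication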
If you do not enter this command, you cannot add the target system as a host on the
source storage system or see source volumes on the target system.
To change the source IBM Storwize for Lenovo system to the storage layer, enter the
following command:
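Again assuming the standard CLI syntax, the command is expected to be:
chsystem -layer storage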
For more information about layers and how to change them, see Chapter 10, “Copy
services” on page 451.
Ensure that all systems are running a software level that enables them to recognize the other
nodes in the cluster. Also, ensure that the systems use Fibre Channel adapters at the same
speed. To avoid performance bottlenecks, do not use a combination of 8 Gbps and 16 Gbps
links.
Examples of how to connect an IBM Storwize V5000 for Lenovo to a Lenovo Storage V5030
system are shown in Figure 7-1 and Figure 7-2 on page 329.
Figure 7-2 Fibre Channel connections using switches between the systems
Cable the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 directly to the external storage system that you want to migrate. Depending on the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 model, the cabling differs slightly. The Lenovo Storage V3700 V2 and Lenovo Storage V5030 need four SAS cables (two SAS cables per node canister) that are connected to the optional SAS card. The Lenovo Storage V3700 V2 XP needs four SAS cables (two SAS cables per node canister) that are connected to SAS port 2 and SAS port 3.
The IBM Storwize V3500 for Lenovo or IBM Storwize V3700 for Lenovo source systems
require two cables per node canister. Each canister must be connected to each Lenovo
Storage V3700 V2, V3700 V2 XP, and V5030 canister. On the IBM Storwize V3500 or V3700
for Lenovo, you can use SAS ports 1, 2, or 3. Do not use SAS port 4.
Examples of how to connect a Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 to the
IBM Storwize V3500/V3700 for Lenovo are shown in Figure 7-3 on page 330, Figure 7-4 on
page 330, and Figure 7-5 on page 331.
Figure 7-4 Connecting SAS cables from an IBM Storwize V3500 or V3700 for Lenovo to a Lenovo Storage V3700 V2 XP system
Figure 7-5 Connecting SAS cables from an IBM Storwize V3500 or V3700 for Lenovo to a Lenovo
Storage V5030 system
IBM Storwize V5000 for Lenovo source systems require two cables per node canister. Each
canister must be connected to each Lenovo Storage V3700 V2, V3700 V2 XP, and V5030
canister. On the IBM Storwize V5000 for Lenovo, you must use SAS port 1 or 2. Do not use
SAS port 3 or 4.
Examples of how to connect a Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 to an
IBM Storwize V5000 for Lenovo are shown in Figure 7-6, Figure 7-7 on page 332, and
Figure 7-8 on page 332.
Figure 7-6 Connecting SAS cables from an IBM Storwize V5000 for Lenovo system to a Lenovo
Storage V3700 V2 system
Figure 7-8 Connecting SAS cables from an IBM Storwize V5000 for Lenovo system to a Lenovo
Storage V5030 system
Migration considerations and configurations can vary depending on the type of system to be
migrated or virtualized. You can use an iSCSI attachment to migrate data from an IBM
Storwize for Lenovo and Lenovo Storage V series system to Lenovo Storage V3700 V2,
V3700 V2 XP, and V5030 systems. Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 do
not support iSCSI connections to migrate data from IBM Storwize V3500 for Lenovo.
You can use any available Ethernet port to establish iSCSI connectivity between the Lenovo
Storage V3700 V2, V3700 V2 XP, and V5030 systems and the backend storage controller.
Note: If you are using onboard Ethernet ports on a Lenovo Storage V3700 V2 or Lenovo
Storage V3700 V2 XP system, ensure that the onboard Ethernet port 2 on the system is
not configured to be used as the technician port.
To avoid performance bottlenecks, the iSCSI initiator and target systems must use Ethernet
ports at the same speed. Do not use a combination of Ethernet links that run at different
speeds.
For full redundancy and increased throughput, use two or more Ethernet switches. Similarly
numbered Ethernet ports on each node of each system must be connected to the same
switch. They must also be configured on the same subnet or VLAN.
Figure 7-9 shows iSCSI connections between a Lenovo Storage V5030 system (iSCSI
initiator) and an IBM Storwize V3700 for Lenovo system (iSCSI target).
Figure 7-9 iSCSI connections between a Lenovo Storage V5030 system (iSCSI initiator) and an IBM Storwize V3700 for Lenovo system (iSCSI target)
The System Migration panel provides access to the storage migration wizard and displays the
migration progress information. Click Start New Migration to begin the storage migration
wizard. Figure 7-11 shows the System Migration panel.
Important:
You might receive a warning message as shown in Figure 7-12 that indicates that no
externally attached storage controllers were found if you did not configure your zoning
correctly (or if the layer was incorrectly set if another IBM Storwize for Lenovo system is
attached). Click Close and correct the problem before you start the migration wizard
again.
The subsequent panels in the migration wizard, as shown in Figure 7-14 on page 337,
direct you to remove the host zoning to the external storage and create zones between
the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 and the external storage.
However, these steps must be performed before you start the wizard. For the list of
instructions to complete before you start the data migration wizard, see “Preparing the
environment for migration” on page 337 and “Mapping storage” on page 337.
Figure 7-12 Error message that is displayed when no external storage is found
Restrictions
Confirm that the following conditions are met:
You are not using the storage migration wizard to migrate cluster hosts, including clusters
of VMware hosts and Virtual I/O Servers (VIOS).
You are not using the storage migration wizard to migrate SAN Boot images.
If any of the restriction options cannot be selected, the migration must be performed outside of this wizard because more steps are required. For more information, see the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 Lenovo Information Center at this web page:
http://systemx.lenovofiles.com/help/topic/com.lenovo.storage.v5030.8.1.0.doc/lenovo_vseries.html
The VMware vSphere Storage vMotion feature might be an alternative for migrating VMware
clusters. For more information, see this web page:
http://www.vmware.com/products/vsphere/features/storage-vmotion.html
For more information about migrating SAN Boot images, see Appendix A, “CLI setup and
SAN Boot” on page 761.
Prerequisites
Confirm that the following prerequisites apply:
Ensure that the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030, existing storage
system, hosts, and Fibre Channel ports are physically connected to the SAN fabrics.
If VMware ESX hosts are involved in the data migration, ensure that the VMware ESX
hosts are set to allow volume copies to be recognized. For more information, see the
VMware ESX product documentation at this web page:
http://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html?
If all options can be selected, click Next. In all other cases, the Next button is not available, and the data must be migrated without the use of this wizard.
Mapping storage
Follow the instructions that are shown in the Map Storage panel that is shown in Figure 7-15
on page 338 and click Next. Record all of the details carefully because the information is
used in later panels. Table 7-2 on page 338 shows an example table for you to capture the
information that relates to the external storage system LUs.
SCSI ID: Record the SCSI ID of the LUs to which the host is originally mapped. Certain
operating systems do not support the change of the SCSI ID during the migration.
Table 7-3 shows an example table to capture host information.
After all of the data is collected and the tasks are performed in the Map Storage section, click Next. The Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 run the discover devices task and then display the Migrating MDisks panel.
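If the external LUs do not appear, you can trigger a manual rescan from the CLI. The following commands are standard Spectrum Virtualize commands, shown here only as a reference sketch:
detectmdisk
lsmdisk
The lsmdisk output should then list the newly discovered unmanaged MDisks.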
Migrating MDisks
Select the MDisks from the existing storage system to migrate and click Next. Figure 7-16
shows the Migrating MDisks panel.
The Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 run the Import MDisks task and then display the Configure Hosts panel.
MDisk selection: Select only the MDisks that are applicable to the current migration plan.
After the current migration completes, you can start another migration to migrate any
remaining MDisks.
Note: This step is optional. You can bypass it by selecting Next and moving to “Mapping
volumes to hosts” on page 341.
Follow this step of the wizard to select or configure new hosts as required. Figure 7-17 shows
the Configure Hosts (optional) panel. If hosts are defined, they are listed in the panel as
shown in Figure 7-19 on page 341. If no hosts are defined, they can be created by selecting
the Add Host option.
Select your connection type, name the host and assign the ports (in this case, Fibre Channel
WWPNs). In the advanced settings, assign the I/O group ownership and host type as shown
in Figure 7-18 on page 341. Click Add to complete the task. For more information about I/O
group assignment, see Chapter 5, “Host configuration” on page 189.
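The same host object can also be created from the CLI. The following command is a sketch with hypothetical values (host name RHEL_HOST_1 and example WWPNs); the -type and -iogrp parameters are optional:
mkhost -name RHEL_HOST_1 -fcwwpn 2100000E1E30E597:2100000E1E30E598 -type generic -iogrp 0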
Figure 7-18 The details to add a host are complete
The host is listed in the original Configure Hosts (optional) panel, as shown in Figure 7-19.
Click Next to display the Map Volumes to Host (optional) panel.
Note: This step is optional. You can bypass it by selecting Next and moving to “Selecting a
storage pool” on page 344.
Use this step of the wizard to select volumes that were migrated from the external storage system to the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 and map them to hosts. Hold Ctrl and click the volume names to select multiple volumes. Click Map to Host to begin mapping the selected volumes.
The image mode volumes are listed. Their names are assigned automatically by the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 storage systems. To change a name to something more meaningful, select the volume and click Rename in the Actions menu.
Names: The names of the image mode volumes must begin with a letter. The name can be
a maximum of 63 characters. You can use the following valid characters:
Uppercase letters (A - Z)
Lowercase letters (a - z)
Digits (0 - 9)
Underscore (_)
Period (.)
Hyphen (-)
Blank space
From the host list, select the hosts to which the imported volumes will be mapped, as shown in Figure 7-21 on page 343, and click Next.
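For reference, the renaming and mapping steps correspond to the chvdisk and mkvdiskhostmap CLI commands. The names in this sketch are hypothetical:
chvdisk -name MIGRATED_VOL_1 controller0_0000000000000001
mkvdiskhostmap -host RHEL_HOST_1 MIGRATED_VOL_1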
Figure 7-21 Modify host mappings
Note: If you select Host Clusters in the Create Mapping panel you need to ensure that the
host cluster has consistent access to I/O groups. If each host in the host cluster does not
have access to the same I/O groups, mappings to volumes will fail.
A confirmation screen is displayed with the task summary as shown in Figure 7-22 on
page 344. Click Map Volumes to finish the task and return to the Map Volumes to Hosts
(optional) panel.
Figure 7-23 shows that the host mappings are in place for the chosen volumes. Click Next.
Figure 7-23 Map Volumes to Hosts panel with completed host mappings
Note: This step is optional. You can bypass it by avoiding a pool selection, clicking Next
and moving to “Finishing the storage migration wizard” on page 345.
To continue with the storage migration wizard, select a storage pool to migrate the imported
volumes to, as shown in Figure 7-24. Click Next to proceed to the last panel of the storage
migration wizard. The process uses the volume mirroring function that is included within the
Lenovo Storage V3700 V2, V3700 V2 XP, and V5030.
Then, select Actions → Finalize as shown in Figure 7-27. Alternatively, right-click the
selected volumes and click Finalize.
You are asked to confirm the number of volume migrations that you want to finalize as shown
in Figure 7-28. Verify that the volume names and the number of migrations are correct and
click OK.
When the finalization completes, the data migration to the Lenovo Storage V3700 V2, V3700
V2 XP, and V5030 is completed. The zoning can be removed and the external storage system
can be retired.
This chapter focuses on advanced host and volume administration topics. The first part of it
describes the following host administration topics:
8.1, “Advanced host administration” on page 350
8.2, “Adding and deleting host ports” on page 367
The second part of the chapter consists of the following volume-related tasks:
8.3, “Advanced volume administration” on page 373
8.4, “Volume properties and volume copy properties” on page 386
8.5, “Advanced volume copy functions” on page 390
8.6, “Volumes by storage pool” on page 399
8.7, “Volumes by host” on page 400
Select a host and click Actions (as shown in Figure 8-3 on page 351) or right-click the host to
show the available actions.
Figure 8-3 Actions menu on the Hosts panel
As shown in Figure 8-3, several actions are associated with host mapping. For more
information, see 8.1.1, “Modifying volume mappings” on page 351 and 8.1.2, “Unmapping
volumes from a host” on page 354.
Select the volume that you need to map to the host. Also, indicate whether the system assigns the SCSI ID or you assign it yourself. In this example, the RHEL_2_VOL_3 volume is mapped with a user-supplied SCSI ID, as shown in Figure 8-6 on page 353.
Figure 8-6 Selecting the volume and SCSI ID for mapping to a host
Click Next. As shown in Figure 8-7, a window opens where you can provide the SCSI ID to be used for the mapping. The right pane also shows the SCSI IDs that are already used for mapping other volumes to the same host.
Important: The Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 automatically assign
the lowest available SCSI ID if none is specified. However, you can set a SCSI ID for the
volume. The SCSI ID cannot be changed while the volume is assigned to the host.
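From the CLI, the SCSI ID can be specified explicitly on the mapping command, and the IDs that are already in use for the host can be listed first. This sketch uses hypothetical names and a hypothetical SCSI ID:
lshostvdiskmap RHEL_HOST_1
mkvdiskhostmap -host RHEL_HOST_1 -scsi 3 RHEL_2_VOL_3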
Figure 8-10 Hosts
2. From the right pane, select the host for which you want to unmap a volume, and then click Actions, as shown in Figure 8-11.
3. Click Modify Volume Mappings. A window opens with a list of the volumes that are currently mapped to the selected host, as shown in Figure 8-12 on page 356.
4. Select the volume that you want to unmap and then click Remove Volume Mappings, as shown in Figure 8-13.
Note: To unmap multiple consecutive volumes, hold the Shift key and select each consecutive volume in the window. If the volumes that you want to unmap are not consecutive, hold the Ctrl key and select each volume.
5. A window that lists the volumes to be unmapped opens, as shown in Figure 8-14 on page 357.
Figure 8-14 Remove volume mapping summary window
6. Click Remove Volumes. A task completion confirmation window opens, as shown in Figure 8-15.
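The equivalent CLI action is the rmvdiskhostmap command; the volume and host names here are hypothetical:
rmvdiskhostmap -host RHEL_HOST_1 RHEL_2_VOL_3
Run the appropriate unmount or rescan procedures on the host operating system before and after removing a mapping.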
2. Select the host that needs to be renamed and click Actions, as shown in Figure 8-17 on page 359.
Figure 8-17 Selecting Rename action for host
3. A window opens where you can enter the new name, as shown in Figure 8-18.
4. Type the new host name and click Rename, as shown in Figure 8-19.
5. The selected host is renamed, and a task completion confirmation window is shown, as in Figure 8-20 on page 360.
6. Click Close.
Figure 8-21 Hosts
2. Select the host that needs to be removed and click Actions, as shown in Figure 8-22 on page 362.
3. Verify the number of hosts you are removing, along with confirmation to remove the host
even if the host has mapped volumes, as shown in Figure 8-23.
Note: If a host has volumes mapped, to remove the host you must select the check box for Remove the hosts even if volumes are mapped to them to force the action. These volumes will no longer be accessible to the host.
4. A task completion window will open as shown in Figure 8-24 on page 363.
Figure 8-24 Host removal task completion
5. Click Close.
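The equivalent CLI command is rmhost; the -force flag is required if volumes are still mapped to the host. The host name here is hypothetical:
rmhost -force RHEL_HOST_1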
Overview
To open the Host Details panel, select the host. From the Actions menu, click Properties.
You also can highlight the host and right-click to access the Actions menu, as shown in
Figure 8-25 on page 364.
Figure 8-26 shows the Overview tab of the Host Details panel. Select the Show Details
check box in the lower-left corner of the window to see more information about the host.
Host ID: Host object identification number.
Status: The current host object status. This value can be Online, Offline, or Degraded.
Host type: The type of host can be Generic, Generic (hidden secondary volumes), HP/UX,
OpenVMS, Target Port Group Support (TPGS), and VMware Virtual Volume (VVOL).
Number of Fibre Channel (FC) ports: The number of host Fibre Channel ports.
Number of Internet SCSI (iSCSI) ports: The number of host iSCSI names or host iSCSI
qualified names (IQN) IDs.
Number of serial-attached SCSI (SAS) ports: The number of host SAS ports.
I/O group: The I/O group from which the host can access a volume (or volumes).
iSCSI Challenge Handshake Authentication Protocol (CHAP) secret: The CHAP
information if it exists or if it is configured.
To change the host properties, click Edit. Several fields can be edited, as shown in
Figure 8-27.
For the host type, choose one of these values: Generic, Generic (hidden secondary
volumes), HP/UX, OpenVMS, TPGS, or VVOL.
After you change any host information, click Save to apply your changes.
Mapped Volumes
Figure 8-28 on page 366 shows the Mapped Volumes tab, which provides an overview of the
volumes that are mapped to the host. This tab provides the following information:
SCSI ID
Volume name
Unique identifier (UID)
Caching I/O group ID
Port Definitions
Figure 8-29 on page 367 shows the Port Definitions tab, which shows the configured host
ports and their status. This tab provides the following information:
Name: The worldwide port names (WWPNs) (for SAS and FC hosts) or iSCSI Qualified
Name (IQN) for iSCSI hosts
Type: Port type
Status: Current port status
Number of nodes that are logged in: Lists the number of Lenovo Storage V3700 V2,
V3700 V2 XP, and V5030 node canisters that each port (initiator port) is logged in to
Figure 8-29 Host port definitions
Note: You can also add additional ports for the host using this window.
2. Select the type of port that you want to add. In this example, we chose Fibre Channel port
as shown in Figure 8-31.
3. A window opens with a drop-down for you to choose the desired WWPN to be added as
shown in Figure 8-32 on page 369.
Figure 8-32 Drop-down of WWPNs
Note: If the WWPN does not show in the drop-down list, click Rescan and try again. If the
port does not show up even after the rescanning, then check the zoning.
4. Select the desired WWPN and click Add Port to List as shown in Figure 8-33.
5. The selected port will be shown under Port Definitions as shown in Figure 8-34 on
page 370.
Note: If the selected port is not the desired one, then you can click on the red cross to
delete it from the selection.
6. Click Add Ports to List. A task completion window will be shown as in Figure 8-35.
7. The Host Details window will now show the ports defined for the host, including the
recently added one as shown in Figure 8-36 on page 371.
Figure 8-36 Port Definitions after adding a port
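The same port can be added from the CLI with the addhostport command; the WWPN and host name below are hypothetical:
addhostport -fcwwpn 2100000E1E30E599 RHEL_HOST_1
For iSCSI or SAS hosts, the -iscsiname or -saswwpn parameter is used instead of -fcwwpn.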
3. Verify the number of ports and the port to be deleted, as shown in Figure 8-39.
4. Click Delete. A window indicating that the port deletion task completed is shown in Figure 8-40 on page 373.
Figure 8-40 Port deletion task completed
5. A window will be shown with the current ports for the selected host as shown in
Figure 8-41.
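The CLI equivalent is rmhostport, again shown with hypothetical values:
rmhostport -fcwwpn 2100000E1E30E599 RHEL_HOST_1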
This panel lists all configured volumes on the system and provides the following information:
Name: Shows the name of the volume. If a twisty sign (>) appears before the name, two
copies of this volume exist. Click the twisty sign (>) to expand the view and list the volume
copies, as shown in Figure 8-43.
State: Provides the status information about the volume, which can be online, offline, or
degraded.
Synchronized: For mirrored volumes, indicates whether the copies are synchronized.
Pool: Shows in which storage pool the volume is stored. The primary copy, which is
marked with an asterisk (*), is shown unless you expand the volume copies.
UID: The volume unique identifier.
Host mappings: Shows whether a volume has host mappings: Yes when at least one host mapping exists and No when no host mappings exist.
Capacity: The disk capacity that is presented to the host. If a blue volume is listed before
the capacity, this volume is a thin-provisioned volume. Therefore, the listed capacity is the
virtual capacity, which might be larger than the real capacity on the system.
Global Mirror Change Volume: Indicates whether a volume is a change volume for a
Global Mirror relationship or not.
Tip: Right-click anywhere in the blue title bar to customize the volume attributes that are
displayed. You might want to add useful information, such as the caching I/O group and
the real capacity.
To create a volume, click Create Volumes and complete the steps as described in Chapter 6,
“Volume configuration” on page 269.
Right-clicking or selecting a volume and opening the Actions menu shows the available
actions for a volume, as shown in Figure 8-44 on page 376.
Depending on the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 model capabilities
and volume types, the following volume actions are available:
Rename (8.3.4, “Renaming a volume” on page 378)
Map to Host or Host Cluster (8.3.2, “Unmapping volumes from all hosts” on page 377)
Shrink (8.3.5, “Shrinking a volume” on page 378)
Expand (8.3.6, “Expanding a volume” on page 380)
Modify capacity savings (Choose between none, Thin Provisioning, and Compression.)
Modify mirror synchronization rate (Set the synchronization rate value. For more
information, see 8.4, “Volume properties and volume copy properties” on page 386.)
Cache mode (Choose between Enabled, Read Only, and Disabled.)
Modify open VMS unit device identifier (UDID)
Unmap all hosts (8.3.2, “Unmapping volumes from all hosts” on page 377)
View mapped hosts (8.3.3, “Viewing which host is mapped to a volume” on page 378)
Modify I/O group (only applicable to multiple I/O group systems)
Space Savings (only for compressed volumes)
Migrate to another pool (8.3.7, “Migrating a volume to another storage pool” on page 380)
Export to image mode (8.3.8, “Exporting to an image mode volume” on page 381)
Duplicate (8.3.10, “Duplicating a volume” on page 383)
Add volume copy (8.3.11, “Adding a volume copy” on page 385)
Enable access to stale copy (Available for IBM HyperSwap volumes if the copy is not
up-to-date and inaccessible but contains consistent data from an earlier time)
Edit Throttle
View All Throttles
Delete (8.3.9, “Deleting a volume” on page 383)
Volume Copy Actions (see 8.5, “Advanced volume copy functions” on page 390)
Modify Properties
Properties (8.4, “Volume properties and volume copy properties” on page 386)
Other actions are available for copies of volumes. For more information, see 8.5, “Advanced volume copy functions” on page 390.
8.3.2 Unmapping volumes from all hosts
To remove all host mappings from a volume, select Unmap All Hosts from the Actions menu.
This action removes all host mappings, which means that no hosts can access this volume.
Confirm the number of mappings to remove, and click Unmap, as shown in Figure 8-45.
Important: Ensure that the required procedures are run on the host OS before you run the
unmapping procedure.
To remove a mapping, highlight the host and click Unmap from Host. If several hosts are
mapped to this volume (for example, in a cluster), only the selected host is removed.
Click Reset to reset the name field to the original name of the volume. Click Rename to apply
the changes. Click Close to close the panel.
To shrink a volume, select Shrink from the Actions menu. Enter the new volume size or the value by which to shrink the volume, as shown in Figure 8-48.
Important: Before you shrink a volume, ensure that the host OS supports this capability. If
the OS does not support shrinking a volume, log disk errors and data corruption can occur.
Click Shrink to start the process. Click Close when the task completes to return to the
Volumes panel.
Run the required procedures on the host OS after the shrinking process.
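The CLI equivalent is shrinkvdisksize; this sketch shrinks a hypothetical volume by 10 GB:
shrinkvdisksize -size 10 -unit gb APP1_VOL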
Important: For volumes that contain more than one copy, you might receive a
CMMVC6354E error. Check the Running tasks window and wait for the copy to
synchronize. If you want the synchronization process to complete more quickly, increase
the rate by increasing the Mirror Sync Rate value in the Actions menu. When the copy is
synchronized, resubmit the shrink process.
Similar errors might occur if other tasks, for example, volume expand or format operations,
are running on the volume. The solution is to wait until these operations finish, then restart
the shrink process.
If the task completion dialog stays open, review the results of the operation and click Close to
return to the Volumes panel.
Run the required procedures on the host OS to use the full available space.
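The CLI equivalent for expanding a volume is expandvdisksize; this sketch expands a hypothetical volume by 10 GB:
expandvdisksize -size 10 -unit gb APP1_VOL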
Note: You can expand the capacity of volumes in Metro Mirror and Global Mirror
relationships that are in consistent_synchronized state if those volumes are using
thin-provisioned or compressed copies.
8.3.8 Exporting to an image mode volume
Image mode provides a direct block-for-block translation from a managed disk (MDisk) to a
volume with no virtualization. An image mode MDisk is associated with one volume only. This
feature can be used to export a volume to a non-virtualized disk and to remove the volume
from storage virtualization.
Note: Among the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 families, this
feature is available only on the Lenovo Storage V5030 storage system.
Select the volume that you want. From the Actions menu, choose Export to Image Mode, as
shown in Figure 8-50.
The Export to Image Mode wizard opens and displays the available MDisks. Select the MDisk
to which to export the volume, and click Next, as shown in Figure 8-51 on page 382.
Select a storage pool into which the image-mode volume is placed after the migration
completes, as shown in Figure 8-52.
Click Finish to start the migration. After the task is complete, check the results and click
Close to return to the Volumes panel.
Important: Use image mode to import or export existing data into or out of the Lenovo
Storage V3700 V2, V3700 V2 XP, and V5030. Migrate data from image mode MDisks to
other storage pools to benefit from storage virtualization.
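On the Lenovo Storage V5030, the same export can be started from the CLI with the migratetoimage command. This is a sketch with hypothetical names; the target MDisk must be unmanaged:
migratetoimage -vdisk APP1_VOL -mdisk mdisk10 -mdiskgrp ImageMode_Pool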
For more information about importing volumes from external storage, see Chapter 7,
“Storage migration” on page 323 and Chapter 4, “Storage pools” on page 139.
Click Delete to remove the selected volume or volumes from the system. After the task
completes, click Close to return to the Volumes panel.
Important: You must force the deletion if the volume has host mappings or if the volume is
used in FlashCopy mappings. To be cautious, always ensure that the volume has no
association before you delete it.
Important: Duplicating a volume does not duplicate the volume data. The duplicating task
creates a volume with the same preset and volume parameters as the source volume.
Duplicating mirrored and image-mode volumes is not supported.
The Duplicate Volume window, which is shown in Figure 8-55, can be used to change the
name of the new volume. By default, a sequence integer is appended to the name of the
volume to be duplicated.
Click Duplicate to start the process. If the task completion dialog stays on the window, review
the process results and click Close.
8.3.11 Adding a volume copy
If a volume consists of only one copy, you can add a second mirrored copy of the volume.
This second copy can be generic or thin-provisioned.
You can also use this method to migrate data across storage pools with different extent sizes.
To add a second copy, select the volume and click Actions → Add Volume Copy, as shown
in Figure 8-56.
Select the storage pool in which to create the copy. Select the capacity savings option: None, Thin-provisioned, or Compressed. Click Add, as shown in Figure 8-57 on page 386.
The copy is created after you click Add and data starts to synchronize as a background task.
If the task completion dialog stays on the window, review the results and click Close.
Now, the volume that is named APP1_VOL has two volume copies, which are stored in two
separate storage pools (Figure 8-58).
To open the advanced view of a volume, select Properties from the Actions menu. Click
View more details to show the full list of volume properties, as shown in Figure 8-59 on
page 387.
Figure 8-59 Volume details overview
To open the advanced view of a volume copy, select the copy of the volume that you want and
click Actions → Properties, as shown in Figure 8-60.
The Properties panel opens. Click View more details to show the full list of the volume copy
properties, which is shown in Figure 8-61 on page 389.
Figure 8-61 Volume copy properties
Modify the values if needed and click OK. After the task completes, check the results of the
operation and click Close to return to the Volumes panel.
Select the volume that has multiple copies. Click the twisty (>) to show all the copies. Then, select a volume copy and open the Actions menu to display the following volume copy actions (Figure 8-63).
Create a volume from this copy
Split into a new volume (8.5.2, “Splitting into a new volume” on page 393)
Make primary (8.5.1, “Volume copy: Make Primary” on page 391)
Validate volume copies (8.5.3, “Validate Volume Copies option” on page 394)
Delete (8.5.4, “Delete volume copy option” on page 397)
Modify properties (8.4, “Volume properties and volume copy properties” on page 386)
Properties (8.4, “Volume properties and volume copy properties” on page 386)
Each volume has a primary and a secondary copy, and the asterisk indicates the primary
copy. The two copies are always synchronized, which means that all writes are destaged to
both copies, but all reads are always performed from the primary copy. The maximum
configurable number of copies per volume is two. The roles of the copies can be changed.
To accomplish this task, select the secondary copy. Then, click Actions → Make Primary.
Usually, it is a preferred practice to place the volume copies on storage pools with similar
performance because the write performance is constrained if one copy is placed on a
lower-performance pool.
If you require high read performance, you can place the primary copy in a solid-state drive
(SSD) pool or an externally virtualized Flash System and then place the secondary copy in a
normal disk storage pool. This action maximizes the read performance of the volume and
guarantees that a synchronized second copy is in your less expensive disk pool. You can
migrate online copies between storage pools. For more information about how to select the
copy that you want to migrate, see 8.3.7, “Migrating a volume to another storage pool” on
page 380.
Click Make Primary and the role of the Copy 1 is changed to Primary, as shown in
Figure 8-66.
If the task completion dialog stays on the window, check the process output and click Close.
The volume copy feature is also a powerful option for migrating volumes, as described in
8.5.5, “Migrating volumes by using the volume copy features” on page 398.
8.5.2 Splitting into a new volume
If the two volume copies are synchronized, you can split one of the copies to a new volume
and map this volume to another host. From a storage point of view, this procedure can be
performed online, which means that you can split one copy from the volume and create a
copy from the remaining volume without affecting the host. However, if you want to use the
split copy for testing or backup, you must ensure that the data inside the volume is consistent.
Therefore, the data must be flushed to storage to make the copies consistent.
For more information about flushing the data, see your operating system documentation. The
easiest way to flush the data is to shut down the hosts or application before a copy is split.
In our example, volume APP1_VOL has two copies: Copy 0 is primary and Copy 1 is secondary.
To split a copy, click Split into New Volume (Figure 8-67) on any copy and the remaining
secondary copy automatically becomes the primary for the source volume.
Figure 8-68 shows the Split Volume Copy panel to specify a name for the new volume.
As shown in Figure 8-69, the copy appears as a new volume that is named
APP1_VOL_SPLITTED_VOL (as specified during the split process). The new volume can be
mapped to a host.
Figure 8-69 Volumes: New volume from the split copy operation
Important: If you receive error message code CMMVC6357E while you are splitting a volume
copy, click the Running Tasks icon to view the synchronization status. Then, wait for the
copy to synchronize and repeat the splitting process.
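The split can also be performed from the CLI with splitvdiskcopy, which creates the new volume in one step. The copy ID and names in this sketch mirror the example above but are otherwise hypothetical:
splitvdiskcopy -copy 1 -name APP1_VOL_SPLITTED_VOL APP1_VOL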
Figure 8-70 Actions menu: Validate Volume Copies
The validation process runs as a background process and might take time, depending on the
volume size. You can check the status in the Running Tasks window, as shown in Figure 8-73
on page 397.
Figure 8-73 Validate Volume Copies: Running Tasks
Confirm the deletion process by clicking Yes. Figure 8-75 shows the copy deletion warning
panel.
If the task completion dialog is still open after the copy is deleted, review the results of the
operation and click Close to return to the Volumes panel.
The easiest way to migrate volume copies is to use the migration feature that is described in
8.3.7, “Migrating a volume to another storage pool” on page 380. By using this feature, one
extent after another is migrated to the new storage pool. However, the use of volume copies
provides another way to migrate volumes if the storage pool extent sizes differ.
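The extent-based migration mentioned above corresponds to the migratevdisk CLI command. This sketch assumes hypothetical names and that both pools use the same extent size:
migratevdisk -vdisk APP1_VOL -mdiskgrp Tier2_Pool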
This migration process requires more user interaction with the Lenovo Storage V3700 V2,
V3700 V2 XP, and V5030 GUI, but it offers benefits. For example, we look at migrating a
volume from a tier 1 storage pool to a lower-performance tier 2 storage pool.
In step 1, you create the copy on the tier 2 pool, while all reads are still performed in the tier 1
pool to the primary copy. After the synchronization, all writes are destaged to both pools, but
the reads are still only from the primary copy.
Because the copies are fully synchronized, you can switch their roles online (step 3), and
analyze the performance of the new pool. After you test your lower performance pool, you
can split or delete the old copy in tier 1 or switch back to tier 1 in seconds if the tier 2 storage
pool did not meet your requirements.
8.6 Volumes by storage pool
To see the layout of volumes within pools, click Volumes by Pool, as shown in Figure 8-77.
The left pane is called the pool filter. The storage pools are displayed in the pool filter. For
more information about storage pools, see Chapter 4, “Storage pools” on page 139.
In the upper right, you see information about the pool that you selected in the pool filter. The
following information is also shown:
Pool icon: Because storage pools can have different characteristics, you can change the
storage pool icon by clicking it. For more information, see 4.2, “Working with storage
pools” on page 150.
Pool name: The name that was entered when the storage pool was created. Click it to
change the name, if needed.
Pool details: Shows you the information about the storage pools, such as the status,
number of managed disks, and Easy Tier status.
Volume allocation: Shows you the amount of capacity that is allocated to volumes from
this storage pool.
Also, you can create volumes from this panel. Click Create Volumes to open the Volume
Creation panel. The steps are described in Chapter 6, “Volume configuration” on page 269.
Selecting a volume and opening the Actions menu or right-clicking the volume shows the
same options as described in 8.3, “Advanced volume administration” on page 373.
The Volumes by Host panel opens, as shown in Figure 8-80.
The host filter is in the left pane of the view. Selecting a host shows its properties in the right
pane, such as the host name, number of ports, host type, and the I/O group to which it has
access.
The right pane, next to the host name, shows icons for Fibre Channel, iSCSI and SAS
connectivity. Depending on the type of host connectivity, the respective icon will be
highlighted and the other icons will be grayed out.
The volumes that are mapped to this host are listed in the table in lower-right part of the
panel.
You can create a volume from this panel. Click Create Volumes to open the same wizard as
described in Chapter 6, “Volume configuration” on page 269.
Selecting a volume and opening the Actions menu or right-clicking the volume shows the
same options as described in 8.3, “Advanced volume administration” on page 373.
All of these issues deal with data placement, relocation capabilities or data volume reduction.
Most of these challenges can be managed by having spare resources available, by moving
data and by using data mobility tools or operating systems features (such as host level
mirroring) to optimize storage configurations.
However, all of these corrective actions are expensive in terms of hardware resources, labor, and service availability. The ability to relocate data among the physical storage resources dynamically, or to effectively reduce the amount of data, transparently to the attached host systems, is becoming increasingly important.
savings in operational costs. However, the current acquisition cost per GB for flash is higher than for Enterprise serial-attached SCSI (SAS) and Nearline (NL) SAS drives.
Enterprise SAS drives replaced the old SCSI drives. They are common in the storage market.
They are offered in various capacities, spindle speeds and form factors. Nearline SAS is the
low-cost, large-capacity storage drive class, which is commonly offered at 7200 rpm spindle
speed.
It is critical to choose the correct mix of drives and the correct data placement to achieve
optimal performance at the lowest cost. Maximum value can be achieved by placing “hot”
data with high I/O density and low response time requirements on Flash. Enterprise class
disks are targeted for “warm” and Nearline for “cold” data that is accessed sequentially and at
lower rates.
This section describes the Easy Tier disk performance optimization function of the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030. It also describes how to activate the Easy Tier process, both for evaluation purposes and for automatic extent migration.
Easy Tier reduces the I/O latency for hot spots, but it does not replace storage cache. Easy
Tier and storage cache solve a similar access latency workload problem, but these methods
weigh differently in the algorithmic construction based on “locality of reference,” recency and
frequency. Because Easy Tier monitors I/O performance from the extent end (after cache), it
is able to pick up the performance issues that cache cannot solve and complement the overall
storage system performance.
In general, I/O in the storage environment is monitored at the volume level, and the entire volume is always placed in one appropriate storage tier. Determining the amount of I/O on single extents, moving them manually to an appropriate storage tier, and reacting to workload changes is too complex to manage by monitoring I/O statistics manually.
Easy Tier is a performance optimization function that overcomes this issue because it
automatically migrates (or moves) extents that belong to a volume between different storage
tiers, as shown in Figure 9-1 on page 406. Because this migration works at the extent level, it
is often referred to as sublogical unit number (LUN) migration.
You can enable Easy Tier for storage on a volume basis. It monitors the I/O activity and
latency of the extents on all volumes that are enabled for Easy Tier over a 24-hour period.
Based on the performance log, it creates an extent migration plan and dynamically moves
high activity or hot extents to a higher disk tier within the same storage pool. It also moves
extents in which the activity rate dropped off (or cooled) from higher disk tier managed disks
(MDisks) back to a lower tier MDisk.
To enable the migration between MDisks with different tier levels, the target storage pool must consist of MDisks with different characteristics. These pools are called multi-tiered storage pools. Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 Easy Tier is optimized to boost the performance of storage pools that contain Flash, Enterprise, and Nearline drives.
To identify the potential benefits of Easy Tier in your environment before you install higher
MDisk tiers (such as Flash), you can enable the Easy Tier monitoring on volumes in
single-tiered storage pools. Although the Easy Tier extent migration is not possible within a
single-tiered pool, the Easy Tier statistical measurement function is possible. Enabling Easy
Tier on a single-tiered storage pool starts the monitoring process and logs the activity of the
volume extents.
IBM Storage Tier Advisor Tool (STAT) is a no-cost tool that helps you analyze this data. If you
do not have a Lenovo Storage V3700 V2, V3700 V2 XP, and V5030, use the Disk Magic tool
to get a better idea about the required number of different drive types that are appropriate for
your workload.
Easy Tier is available for all the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 internal
volumes and volumes on external virtualized storage subsystems (V5030).
Figure 9-2 shows single-tiered storage pools, which include one type of disk tier attribute. Each disk, ideally, has the same size and performance characteristics. Multi-tiered storage pools are populated with two or more different disk tier attributes, such as high-performance flash drives, enterprise SAS drives, and Nearline SAS drives.
A volume migration occurs when the complete volume is migrated from one storage pool to
another storage pool. An Easy Tier data migration moves only extents inside the storage pool
to different performance attributes.
By default, Easy Tier is enabled on any pool that contains two or more classes of disk drives.
The Easy Tier function manages the extent migration:
Promote
Moves the candidate hot extent to a higher performance tier.
Warm demote:
– Prevents performance overload of a tier by demoting a warm extent to a lower tier.
– Triggered when bandwidth or I/O per second (IOPS) exceeds a predefined threshold.
Cold demote
Coldest extent moves to a lower tier.
Expanded cold demote
Demote appropriate sequential workload to the lowest tier to better use nearline
bandwidth.
Swap
Exchanges a cold extent in a higher performance tier with a hot extent in a lower performance tier.
Note: Extent migrations occur only between adjacent tiers within the same pool.
I/O Monitoring
Data Placement Advisor
Data Migration Planner
Data Migrator
The four main processes and the flow between them are described in the following sections.
Easy Tier ignores large block I/Os and considers only I/Os up to 64 KB as migration candidates.
IOM is an efficient process and adds negligible processing impact to the Lenovo Storage
V3700 V2, V3700 V2 XP, and V5030 node canisters.
This process also identifies extents that must be migrated back to a lower tier.
This rate equates to around 3 TB a day that is migrated between disk tiers. Figure 9-5 shows
the Easy Tier Data Migrator flow.
Easy Tier considers migration scenarios and scenarios where large amounts of data need to
be rebalanced in its internal algorithms.
Easy Tier accelerated mode was introduced in controller firmware version 7.5. Easy Tier
accelerated mode allows the system to cope with migration situations where the user needs
to speed up the Easy Tier function temporarily.
Normal Easy Tier migration speed is 12 GB every 5 minutes for all functions, except cold
demote, which is 1 GB every 10 minutes.
Accelerated mode allows an Easy Tier migration speed of 48 GB every 5 minutes with no limit
on cold demotes and no support for warm demotes.
You enable Easy Tier accelerated mode from the command line by using chsystem
-easytieracceleration on/off.
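For example, to turn accelerated mode on and then off again:
chsystem -easytieracceleration on
chsystem -easytieracceleration off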
Note: Accelerated mode is not intended for day-to-day Easy Tier traffic. Turn on
accelerated mode when necessary. Because Easy Tier accelerated mode can increase the
workload on the system temporarily, use Easy Tier accelerated mode during periods of
lower system activity.
Note: When a new pool is created, the default Easy Tier status shown in the pool properties is Balanced, but the pool does not benefit from the Storage Pool Balancing feature unless it contains multiple MDisks.
It assesses the extents in a storage tier and balances them automatically across all MDisks
within that tier. Storage Pool Balancing moves the extents to achieve a balanced workload
distribution and avoid hotspots. Storage Pool Balancing is an algorithm that is based on
MDisk IOPS usage, which means that it is not capacity-based but performance-based. It
works on a 6-hour performance window.
When a new MDisk is added to an existing storage pool, Storage Pool Balancing can
automatically balance the extents across all MDisks in the pool, if required.
Volume mirroring: Volume mirroring can have different workload characteristics for
each copy of the data because reads are normally directed to the primary copy and
writes occur to both copies. Therefore, the number of extents that Easy Tier migrates
probably differs for each copy.
Easy Tier works with all striped volumes, including these types of volumes:
– Generic volumes
– Thin-provisioned volumes
– Mirrored volumes
– Thin-mirrored volumes
– Global and Metro Mirror sources and targets
Easy Tier automatic data placement is not supported for image mode or sequential
volumes. I/O monitoring for these volumes is supported, but you cannot migrate extents
on these volumes unless you convert image or sequential volume copies to striped
volumes.
The Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 create volumes or volume
expansions by using extents from MDisks from the Enterprise and Nearline tier. Extents
from MDisks in the Flash tier are used if Enterprise space and Nearline space are not
available.
When a volume is migrated out of a storage pool that is managed with Easy Tier,
Automatic Data Placement Mode is no longer active on that volume. Automatic Data
Placement is also turned off while a volume is migrated, even if it is between pools that
both have Easy Tier Automatic Data Placement enabled. Automatic Data Placement for
the volume is re-enabled when the migration is complete.
Flash drive performance depends on block size. (Small blocks perform better than large
blocks.) Easy Tier measures I/O blocks that are smaller than 64 KB, but it migrates the
entire extent to the appropriate disk tier.
As extents are migrated, the use of smaller extents makes Easy Tier more efficient.
The first migration starts about 1 hour after Automatic Data Placement Mode is enabled. It
takes up to 24 hours to achieve optimal performance.
In the current Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 Easy Tier implementation, it takes about two days before hot spots are considered for movement from tier to tier, which prevents hot spots from being moved off a fast tier if the workload changes over a weekend.
If you run an unusual workload over a longer period, Automatic Data Placement can be turned off and on again online to avoid unnecessary data movement.
Depending on which storage pool and which Easy Tier configuration is set, a volume copy
can have the Easy Tier states that are shown in Table 9-1 on page 414.
For example, a volume copy with Easy Tier turned on in a multi-tiered pool that is managed by Easy Tier has an Easy Tier status of Active.
When a storage pool changes from single-tiered to multi-tiered, Easy Tier is enabled by
default for the pool and on all volume copies inside this pool. The current release of Easy Tier
supports up to three tiers of storage (Flash, Enterprise, and Nearline).
In this example, we create a pool that contains Enterprise and Nearline MDisks.
Figure 9-7 Selecting create in the Pools panel
2. Provide a name for the new pool and click Create. The Encryption option is available if the system has an encryption license enabled; click Enable if you want to enable pool encryption. If you navigate to Settings → GUI Preferences and click General, the Advanced pool settings option can be selected, which allows you to define the extent size during pool creation, as shown in Figure 9-8.
4. To show the pool properties, select the pool and select Properties from the Actions menu.
Alternatively, right-click the pool and select Properties. Clicking View more details in the
bottom-left of the panel will display additional pool information. No storage is assigned to
the pool at the time of its creation and the Easy Tier default status is set to Balanced as
shown in Figure 9-10 on page 417.
Figure 9-10 Pool properties panel
6. The Assign Storage to Pool panel offers two options to configure the storage into the pool:
Quick Internal or Advanced Internal Custom. Figure 9-12 shows the Quick Internal panel.
The Quick Internal panel provides a recommended configuration that is based on the
number and type of installed drives. You can use the Advanced Internal Custom panel to
configure the specific drive types, Redundant Array of Independent Disks (RAID) levels,
spares, stripe width, and array width.
In the following steps the Advanced Internal Custom option is used to create a
single-tiered storage pool and then a multi-tiered storage pool by including another drive
class to the single-tiered one. Each drive class needs to be included separately.
8. From the Advanced Internal Custom panel, select the required drive class, RAID type,
number of spares, stripe width and array width. Click Assign to add the storage to the
pool, as shown in Figure 9-14.
10.Repeat steps 7 and 8 to add a second drive class as shown in Figure 9-16.
11. Select the pool to which the second drive class was added. Select Properties from the
Actions Menu and click View more details in the Properties panel. With two different tiers
the Easy Tier status is automatically changed to Active (Figure 9-17 on page 424) and
starts to manage the extents within the pool by promoting or demoting them.
Note: Adding multiple MDisks of the same drive class will result in a single-tiered pool with
Balanced Easy Tier status, which will only benefit from the Storage Pool Balancing feature.
12.Navigate to Pools → MDisks by Pools to see the MDisks that were created within the
pool with two different drive classes. The tier information is not a default column in the
MDisks by Pools panel. To access the tier information, right-click the gray header and select Tier. Each MDisk then displays its tier class, as shown in Figure 9-18.
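The same configuration can also be built from the CLI. The following lines are a minimal, hedged sketch rather than the procedure from this chapter; the pool name, extent size, RAID levels, and drive IDs are illustrative and must be adapted to your environment. The pool becomes multi-tiered (and Easy Tier changes to Active, as described in step 11) only if the two sets of drives belong to different drive classes:
IBM_Storwize:ITSO_V5000:superuser>mkmdiskgrp -name Multi_Tier_Pool -ext 1024
IBM_Storwize:ITSO_V5000:superuser>mkarray -level raid6 -drive 0:1:2:3:4:5 Multi_Tier_Pool
IBM_Storwize:ITSO_V5000:superuser>mkarray -level raid10 -drive 6:7:8:9 Multi_Tier_Pool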
If you create a volume within a multi-tiered storage pool and navigate to Volumes →
Volumes by Pool panel, details such as the number of MDisks and the number of
volumes within the selected pool are displayed. The pool icon for an Easy Tier Pool differs
from the pools without Easy Tier enabled as shown in Figure 9-19.
If the Easy Tier Status column is enabled, the Easy Tier Status of each volume is
displayed. Volumes inherit the Easy Tier state of their parent pool, but Easy Tier can be
toggled on or toggled off at the volume level, if required. See “Enabling or disabling Easy
Tier on single volumes” on page 429.
If external storage is used as a drive class, you must select the drive class type manually
and add the external MDisks to a storage pool. If the internal storage and the external
storage are in different drive classes, this action also changes the storage pool to a
multi-tiered storage pool and enables Easy Tier on the pool and associated volumes.
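If an external MDisk is reported in the wrong class, its tier can be set manually from the CLI with the chmdisk command. This is a hedged sketch; the MDisk name is illustrative and the tier keyword can differ between code levels:
IBM_Storwize:ITSO_V5000:superuser>chmdisk -tier enterprise mdisk8
IBM_Storwize:ITSO_V5000:superuser>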
Heat data files are produced approximately once a day (that is, roughly every 24 hours) when
Easy Tier is active on one or more storage pools.
Click Download Support Package to open the Download New Support Package or Log
File panel, as shown in Figure 9-21.
To download the Easy Tier log files you have two options:
– Choose one of the Snap Types shown in Figure 9-21 and click Download. The entire
support package is downloaded and the Easy Tier log file is available within it.
– Click Download Existing Package to open the panel shown in Figure 9-22 on
page 427. Select the required Easy Tier log file and click Download.
Figure 9-22 Select Support Package or Logs to Download panel
Before you use the CLI, you must configure CLI access, as described in Appendix A, “CLI
setup and SAN Boot” on page 761.
Readability: In the examples that are shown in this section, we deleted many unrelated
lines in the command output or responses so that you can focus on the information that
relates to Easy Tier.
To enable Easy Tier on a single-tiered storage pool in measure mode, run the chmdiskgrp
-easytier measure storage pool name command, as shown in Example 9-2.
Example 9-2 Enable Easy Tier in measure mode on a single-tiered storage pool
IBM_Storwize:ITSO_V5000:superuser>chmdiskgrp -easytier measure Enterprise_Pool
IBM_Storwize:ITSO_V5000:superuser>
Check the status of the storage pool by running the lsmdiskgrp storage pool name command again, as shown in Example 9-3.
Easy Tier measure mode does not place any data; it collects statistics for measurement only. For more information about downloading the I/O statistics, see 9.2.14, “Downloading Easy Tier I/O measurements” on page 425.
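For orientation, the following is an abbreviated, hypothetical sketch of the fields that lsmdiskgrp reports for a pool in measure mode; the pool name and values shown are illustrative only:
IBM_Storwize:ITSO_V5000:superuser>lsmdiskgrp Enterprise_Pool
id 1
name Enterprise_Pool
status online
easy_tier measure
easy_tier_status measured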
Enabling or disabling Easy Tier on single volumes
By default, enabling Easy Tier on a storage pool also enables it for the volume copies that are
inside the selected pool. This setting applies to multi-tiered and single-tiered storage pools. It
is also possible to turn on and turn off Easy Tier for single volume copies.
Before you disable Easy Tier on a single volume, run the svcinfo lsmdiskgrp storage pool name command to list all storage pools that are configured, as shown in Example 9-4. In our example, Multi_Tier_Pool is the storage pool that is used as a reference.
Run the svcinfo lsvdisk command to show all configured volumes within your Lenovo
Storage V3700 V2, V3700 V2 XP, and V5030, as shown in Example 9-5. For this example,
we are only interested in a single volume.
To disable Easy Tier on single volumes, run the svctask chvdisk -easytier off volume
name command, as shown in Example 9-6.
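The command itself is a single line; the following is a minimal sketch with an illustrative volume name:
IBM_Storwize:ITSO_V5000:superuser>svctask chvdisk -easytier off Volume_1
IBM_Storwize:ITSO_V5000:superuser>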
This command disables Easy Tier on all copies of the volume. Example 9-7 shows Easy Tier turned off for copy 0, even though Easy Tier is still enabled on the storage pool. The status for copy 0 changed to measured because the pool is still actively measuring the I/O on the volume.
To enable Easy Tier on a volume, run the svctask chvdisk -easytier on volume name
command (as shown in Example 9-8). Easy Tier changes back to on (as shown in
Example 9-9). The copy 0 status also changed back to active.
copy_id 0
status online
mdisk_grp_id 1
mdisk_grp_name Multi_Tier_Pool
type striped
mdisk_id
mdisk_name
used_capacity 5.00GB
real_capacity 5.00GB
free_capacity 0.00MB
overallocation 100
easy_tier on
easy_tier_status active
tier ssd
tier_capacity 1.00GB
tier enterprise
tier_capacity 4.00GB
tier nearline
tier_capacity 0.00MB
compressed_copy no
uncompressed_used_capacity 5.00GB
parent_mdisk_grp_id 1
parent_mdisk_grp_name Multi_Tier_Pool
IBM_Storwize:ITSO_V5000:superuser>
The output provides a graphical representation of the performance data that is collected by
Easy Tier over a 24-hour operational cycle.
The tool comes packaged as an International Organization for Standardization (ISO) file,
which needs to be extracted to a temporary folder. The STAT can be downloaded from the
following link:
https://ibm.biz/BdEfrX
On Windows navigate to Start → Run, enter cmd, and then click OK to open a command
prompt.
Typically, the tool is installed in the C:\Program Files\IBM\STAT directory. The command that creates the index and other data files has the following parameters:
Example 9-10 shows the command to create the report and the message that is displayed
when it is successfully generated.
C:\EasyTier>
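As a hedged illustration only (the executable name, the -o output-directory option, and the heat-file name shown are assumptions that can differ in your installed version of STAT), the invocation typically resembles the following:
C:\Program Files\IBM\STAT>STAT.exe -o C:\EasyTier dpa_heat.node1.data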
IBM Storage Tier Advisor Tool creates a set of HTML files. Browse to the directory where you
directed the output file and locate the file that is named index.html. Open the file by using
your browser to view the report.
Thin provisioning presents more storage space to the hosts or servers that are connected to
the storage system than is available on the storage system. The Lenovo Storage V3700 V2,
V3700 V2 XP, and V5030 support this capability for Fibre Channel (FC) and Internet Small
Computer System Interface (iSCSI) provisioned volumes.
An example of thin provisioning is when a storage system contains 5000 GiB of usable
storage capacity, but the storage administrator mapped volumes of 500 GiB each to 15 hosts.
In this example, the storage administrator makes 7500 GiB of storage space visible to the
hosts, even though the storage system has only 5000 GiB of usable space, as shown in
Figure 9-23 on page 433. In this case, all 15 hosts cannot immediately use all 500 GiB that is
provisioned to them. The storage administrator must monitor the system and add storage as
needed.
Figure 9-23 Thin provisioning concept
Real capacity defines how much disk space is allocated to a volume. Virtual capacity is the
capacity of the volume that is reported to other Lenovo Storage V3700 V2, V3700 V2 XP, and
V5030 components (such as FlashCopy or remote copy) and to the hosts. For example, you
can create a volume with real capacity of only 100 GiB, but virtual capacity of 1 tebibyte (TiB).
The actual space used by the volume on Lenovo Storage V3700 V2, V3700 V2 XP, and
V5030 is 100 GiB, but hosts see a 1 TiB volume.
A directory maps the virtual address space to the real address space. The directory and the
user data share the real capacity.
You can switch the mode at any time. If you select the autoexpand feature, the Lenovo
Storage V3700 V2, V3700 V2 XP, and V5030 automatically add a fixed amount of more real
capacity to the thin volume as required. Therefore, the autoexpand feature attempts to
maintain a fixed amount of unused real capacity for the volume.
This amount is known as the contingency capacity. The contingency capacity is initially set to
the real capacity that is assigned when the volume is created. If the user modifies the real
capacity, the contingency capacity is reset to be the difference between the used capacity
and real capacity.
Warning threshold: Enable the warning threshold, by using email or a Simple Network
Management Protocol (SNMP) trap, when you work with thin-provisioned volumes. You
can enable the warning threshold on the volume, and on the storage pool side, especially
when you do not use the autoexpand mode. Otherwise, the thin volume goes offline if it
runs out of space.
Autoexpand mode does not cause real capacity to grow much beyond the virtual capacity.
The real capacity can be manually expanded to more than the maximum that is required by
the current virtual capacity and the contingency capacity is recalculated.
Space allocation
When a thin-provisioned volume is created, a small amount of the real capacity is used for
initial metadata. Write I/Os to the grains of the thin volume (that were not previously written to)
cause grains of the real capacity to be used to store metadata and user data. Write I/Os to the
grains (that were previously written to) update the grain where data was previously written.
Grain definition: The grain is defined when the volume is created, and can be 32 KiB,
64 KiB, 128 KiB, or 256 KiB.
Smaller granularities can save more space, but they have larger directories. When you use
thin-provisioning with FlashCopy, specify the same grain size for the thin-provisioned volume
and FlashCopy.
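As an alternative to the GUI procedure that follows, a thin-provisioned volume can also be created from the CLI with the mkvdisk command. This is a hedged sketch; the pool name, volume name, sizes, warning threshold, and grain size are illustrative:
IBM_Storwize:ITSO_V5000:superuser>mkvdisk -mdiskgrp Multi_Tier_Pool -iogrp 0 -size 1 -unit tb -rsize 10% -autoexpand -grainsize 256 -warning 80% -name thin_volume01
Virtual Disk, id [2], successfully created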
To create a thin-provisioned volume from the dynamic menu, complete the following steps:
1. Navigate to Volumes → Volumes and click Create Volumes as shown in Figure 9-24 on
page 435.
Figure 9-24 Volumes panel
2. Select the Custom tab. Specify the volume capacity, the type of the capacity saving and
the name of the volume. By selecting Thin-provisioned as the capacity saving method,
additional thin-provisioning parameters such as real capacity, autoexpand, warning
threshold and grain size are displayed as shown in Figure 9-25 on page 436. Fill the
necessary information for a single volume or click the + icon next to the volume name for
multiple volumes and click Create.
Note: A thin-provisioned volume can also be created by using the Basic or the Mirrored tabs within the Create Volumes panel, but you can customize the thin-provisioning parameters only through the Custom tab. If your system uses a HyperSwap topology, the Mirrored tab is replaced by the HyperSwap tab.
File system problems can be moderated by tools, such as defrag, or by managing storage by
using host Logical Volume Managers (LVMs). The thin-provisioned volume also depends on
how applications use the file system. For example, some applications delete log files only
when the file system is nearly full.
Important: Do not use defrag on thin-provisioned volumes. The defragmentation process
can write data to different areas of a volume, which can cause a thin-provisioned volume to
grow up to its virtual size.
Table 9-2 Maximum thin provisioned volume virtual capacities for an extent size
Extent size in megabytes (MB)    Maximum volume real capacity in gigabytes (GB)    Maximum thin virtual capacity in GB
Table 9-3 shows the maximum thin-provisioned volume virtual capacities for a grain size.
Table 9-3 Maximum thin volume virtual capacities for a grain size
Grain size in KiB Maximum thin virtual capacity in GiB
32     260,000
64     520,000
128    1,040,000
256    2,080,000
The space reduction occurs when the host writes the data. This process is unlike other
compression solutions in which some or all of the reduction is performed only after running a
post-process compression batch job.
For additional information on how to estimate compression ratios for each of the listed items,
see 9.4.9, “Comprestimator” on page 448.
General-purpose volumes
Most general-purpose volumes are used for highly compressible data types, such as home
directories, CAD/CAM, oil and gas geo-seismic data and log data. Storing such types of data
in compressed volumes provides immediate capacity reduction to the overall used space.
More space can be provided to users without any change to the environment.
Many file types can be stored in general-purpose servers. However, for practical information,
the estimated compression ratios are based on actual field experience.
File systems that contain audio, video files, and compressed files are not good candidates for
compression. The overall capacity savings on these file types are minimal.
Databases
Database information is stored in table space files. It is common to observe high compression
ratios in database volumes. Examples of databases that can greatly benefit from Real-Time
Compression are IBM DB2, Oracle and Microsoft SQL Server.
Virtualized infrastructures
The proliferation of open systems virtualization in the market has increased the use of
storage space, with more virtual server images and backups kept online. The use of
compression reduces the storage requirements at the source.
Examples of virtualization solutions that can greatly benefit from Real-time Compression are
VMware, Microsoft Hyper-V, and KVM.
Tip: Virtual machines with file systems that contain compressed files are not good candidates for compression, as described in “General-purpose volumes”.
At a high level, the IBM RACE component compresses data that is written into the storage
system dynamically. This compression occurs transparently, so Fibre Channel and iSCSI
connected hosts are not aware of the compression. RACE is an online compression
technology, which means that each host write is compressed as it passes to the disks. This
technique has a clear benefit over other compression technologies that are post-processing
based.
RACE is based on the Lempel-Ziv lossless data compression algorithm and operates using a
real-time method. When a host sends a write request, it is acknowledged by the write cache
of the system and then staged to the storage pool. As part of its staging, it passes through the
compression engine and is then stored in compressed format into the storage pool.
Therefore, writes are acknowledged immediately after they are received by the write cache,
with compression occurring as part of the staging to internal or external physical storage.
Capacity is saved when the data is written by the host because the host writes are smaller
when they are written to the storage pool. IBM Real-time Compression is a self-tuning
solution. It is adapting to the workload that runs on the system at any particular moment.
Compression utilities
Compression is probably most known to users because of the widespread use of
compression utilities. At a high level, these utilities take a file as their input and parse the data
by using a sliding window technique. Repetitions of data are detected within the sliding
window history, most often 32 KiB. Repetitions outside of the window cannot be referenced.
Therefore, the file cannot be reduced in size unless data is repeated when the window
“slides” to the next 32 KiB slot.
Figure 9-26 shows compression that uses a sliding window, where the first two repetitions of
the string “ABCD” fall within the same compression window, and can therefore be
compressed by using the same dictionary. The third repetition of the string falls outside of this
window, and therefore cannot be compressed by using the same compression dictionary as
the first two repetitions, reducing the overall achieved compression ratio.
Traditional data compression in storage systems
The traditional approach taken to implement data compression in storage systems is an
extension of how compression works in the previously mentioned compression utilities.
Similar to compression utilities, the incoming data is broken into fixed chunks, and then each
chunk is compressed and extracted independently.
However, there are drawbacks to this approach. An update to a chunk requires a read of the
chunk followed by a recompression of the chunk to include the update. The larger the chunk
size chosen, the heavier the I/O penalty to recompress the chunk. If a small chunk size is
chosen, the compression ratio is reduced because the repetition detection potential is
reduced.
Figure 9-27 shows an example of how the data is broken into fixed-size chunks (in the
upper-left side of the figure). It also shows how each chunk gets compressed independently
into variable length compressed chunks (in the upper-right side of the figure). The resulting
compressed chunks are stored sequentially in the compressed output.
This method enables an efficient and consistent way to index the compressed data because it
is stored in fixed-size containers (Figure 9-28 on page 442).
Location-based compression
Both compression utilities and traditional storage systems compression compress data by
finding repetitions of bytes within the chunk that is being compressed. The compression ratio
of this chunk depends on how many repetitions can be detected within it. The number of
repetitions is affected by how much the bytes stored in the chunk are related to each other.
The relation between bytes is driven by the format of the object. For example, an office
document might contain textual information and an embedded drawing.
Because the chunking of the file is arbitrary, it has no concept of how the data is laid out
within the document. Therefore, a compressed chunk can be a mixture of the textual
information and part of the drawing. This process yields a lower compression ratio because
the different data types mixed together cause a suboptimal dictionary of repetitions, which means that fewer repetitions can be detected, because a repetition of bytes in a text object is unlikely to be found in a drawing.
This traditional approach to data compression is also called location-based compression. The
data repetition detection is based on the location of data within the same chunk.
Predecide mechanism
Some data chunks have a higher compression ratio than others. Compressing some of the
chunks saves little space, but still requires resources, such as processor (CPU) and memory.
To avoid spending resources on incompressible data, and to provide the ability to use a different, more effective compression algorithm for such data, RACE uses a predecide mechanism.
The chunks that are below a given compression ratio are skipped by the compression engine,
saving CPU time and memory processing. Chunks that are not compressed with the main
compression algorithm, but that still can be compressed well with the other, are marked and
processed accordingly. The result might vary because predecide does not check the entire
block, only a sample of it.
Temporal compression
RACE offers a technology leap beyond location-based compression, called temporal
compression. When host writes arrive at RACE, they are compressed and fill up fixed-size chunks, also known as compressed blocks. Multiple compressed writes can be aggregated
into a single compressed block. A dictionary of the detected repetitions is stored within the
compressed block.
When applications write new data or update existing data, it is typically sent from the host to
the storage system as a series of writes. Because these writes are likely to originate from the
same application and be of the same data type, more repetitions are usually detected by the
compression algorithm. This type of data compression is called temporal compression
because the data repetition detection is based on the time the data was written into the same
compressed block.
Temporal compression adds the time dimension that is not available to other compression
algorithms. It offers a higher compression ratio because the compressed data in a block
represents a more homogeneous set of input data.
When the same three writes are sent through RACE, as shown on Figure 9-31, the writes are
compressed together by using a single dictionary. This process yields a higher compression
ratio than location-based compression (Figure 9-31).
In addition, you can use Real-time Compression along with Easy Tier on the same volumes.
This compression method provides nondisruptive conversion between compressed and
decompressed volumes. This conversion provides a uniform user-experience and eliminates
the need for special procedures when dealing with compressed volumes.
When the upper cache layer destages to the RACE, the I/Os are sent to the thin-provisioning
layer. They are then sent to RACE, and if necessary, to the original host write or writes. The
metadata that holds the index of the compressed volume is updated if needed, and is
compressed as well.
3. After the copies are fully synchronized, the original volume copy is deleted automatically.
As a result, you have compressed data on the existing volume. This process is nondisruptive, so the data remains online and accessible by applications and users.
With virtualization of external storage systems, the ability to compress already stored data
significantly enhances and accelerates the benefit to users. It enables them to see a
tremendous return on their Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 investment.
On initial purchase of a Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 with Real-time Compression, customers can defer their purchase of new storage. As new storage needs arise, IT can purchase less capacity than would otherwise be required, because the data is stored compressed.
Important: Remember that the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 reserves some of its resources, such as CPU cores and RAM, as soon as you create a single compressed volume or volume copy. This reservation can affect your system performance if you do not plan for it in advance.
There are two ways of creating a compressed volume: Basic and Custom.
To create a compressed volume by using the Basic option, navigate to Volumes → Volumes and click Create Volumes. Select the Basic tab and fill in the required information, as shown in Figure 9-34. Click Create to finish the compressed volume creation.
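A compressed volume can also be created from the CLI by adding the -compressed parameter to a space-efficient volume definition. This is a hedged sketch with illustrative names and sizes:
IBM_Storwize:ITSO_V5000:superuser>mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 200 -unit gb -rsize 2% -autoexpand -compressed -name comp_volume01
Virtual Disk, id [3], successfully created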
If the volume being created through the Custom tab is the first compressed volume in the
environment, a warning message will be displayed as shown in Figure 9-36.
9.4.9 Comprestimator
The built-in Comprestimator is a command-line utility that can be used to estimate an
expected compression rate for a specific volume.
accuracy level by showing the maximum error range of the results achieved based on the
formulas that it uses.
Example 9-11 Example of the command run over one volume with ID 0
IBM_2078:ITSO Gen2:superuser>lsvdiskanalysis 0
id 0
name SQL_Data0
state estimated
started_time 151012104343
analysis_time 151012104353
capacity 300.00GB
thin_size 290.85GB
thin_savings 9.15GB
thin_savings_ratio 3.05
compressed_size 141.58GB
compression_savings 149.26GB
compression_savings_ratio 51.32
total_savings 158.42GB
total_savings_ratio 52.80
accuracy 4.97
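If a volume was not analyzed yet, the analysis can be triggered first and listed when it completes. The following is a hedged sketch; the analyzevdisk command name is recalled from the built-in Comprestimator support and the volume ID is illustrative:
IBM_2078:ITSO Gen2:superuser>analyzevdisk 0
IBM_2078:ITSO Gen2:superuser>lsvdiskanalysis 0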
You can use FlashCopy to help you solve critical and challenging business needs that require
duplication of data of your source volume. Volumes can remain online and active while you
create consistent copies of the data sets. Because the copy is performed at the block level, it
operates below the host operating system and its cache. Therefore, the copy is not apparent
to the host.
Important: Because FlashCopy operates at the block level below the host operating
system and cache, those levels do need to be flushed for consistent FlashCopies.
While the FlashCopy operation is performed, the source volume is briefly halted to initialize
the FlashCopy bitmap, and then input/output (I/O) can resume. Although several FlashCopy
options require the data to be copied from the source to the target in the background, which
can take time to complete, the resulting data on the target volume is presented so that the
copy appears to complete immediately.
This process is performed by using a bitmap (or bit array), which tracks changes to the data
after the FlashCopy is started, and an indirection layer, which enables data to be read from
the source volume transparently.
The business applications for FlashCopy are wide-ranging. Common use cases for
FlashCopy include, but are not limited to, the following examples:
Rapidly creating consistent backups of dynamically changing data
Rapidly creating consistent copies of production data to facilitate data movement or
migration between hosts
Rapidly creating copies of production data sets for application development and testing
Rapidly creating copies of production data sets for auditing purposes and data mining
Rapidly creating copies of production data sets for quality assurance
Regardless of your business needs, FlashCopy is flexible and offers a broad feature set,
which makes it applicable to many scenarios.
After the FlashCopy is performed, the resulting image of the data can be backed up to tape,
as though it were the source system. After the copy to tape is complete, the image data is
redundant and the target volumes can be discarded. For time-limited applications, such as
these examples, “no copy” or incremental FlashCopy is used most often. The use of these
methods puts less load on your infrastructure.
When FlashCopy is used for backup purposes, the target data usually is managed as
read-only at the operating system level. This approach provides extra security by ensuring
that your target data was not modified and remains true to the source.
This approach can be used for various applications, such as recovering your production
database application after an errant batch process that caused extensive damage.
In addition to the restore option, which copies the original blocks from the target volume to
modified blocks on the source volume, the target can be used to perform a restore of
individual files. To do that you need to make the target available on a host. We suggest that
you do not make the target available to the source host, because seeing duplicates of disks
causes problems for most host operating systems. Copy the files to the source using normal
host data copy methods for your environment.
This method differs from the other migration methods, which are described later in this
chapter. Common uses for this capability are host and back-end storage hardware refreshes.
You create a FlashCopy of your source and use that for your testing. This copy is a duplicate
of your production data down to the block level so that even physical disk identifiers are
copied. Therefore, it is impossible for your applications to tell the difference.
Both of these layers have various levels and methods of caching data to provide better speed. Because FlashCopy sits below these layers, it is unaware of the caches at the application or operating system layers.
To ensure the integrity of the copy that is made, it is necessary to flush the host operating
system and application cache for any outstanding reads or writes before the FlashCopy
operation is performed. Failing to flush the host operating system and application cache
produces what is referred to as a crash consistent copy.
The resulting copy requires the same type of recovery procedure, such as log replay and file
system checks, that is required following a host crash. FlashCopies that are crash consistent
often can be used following file system and application recovery procedures.
Various operating systems and applications provide facilities to stop I/O operations and
ensure that all data is flushed from host cache. If these facilities are available, they can be
used to prepare for a FlashCopy operation. When this type of facility is unavailable, the host
cache must be flushed manually by quiescing the application and unmounting the file system
or drives.
Preferred practice: From a practical standpoint, when you have an application that is
backed by a database and you want to make a FlashCopy of that application’s data, it is
sufficient in most cases to use the write-suspend method that is available in most modern
databases, because the database maintains strict control over I/O.
This method contrasts with flushing data from both the application and the backing database, which is always the suggested method because it is safer. However, the write-suspend method can be used when such facilities do not exist or when your environment is time sensitive.
The source volume and target volume are available (almost) immediately following the
FlashCopy operation.
The source and target volumes must be the same “virtual” size.
The source and target volumes must be on the same IBM Storwize for Lenovo system.
The source and target volumes do not need to be in the same I/O Group or storage pool.
The storage pool extent sizes can differ between the source and target.
The source volumes can have up to 256 target volumes (Multiple Target FlashCopy).
The target volumes can be the source volumes for other FlashCopy relationships
(cascaded FlashCopy).
Consistency groups are supported to enable FlashCopy across multiple volumes at the
same time.
Up to 255 FlashCopy consistency groups are supported per system.
Up to 512 FlashCopy mappings can be placed in one consistency group.
The target volume can be updated independently of the source volume.
Bitmaps that are governing I/O redirection (I/O indirection layer) are maintained in both
node canisters of the IBM Storwize for Lenovo I/O Group to prevent a single point of
failure.
FlashCopy mapping and Consistency Groups can be automatically withdrawn after the
completion of the background copy.
Thin-provisioned FlashCopy (or Snapshot in the graphical user interface (GUI)) uses disk space only when updates are made to the source or target data, and not for the entire capacity of a volume copy.
FlashCopy licensing is based on the virtual capacity of the source volumes.
Incremental FlashCopy copies all of the data when you first start FlashCopy and then only
the changes when you stop and start FlashCopy mapping again. Incremental FlashCopy
can substantially reduce the time that is required to re-create an independent image.
Reverse FlashCopy enables FlashCopy targets to become restore points for the source
without breaking the FlashCopy relationship, and without having to wait for the original
copy operation to complete.
The maximum number of supported FlashCopy mappings is 4096 per clustered system.
The size of the source and target volumes cannot be altered (increased or decreased)
while a FlashCopy mapping is defined.
A key advantage of the Multiple Target Reverse FlashCopy function is that the reverse
FlashCopy does not destroy the original target, which enables processes that are using the
target, such as a tape backup, to continue uninterrupted.
The Lenovo Storage V series system also provides the ability to create an optional copy of the source volume before the reverse copy operation starts. This ability to restore to the original source data can be useful for diagnostic purposes.
The production disk is instantly available with the backup data. Figure 10-1 shows an
example of Reverse FlashCopy.
Regardless of whether the initial FlashCopy map (volume X → volume Y) is incremental, the
Reverse FlashCopy operation copies the modified data only.
Consistency Groups are reversed by creating a set of new reverse FlashCopy maps and
adding them to a new reverse Consistency Group. Consistency Groups cannot contain more
than one FlashCopy map with the same target volume.
Before you start a FlashCopy (regardless of the type and options specified), you must issue a
prestartfcmap or prestartfcconsistgrp command, which puts the cache into write-through
mode and provides a flushing of the I/O currently bound for your volume. After FlashCopy is
started, an effective copy of a source volume to a target volume is created.
The content of the source volume is presented immediately on the target volume and the
original content of the target volume is lost. This FlashCopy operation is also referred to as a
time-zero copy (T0).
Tip: Rather than using prestartfcmap or prestartfcconsistgrp, you can also use the
-prep parameter in the startfcmap or startfcconsistgrp command to prepare and start
FlashCopy in one step.
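The following is a minimal, hedged sketch of creating and starting a stand-alone mapping from the CLI; the volume names, mapping name, and copy rate are illustrative:
IBM_Storwize:ITSO_V5000:superuser>mkfcmap -source VOL_SRC -target VOL_TGT -name fcmap0 -copyrate 50
IBM_Storwize:ITSO_V5000:superuser>startfcmap -prep fcmap0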
The source and target volumes are available for use immediately after the FlashCopy
operation. The FlashCopy operation creates a bitmap that is referenced and maintained to
direct I/O requests within the source and target relationship. This bitmap is updated to reflect
the active block locations as data is copied in the background from the source to the target,
and updates are made to the source.
For more information about background copy, see 10.3.5, “Grains and the FlashCopy bitmap”
on page 462.
Figure 10-2 shows the redirection of the host I/O toward the source volume and the target
volume.
Important: As with any point-in-time copy technology, you are bound by operating system
and application requirements for interdependent data and the restriction to an entire
volume.
The source and target volumes must belong to the same Lenovo Storage V3700 V2, V3700
V2 XP, and V5030 systems, but they do not have to be in the same I/O Group or storage pool.
FlashCopy associates a source volume to a target volume through FlashCopy mapping.
To become members of a FlashCopy mapping, source and target volumes must be the same
size. Volumes that are members of a FlashCopy mapping cannot have their size increased or
decreased while they are members of the FlashCopy mapping.
A FlashCopy mapping is the act of creating a relationship between a source volume and a
target volume. FlashCopy mappings can be stand-alone or a member of a Consistency
Group. You can perform the actions of preparing, starting, or stopping FlashCopy on either a
stand-alone mapping or a Consistency Group.
Figure 10-4 Multiple Target FlashCopy implementation
Figure 10-4 also shows four targets and mappings that are taken from a single source, along
with their interdependencies. In this example, Target 1 is the oldest (as measured from the
time that it was started) through to Target 4, which is the newest. The ordering is important
because of how the data is copied when multiple target volumes are defined and because of
the dependency chain that results.
A write to the source volume does not cause its data to be copied to all of the targets. Instead,
it is copied to the newest target volume only (Target 4 in Figure 10-4). The older targets refer
to new targets first before referring to the source.
From the point of view of an intermediate target disk (not the oldest or the newest), it treats
the set of newer target volumes and the true source volume as a type of composite source. It
treats all older volumes as a kind of target (and behaves like a source to them).
If the mapping for an intermediate target volume shows 100% progress, its target volume
contains a complete set of data. In this case, mappings treat the set of newer target volumes
(up to and including the 100% progress target) as a form of composite source. A dependency
relationship exists between a particular target and all newer targets (up to and including a
target that shows 100% progress) that share the source until all data is copied to this target
and all older targets.
For more information about Multiple Target FlashCopy, see 10.3.6, “Interaction and
dependency between multiple target FlashCopy mappings” on page 464.
When Consistency Groups are used, the FlashCopy commands are issued to the FlashCopy
Consistency Group, which performs the operation on all FlashCopy mappings that are
contained within the Consistency Group at the same time.
Dependent writes
To show why it is crucial to use Consistency Groups when a data set spans multiple volumes,
consider the following typical sequence of writes for a database update transaction:
1. A write is run to update the database log, which indicates that a database update is about
to be performed.
2. A second write is run to perform the actual update to the database.
3. A third write is run to update the database log, which indicates that the database update
completed successfully.
The database ensures the correct ordering of these writes by waiting for each step to
complete before the next step is started. However, if the database log (updates 1 and 3) and
the database (update 2) are on separate volumes, it is possible for the FlashCopy of the
database volume to occur before the FlashCopy of the database log. This sequence can
result in the target volumes seeing writes 1 and 3 but not 2 because the FlashCopy of the
database volume occurred before the write was completed.
In this case, if the database was restarted by using the backup that was made from the
FlashCopy target volumes, the database log indicates that the transaction completed
successfully. In fact, it did not complete successfully because the FlashCopy of the volume
with the database file was started (the bitmap was created) before the write completed to the
volume. Therefore, the transaction is lost and the integrity of the database is in question.
To overcome the issue of dependent writes across volumes and to create a consistent image
of the client data, a FlashCopy operation must be performed on multiple volumes as an
atomic operation. To accomplish this method, the concept of Consistency Groups is
supported.
A FlashCopy Consistency Group can contain up to 512 FlashCopy mappings. The maximum
number of FlashCopy mappings that is supported by the Lenovo Storage V3700 V2, V3700
V2 XP, and V5030 systems V8.1.0 is 4096. FlashCopy commands can then be issued to the
FlashCopy Consistency Group and, therefore, simultaneously for all of the FlashCopy
mappings that are defined in the Consistency Group.
For example, when a FlashCopy start command is issued to the Consistency Group, all of
the FlashCopy mappings in the Consistency Group are started at the same time. This
simultaneous start results in a point-in-time copy that is consistent across all of the
FlashCopy mappings that are contained in the Consistency Group.
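As a hedged sketch of this behavior from the CLI, two mappings can be placed in one Consistency Group and started together; the group, volume, and mapping names are illustrative:
IBM_Storwize:ITSO_V5000:superuser>mkfcconsistgrp -name FCCG_DB
IBM_Storwize:ITSO_V5000:superuser>mkfcmap -source DB_Data -target DB_Data_bkp -consistgrp FCCG_DB
IBM_Storwize:ITSO_V5000:superuser>mkfcmap -source DB_Log -target DB_Log_bkp -consistgrp FCCG_DB
IBM_Storwize:ITSO_V5000:superuser>startfcconsistgrp -prep FCCG_DB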
If a particular volume is the source volume for multiple FlashCopy mappings, you might want
to create separate Consistency Groups to separate each mapping of the same source
volume. Regardless of whether the source volume with multiple target volumes is in the same
consistency group or in separate consistency groups, the resulting FlashCopy produces
multiple identical copies of the source data.
Maximum configurations
Table 10-1 lists the FlashCopy properties and maximum configurations.
FlashCopy targets per source: 256. This maximum is the number of FlashCopy mappings that can exist with the same source volume.
FlashCopy mappings per system: 4096. The number of mappings is no longer limited by the number of volumes in the system, so the FlashCopy component limit applies.
FlashCopy Consistency Groups per system: 255. This maximum is an arbitrary limit that is policed by the software.
FlashCopy volume capacity per I/O Group: 4 pebibytes (PiB). This maximum is a limit on the quantity of FlashCopy mappings that are using bitmap space from this I/O Group. This maximum configuration uses all 4 gibibytes (GiB) of bitmap space for the I/O Group and allows no Metro or Global Mirror bitmap space. The default is 40 tebibytes (TiB).
FlashCopy mappings per Consistency Group: 512. This limit is because of the time that is taken to prepare a Consistency Group with many mappings.
To show how the FlashCopy indirection layer works, we examine what happens when a
FlashCopy mapping is prepared and then started.
When a FlashCopy mapping is prepared and started, the following sequence is applied:
1. Flush the write cache to the source volume or volumes that are part of a Consistency
Group.
2. Put cache into write-through mode on the source volumes.
3. Discard cache for the target volumes.
4. Establish a sync point on all of the source volumes in the Consistency Group (which
creates the FlashCopy bitmap).
5. Ensure that the indirection layer governs all of the I/O to the source volumes and target
volumes.
6. Enable cache on the source volumes and target volumes.
FlashCopy provides the semantics of a point-in-time copy by using the indirection layer, which
intercepts I/O that is directed at the source or target volumes. The act of starting a FlashCopy
mapping causes this indirection layer to become active in the I/O path, which occurs
automatically across all FlashCopy mappings in the Consistency Group.
The indirection layer then determines how each I/O is to be routed, based on the following
factors:
The volume and the logical block address (LBA) to which the I/O is addressed
Its direction (read or write)
The state of an internal data structure, the FlashCopy bitmap
The indirection layer allows the I/O to go through to the underlying volume, redirects the I/O
from the target volume to the source volume, or queues the I/O while it arranges for data to
be copied from the source volume to the target volume. To explain in more detail which action
is applied for each I/O, we first look at the FlashCopy bitmap.
The FlashCopy bitmap dictates read and write behavior for the source and target volumes.
Source reads
Reads are performed from the source volume, which is the same as for non-FlashCopy
volumes.
Source writes
Writes to the source cause one of the following actions:
If the grain was not copied to the target yet, the grain is copied before the actual write is
performed to the source. The bitmap is updated to indicate that this grain is already copied
to the target.
If the grain was already copied, the write is performed to the source as usual.
Target reads
Reads are performed from the target if the grain was copied. Otherwise, the read is
performed from the source and no copy is performed.
Target writes
Writes to the target cause one of the following actions:
If the grain was not copied from the source to the target, the grain is copied from the
source to the target before the actual write is performed to the source. The bitmap is
updated to indicate that this grain is already copied to the target.
If the entire grain is being updated on the target, the target is marked as split with the
source (if there is no I/O error during the write) and the write goes directly to the target.
If the grain in question was already copied from the source to the target, the write goes
directly to the target.
Figure 10-6 on page 464 shows how the background copy runs while I/Os are handled
according to the indirection layer algorithm.
In Figure 10-7 on page 464, Target 0 is not dependent on a source because it completed
copying. Target 0 has two dependent mappings (Target 1 and Target 2).
Target 1 depends on Target 0. It remains dependent until all of Target 1 is copied. Target 2
depends on it because Target 2 is 20% copy complete. After all of Target 1 is copied, it can
then move to the idle_copied state.
Target 2 is dependent upon Target 0 and Target 1 and remains dependent until all of Target 2
is copied. No target depends on Target 2; therefore, when all of the data is copied to Target 2,
it can move to the idle_copied state.
If the grain of the next oldest mapping is not yet copied, it must be copied before the write can
proceed, to preserve the contents of the next oldest mapping. The data that is written to the
next oldest mapping comes from a target or source.
If the grain in the target that is being written is not yet copied, the grain is copied from the
oldest copied grain in the mappings that are newer than the target, or from the source if none
is copied. After this copy is done, the write can be applied to the target.
Note: The stopping copy process can be ongoing for several mappings that share the
source at the same time. At the completion of this process, the mapping automatically
makes an asynchronous state transition to the stopped state, or the idle_copied state if
the mapping was in the copying state with progress = 100%.
For example, if the mapping that is associated with Target 0 was issued a stopfcmap or
stopfcconsistgrp command, Target 0 enters the stopping state while a process copies the
data of Target 0 to Target 1. After all of the data is copied, Target 0 enters the stopped state,
and Target 1 is no longer dependent upon Target 0; however, Target 1 remains dependent on
Target 2.
Target volume, grain not yet copied: For a read, if any newer targets exist for this source in which this grain was copied, read from the oldest of these targets; otherwise, read from the source. For a write, hold the write and check the dependency target volumes to see whether the grain was copied. If the grain is not copied to the next oldest target for this source, copy the grain to the next oldest target, and then write to the target.
Target volume, grain already copied: Read from the target volume. Write to the target volume.
This copy-on-write process introduces significant latency into write operations. To isolate the
active application from this additional latency, the FlashCopy indirection layer is placed
logically between upper and lower cache. Therefore, the additional latency that is introduced
by the copy-on-write process is encountered only by the internal cache operations and not by
the application.
Figure 10-9 shows the logical placement of the FlashCopy indirection layer.
Additionally, in multiple target FlashCopy, the target volumes of the same image share cache data. This design is the opposite of previous code versions, where each volume had its own copy of cached data.
This method provides an exact number of bytes because image mode volumes might not line up one-to-one on other measurement unit boundaries. Example 10-1 on page 468 lists the capacity values, in bytes, for an image mode volume copy.
copy_id 0
status online
sync yes
auto_delete no
primary yes
mdisk_grp_id 3
mdisk_grp_name temp_migration_pool
type image
mdisk_id 5
mdisk_name mdisk3
fast_write_state empty
used_capacity 21474836480
real_capacity 21474836480
free_capacity 0
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status measured
tier ssd
tier_capacity 0
tier enterprise
tier_capacity 21474836480
tier nearline
tier_capacity 0
compressed_copy no
uncompressed_used_capacity 21474836480
parent_mdisk_grp_id 3
parent_mdisk_grp_name temp_migration_pool
encrypt no
Tip: Alternatively, you can use the expandvolumesize and shrinkvolumesize volume
commands to modify the size of the volume.
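As a hedged illustration, the long-standing expandvdisksize form of the command is shown below with an illustrative volume name; the expandvolumesize and shrinkvolumesize names that the tip mentions are assumed to accept comparable parameters:
IBM_Storwize:ITSO_V5000:superuser>expandvdisksize -size 10 -unit gb Migration_Vol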
You can use an image mode volume as a FlashCopy source volume or target volume.
Overview of a FlashCopy sequence of events: The following tasks show the FlashCopy
sequence:
1. Associate the source data set with a target location (one or more source and target
volumes).
2. Create a FlashCopy mapping for each source volume to the corresponding target
volume. The target volume must be equal in size to the source volume.
3. Discontinue access to the target (application dependent).
4. Prepare (pre-trigger) the FlashCopy:
a. Flush the cache for the source.
b. Discard the cache for the target.
5. Start (trigger) the FlashCopy:
a. Pause I/O (briefly) on the source.
b. Resume I/O on the source.
c. Start I/O on the target.
Flush done: The FlashCopy mapping automatically moves from the preparing state to the prepared state after all cached data for the source is flushed and all cached data for the target is no longer valid.
Start: When all of the FlashCopy mappings in a Consistency Group are in the prepared state, the FlashCopy mappings can be started. To preserve the cross-volume Consistency Group, the start of all of the FlashCopy mappings in the Consistency Group must be synchronized correctly concerning I/Os that are directed at the volumes by using the startfcmap or startfcconsistgrp command.
The following actions occur during the running of the startfcmap command or the startfcconsistgrp command:
– New reads and writes to all source volumes in the Consistency Group are paused in the cache layer until all ongoing reads and writes beneath the cache layer are completed.
– After all FlashCopy mappings in the Consistency Group are paused, the internal cluster state is set to enable FlashCopy operations.
– After the cluster state is set for all FlashCopy mappings in the Consistency Group, read and write operations continue on the source volumes.
– The target volumes are brought online.
As part of the startfcmap or startfcconsistgrp command, read and write caching is enabled for the source and target volumes.
Flush failed: If the flush of data from the cache cannot be completed, the FlashCopy mapping enters the stopped state.
Copy complete: After all of the source data is copied to the target and there are no dependent mappings, the state is set to copied. If the option to automatically delete the mapping after the background copy completes is specified, the FlashCopy mapping is deleted automatically. If this option is not specified, the FlashCopy mapping is not deleted automatically and can be reactivated by preparing and starting again.
Idle_or_copied
The source and target volumes act as independent volumes even if a mapping exists
between the two. Read and write caching is enabled for the source and the target volumes.
If the mapping is incremental and the background copy is complete, the mapping records the
differences between the source and target volumes only. If the connection to both nodes in
the I/O group that the mapping is assigned to is lost, the source and target volumes are
offline.
Copying
The copy is in progress. Read and write caching is enabled on the source and the target
volumes.
Prepared
The mapping is ready to start. The target volume is online, but is not accessible. The target
volume cannot perform read or write caching. Read and write caching is failed by the Small
Computer System Interface (SCSI) front end as a hardware error. If the mapping is
incremental and a previous mapping is completed, the mapping records the differences
between the source and target volumes only. If the connection to both nodes in the I/O group
that the mapping is assigned to is lost, the source and target volumes go offline.
Preparing
The target volume is online, but not accessible. The target volume cannot perform read or
write caching. Read and write caching is failed by the SCSI front end as a hardware error.
Any changed write data for the source volume is flushed from the cache. Any read or write
data for the target volume is discarded from the cache.
If the mapping is incremental and a previous mapping is completed, the mapping records the
differences between the source and target volumes only. If the connection to both nodes in
the I/O group that the mapping is assigned to is lost, the source and target volumes go offline.
Performing the cache flush that is required as part of the startfcmap or startfcconsistgrp
command causes I/Os to be delayed while they are waiting for the cache flush to complete.
To overcome this problem, FlashCopy supports the prestartfcmap or prestartfcconsistgrp
commands, which prepare for a FlashCopy start while still allowing I/Os to continue to the
source volume.
In the Preparing state, the FlashCopy mapping is prepared by completing the following steps:
1. Flushing any modified write data that is associated with the source volume from the
cache. Read data for the source is left in the cache.
2. Placing the cache for the source volume into write-through mode so that subsequent
writes wait until data is written to disk before the write command that is received from the
host is complete.
3. Discarding any read or write data that is associated with the target volume from the cache.
Stopped
The mapping is stopped because you issued a stop command or an I/O error occurred. The
target volume is offline and its data is lost. To access the target volume, you must restart or
delete the mapping. The source volume is accessible and the read and write cache is
enabled. If the mapping is incremental, the mapping is recording write operations to the
source volume. If the connection to both nodes in the I/O group that the mapping is assigned
to is lost, the source and target volumes go offline.
Stopping
The mapping is copying data to another mapping.
If the background copy process is complete, the target volume is online while the stopping
copy process completes.
If the background copy process is not complete, data is discarded from the target volume
cache. The target volume is offline while the stopping copy process runs.
Suspended
The mapping started, but it did not complete. Access to the metadata is lost, which causes
the source and target volume to go offline. When access to the metadata is restored, the
mapping returns to the copying or stopping state and the source and target volumes return
online. The background copy process resumes. Any data that was not flushed and was
written to the source or target volume before the suspension is in cache until the mapping
leaves the suspended state.
Performance: The best performance is obtained when the grain size of the
thin-provisioned volume is the same as the grain size of the FlashCopy mapping.
The benefit of using a FlashCopy mapping with background copy enabled is that the target
volume becomes a real clone (independent from the source volume) of the FlashCopy
mapping source volume after the copy is complete. When the background copy function is
not performed, the target volume remains a valid copy of the source data only while the
FlashCopy mapping remains in place.
The background copy rate is a property of a FlashCopy mapping that is defined as a value
0 - 100. The background copy rate can be defined and changed dynamically for individual
FlashCopy mappings. A value of 0 disables the background copy.
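For example, the copy rate of an existing mapping can be changed dynamically with the chfcmap command; the mapping name is illustrative:
IBM_Storwize:ITSO_V5000:superuser>chfcmap -copyrate 80 fcmap0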
Table 10-5 shows the relationship of the background copy rate value to the attempted number
of grains to be copied per second.
The grains per second numbers represent the maximum number of grains that the IBM
Storwize V5000 for Lenovo copies per second, assuming that the bandwidth to the managed
disks (MDisks) can accommodate this rate.
If the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 systems cannot achieve these
copy rates because of insufficient bandwidth from the nodes to the MDisks, the background copy
I/O contends for resources on an equal basis with the I/O that is arriving from the hosts.
Background copy I/O and I/O that is arriving from the hosts tend to see an increase in latency
and a consequential reduction in throughput.
Background copy and foreground I/O continue to make progress, and do not stop, hang, or
cause the node to fail. The background copy is performed by both nodes of the I/O Group in
which the source volume is found.
If the lock is held in shared mode and another process wants to use the lock in shared mode,
this request is granted unless a process is already waiting to use the lock in exclusive mode.
If the lock is held in shared mode and it is requested to be exclusive, the requesting process
must wait until all holders of the shared lock free it.
Similarly, if the lock is held in exclusive mode, a process that is wanting to use the lock in
shared or exclusive mode must wait for it to be freed.
Node failure
Normally, two copies of the FlashCopy bitmap are maintained. One copy of the FlashCopy
bitmap is on each of the two nodes that make up the I/O Group of the source volume. When a
node fails, one copy of the bitmap for all FlashCopy mappings whose source volume is a
member of the failing node’s I/O Group becomes inaccessible.
FlashCopy continues with a single copy of the FlashCopy bitmap that is stored as non-volatile
in the remaining node in the source I/O Group. The system metadata is updated to indicate
that the missing node no longer holds a current bitmap. When the failing node recovers or a
replacement node is added to the I/O Group, the bitmap redundancy is restored.
Because the storage area network (SAN) that links Lenovo Storage V3700 V2, V3700 V2 XP,
and V5030 systems node canisters to each other and to the MDisks is made up of many
independent links, it is possible for a subset of the nodes to be temporarily isolated from
several of the MDisks. When this situation happens, the managed disks are said to be path
offline on certain nodes.
Other nodes: Other nodes might see the managed disks as Online because their
connection to the managed disks is still functioning.
for the newest mapping that is 100% copied but remains in the copying state. If no mappings
are 100% copied, all of the target volumes are taken offline. Path offline is a state that
exists on a per-node basis. Other nodes might not be affected. If the source volume comes
online, the target and source volumes are brought back online.
Other configuration events complete synchronously, and no informational events are logged
for those events. The following informational events are logged:
PREPARE_COMPLETED
This state transition is logged when the FlashCopy mapping or Consistency Group enters
the prepared state as a result of a user request to prepare. The user can now start (or
stop) the mapping or Consistency Group.
COPY_COMPLETED
This state transition is logged when the FlashCopy mapping or Consistency Group enters
the idle_or_copied state when it was in the copying or stopping state. This state
transition indicates that the target disk now contains a complete copy and no longer
depends on the source.
STOP_COMPLETED
This state transition is logged when the FlashCopy mapping or Consistency Group enters
the stopped state as a result of a user request to stop. It is logged after the automatic copy
process completes. This state transition includes mappings where no copying needed to
be performed. This state transition differs from the event that is logged when a mapping or
group enters the stopped state as a result of an I/O error.
For example, we can perform a Metro Mirror copy to duplicate data from Site_A to Site_B and
then perform a daily FlashCopy to back up the data to another location.
Table 10-6 on page 478 lists the supported combinations of FlashCopy and remote copy. In
the table, remote copy refers to Metro Mirror and Global Mirror.
Although these presets meet most FlashCopy requirements, they do not support all possible
FlashCopy options. If more specialized options are required that are not supported by the
presets, the options must be performed by using CLI commands.
This section describes the preset options and their use cases.
Snapshot
This preset creates a copy-on-write point-in-time copy. The snapshot is not intended to be an
independent copy. Instead, the copy is used to maintain a view of the production data at the
time that the snapshot is created. Therefore, the snapshot holds only the data from regions of
the production volume that changed since the snapshot was created. Because the snapshot
preset uses thin provisioning, only the capacity that is required for the changes is used.
Use case
The user wants to produce a copy of a volume without affecting the availability of the volume.
The user does not anticipate many changes to be made to the source or target volume; a
significant proportion of the volumes remains unchanged.
By ensuring that only changes require a copy of data to be made, the total amount of disk
space that is required for the copy is reduced. Therefore, many Snapshot copies can be used
in the environment.
Snapshots are useful for providing protection against corruption or similar issues with the
validity of the data, but they do not provide protection from physical controller failures.
Snapshots can also provide a vehicle for performing repeatable testing (including “what-if”
modeling that is based on production data) without requiring a full copy of the data to be
provisioned.
Clone
The clone preset creates a replica of the volume, which can be changed without affecting the
original volume. After the copy completes, the mapping that was created by the preset is
automatically deleted.
Use case
Users want a copy of the volume that they can modify without affecting the original volume.
After the clone is established, there is no expectation that it is refreshed or that there is any
further need to reference the original production data again. If the source is thin-provisioned,
the target is thin-provisioned for the auto-create target.
Backup
The backup preset creates a point-in-time replica of the production data. After the copy
completes, the backup view can be refreshed from the production data, with minimal copying
of data from the production volume to the backup volume.
Use case
The user wants to create a copy of the volume that can be used as a backup if the source
becomes unavailable, as in the case of loss of the underlying physical controller. The user
plans to periodically update the secondary copy, and does not want to suffer from the
resource demands of creating a new copy each time (and incremental FlashCopy times are
faster than full copy, which helps to reduce the window where the new backup is not yet fully
effective). If the source is thin-provisioned, the target is also thin-provisioned in this option for
the auto-create target.
Another use case, which the preset name does not suggest, is to create and maintain (periodically
refresh) an independent image that can be subjected to intensive I/O (for example, data
mining) without affecting the source volume’s performance.
This section describes the tasks that you can perform at a FlashCopy level using the GUI.
The following methods can be used to visualize and manage your FlashCopy:
Use the main pane. Move the mouse pointer over Copy Services in the dynamic menu
and click FlashCopy, as shown in Figure 10-10.
In its basic mode, the FlashCopy function copies the contents of a source volume to a
target volume. Any data that existed on the target volume is lost, and that data is replaced
by the copied data.
From the Copy Services option on the main panel, use the Consistency Groups option, as
shown in Figure 10-11. A Consistency Group is a container for mappings. You can add
many mappings to a Consistency Group.
From the Copy Services option on the main panel, use the FlashCopy Mappings pane, as
shown in Figure 10-12. A FlashCopy mapping defines the relationship between a source
volume and a target volume.
2. Select the volume for which you want to create the FlashCopy relationship, as shown in
Figure 10-14 on page 482.
Depending on whether you created the target volumes for your FlashCopy mappings or you
want the system to create the target volumes for you, the following options are available:
If you created the target volumes, see “Using existing target volumes” on page 482.
If you want the system to create the target volumes for you, see “Creating target volumes”
on page 487.
2. The Create FlashCopy Mapping window opens (Figure 10-16). In this window, you must
create the relationship between the source volume (the disk that is copied) and the target
volume (the disk that receives the copy). A mapping can be created between any two
volumes that are managed by the same clustered system. Select a source volume and a
target volume for your FlashCopy mapping, and then click Add. If you must create other
copies, repeat this step.
Important: The source volume and the target volume must be of equal size. Therefore,
only targets of the same size are shown in the list for a source volume.
Volumes: The volumes do not have to be in the same I/O Group or storage pool.
3. Click Next after you create all of the relationships that you need, as shown in Figure 10-17
on page 484.
4. In the next window, select one FlashCopy preset. The GUI provides the following presets
to simplify common FlashCopy operations.
– Snapshot: Creates a copy-on-write point-in-time copy.
– Clone: Creates a replica of the source volume on a target volume. The copy can be
changed without affecting the original volume.
– Backup: Creates a FlashCopy mapping that can be used to recover data or objects if
the system experiences data loss. These backups can be copied multiple times from
source and target volumes.
For each preset, you can customize various advanced options. You can access these
settings by clicking the preset. The preset options are shown in Figure 10-18 on
page 485.
Figure 10-18 Create FlashCopy Mapping Presets
7. Check the result of this FlashCopy mapping. From the main panel, click Copy Services →
FlashCopy Mappings as shown in Figure 10-21.
8. For each FlashCopy mapping relationship that was created, a mapping name is
automatically generated that starts with fcmapX, where X is the next available number. If
needed, you can rename these mappings, as shown in 10.4.11, “Renaming FlashCopy
mapping” on page 509.
9. The FlashCopy mapping is now ready for use as shown in Figure 10-22.
Tip: You can start FlashCopy from the GUI. However, the use of the GUI might be
impractical if you plan to handle many FlashCopy mappings or Consistency Groups
periodically or at varying times. In these cases, creating a script by using the CLI might be
more convenient.
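As a minimal scripting sketch (the volume and mapping names are hypothetical), each mapping can be created and started with two CLI commands, which makes the operation easy to repeat for many volumes:
mkfcmap -source db_vol01 -target db_vol01_t -name fcmap_db01 -copyrate 50
startfcmap -prep fcmap_db01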
Target volume naming: If the target volume does not exist, the target volume is
created. The target volume name is based on its source volume and a generated
number at the end, for example, source_volume_name_XX, where XX is a number that
was generated dynamically.
2. In the Create FlashCopy Mapping window (Figure 10-24 on page 489), you must select
one FlashCopy preset. The GUI provides the following presets to simplify common
FlashCopy operations:
– Snapshot
– Clone
– Backup
Figure 10-24 Create FlashCopy Mapping window
3. For each preset, you can customize various advanced options as shown in Figure 10-25.
If you prefer not to customize these advanced settings, go directly to step 4 on page 490.
Figure 10-26 Selecting the option to add the mappings to a Consistency Group
6. The next window shows capacity management options for the new target volume, as
shown in Figure 10-27 on page 491.
Figure 10-27 FlashCopy target volume capacity management options
7. Click Next and select the storage pool in which the new FlashCopy target volume is to be
created, as shown in Figure 10-28.
8. Click Finish. A window that shows the status of the operation is displayed, as shown in
Figure 10-29 on page 492.
9. Check the result of this FlashCopy mapping, as shown in Figure 10-30. For each
FlashCopy mapping relationship that is created, a mapping name is automatically
generated that starts with fcmapX where X is the next available number. If necessary, you
can rename these mappings, as shown in Figure 10-30. For more information, see
10.4.11, “Renaming FlashCopy mapping” on page 509.
Note: If the FlashCopy target volume is a generic volume and is not yet ready, the
volume might still be formatting. Check the running tasks in the GUI.
Tip: You can start FlashCopy from the GUI. However, the use of the GUI might be
impractical if you plan to handle many FlashCopy mappings or Consistency Groups
periodically or at varying times. In these cases, creating a script by using the CLI might be
more convenient.
3. A volume is created as a target volume for this snapshot in the same pool as the source
volume. The FlashCopy mapping is created and started.
You can check the FlashCopy progress in the Progress column or in the Running Tasks
status area, as shown in Figure 10-32 on page 495.
Figure 10-32 Snapshot created and started
A volume is created as a target volume for this clone in the same pool as the source volume.
The FlashCopy mapping is created and started. You can check the FlashCopy progress in the
FlashCopy Mappings pane or by clicking Tasks → Running Tasks from the main pane.
After the FlashCopy clone is created, the mapping is removed and the new cloned
volume becomes available, as shown in Figure 10-34.
To create and start a backup, complete the following steps:
1. From the main panel, click Copy Services → FlashCopy.
2. Select the volume that you want to back up, and click Actions → Create Backup, as
shown in Figure 10-35.
3. A volume is created as a target volume for this backup in the same pool as the source
volume. The FlashCopy mapping is created and started.
You can check the FlashCopy progress in the Progress column, as shown in Figure 10-36
on page 498, or in the Running Tasks from the main panel.
2. Click Create Consistency Group and enter the FlashCopy Consistency Group name that
you want to use and click Create (Figure 10-38 on page 499).
Figure 10-38 Create Consistency Group window
Consistency Group name: You can use the letters A - Z and a - z, the numbers 0 - 9,
and the underscore (_) character. The volume name can be 1 - 63 characters.
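The same task can also be scripted from the CLI. The following minimal sketch (with hypothetical volume and group names) creates a Consistency Group, adds a mapping to it, and then starts all mappings in the group at the same point in time:
mkfcconsistgrp -name FCCG_1
mkfcmap -source app_vol01 -target app_vol01_t -consistgrp FCCG_1 -copyrate 50
startfcconsistgrp -prep FCCG_1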
2. Select the Consistency Group in which you want to create the FlashCopy mapping. If you
prefer not to create a FlashCopy mapping in a Consistency Group, select Not in a Group
as shown in Figure 10-41.
4. If you did not select a Consistency Group, click Create FlashCopy Mapping, as shown in
Figure 10-43.
5. The Create FlashCopy Mapping window opens, as shown in Figure 10-44 on page 501. In
this window, you must create the relationships between the source volumes (the volumes
that are copied) and the target volumes (the volumes that receive the copy). A mapping
can be created between any two volumes in a clustered system.
Important: The source volume and the target volume must be of equal size.
Figure 10-44 Create FlashCopy Mapping window
Tip: The volumes do not have to be in the same I/O Group or storage pool.
6. Select a volume in the Source Volume column by using the drop-down list. Then, select a
volume in the Target Volume column by using the drop-down list. Click Add, as shown in
Figure 10-45. Repeat this step to create other relationships.
Important: The source and target volumes must be of equal size. Therefore, only the
targets with the appropriate size are shown for a source volume.
7. Click Next after all of the relationships that you want to create are shown (Figure 10-46).
Figure 10-46 Create FlashCopy Mapping with the relationships that were created
8. In the next window, you must select one FlashCopy preset along with its customization
options. The GUI provides the following presets to simplify common FlashCopy
operations, as shown in Figure 10-47 on page 503:
– Snapshot: Creates a copy-on-write point-in-time copy.
– Clone: Creates an exact replica of the source volume on a target volume. The copy
can be changed without affecting the original volume.
– Backup: Creates a FlashCopy mapping that can be used to recover data or objects if
the system experiences data loss. These backups can be copied multiple times from
the source and target volumes.
Figure 10-47 Create FlashCopy Mapping window
Whichever preset you select, you can customize the options for that preset, as shown in
Figure 10-48.
If you prefer not to customize these settings, go directly to step 9.
Tip: You can start FlashCopy from the GUI. However, if you plan to handle many
FlashCopy mappings or Consistency Groups periodically, or at varying times, creating a
script by using the operating system shell CLI might be more convenient.
3. Click Actions → Show Related Volumes, as shown in Figure 10-51 on page 505.
Tip: You can also right-click a FlashCopy mapping and select Show Related Volumes.
Figure 10-51 Show Related Volumes
In the Related Volumes window (Figure 10-52), you can see the related mapping for a
volume. If you click one of these volumes, you can see its properties.
Tip: You can also right-click a FlashCopy mapping and select Move to Consistency
Group.
4. In the Move FlashCopy Mapping to Consistency Group window, select the Consistency
Group for this FlashCopy mapping by using the drop-down list (Figure 10-54).
Tip: You can also right-click a FlashCopy mapping and select Remove from
Consistency Group.
4. In the Remove FlashCopy Mapping from Consistency Group window, click Remove, as
shown in Figure 10-56 on page 507.
Figure 10-56 Remove FlashCopy Mapping from Consistency Group
5. Click Remove, and the selected FlashCopy mapping is removed from the Consistency
Group.
Tip: You can also right-click a FlashCopy mapping and select Edit Properties.
4. In the Edit FlashCopy Mapping window, you can modify the Background Copy Rate
parameter for the selected FlashCopy mapping by using the drop-down list, as shown in
Figure 10-58 on page 508.
Note: Background Copy Rate: This option determines the priority that is given to the copy
process. A faster rate increases the priority of the process, which might affect the
performance of other operations.
Starting with V7.8.1, the background copy rate can be up to 2 GB/s.
5. In the Edit FlashCopy Mapping window, you can modify the Cleaning Rate for the
selected FlashCopy mapping by using the drop-down list, as shown in Figure 10-59.
Note: Cleaning Rate: This option minimizes the amount of time that a mapping is in the
stopping state. If the mapping is not complete, the target volume is offline while the
mapping is stopping.
Starting with V7.8.1, the cleaning rate can be up to 2 GB/s.
10.4.11 Renaming FlashCopy mapping
Complete the following steps to rename a FlashCopy mapping:
1. From the main panel, click Copy Services → FlashCopy or Consistency Groups, or
FlashCopy Mappings.
2. In the table, select the FlashCopy mapping that you want to rename.
3. Click Actions → Rename Mapping, as shown in Figure 10-60.
Tip: You can also right-click a FlashCopy mapping and select Rename Mapping.
4. In the Rename FlashCopy Mapping window, enter the new name that you want to assign
to the FlashCopy mapping and click Rename, as shown in Figure 10-61.
FlashCopy mapping name: You can use the letters A - Z and a - z, the numbers 0 - 9,
and the underscore (_) character. The FlashCopy mapping name can be 1 - 63
characters.
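The same rename can be performed from the CLI with the chfcmap command; the following line is a sketch that assumes a mapping named fcmap0 and a new name of your choice:
chfcmap -name DB_backup_map fcmap0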
Consistency Group name: The name can consist of the letters A - Z and a - z, the
numbers 0 - 9, the dash (-), and the underscore (_) character. The name can be 1 - 63
characters. However, the name cannot start with a number, a dash, or an underscore.
The new Consistency Group name is displayed in the Consistency Group pane.
Tip: You can also right-click a FlashCopy mapping and select Delete Mapping.
4. The Delete FlashCopy Mapping window opens, as shown in Figure 10-65 on page 511. In
the “Verify the number of FlashCopy mappings that you are deleting” field, you must enter
the number of FlashCopy mappings that you want to delete. This verification helps to avoid
deleting the wrong mappings.
If you still have target volumes that are inconsistent with the source volumes and you want
to delete these FlashCopy mappings, select Delete the FlashCopy mapping even when
the data on the target volume is inconsistent, or if the target volume has other
dependencies.
Click Delete, as shown in Figure 10-65.
Important: Deleting a Consistency Group does not delete the FlashCopy mappings.
Tip: You can also right-click a FlashCopy mapping and select Start.
4. You can check the FlashCopy progress in the Progress column of the table or in the
Running Tasks status area. After the task completes, the FlashCopy mapping status is in
a Copied state, as shown in Figure 10-69.
Important: Stop a FlashCopy copy process only when the data on the target volume is not
useful and can be discarded, or if you want to modify the FlashCopy mapping. When a
FlashCopy mapping is stopped, the target volume becomes invalid and it is set offline by
the system.
Figure 10-70 Stopping the FlashCopy copy process
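From the CLI, the equivalent operation is the stopfcmap command. The following line is a sketch that assumes a mapping named fcmap0:
stopfcmap fcmap0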
Volume mirroring is provided by a specific volume mirroring function in the I/O stack, and it
cannot be manipulated like a FlashCopy or other types of copy volumes. However, this
feature provides migration functionality, which can be obtained by splitting the mirrored copy
from the source, or by using the migrate to function. Volume mirroring cannot control backend
storage mirroring or replication.
With volume mirroring, host I/O completes when both copies are written, and this feature is
enhanced with a tunable latency tolerance. This tolerance provides an option to prefer
losing redundancy between the two copies over increasing host latency. The tunable
timeout value is set to either Latency or Redundancy.
The Latency tuning option, which is set with chvdisk -mirrorwritepriority latency, is the
default. It prioritizes host I/O latency, which yields a preference to host I/O over availability.
However, you might need to give preference to redundancy in your environment when
availability is more important than I/O response time. Use the chvdisk
-mirrorwritepriority redundancy command to set the redundancy option.
Regardless of which option you choose, volume mirroring can provide extra protection for
your environment.
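For example, assuming a mirrored volume named vdisk0, the write priority can be switched between the two behaviors with the following commands:
chvdisk -mirrorwritepriority latency vdisk0
chvdisk -mirrorwritepriority redundancy vdisk0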
Migration: While these migration methods do not disrupt access, you must take a brief
outage to install the host drivers for your Lenovo Storage V3700 V2, V3700 V2 XP, and
V5030 systems if you do not already have them installed.
With volume mirroring, you can move data to different MDisks within the same storage pool or
move data between different storage pools. Using volume mirroring over volume migration is
beneficial because with volume mirroring, storage pools do not need to have the same extent
size as is the case with volume migration.
Note: Volume mirroring does not create a second volume before you split copies. Volume
mirroring adds a second copy of the data under the same volume, so you end up having
one volume presented to the host with two copies of data connected to this volume. Only
splitting copies creates another volume, and then both volumes have only one copy of
the data.
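As a minimal sketch of this technique (the pool and volume names are hypothetical), a second copy is added in the target storage pool, its synchronization is monitored, and the copy is then split off as a new volume:
addvdiskcopy -mdiskgrp Pool_SSD vdisk0
lsvdisksyncprogress vdisk0
splitvdiskcopy -copy 1 -name vdisk0_clone vdisk0
Alternatively, after the copies are synchronized, rmvdiskcopy -copy 0 vdisk0 removes the original copy, which completes a migration to the new pool.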
Starting with V7.3 and the introduction of the new cache architecture, mirrored volume
performance has been significantly improved. Now, lower cache is beneath the volume
mirroring layer, which means that both copies have their own cache.
This approach helps in cases where the copies are of different types, for example generic and
compressed, because each copy now uses its own independent cache and performs its own read
prefetch. Destaging of the cache can now be done independently for each copy, so one copy
does not affect the performance of the other copy.
Also, because the IBM Storwize for Lenovo and Lenovo Storage V series destage algorithm
is MDisk aware, it can tune or adapt the destaging process, depending on MDisk type and
usage, for each copy independently.
10.6 Native IP replication
Before we describe Remote Copy features that benefit from the use of multiple Lenovo
Storage V3700 V2, V3700 V2 XP, and V5030 systems, it is important to describe the native IP
replication partnership option that was introduced with V7.2.
Bridgeworks’ SANSlide technology, which is integrated into the controller firmware, uses
artificial intelligence to help optimize network bandwidth use and adapt to changing workload
and network conditions.
This technology can improve remote mirroring network bandwidth usage up to three times,
which can enable clients to deploy a less costly network infrastructure, or speed up remote
replication cycles to enhance disaster recovery effectiveness.
With an Ethernet network data flow, the data transfer can slow down over time. This condition
occurs because of the latency that is caused by waiting for the acknowledgment of each set
of packets that are sent. The next packet set cannot be sent until the previous packet is
acknowledged, as shown in Figure 10-72.
Figure 10-73 Optimized network data flow by using Bridgeworks SANSlide technology
With native IP partnership, the following Copy Services features are supported:
Metro Mirror (MM)
Referred to as synchronous replication, MM provides a consistent copy of a source virtual
disk on a target virtual disk. Data is written to the target virtual disk synchronously after it is
written to the source virtual disk so that the copy is continuously updated.
Global Mirror (GM) and GM with Change Volumes
Referred to as asynchronous replication, GM provides a consistent copy of a source
virtual disk on a target virtual disk. Data is written to the target virtual disk asynchronously
so that the copy is continuously updated. However, the copy might not contain the last few
updates if a disaster recovery (DR) operation is performed. An added extension to GM is
GM with Change Volumes. GM with Change Volumes is the preferred method for use with
native IP replication.
In the storage layer, an IBM Storwize for Lenovo family system has the following
characteristics and requirements:
The system can perform MM and GM replication with other storage-layer systems.
The system can provide external storage for replication-layer systems or IBM SAN
Volume Controller.
The system cannot use a storage-layer system as external storage.
In the replication layer, an IBM SAN Volume Controller or an IBM Storwize for Lenovo family
system has the following characteristics and requirements:
The system can perform MM and GM replication with other replication-layer systems or
IBM SAN Volume Controller.
The system cannot provide external storage for a replication-layer system or an IBM SAN
Volume Controller.
The system can use a storage-layer system as external storage.
An IBM Storwize for Lenovo family system is in the storage layer by default, but the layer can
be changed. For example, you might want to change an IBM Storwize V5000 for Lenovo to a
replication layer to complete Global Mirror or Metro Mirror replication with a SAN Volume
Controller system.
To change the storage layer of an existing IBM Storwize for Lenovo system with internal
storage before you add a second system into the SAN zone, you do not need to stop I/O
operations. However, if the system has Fibre Channel (FC) connections to another IBM
Storwize for Lenovo family or SAN Volume Controller system in the SAN fabric, I/O
operations must be stopped temporarily. In this scenario, the FC ports must be disabled (for
example, by unplugging all the FC ports, changing zoning, disabling switch ports) before you
change the system layer. Then, you must re-enable the FC ports.
Note: Before you change the layer of an IBM Storwize for Lenovo family system, the
following conditions must be met:
No host object can be configured with worldwide port names (WWPNs) from an IBM
Storwize for Lenovo family system.
No system partnerships can be defined.
No IBM Storwize for Lenovo family system can be visible on the SAN fabric.
In your IBM Storwize for Lenovo system, use the lssystem command to check the current
system layer, as shown in Example 10-2.
Example 10-2 Output from lssystem command showing the system layer
IBM_Storwize:ITSO_5K:superuser>lssystem
id 000001002140020E
name ITSO_V5K
...
lines omitted for brevity
...
easy_tier_acceleration off
has_nas_key no
layer replication
...
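If the conditions that are described earlier are met, the layer can be changed with a single CLI command, for example:
chsystem -layer replication
Running lssystem again confirms that the layer field changed.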
Note: Consider the following rules for creating remote partnerships between the IBM SAN
Volume Controller and IBM Storwize for Lenovo Family systems:
An IBM SAN Volume Controller is always in the replication layer.
By default, the IBM Storwize for Lenovo systems are in the storage layer but can be
changed to the replication layer.
A system can form partnerships only with systems in the same layer.
Starting in software V6.4, an IBM SAN Volume Controller or IBM Storwize for Lenovo
system in the replication layer can virtualize an IBM Storwize for Lenovo in the storage
layer.
Note: A physical link is the physical IP link between the two sites, A (local) and B
(remote). Multiple IP addresses on local system A could be connected (by Ethernet
switches) to this physical link. Similarly, multiple IP addresses on remote system B
could be connected (by Ethernet switches) to the same physical link. At any point in
time, only a single IP address on cluster A can form an RC data session with an IP
address on cluster B.
The maximum throughput is restricted based on the use of 1 Gbps or 10 Gbps Ethernet
ports, and varies based on distance (for example, round-trip latency) and quality of
communication link (for example, packet loss):
– One 1 Gbps port might transfer up to 110 megabytes per second (MBps) unidirectional,
190 MBps bidirectional
– Two 1 Gbps ports might transfer up to 220 MBps unidirectional, 325 MBps bidirectional
– One 10 Gbps port might transfer up to 240 MBps unidirectional, 350 MBps bidirectional
– Two 10 Gbps ports might transfer up to 440 MBps unidirectional, 600 MBps bidirectional
Note: The definition of the Bandwidth setting that is used when IP partnerships are
created has changed. Previously, the bandwidth setting defaulted to 50 MB, and was the
maximum transfer rate from the primary site to the secondary site for initial sync/resyncs
of volumes.
The Link Bandwidth setting is now configured by using megabits (Mb) not MB. You set
the Link Bandwidth setting to a value that the communication link can sustain, or to
what is allocated for replication. The Background Copy Rate setting is now a
percentage of the Link Bandwidth. The Background Copy Rate setting determines the
available bandwidth for the initial sync and resyncs or for GM with Change Volumes.
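For example, an IP partnership with a hypothetical remote cluster IP address, a 100 Mbps link bandwidth, and a 50% background copy rate might be created with a command similar to the following (the address and values are placeholders):
mkippartnership -type ipv4 -clusterip 10.10.10.20 -linkbandwidthmbits 100 -backgroundcopyrate 50
The equivalent command must also be run on the partner system, pointing back at the local system.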
When the VLAN ID is configured for the IP addresses that are used for either iSCSI host
attach or IP replication, the appropriate VLAN settings on the Ethernet network and servers
must be configured correctly in order not to experience connectivity issues. After the VLANs
are configured, changes to the VLAN settings will disrupt iSCSI and IP replication traffic to
and from the partnerships.
During the VLAN configuration for each IP address, the VLAN settings for the local and
failover ports on two nodes of an I/O Group can differ. To avoid any service disruption,
switches must be configured so the failover VLANs are configured on the local switch ports
and the failover of IP addresses from a failing node to a surviving node succeeds. If failover
VLANs are not configured on the local switch ports, IP addresses cannot fail over to the
surviving node, and iSCSI and IP replication connectivity is disrupted during a node failure.
Consider the following requirements and procedures when implementing VLAN tagging:
VLAN tagging is supported for IP partnership traffic between two systems.
VLAN provides network traffic separation at the layer 2 level for Ethernet transport.
VLAN tagging by default is disabled for any IP address of a node port. You can use the
CLI or GUI to optionally set the VLAN ID for port IPs on both systems in the IP partnership.
When a VLAN ID is configured for the port IP addresses that are used in remote copy port
groups, appropriate VLAN settings on the Ethernet network must also be properly
configured to prevent connectivity issues.
Setting VLAN tags for a port is disruptive. Therefore, VLAN tagging requires that you stop the
partnership first before you configure VLAN tags. Then, restart again when the configuration
is complete.
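As a sketch of this procedure (the partnership name, IP address, VLAN ID, and port ID are placeholders), the partnership is stopped, the VLAN tag is set on the port IP address, and the partnership is restarted:
chpartnership -stop ITSO_REMOTE
cfgportip -node 1 -ip 10.10.10.11 -mask 255.255.255.0 -gw 10.10.10.1 -vlan 100 1
chpartnership -start ITSO_REMOTE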
For more information about configuring VLANs for IP replication, see the Information Center
for Lenovo Storage V3700 V2, V3700 V2 XP and V5030 at:
http://systemx.lenovofiles.com/help/topic/com.lenovo.storage.v5030.8.1.0.doc/svc_v
lanconfigipreplication.html
Remote copy group or remote copy port group: The following numbers group a set of IP
addresses that are connected to the same physical link. Therefore, only IP addresses that are
part of the same remote copy group can form remote copy connections with the partner system:
0 – Ports that are not configured for remote copy
1 – Ports that belong to remote copy port group 1
2 – Ports that belong to remote copy port group 2
Each IP address can be shared for iSCSI host attach and remote copy functionality.
Therefore, appropriate settings must be applied to each IP address.
IP partnership: Two systems that are partnered to perform remote copy over native IP links.
FC partnership: Two systems that are partnered to perform remote copy over native Fibre
Channel links.
Failover: Failure of a node within an I/O group causes the volume access to go through the
surviving node. The IP addresses fail over to the surviving node in the I/O group. When
the configuration node of the system fails, management IPs also fail over to an alternative
node.
Failback: When the failed node rejoins the system, all failed-over IP addresses are failed back
from the surviving node to the rejoined node, and virtual disk access is restored through
this node.
Discovery: Process by which two Lenovo Storage V-series systems exchange information about
their IP address configuration. For IP-based partnerships, only IP addresses configured for
Remote Copy are discovered. For example, the first discovery takes place when the user runs the
mkippartnership CLI command. Subsequent discoveries can take place as a result of
user activities (configuration changes) or as a result of hardware failures (for example,
node failure, port failure, and so on).
The following steps must be completed to establish two systems in an IP partnership:
1. The administrator configures the CHAP secret on both the systems. This step is not
mandatory, and users can choose to not configure the CHAP secret.
2. The administrator configures the system IP addresses on both local and remote systems
so that they can discover each other over the network.
Remote copy port group ID is a numerical tag associated with an IP port of Lenovo Storage
V3700 V2, V3700 V2 XP and V5030 systems to indicate which physical IP link it is connected
to. Multiple nodes could be connected to the same physical long-distance link, and must
therefore share the same remote copy port group ID.
In scenarios where there are two physical links between the local and remote clusters, two
remote copy port group IDs must be used to designate which IP addresses are connected to
which physical link. This configuration must be done by the system administrator using the
GUI or the cfgportip CLI command.
Remember: IP ports on both partners must have been configured with identical remote
copy port group IDs for the partnership to be established correctly.
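For example, with a single inter-site link, one port IP address on each system might be placed in remote copy port group 1 by using a command similar to the following (the node, addresses, and port ID are placeholders, and the same group ID must be used on the partner system):
cfgportip -node 1 -ip 10.10.10.11 -mask 255.255.255.0 -gw 10.10.10.1 -remotecopy 1 1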
The Lenovo Storage V3700 V2, V3700 V2 XP and V5030 systems IP addresses that are
connected to the same physical link are designated with identical remote copy port groups.
The Lenovo Storage V3700 V2, V3700 V2 XP and V5030 systems support three remote copy
groups: 0, 1, and 2.
The Lenovo Storage V3700 V2, V3700 V2 XP and V5030 systems IP addresses are, by
default, in remote copy port group 0. Ports in port group 0 are not considered for creating
remote copy data paths between two systems. For partnerships to be established over IP
links directly, IP ports must be configured in remote copy group 1 if a single inter-site link
exists, or in remote copy groups 1 and 2 if two inter-site links exist.
You can assign one IPv4 address and one IPv6 address to each Ethernet port on the system
platforms. Each of these IP addresses can be shared between iSCSI host attach and the IP
partnership. The user must configure the required IP address (IPv4 or IPv6) on an Ethernet
port with a remote copy port group.
The administrator might want to use IPv6 addresses for remote copy operations and use IPv4
addresses on that same port for iSCSI host attach. This configuration also implies that for two
systems to establish an IP partnership, both systems must have IPv6 addresses that are
configured.
Administrators can choose to dedicate an Ethernet port for IP partnership only. In that case,
host access must be explicitly disabled for that IP address and any other IP address that is
configured on that Ethernet port.
Note: To establish an IP partnership, each Lenovo Storage V3700 V2, V3700 V2 XP and
V5030 node must have only a single remote copy port group configured, either 1 or 2.
The remaining IP addresses must be in remote copy port group 0.
The general application of remote copy services is to maintain two real-time synchronized
copies of a volume. Often, two copies are geographically dispersed between two Lenovo
Storage V3700 V2, V3700 V2 XP and V5030 systems, although it is possible to use MM or
GM within a single system (within an I/O Group). If the master copy fails, you can enable an
auxiliary copy for
I/O operation.
Tips: Intracluster MM/GM uses more resources within the system when compared to an
intercluster MM/GM relationship, where resource allocation is shared between the
systems. Use intercluster MM/GM when possible. For mirroring volumes in the same
system, it is better to use Volume Mirroring or the FlashCopy feature.
A typical application of this function is to set up a dual-site solution that uses two Lenovo
Storage V3700 V2, V3700 V2 XP and V5030 systems. The first site is considered the primary
site or production site, and the second site is considered the backup site or failover site,
which is activated when a failure at the first site is detected.
10.7.1 Multiple Lenovo Storage V3700 V2, V3700 V2 XP and V5030 systems
mirroring
Each Lenovo Storage V3700 V2, V3700 V2 XP and V5030 system can maintain up to three
partner system relationships, which enables as many as four systems to be directly
associated with each other.
Note: For more information about restrictions and limitations of native IP replication, see
10.6.3, “IP partnership limitations” on page 518.
Figure 10-75 Star topology with four systems: V5000 (A), V5000 (B), V5000 (C), and V5000 (D)
Figure 10-75 on page 524 shows four systems in a star topology, with System A at the center.
System A can be a central DR site for the three other locations.
By using a star topology, you can migrate applications by using a process, such as the one
described in the following example:
1. Suspend application at A.
2. Remove the A → B relationship.
3. Create the A → C relationship (or the B → C relationship).
4. Synchronize to system C, and ensure that A → C is established:
– A → B, A → C, A → D, B → C, B → D, and C → D
– A → B, A → C, and B → C
Figure 10-77 shows an example of Lenovo Storage V3700 V2, V3700 V2 XP and V5030
systems in a fully connected topology (A → B, A → C, A → D, B → C, B → D, and C → D).
Figure 10-77 is a fully connected mesh in which every system has a partnership to each of
the three other systems. This topology enables volumes to be replicated between any pair of
systems, for example A → B, A → C, and B → C.
System partnership intermix: All of the preceding topologies are valid for the intermix of
the IBM SAN Volume Controller with the Lenovo Storage V3700 V2, V3700 V2 XP and
V5030 systems if the Lenovo Storage V3700 V2, V3700 V2 XP and V5030 systems are set
to the replication layer and running controller firmware code 6.3.0 or later.
An application that performs a high volume of database updates is designed with the concept
of dependent writes. With dependent writes, it is important to ensure that an earlier write
completed before a later write is started. Reversing the order of writes, or performing them in
a different order than the application intended, can undermine the application’s algorithms and
can lead to problems, such as detected or undetected data corruption.
The Metro Mirror and Global Mirror implementation operates in a manner that is designed to
always keep a consistent image at the secondary site. The Global Mirror implementation uses
complex algorithms that operate to identify sets of data and number those sets of data in
sequence. The data is then applied at the secondary site in the defined sequence.
Therefore, these commands can be issued simultaneously for all MM/GM relationships that
are defined within that Consistency Group, or to a single MM/GM relationship that is not part
of a Remote Copy Consistency Group. For example, when a startrcconsistgrp command is
issued to the Consistency Group, all of the MM/GM relationships in the Consistency Group
are started at the same time.
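For example, the following minimal sketch (with hypothetical names) creates a remote copy Consistency Group on the local system, adds an existing relationship to it, and then starts all relationships in the group together:
mkrcconsistgrp -cluster ITSO_REMOTE -name CG_MM_1
chrcrelationship -consistgrp CG_MM_1 MM_Rel_1
startrcconsistgrp CG_MM_1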
Figure 10-79 on page 527 shows the concept of Metro Mirror Consistency Groups. The same
applies to Global Mirror Consistency Groups.
Figure 10-79 Metro Mirror Consistency Group
Because the MM_Relationship 1 and 2 are part of the Consistency Group, they can be
handled as one entity. The stand-alone MM_Relationship 3 is handled separately.
Certain uses of MM/GM require the manipulation of more than one relationship. Remote
Copy Consistency Groups can group relationships so that they are manipulated in unison.
Although Consistency Groups can be used to manipulate sets of relationships that do not
need to satisfy these strict rules, this manipulation can lead to undesired side effects. The
rules behind a Consistency Group mean that certain configuration commands are prohibited.
These configuration commands are not prohibited if the relationship is not part of a
Consistency Group.
For example, consider the case of two applications that are independent, yet they are placed
into a single Consistency Group. If an error occurs, synchronization is lost and a background
copy process is required to recover synchronization. While this process is in progress,
MM/GM rejects attempts to enable access to the auxiliary volumes of either application.
If one application finishes its background copy more quickly than the other application,
MM/GM still refuses to grant access to its auxiliary volumes even though it is safe in this case.
The MM/GM policy is to refuse access to the entire Consistency Group if any part of it is
inconsistent.
Stand-alone relationships and Consistency Groups share a common configuration and state
model. All of the relationships in a non-empty Consistency Group have the same state as the
Consistency Group.
Zoning
The Lenovo Storage V3700 V2, V3700 V2 XP and V5030 systems FC ports on each system
must communicate with each other to create the partnership. Switch zoning is critical to
facilitating intercluster communication.
These channels are maintained and updated as nodes and links appear and disappear from
the fabric, and are repaired to maintain operation where possible. If communication between
the systems is interrupted or lost, an event is logged (and the Metro Mirror and Global Mirror
relationships stop).
Alerts: You can configure the system to raise Simple Network Management Protocol
(SNMP) traps to the enterprise monitoring system to alert on events that indicate an
interruption in internode communication occurred.
Intercluster links
All Lenovo Storage V3700 V2, V3700 V2 XP and V5030 node canisters maintain a database
of other devices that are visible on the fabric. This database is updated as devices appear
and disappear.
Devices that advertise themselves as Lenovo storage V-series family product nodes are
categorized according to the system to which they belong. Nodes that belong to the same
system establish communication channels between themselves and begin to exchange
messages to implement clustering and the functional protocols of controller firmware.
Nodes that are in separate systems do not exchange messages after initial discovery is
complete, unless they are configured together to perform a remote copy relationship.
The intercluster link carries control traffic to coordinate activity between two systems. The link
is formed between one node in each system. The traffic between the designated nodes is
distributed among logins that exist between those nodes.
If the designated node fails (or all of its logins to the remote system fail), a new node is
chosen to carry control traffic. This node change causes the I/O to pause, but it does not put
the relationships in a ConsistentStopped state.
With synchronous copies, host applications write to the master volume, but they do not
receive confirmation that the write operation completed until the data is written to the auxiliary
volume. This action ensures that both the volumes have identical data when the copy
completes. After the initial copy completes, the Metro Mirror function always maintains a fully
synchronized copy of the source data at the target site.
Increased distance directly affects host I/O performance because the writes are synchronous.
Use the requirements for application performance when you are selecting your Metro Mirror
auxiliary location.
Consistency Groups can be used to maintain data integrity for dependent writes, which is
similar to FlashCopy Consistency Groups (FlashCopy Consistency Groups are described in
10.3, “Implementing FlashCopy” on page 457).
The Lenovo Storage V series system provides intracluster and intercluster Metro Mirror.
Important: Performing Metro Mirror across I/O Groups within a system is not supported.
Two Lenovo Storage V series systems must be defined in a partnership, which must be
performed on both systems to establish a fully functional Metro Mirror partnership.
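As a minimal CLI sketch (the partnership, system, and volume names are hypothetical), the partnership is created on both systems, and a Metro Mirror relationship is then defined and started:
mkfcpartnership -linkbandwidthmbits 1024 -backgroundcopyrate 50 ITSO_REMOTE
mkrcrelationship -master vol_prod01 -aux vol_dr01 -cluster ITSO_REMOTE -name MM_Rel_1
startrcrelationship MM_Rel_1
The mkfcpartnership command must also be run on the remote system, pointing back at the local system, before the relationship is started.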
By using standard single-mode connections, the supported distance between two systems in
a Metro Mirror partnership is 10 km (6.2 miles), although greater distances can be achieved
by using extenders. For extended distance solutions, contact your Lenovo representative.
Limit: When a local fabric and a remote fabric are connected for Metro Mirror purposes,
the inter-switch link (ISL) hop count between a local node and a remote node cannot
exceed seven.
Events, such as a loss of connectivity between systems, can cause mirrored writes from the
master volume and the auxiliary volume to fail. In that case, Metro Mirror suspends writes to
the auxiliary volume and enables I/O to the master volume to continue to avoid affecting the
operation of the master volumes.
Figure 10-80 shows how a write to the master volume is mirrored to the cache of the auxiliary
volume before an acknowledgment of the write is sent back to the host that issued the write.
This process ensures that the auxiliary is synchronized in real time if it is needed in a failover
situation.
However, this process also means that the application is exposed to the latency and
bandwidth limitations (if any) of the communication link between the master and auxiliary
volumes. This process might lead to unacceptable application performance, particularly when
placed under peak load. Therefore, the use of traditional Fibre Channel Metro Mirror has
distance limitations that are based on your performance requirements. It does not support
more than 300 km (186.4 miles).
The controller firmware supports the resynchronization of changed data so that write failures
that occur on the master or auxiliary volumes do not require a complete resynchronization of
the relationship.
Switching copy direction: The copy direction for a Metro Mirror relationship can be
switched so that the auxiliary volume becomes the master, and the master volume
becomes the auxiliary, which is similar to the FlashCopy restore option. However, although
the FlashCopy target volume can operate in read/write mode, the target volume of the
started remote copy is always in read-only mode.
While the Metro Mirror relationship is active, the auxiliary volume is not accessible for host
application write I/O at any time. The Lenovo Storage V3700 V2, V3700 V2 XP and V5030
systems allow read-only access to the auxiliary volume when it contains a consistent image.
IBM Storwize for Lenovo allows boot time operating system discovery to complete without an
error, so that any hosts at the secondary site can be ready to start the applications with
minimum delay, if required.
For example, many operating systems must read LBA zero to configure a logical unit.
Although read access is allowed at the auxiliary in practice, the data on the auxiliary volumes
cannot be read by a host because most operating systems write a “dirty bit” to the file system
when it is mounted. Because this write operation is not allowed on the auxiliary volume, the
volume cannot be mounted.
This access is provided only where consistency can be ensured. However, coherency cannot
be maintained between reads that are performed at the auxiliary and later write I/Os that are
performed at the master.
To enable access to the auxiliary volume for host operations, you must stop the Metro Mirror
relationship by specifying the -access parameter. While access to the auxiliary volume for
host operations is enabled, the host must be instructed to mount the volume before the
application can be started, or instructed to perform a recovery process.
For example, the Metro Mirror requirement to enable the auxiliary copy for access
differentiates it from third-party mirroring software on the host, which aims to emulate a
single, reliable disk regardless of what system is accessing it. Metro Mirror retains the
property that there are two volumes in existence but it suppresses one volume while the copy
is being maintained.
The use of an auxiliary copy demands a conscious policy decision by the administrator that a
failover is required, and that the tasks to be performed on the host that is involved in
establishing the operation on the auxiliary copy are substantial. The goal is to make this copy
rapid (much faster when compared to recovering from a backup copy) but not seamless.
The failover process can be automated through failover management software. The Lenovo
Storage V3700 V2, V3700 V2 XP and V5030 systems provide SNMP traps and programming
(or scripting) for the CLI to enable this automation.
10.7.9 Global Mirror Overview
This section describes the Global Mirror copy service, which is an asynchronous remote copy
service. This service provides and maintains a consistent mirrored copy of a source volume
to a target volume.
Global Mirror establishes a Global Mirror relationship between two volumes of equal size. The
volumes in a Global Mirror relationship are referred to as the master (source) volume and the
auxiliary (target) volume, which is the same as Metro Mirror. Consistency Groups can be
used to maintain data integrity for dependent writes, which is similar to FlashCopy
Consistency Groups.
Global Mirror writes data to the auxiliary volume asynchronously, which means that host
writes to the master volume provide the host with confirmation that the write is complete
before the I/O completes on the auxiliary volume.
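A Global Mirror relationship is created in the same way as a Metro Mirror relationship, with the addition of the -global parameter. The following line is a sketch with hypothetical volume and system names:
mkrcrelationship -master vol_prod01 -aux vol_dr01 -cluster ITSO_REMOTE -global -name GM_Rel_1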
Limit: When a local fabric and a remote fabric are connected for Global Mirror purposes,
the ISL hop count between a local node and a remote node must not exceed seven hops.
The Global Mirror function provides the same function as Metro Mirror remote copy, but over
long-distance links with higher latency without requiring the hosts to wait for the full round-trip
delay of the long-distance link.
Figure 10-81 on page 534 shows that a write operation to the master volume is
acknowledged back to the host that is issuing the write before the write operation is mirrored
to the cache for the auxiliary volume.
The Global Mirror algorithms always maintain a consistent image on the auxiliary. They
achieve this consistent image by identifying sets of I/Os that are active concurrently at the
master, assigning an order to those sets, and applying those sets of I/Os in the assigned
order at the secondary. As a result, Global Mirror maintains the features of Write Ordering
and Read Stability.
The multiple I/Os within a single set are applied concurrently. The process that marshals the
sequential sets of I/Os operates at the secondary system. Therefore, the process is not
subject to the latency of the long-distance link. These two elements of the protocol ensure
that the throughput of the total system can be grown by increasing system size while
maintaining consistency across a growing data set.
Global Mirror write I/O from the production system to a secondary system requires
serialization and sequence-tagging before it is sent across the network to the remote site (to
maintain a write-order consistent copy of data).
To avoid affecting the production site, the Lenovo Storage V series system supports more
parallelism in processing and managing Global Mirror writes on the secondary system by
using the following methods:
Secondary system nodes store replication writes in new redundant non-volatile cache
Cache content details are shared between nodes
Cache content details are batched together to make node-to-node latency less of an issue
Nodes intelligently apply these batches in parallel as soon as possible
Nodes internally manage and optimize Global Mirror secondary write I/O processing
In a failover scenario where the secondary site must become the master source of data,
certain updates might be missing at the secondary site. Therefore, any applications that use
this data must have an external mechanism for recovering the missing updates and
reapplying them; for example, a transaction log replay.
Global Mirror is supported over FC, FC over IP (FCIP), FC over Ethernet (FCoE), and native
IP connections. The maximum supported round-trip latency between sites depends on the
type of partnership between systems, the version of software, and the system hardware that
is used.
Figure 10-82 lists the maximum round-trip latency. This restriction applies to all variants of
remote mirroring. More configuration requirements and guidelines apply to systems that
perform remote mirroring over extended distances, where the round-trip time is greater than
80 ms.
Colliding writes
Before V4.3.1, the Global Mirror algorithm required that only a single write is active on any
512-byte logical block address (LBA) of a volume. If a further write is received from a host
while the auxiliary write is still active (even though the master write might complete), the new
host write is delayed until the auxiliary write completes.
If multiple writes are allowed to be applied to the master for a sector, only the most recent
write gets the correct data during reconstruction. If reconstruction is interrupted for any
reason, the intermediate state of the auxiliary is inconsistent. Applications that deliver such
write activity do not achieve the performance that Global Mirror is intended to support. A
volume statistic is maintained about the frequency of these collisions.
An attempt is made to allow multiple writes to a single location to be outstanding in the Global
Mirror algorithm. There is still a need for master writes to be serialized, and the intermediate
states of the master data must be kept in a non-volatile journal while the writes are
outstanding to maintain the correct write ordering during reconstruction. Reconstruction must
never overwrite data on the auxiliary with an earlier version. The volume statistic that is
monitoring colliding writes is now limited to those writes that are not affected by this change.
The following numbers correspond to the numbers that are shown in Figure 10-83:
(1) The first write is performed from the host to LBA X.
(2) The host is provided acknowledgment that the write completed even though the
mirrored write to the auxiliary volume is not yet complete.
(1’) and (2’) occur asynchronously with the first write.
(3) The second write is performed from the host also to LBA X. If this write occurs before
(2’), the write is written to the journal file.
(4) The host is provided acknowledgment that the second write is complete.
Delay simulation
An optional feature for Global Mirror enables a delay simulation to be applied on writes that
are sent to auxiliary volumes. This feature enables you to perform testing that detects
colliding writes. Therefore, you can use this feature to test an application before the full
deployment of the feature. The feature can be enabled separately for each of the intracluster
or intercluster Global Mirrors.
You specify the delay setting by using the chsystem command and view the delay by using the
lssystem command. The gm_intra_cluster_delay_simulation field expresses the amount of
time that intracluster auxiliary I/Os are delayed. The gm_inter_cluster_delay_simulation
field expresses the amount of time that intercluster auxiliary I/Os are delayed. A value of zero
disables the feature.
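For example, to simulate a 20-millisecond delay on intercluster auxiliary writes and then verify
the setting, commands similar to the following can be used (the delay value shown is
illustrative only):
chsystem -gminterdelaysimulation 20
lssystem
Setting the parameter back to 0 disables the simulation again.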
Tip: If you are experiencing repeated problems with the delay on your link, make sure that
the delay simulator was properly disabled.
Global Mirror has functionality that is designed to address the following conditions, which
might negatively affect certain Global Mirror implementations:
The estimation of the bandwidth requirements tends to be complex.
Ensuring the latency and bandwidth requirements can be met is often difficult.
Congested hosts on the source or target site can cause disruption.
Congested network links can cause disruption with only intermittent peaks.
To address these issues, Change Volumes were added as an option for Global Mirror
relationships. Change Volumes use the FlashCopy functionality, but they cannot be
manipulated as FlashCopy volumes because they are for a special purpose only. Depending
on the cycling mode defined, Change Volumes replicate point-in-time images on a cycling
period.
Note: The cycling mode can be either none or multi. When the cycling mode is set to none,
Global Mirror behaves identically to Global Mirror without Change Volumes. When the
cycling mode is set to multi, Global Mirror behaves as described in this section.
The cycling mode can be changed only when the relationship is stopped and in
consistent_stopped or inconsistent_stopped status.
Your change rate needs to include only the condition of the data at the point-in-time that the
image was taken, rather than all the updates during the period. The use of this function can
provide significant reductions in replication volume.
With Change Volumes, a FlashCopy mapping exists between the primary volume and the
primary Change Volume. The mapping is updated on the cycling period (60 seconds to one
day). The primary Change Volume is then replicated to the secondary Global Mirror volume at
the target site, which is then captured in another Change Volume on the target site. This
approach provides an always consistent image at the target site and protects your data from
being inconsistent during resynchronization.
Figure 10-86 shows how Change Volumes might save you replication traffic.
In Figure 10-86 on page 538, you can see several I/Os on the source and the same number
on the target, and in the same order. Assuming that this data is the same set of data being
updated repeatedly, this approach results in wasted network traffic. The I/O can be completed
much more efficiently, as shown in Figure 10-87.
Figure 10-87 Global Mirror I/O with Change Volumes V6.3.0 and beyond
In Figure 10-87, the same data is being updated repeatedly. Therefore, Change Volumes
demonstrate significant I/O transmission savings by needing to send I/O number 16 only,
which was the last I/O before the cycling period.
You can adjust the cycling period by using the chrcrelationship -cycleperiodseconds
<60 - 86400> command from the CLI. If a copy does not complete in the cycle period, the
next cycle does not start until the prior cycle completes. For this reason, the use of Change
Volumes gives you the following possibilities for RPO:
If your replication completes in the cycling period, your RPO is twice the cycling period.
If your replication does not complete within the cycling period, RPO is twice the
completion time. The next cycling period starts immediately after the prior cycling period is
finished.
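For example, to set a five-minute cycling period on a Global Mirror with Change Volumes
relationship, a command similar to the following can be used (GMCV_Rel_1 is a hypothetical
relationship name):
chrcrelationship -cycleperiodseconds 300 GMCV_Rel_1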
Carefully consider your business requirements versus the performance of Global Mirror with
Change Volumes. Global Mirror with Change Volumes increases the intercluster traffic for
more frequent cycling periods. Therefore, selecting the shortest cycle periods possible is not
always the answer. In most cases, the default value meets requirements and performs well.
Important: When you create your Global Mirror volumes with Change Volumes, make sure
that you remember to select the Change Volume on the auxiliary (target) site. Failure to do
so leaves you exposed during a resynchronization operation.
If this preferred practice is not maintained, for example, source volumes are assigned to only
one node in the I/O group, you can change the preferred node for each volume to distribute
volumes evenly between the nodes. You can also change the preferred node for volumes that
are in a remote copy relationship without affecting the host I/O to a particular volume.
Background copy I/O is scheduled to avoid bursts of activity that might have an adverse effect
on system behavior. An entire grain of tracks on one volume is processed at around the same
time but not as a single I/O. Double buffering is used to try to use sequential performance
within a grain. However, the next grain within the volume might not be scheduled for some
time. Multiple grains might be copied simultaneously, and might be enough to satisfy the
requested rate, unless the available resources cannot sustain the requested rate.
Global Mirror paces the rate at which background copy is performed by the appropriate
relationships. Background copy occurs on relationships that are in the InconsistentCopying
state with a status of Online.
The quota of background copy (configured on the intercluster link) is divided evenly between
all nodes that are performing background copy for one of the eligible relationships. This
allocation is made irrespective of the number of disks for which the node is responsible. Each
node in turn divides its allocation evenly between the multiple relationships that are
performing a background copy.
The default value of the background copy is 25 megabytes per second (MBps), per volume.
Important: The background copy value is a system-wide parameter that can be changed
dynamically but only on a per-system basis and not on a per-relationship basis. Therefore,
the copy rate of all relationships changes when this value is increased or decreased. In
systems with many remote copy relationships, increasing this value might affect overall
system or intercluster link performance. The background copy rate can be changed from
1 - 1000 MBps.
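As a sketch only, on recent code levels this per-relationship background copy rate is
controlled by the relationshipbandwidthlimit system parameter (this parameter name is an
assumption and is not taken from this document). If it is available on your system, a command
similar to the following changes the rate to 50 MBps:
chsystem -relationshipbandwidthlimit 50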
If the auxiliary volume is thin-provisioned and the region is deallocated, the special buffer
prevents a write and, therefore, an allocation. If the auxiliary volume is not thin-provisioned or
the region in question is an allocated region of a thin-provisioned volume, a buffer of “real”
zeros is synthesized on the auxiliary and written as normal.
Full synchronization after creation
The full synchronization after creation method is the default method. It is the simplest method
in that it requires no administrative activity apart from issuing the necessary commands.
However, in certain environments, the available bandwidth can make this method unsuitable.
Synchronized before creation
With this method, the administrator must ensure that the master and auxiliary volumes already
contain identical data before the relationship is created. Do not allow I/O on the master or
auxiliary before the relationship is established. Then, the administrator must run the following
commands:
Run mkrcrelationship with the -sync flag.
Run startrcrelationship without the -clean flag.
Important: Failure to perform these steps correctly can cause MM/GM to report the
relationship as consistent when it is not, therefore creating a data loss or data integrity
exposure for hosts accessing data on the auxiliary volume.
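As a minimal sketch of this sequence, assuming the volumes are already identical and using
hypothetical volume, system, and relationship names, the commands look similar to the
following:
mkrcrelationship -master MASTER_VOL -aux AUX_VOL -cluster REMOTE_SYSTEM -name MM_Rel_1 -sync
startrcrelationship MM_Rel_1
The -master, -aux, -cluster, and -name values are placeholders that you replace with your own
volume names, remote system name, and relationship name.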
Number of Metro Mirror or Global Mirror relationships per Consistency Group: No limit is
imposed beyond the Remote Copy relationships per system limit.
Total volume size per I/O Group: There is a per-I/O Group limit of 1024 terabytes (TB) on the
quantity of master and auxiliary volume address spaces that can participate in Metro Mirror
and Global Mirror relationships. This maximum configuration uses all 512 MiB of bitmap space
for the I/O Group and allows 10 MiB of space for all remaining copy services features.
For further details on the configuration limits, see Information Center: Lenovo Storage
> Lenovo Storage V3700 V2/V5030 Series > Version 8.1.0 > Configuring > Configuring >
Known issues and limitations with Virtual Volumes at:
http://systemx.lenovofiles.com/help/topic/com.lenovo.storage.v5030.8.1.0.doc/svc_vmwareknownvvoliss.html
Figure 10-88 Metro Mirror or Global Mirror mapping state diagram
When the MM/GM relationship is created, you can specify whether the auxiliary volume is
already in sync with the master volume, and the background copy process is then skipped.
This capability is useful when MM/GM relationships are established for volumes that were
created with the format option.
Stop on Error
When a MM/GM relationship is stopped (intentionally, or because of an error), the state
changes. For example, the MM/GM relationships in the ConsistentSynchronized state enter
the ConsistentStopped state, and the MM/GM relationships in the InconsistentCopying state
enter the InconsistentStopped state.
If the connection is broken between the two systems that are in a partnership, all (intercluster)
MM/GM relationships enter a Disconnected state. For more information, see “Connected
versus disconnected” on page 544.
State overview
In the following sections, we provide an overview of the various MM/GM states.
When the two systems can communicate, the systems and the relationships that span them
are described as connected. When they cannot communicate, the systems and the
relationships spanning them are described as disconnected.
In this state, both systems are left with fragmented relationships and are limited regarding the
configuration commands that can be performed. The disconnected relationships are
portrayed as having a changed state. The new states describe what is known about the
relationship and the configuration commands that are permitted.
When the systems can communicate again, the relationships are reconnected. MM/GM
automatically reconciles the two state fragments, considering any configuration or other event
that occurred while the relationship was disconnected. As a result, the relationship can return
to the state that it was in when it became disconnected, or it can enter a new state.
Relationships that are configured between volumes in the same Lenovo Storage V3700 V2,
V3700 V2 XP and V5030 systems (intracluster) are never described as being in a
disconnected state.
An auxiliary volume is described as consistent if it contains data that might be read by a host
system from the master if power failed at an imaginary point while I/O was in progress, and
power was later restored. This imaginary point is defined as the recovery point.
The requirements for consistency are expressed regarding activity at the master up to the
recovery point. The auxiliary volume contains the data from all of the writes to the master for
which the host received successful completion and that data was not overwritten by a
subsequent write (before the recovery point).
Consider writes for which the host did not receive a successful completion (that is, it received
bad completion or no completion at all). If the host then performed a read from the master of
that data that returned successful completion and no later write was sent (before the recovery
point), the auxiliary contains the same data as the data that was returned by the read from the
master.
From the point of view of an application, consistency means that an auxiliary volume contains
the same data as the master volume at the recovery point (the time at which the imaginary
power failure occurred). If an application is designed to cope with an unexpected power
failure, this assurance of consistency means that the application can use the auxiliary and
begin operation as though it was restarted after the hypothetical power failure. Again,
maintaining the application write ordering is the key property of consistency.
Because of the risk of data corruption, and in particular undetected data corruption, MM/GM
strongly enforces the concept of consistency and prohibits access to inconsistent data.
When you decide how to use Consistency Groups, consider the scope of an application's
data and all of the interdependent systems that communicate and exchange information.
If two programs or systems communicate and store details as a result of the information
exchanged, either of the following actions might occur:
All of the data that is accessed by the group of systems must be placed into a single
Consistency Group.
The systems must be recovered independently (each within its own Consistency Group).
Then, each system must perform recovery with the other applications to become
consistent with them.
Consistency does not mean that the data is up-to-date. A copy can be consistent and yet
contain data that was frozen at a point in the past. Write I/O might continue to a master but
not be copied to the auxiliary. This state arises when it becomes impossible to keep data
up-to-date and maintain consistency. An example is a loss of communication between
systems when you are writing to the auxiliary.
When communication is lost for an extended period, MM/GM tracks the changes that
occurred on the master, but not the order or the details of such changes (write data). When
communication is restored, it is impossible to synchronize the auxiliary without sending write
data to the auxiliary out of order. Therefore, consistency is lost.
Detailed states
In the following sections, we describe the states that are portrayed to the user for either
Consistency Groups or relationships. We also describe information that is available in each
state. The major states are designed to provide guidance about the available configuration
commands.
InconsistentStopped
InconsistentStopped is a connected state. In this state, the master is accessible for read and
write I/O, but the auxiliary is not accessible for read or write I/O. A copy process must be
started to make the auxiliary consistent.
This state is entered when the relationship or Consistency Group was InconsistentCopying
and suffered a persistent error or received a stop command that caused the copy process
to stop.
A start command causes the relationship or Consistency Group to move to the
InconsistentCopying state. A stop command is accepted, but has no effect.
If the relationship or Consistency Group becomes disconnected, the auxiliary side transitions
to InconsistentDisconnected. The master side transitions to IdlingDisconnected.
InconsistentCopying
InconsistentCopying is a connected state. In this state, the master is accessible for read and
write I/O, but the auxiliary is not accessible for read or write I/O. This state is entered after a
start command is issued to an InconsistentStopped relationship or a Consistency Group.
A persistent error or stop command places the relationship or Consistency Group into an
InconsistentStopped state. A start command is accepted but has no effect.
If the relationship or Consistency Group becomes disconnected, the auxiliary side transitions
to InconsistentDisconnected. The master side transitions to IdlingDisconnected.
ConsistentStopped
ConsistentStopped is a connected state. In this state, the auxiliary contains a consistent
image, but it might be out-of-date in relation to the master. This state can arise when a
relationship was in a ConsistentSynchronized state and experienced an error that forces a
Consistency Freeze. It can also arise when a relationship is created with a
CreateConsistentFlag set to TRUE.
Normally, write activity that follows an I/O error causes updates to the master, and the
auxiliary is no longer synchronized. In this case, consistency must be given up for a period to
reestablish synchronization. You must use a start command with the -force option to
acknowledge this condition, and the relationship or Consistency Group transitions to
InconsistentCopying. Enter this command only after all outstanding events are repaired.
In the unusual case where the master and the auxiliary are still synchronized (perhaps
following a user stop, and no further write I/O was received), a start command takes the
relationship to ConsistentSynchronized. No -force option is required. Also, in this case, you
can enter a switch command that moves the relationship or Consistency Group to
ConsistentSynchronized and reverses the roles of the master and the auxiliary.
ConsistentSynchronized
ConsistentSynchronized is a connected state. In this state, the master volume is accessible
for read and write I/O, and writes that are sent to the master volume are also sent to the
auxiliary volume so that the auxiliary remains a consistent, up-to-date copy.
A stop command takes the relationship to the ConsistentStopped state. A stop command
with the -access parameter takes the relationship to the Idling state.
If the relationship or Consistency Group becomes disconnected, the same transitions are
made as for ConsistentStopped.
Idling
Idling is a connected state. Both master and auxiliary volumes operate in the master role.
Therefore, both master and auxiliary volumes are accessible for write I/O.
In this state, the relationship or Consistency Group accepts a start command. MM/GM
maintains a record of regions on each disk that received write I/O while they were idling. This
record is used to determine what areas must be copied following a start command.
The start command must specify the new copy direction. A start command can cause a
loss of consistency if either volume in any relationship received write I/O, which is indicated
by the Synchronized status. If the start command leads to loss of consistency, you must
specify the -force parameter.
Also, the relationship or Consistency Group accepts a -clean option on the start command
while in this state. If the relationship or Consistency Group becomes disconnected, both sides
change their state to IdlingDisconnected.
IdlingDisconnected
IdlingDisconnected is a disconnected state. The target volumes in this half of the
relationship or Consistency Group are all in the master role and accept read or write I/O.
The priority in this state is to recover the link to restore the relationship or consistency.
No configuration activity is possible (except for deletes or stops) until the relationship
becomes connected again. At that point, the relationship transitions to a connected state. The
exact connected state that is entered depends on the state of the other half of the relationship
or Consistency Group, which depends on the following factors:
The state when it became disconnected
The write activity since it was disconnected
The configuration activity since it was disconnected
While IdlingDisconnected, if a write I/O is received that causes the loss of synchronization
(synchronized attribute transitions from true to false) and the relationship was not already
stopped (either through a user stop or a persistent error), an event is raised to notify you of
the condition. This same event also is raised when this condition occurs for the
ConsistentSynchronized state.
When the relationship or Consistency Group becomes connected again, the relationship
becomes InconsistentCopying automatically unless either of the following conditions
are true:
The relationship was InconsistentStopped when it became disconnected.
The user issued a stop command while disconnected.
ConsistentDisconnected
ConsistentDisconnected is a disconnected state. The target volumes in this half of the
relationship or Consistency Group are all in the auxiliary role, and accept read I/O but not
write I/O.
In this state, the relationship or Consistency Group displays an attribute of FreezeTime, which
is the point when Consistency was frozen. When it is entered from ConsistentStopped, it
retains the time that it had in that state. When it is entered from ConsistentSynchronized, the
FreezeTime shows the last time at which the relationship or Consistency Group was known
to be consistent. This time corresponds to the time of the last successful heartbeat to the
other system.
A stop command with the -access flag set to true transitions the relationship or Consistency
Group to the IdlingDisconnected state. This state allows write I/O to be performed to the
auxiliary volume and is used as part of a DR scenario.
When the relationship or Consistency Group becomes connected again, the relationship or
Consistency Group becomes ConsistentSynchronized only if this action does not lead to a
loss of consistency. The following conditions must be true:
The relationship was ConsistentSynchronized when it became disconnected.
No writes received successful completion at the master while disconnected.
Empty
This state applies only to Consistency Groups. It is the state of a Consistency Group that has
no relationships and no other state information to show. It is entered when a Consistency
Group is first created. It is exited when the first relationship is added to the Consistency
Group, at which point the state of the relationship becomes the state of the Consistency
Group.
10.8 Consistency protection for Remote and Global mirror
Global Mirror with Change Volumes (GMCV) relationships use a secondary change volume to
retain a consistent copy during resynchronization and automatically restart when they can.
From V7.8.1, Metro Mirror and regular Global Mirror relationships also behave like a GMCV
relationship if a secondary change volume is configured, which does the following:
Makes Metro Mirror and Global Mirror more suited to links with intermittent connectivity
and IP replication
Stops the relationship as before if no secondary change volume is configured
The consistency protection mechanism for Metro Mirror and regular Global Mirror uses
change volumes and has the following characteristics:
It is an enhancement to the existing Metro Mirror and Global Mirror copy types that uses
technology already present in Global Mirror with Change Volumes
It does not require a FlashCopy license
It uses two FlashCopy maps per relationship per system (so a maximum of 2500
relationships on a system that supports 10,000 volumes)
It is supported on all systems that can have a remote mirroring license
It requires both participating systems to be at V7.8.1 or later
Consistency protection for Metro Mirror or regular Global Mirror is enabled by configuring a
secondary change volume; no further configuration is needed to enable this behavior. All
relationships in a consistency group must be configured in this way for the behavior to work
on any relationship in the consistency group.
Table 10-11 describes the events and the expected behavior when the consistency
protection mechanism is enabled for Metro Mirror and regular Global Mirror.
Note: The change volume should be created as thin-provisioned but, in theory, it can grow
to 100% of the volume size.
A change volume is owned and used by the associated Remote Copy relationship. Therefore,
it cannot be:
Mapped to a host
Used as the source or target of any FlashCopy maps
Part of any other relationship
A file system disk
Following these commands, the remote host server is mapped to the auxiliary volume and the
disk is available for I/O.
The command set for MM/GM contains the following broad groups:
Commands to create, delete, and manipulate relationships and Consistency Groups
Commands to cause state changes
If a configuration command affects more than one system, MM/GM performs the work to
coordinate configuration activity between the systems. Certain configuration commands can
be performed only when the systems are connected, and fail with no effect when they are
disconnected.
Other configuration commands are permitted even though the systems are disconnected.
The state is reconciled automatically by MM/GM when the systems become connected again.
For any command (with one exception), a single system receives the command from the
administrator. This design is significant for defining the context for a CreateRelationship
(mkrcrelationship) or CreateConsistencyGroup (mkrcconsistgrp) command, in which case the
system that receives the command is called the local system.
The exception is a command that sets systems into a MM/GM partnership. The
mkfcpartnership and mkippartnership commands must be issued on both the local and
remote systems.
The commands in this section are described as an abstract command set, and are
implemented by either of the following methods:
CLI can be used for scripting and automation.
GUI can be used for one-off tasks.
Note: This command is not supported on IP partnerships. Use mkippartnership for
IP connections.
Important: Do not set this value higher than the default without first establishing that
the higher bandwidth can be sustained without affecting the host’s performance. The
limit must never be higher than the maximum that is supported by the infrastructure
connecting the remote sites, regardless of the compression rates that you might
achieve.
-gmlinktolerance link_tolerance
This parameter specifies the maximum period that the system tolerates delay before
stopping Global Mirror relationships. Specify values 60 - 86,400 seconds in increments of
10 seconds. The default value is 300. Do not change this value except under the direction
of IBM Support.
-gmmaxhostdelay max_host_delay
This parameter specifies the maximum time delay, in milliseconds, at which the Global
Mirror link tolerance timer starts counting down. This threshold value determines the
additional effect that Global Mirror operations can add to the response times of the Global
Mirror source volumes. You can use this parameter to increase the threshold from the
default value of 5 milliseconds.
-gminterdelaysimulation link_tolerance
This parameter specifies the number of milliseconds that I/O activity (intercluster copying
to an auxiliary volume) is delayed. This parameter enables you to test performance
implications before Global Mirror is deployed and a long-distance link is obtained. Specify
a value of 0 - 100 milliseconds in 1-millisecond increments. The default value is 0. Use this
argument to test each intercluster Global Mirror relationship separately.
-gmintradelaysimulation link_tolerance
This parameter specifies the number of milliseconds that I/O activity (intracluster copying
to an auxiliary volume) is delayed. By using this parameter, you can test performance
implications before Global Mirror is deployed and a long-distance link is obtained. Specify
a value of 0 - 100 milliseconds in 1-millisecond increments. The default value is 0. Use this
argument to test each intracluster Global Mirror relationship separately.
-maxreplicationdelay max_replication_delay
This parameter sets a maximum replication delay in seconds. The value must be a
number 1 - 360. This feature sets the maximum number of seconds to be tolerated to
complete a single I/O. If an I/O cannot complete within the max_replication_delay, the
system logs a 1920 event and stops the affected remote copy relationship.
Use the chsystem command to adjust these values, as shown in the following example:
chsystem -gmlinktolerance 300
You can view all of these parameter values by using the lssystem <system_name> command.
We focus on the gmlinktolerance parameter in particular. If poor response extends past the
specified tolerance, a 1920 event is logged and one or more GM relationships automatically
stop to protect the application hosts at the primary site. During normal operations, application
hosts experience a minimal effect from the response times because the GM feature uses
asynchronous replication.
However, if GM operations experience degraded response times from the secondary system
for an extended period, I/O operations begin to queue at the primary system. This queue
results in an extended response time to application hosts. In this situation, the
gmlinktolerance feature stops GM relationships, and the application host’s response time
returns to normal.
After a 1920 event occurs, the GM auxiliary volumes are no longer in the
consistent_synchronized state until you fix the cause of the event and restart your GM
relationships. For this reason, ensure that you monitor the system to track when these 1920
events occur.
You can disable the gmlinktolerance feature by setting the gmlinktolerance value to 0
(zero). However, the gmlinktolerance feature cannot protect applications from extended
response times if it is disabled. It might be appropriate to disable the gmlinktolerance feature
under the following circumstances:
During SAN maintenance windows in which degraded performance is expected from SAN
components, and application hosts can withstand extended response times from GM
volumes.
During periods when application hosts can tolerate extended response times and it is
expected that the gmlinktolerance feature might stop the GM relationships. For example,
if you test by using an I/O generator that is configured to stress the back-end storage, the
gmlinktolerance feature might detect the high latency and stop the GM relationships.
Disabling the gmlinktolerance feature prevents this result at the risk of exposing the test
host to extended response times.
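As an illustrative sketch for such a maintenance window, the feature can be disabled before
the window and re-enabled with its default value afterward by using commands similar to the
following:
chsystem -gmlinktolerance 0
chsystem -gmlinktolerance 300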
A 1920 event indicates that one or more of the SAN components cannot provide the
performance that is required by the application hosts. This situation can be temporary (for
example, a result of a maintenance activity) or permanent (for example, a result of a
hardware failure or an unexpected host I/O workload).
If 1920 events are occurring, it can be necessary to use a performance monitoring and
analysis tool, such as the IBM Virtual Storage Center, to help identify and resolve the
problem.
The svctask mkfcpartnership command
Use the mkfcpartnership command to establish a one-way MM/GM partnership between the
local system and a remote system. Alternatively, use mkippartnership to create IP-based
partnership.
To establish a fully functional MM/GM partnership, you must issue this command on both
systems. This step is a prerequisite for creating MM/GM relationships between volumes on
the Lenovo Storage V series systems.
When the partnership is created, you can specify the bandwidth to be used by the
background copy process between the local and remote system. If it is not specified, the
bandwidth defaults to 50 MBps. The bandwidth must be set to a value that is less than or
equal to the bandwidth that can be sustained by the intercluster link.
To set the background copy bandwidth optimally, ensure that you consider all three
resources: primary storage, intercluster link bandwidth, and auxiliary storage. Provision the
most restrictive of these three resources to accommodate both the background copy
bandwidth and the peak foreground I/O workload.
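As a minimal sketch, assuming the remote system is named ITSO_V5030 (a hypothetical
name) and a link on which 1000 Mbps is available with 50% allowed for background copy, the
partnership can be created with a command similar to the following; the -linkbandwidthmbits
and -backgroundcopyrate parameters reflect the current command syntax as an assumption
and are not taken from this document:
mkfcpartnership -linkbandwidthmbits 1000 -backgroundcopyrate 50 ITSO_V5030
Remember that the equivalent command must also be issued on the remote system before
the partnership becomes fully configured.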
The MM/GM consistency group name must be unique across all consistency groups that are
known to the systems owning this consistency group. If the consistency group involves two
systems, the systems must be in communication throughout the creation process.
Optional parameter: If you do not use the -global optional parameter, a Metro Mirror
relationship is created rather than a Global Mirror relationship.
The auxiliary volume must be equal in size to the master volume or the command fails. If both
volumes are in the same system, they must be in the same I/O Group. The master and
auxiliary volume cannot be in an existing relationship, and they cannot be the target of a
FlashCopy mapping. This command returns the new relationship (relationship_id) when
successful.
When the MM/GM relationship is created, you can add it to an existing Consistency Group, or
it can be a stand-alone MM/GM relationship if no Consistency Group is specified.
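For example, a Global Mirror relationship that is created directly into an existing Consistency
Group might look similar to the following sketch (the volume, system, and group names are
hypothetical):
mkrcrelationship -master DB_Master -aux DB_Aux -cluster ITSO_V5030 -global -consistgrp CG_DB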
When the command is issued, you can specify the master volume name and auxiliary system
to list the candidates that comply with the prerequisites to create a MM/GM relationship. If the
command is issued with no parameters, all of the volumes that are not disallowed by another
configuration state, such as being a FlashCopy target, are listed.
When the command is issued, you can set the copy direction if it is undefined, and, optionally,
you can mark the auxiliary volume of the relationship as clean. The command fails if it is used
as an attempt to start a relationship that is already a part of a consistency group.
You can issue this command only to a relationship that is connected. For a relationship that is
idling, this command assigns a copy direction (master and auxiliary roles) and begins the
copy process. Otherwise, this command restarts a previous copy process that was stopped
by a stop command or by an I/O error.
If the resumption of the copy process leads to a period when the relationship is inconsistent,
you must specify the -force parameter when the relationship is restarted. This situation can
arise if, for example, the relationship was stopped and then further writes were performed on
the original master of the relationship.
The use of the -force parameter here is a reminder that the data on the auxiliary becomes
inconsistent while resynchronization (background copying) takes place. Therefore, this data
is unusable for DR purposes before the background copy completes.
In the Idling state, you must specify the master volume to indicate the copy direction. In
other connected states, you can provide the -primary argument, but it must match the
existing setting.
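For example, restarting a stopped relationship and accepting the temporary loss of
consistency during resynchronization might look similar to the following sketch (MM_Rel_1 is
a hypothetical relationship name):
startrcrelationship -primary master -force MM_Rel_1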
If the relationship is in an inconsistent state, any copy operation stops and does not resume
until you issue a startrcrelationship command. Write activity is no longer copied from the
master to the auxiliary volume. For a relationship in the ConsistentSynchronized state, this
command causes a Consistency Freeze.
For a consistency group that is idling, this command assigns a copy direction (master and
auxiliary roles) and begins the copy process. Otherwise, this command restarts a previous
copy process that was stopped by a stop command or by an I/O error.
If the consistency group is in an inconsistent state, any copy operation stops and does not
resume until you issue the startrcconsistgrp command. Write activity is no longer copied
from the master to the auxiliary volumes that belong to the relationships in the group. For a
consistency group in the ConsistentSynchronized state, this command causes a Consistency
Freeze.
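For example, stopping a relationship or a consistency group and enabling read/write access
to the auxiliary volumes (for instance, during a disaster recovery test) might look similar to
the following sketch (the names are hypothetical):
stoprcrelationship -access MM_Rel_1
stoprcconsistgrp -access CG_DB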
If the relationship is disconnected at the time that the command is issued, the relationship is
deleted only on the system on which the command is being run. When the systems
reconnect, the relationship is automatically deleted on the other system.
Alternatively, if the systems are disconnected and you still want to remove the relationship
on both systems, you can issue the rmrcrelationship command independently on both of
the systems.
A relationship cannot be deleted if it is part of a consistency group. You must first remove the
relationship from the consistency group.
If you delete an inconsistent relationship, the auxiliary volume becomes accessible even
though it is still inconsistent. This situation is the one case in which MM/GM does not inhibit
access to inconsistent data.
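As a sketch, removing a relationship from its consistency group and then deleting it might
look similar to the following; the -noconsistgrp parameter of chrcrelationship is an
assumption about the CLI syntax for removing a relationship from a group and is not taken
from this document:
chrcrelationship -noconsistgrp MM_Rel_1
rmrcrelationship MM_Rel_1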
If the consistency group is disconnected at the time that the command is issued, the
consistency group is deleted only on the system on which the command is being run. When
the systems reconnect, the consistency group is automatically deleted on the other system.
Alternatively, if the systems are disconnected and you still want to remove the consistency
group on both systems, you can issue the rmrcconsistgrp command separately on both of
the systems.
If the consistency group is not empty, the relationships within it are removed from the
consistency group before the group is deleted. These relationships then become stand-alone
relationships. The state of these relationships is not changed by the action of removing them
from the consistency group.
Important: Remember that by reversing the roles, your current source volumes become
targets, and target volumes become source volumes. Therefore, you lose write access to
your current primary volumes.
The following panes are used to visualize and manage your remote copies:
The Remote Copy pane, as shown in Figure 10-89.
To access the Remote Copy pane, move the mouse pointer over the Copy Services
selection and click Remote Copy.
3. Click Create Partnership to create a partnership with another Lenovo storage V series
system, as shown in Figure 10-93.
4. In the Create Partnership window, indicate the partnership type, either Fibre Channel or
IP as shown in Figure 10-94.
5. For Fibre Channel partnership, select an available partner system from the drop-down list.
If no candidate is available, the following error message is displayed:
This system does not have any candidates.
– Enter a link bandwidth in megabits per second (Mbps) that is used by the background
copy process between the systems in the partnership.
– Enter the background copy rate.
– Click OK to confirm the partnership relationship as shown in Figure 10-95.
Note: If you choose IP partnership, you must provide the IP address of the partner system
and the partner system’s CHAP key.
6. You will get a confirmation window as shown in Figure 10-96 on page 562.
7. As shown in Figure 10-97, our partnership is in the Partially Configured state because this
work was performed only on one side of the partnership so far.
To fully configure the partnership between both systems, perform the same steps on the
other system in the partnership. For simplicity and brevity, we show only the two most
significant windows when the partnership is fully configured.
8. Starting the GUI at the partner system, select mcr-tb5-cluster-29 for the system
partnership. We specify the available bandwidth for the background copy (100 Mbps) and
then click OK.
Now that both sides of the system partnership are defined, the resulting windows are similar
at both of the systems, as shown in Figure 10-98.
Complete the following steps to create stand-alone copy relationships:
1. From the main navigation pane, select Copy Services → Remote Copy.
2. Select Not in a Group and then click Action as shown in Figure 10-99.
Figure 10-99 Creating a new Remote Copy relationship without consistency group
4. In the Create Relationship window, select one of the following types of relationships that
you want to create (as shown in Figure 10-101 on page 564):
– Metro Mirror
– Global Mirror
– Global Mirror with Change Volumes
5. We want to create a Metro Mirror relationship. See Figure 10-102. Click Next.
Note: Starting from V7.8.1, consistency protection via Change Volumes is enabled by
default. Refer to 10.8, “Consistency protection for Remote and Global mirror” on
page 549 for more information on consistency protection.
6. In the next window, select the location of the auxiliary volumes, as shown in
Figure 10-103:
– On this system, which means that the volumes are local.
– On another system, which means that you select the remote system from the
drop-down list.
After you make a selection, click Next.
Important: The master and auxiliary volumes must be of equal size. Therefore, only
the targets with the appropriate size are shown in the list for a specific source volume.
8. Because Add Consistency Protection was selected, as shown in step 5 on page 564, a
dialog window opens asking whether you want to add a change volume, as shown in
Figure 10-105.
Click Next. Another dialog window opens asking whether you want to add a new change
volume or use an existing one. In our example, we chose to create a new master change
volume, as shown in Figure 10-106.
Figure 10-106 Create a new master change volume or use an existing one
Figure 10-107 Create the relationships between the master and auxiliary volumes
After all of the relationships that you want to create are shown, click Next.
10.Specify whether the volumes are synchronized, as shown in Figure 10-108. Then, click
Next.
11. In the next window, select whether you want to start to copy the data and click Finish, as
shown in Figure 10-109.
12.Figure 10-110 shows that the task to create the relationship is complete.
The relationships are visible in the Remote Copy pane. If you selected to copy the data, you
can see that the status is Consistent Copying. You can check the copying progress in the
Running Tasks status area.
3. Enter a name for the Consistency Group, and then, click Next, as shown in Figure 10-113.
4. In the next window, select where the auxiliary volumes are located, as shown in
Figure 10-114 on page 571:
– On this system, which means that the volumes are local
– On another system, which means that you select the remote system in the drop-down
list
After you make a selection, click Next.
5. Select whether you want to add relationships to this group, as shown in Figure 10-115 on
page 572. The following options are available:
– If you select Yes, click Next to continue the wizard and go to step 6.
– If you select No, click Finish to create an empty Consistency Group that can be used
later.
6. Select one of the following types of relationships to create, as shown in Figure 10-116 on
page 573:
– Metro Mirror
– Global Mirror
– Global Mirror with Change Volumes
Note: For Metro Mirror or regular Global Mirror, also indicate whether to add consistency
protection.
Click Next.
Figure 10-116 Select the type of relationship that you want to create
7. As shown in Figure 10-117 on page 574, you can optionally select existing relationships to
add to the group. Click Next.
Note: To select multiple relationships, hold down Ctrl and click the entries that you want
to include.
8. In the window that is shown in Figure 10-120 on page 576, you can create relationships.
a. Select a volume in the Master drop-down list.
b. Then, select a volume in the Auxiliary drop-down list for this master.
c. Click Add.
Note: When the first relationship is added to an empty group, the group takes on the
same state, primary (copy direction), type (Metro Mirror or Global Mirror), and
cycling mode as the relationship. Subsequent relationships must have the same
state, copy direction, and type as the group in order to be added to it. A relationship
can belong to only one consistency group.
d. As shown in Figure 10-118 on page 575, select whether you would like to add a
change volume or not.
Figure 10-118 Add a change volume
e. Click Next.
Note: Selecting to add a change volume will not add a change volume to the remote
system. The change volume for the auxiliary must be created manually on the remote
system.
f. Select whether you want to create a new master change volume or use an existing one
as shown in Figure 10-119.
Figure 10-120 Create relationships between the master and auxiliary volumes
10.Specify whether the volumes are already synchronized. Then, click Next (Figure 10-121).
11. In the last window, select whether you want to start to copy the data. Then, click Finish, as
shown in Figure 10-122.
12.The relationships are visible in the Remote Copy pane. If you selected to copy the data,
you can see that the status of the relationships is Inconsistent copying. You can check the
copying progress in the Running Tasks status area, as shown in Figure 10-123.
Figure 10-123 Consistency Group created with relationship in copying and synchronized status
After the copies are completed, the relationships and the Consistency Group change to the
Consistent Synchronized status.
Note: You can also right-click a remote copy consistency group and select Rename.
3. Enter the new name that you want to assign to the Consistency Group and click Rename,
as shown in Figure 10-125.
The new Consistency Group name is displayed on the Remote Copy pane.
Tip: You can also right-click a remote copy relationship and select Rename.
3. In the Rename Relationship window, enter the new name that you want to assign to the
remote copy relationship and click Rename, as shown in Figure 10-127.
Remote copy relationship name: You can use the letters A - Z and a - z, the numbers
0 - 9, and the underscore (_) character. The remote copy name can be 1 - 15
characters. No blanks are allowed.
Tip: You can also right-click a remote copy relationship and select Add to Consistency
Group.
5. In the Add Relationship to Consistency Group window, select the Consistency Group for
this remote copy relationship by using the drop-down list, as shown in Figure 10-129 on
page 580. Click Add to Consistency Group to confirm your changes.
Note: The state of the remote copy consistency group and the remote copy relationship
that is being added must match; otherwise, you cannot add that remote copy relationship
to the existing remote copy consistency group.
Tip: You can also right-click a remote copy relationship and select Remove from
Consistency Group.
5. In the Remove Relationship From Consistency Group window, click Remove, as shown in
Figure 10-131 on page 581.
Figure 10-131 Remove a relationship from a Consistency Group
Tip: You can also right-click a relationship and select Start from the list.
5. After the task is complete, the remote copy relationship status has a Consistent
Synchronized state, as shown in Figure 10-133 on page 582.
3. Click Actions → Start (Figure 10-135) to start the remote copy Consistency Group.
4. You can check the remote copy Consistency Group progress, as shown in Figure 10-136.
5. After the task completes, the Consistency Group and all of its relationships are in a
Consistent Synchronized state.
10.10.10 Switching copy direction
When a remote copy relationship is in the Consistent synchronized state, the copy direction
for the relationship can be changed. Only relationships that are not a member of a
Consistency Group (or the only relationship in a Consistency Group) can be switched
individually. These relationships can be switched from master to auxiliary or from auxiliary to
master, depending on the case.
5. The Warning window that is shown in Figure 10-138 opens. A confirmation is needed to
switch the remote copy relationship direction. The remote copy is switched from the
master volume to the auxiliary volume. Click Yes.
Figure 10-139 on page 584 shows the command-line output about this task.
The copy direction is now switched, as shown in Figure 10-140 with a red circle. The
auxiliary volume is now accessible and shown as the primary volume. Also, the auxiliary
volume is now synchronized to the master volume.
Important: When the copy direction is switched, it is crucial that no outstanding I/O exists
to the volume that changes from primary to secondary because all of the I/O is inhibited to
that volume when it becomes the secondary. Therefore, careful planning is required before
you switch the copy direction for a Consistency Group.
Figure 10-141 Switch action
4. The warning window that is shown in Figure 10-142 opens. A confirmation is needed to
switch the Consistency Group direction. In the example that is shown in here, the
Consistency Group is switched from the master group to the auxiliary group. Click Yes.
5. The remote copy direction is now switched as shown in Figure 10-143. The auxiliary
volume is now accessible and shown as a primary volume.
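The same switch can also be performed from the CLI. As a sketch, assuming a relationship
named MM_Rel_1 and a consistency group named CG_DB (hypothetical names), commands
similar to the following reverse the copy direction so that the auxiliary becomes the primary:
switchrcrelationship -primary aux MM_Rel_1
switchrcconsistgrp -primary aux CG_DB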
Tip: You can also right-click a relationship and select Stop from the list.
5. The Stop Remote Copy Relationship window opens, as shown in Figure 10-145. To allow
secondary read/write access, select Allow secondary read/write access. Then, click
Stop Relationship.
6. Figure 10-146 on page 587 shows the command-line output for the stop remote copy
relationship.
Figure 10-146 Stop remote copy relationship command-line output
The new relationship status can be checked, as shown in Figure 10-147. The relationship
is now Consistent Stopped.
Tip: You can also right-click a relationship and select Stop from the list.
4. The Stop Remote Copy Consistency Group window opens, as shown in Figure 10-149. To
allow secondary read/write access, select Allow secondary read/write access. Then,
click Stop Consistency Group.
The new relationship status can be checked, as shown in Figure 10-150. The relationship
is now Consistent Stopped.
Multiple remote copy mappings: To select multiple remote copy mappings, hold down
Ctrl and click the entries that you want.
Tip: You can also right-click a remote copy mapping and select Delete.
4. The Delete Relationship window opens (Figure 10-152). In the “Verify the number of
relationships that you are deleting” field, enter the number of relationships that you want
to delete. This verification was added to help avoid deleting the wrong relationships.
Click Delete.
Important: Deleting a Consistency Group does not delete its remote copy mappings.
4. The warning window that is shown in Figure 10-154 opens. Click Yes.
In practice, the most often overlooked cause is latency. Global Mirror has a round-trip-time
tolerance limit of 80 or 250 milliseconds, depending on the firmware version and the
hardware model. See Figure 10-82 on page 535. A message that is sent from your source
Lenovo Storage V series system to your target system and the accompanying
acknowledgment must complete in a total round-trip time of no more than 80 or 250
milliseconds. In other words, the latency can be no more than 40 or 125 milliseconds each way.
The primary component of your round-trip time is the physical distance between sites. For
every 1000 kilometers (621.4 miles), you observe a 5-millisecond delay each way. This delay
does not include the time that is added by equipment in the path. Every device adds a varying
amount of time depending on the device, but a good rule is 25 microseconds for pure
hardware devices.
Company A has a production site that is 1900 kilometers (1180.6 miles) away from its
recovery site. The network service provider uses a total of five devices to connect the two
sites. In addition to those devices, Company A employs a SAN FC router at each site to
provide Fibre Channel over IP (FCIP) to encapsulate the FC traffic between sites.
Now, there are seven devices, and 1900 kilometers (1180.6 miles) of distance delay. All the
devices are adding 200 microseconds of delay each way. The distance adds 9.5 milliseconds
each way, for a total of 19 milliseconds. Combined with the device latency, the delay is
19.4 milliseconds of physical latency minimum, which is under the 80-millisecond limit of
Global Mirror until you realize that this number is the best case number.
The link quality and bandwidth play a large role. Your network provider likely ensures a
latency maximum on your network link. Therefore, be sure to stay as far beneath the Global
Mirror round-trip-time (RTT) limit as possible. You can easily double or triple the expected
physical latency with a lower quality or lower bandwidth network link. Then, you are within the
range of exceeding the limit if high I/O occurs that exceeds the existing bandwidth capacity.
When you get a 1920 event, always check the latency first. If the FCIP routing layer is not
properly configured, it can introduce latency. If your network provider reports a much
lower latency, you might have a problem at your FCIP routing layer. Most FCIP routing
devices have built-in tools to enable you to check the RTT. When you are checking latency,
remember that TCP/IP routing devices (including FCIP routers) report RTT using standard
64-byte ping packets.
In Figure 10-155 on page 592, you can see why the effective transit time must be measured
only by using packets that are large enough to hold an FC frame, or 2148 bytes (2112 bytes
of payload and 36 bytes of header). Allow estimated resource requirements to be a safe
amount, because various switch vendors have optional features that might increase this size.
After you verify your latency by using the proper packet size, proceed with normal hardware
troubleshooting.
Before we proceed, we look at the second largest component of your RTT, which is
serialization delay. Serialization delay is the amount of time that is required to move a packet
of data of a specific size across a network link of a certain bandwidth. The required time to
move a specific amount of data decreases as the data transmission rate increases.
Figure 10-155 on page 592 shows the orders of magnitude of difference between the link
bandwidths. It is easy to see how 1920 errors can arise when your bandwidth is insufficient.
Note: Never use a TCP/IP ping to measure RTT for FCIP traffic.
In Figure 10-155, the amount of time in microseconds that is required to transmit a packet
across network links of varying bandwidth capacity is compared. The following packet sizes
are used:
64 bytes: The size of the common ping packet
1500 bytes: The size of the standard TCP/IP packet
2148 bytes: The size of an FC frame
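To see the effect of serialization delay for yourself, you can compute the transmit time for each packet size over a link speed of your choice. The following sketch is an illustration only, not the exact values in Figure 10-155; the three link speeds are assumptions.

awk 'BEGIN {
  # serialization delay in microseconds = packet size in bits / link speed in Mbps
  split("64 1500 2148", size);      # ping packet, standard TCP/IP packet, FC frame (bytes)
  split("1.544 155 1000", mbps);    # assumed link speeds: T1, OC-3, 1 Gbps
  for (i = 1; i <= 3; i++)
    for (j = 1; j <= 3; j++)
      printf "%4d bytes over %8.3f Mbps: %10.1f microseconds\n",
             size[i], mbps[j], size[i] * 8 / mbps[j];
}'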
Finally, your path maximum transmission unit (MTU) affects the delay that is incurred to get a
packet from one location to another location. An unsuitable MTU might cause fragmentation,
or it might be so large that it causes too many retransmits when a packet is lost.
The source of this error is most often a fabric problem or a problem in the network path
between your partners. When you receive this error, check your fabric configuration for
zoning of more than one host bus adapter (HBA) port for each node per I/O Group if your
fabric has more than 64 HBA ports zoned. One port for each node per I/O Group per fabric
that is associated with the host is the suggested zoning configuration for fabrics.
For those fabrics with 64 or more host ports, this recommendation becomes a rule. Therefore,
you see four paths to each volume that is discovered on the host, because each host needs
at least two FC ports from separate HBA cards, each in a separate fabric. On each fabric,
each host FC port is zoned to two node ports, where each port comes from one node
canister. This configuration gives four paths per host volume. More than four paths per
volume are supported but not recommended.
Improper zoning can lead to SAN congestion, which can inhibit remote link communication
intermittently. Checking the zero buffer credit timer from IBM Virtual Storage Center and
comparing against your sample interval reveals potential SAN congestion. If a zero buffer
credit timer is above 2% of the total time of the sample interval, it might cause problems.
Next, always ask your network provider to check the status of the link. If the link is acceptable,
watch for repeats of this error. It is possible in a normal and functional network setup to have
occasional 1720 errors, but multiple occurrences could indicate a larger problem.
If you receive multiple 1720 errors, recheck your network connection and then check the
system partnership information to verify its status and settings. Then, proceed to perform
diagnostics for every piece of equipment in the path between the two Lenovo storage V series
systems. It often helps to have a diagram that shows the path of your replication from both
logical and physical configuration viewpoints.
If your investigations fail to resolve your remote copy problems, contact your Lenovo Support
representative for a more complete analysis.
10.12 HyperSwap
The HyperSwap high availability function allows business continuity to be maintained in the
event of a hardware failure, power failure, connectivity failure, or disaster, such as fire or
flooding. It is available on the
IBM Storwize V7000 for Lenovo, and Lenovo Storage V3700 V2, V3700 V2 XP, and V5030
products.
The HyperSwap function provides highly available volumes that are accessible through two
sites at up to 300 km (186.4 miles) apart. A fully independent copy of the data is maintained
at each site. When data is written by hosts at either site, both copies are synchronously
updated before the write operation is completed. The HyperSwap function automatically
optimizes itself to minimize data that is transmitted between sites and to minimize host read
and write latency.
If the nodes or the storage at either site goes offline, leaving an online and accessible
up-to-date copy at the other site, the HyperSwap function can automatically fail over access
to the online copy. The HyperSwap function also automatically resynchronizes the two copies
when possible.
HyperSwap capability enables each volume to be presented by two I/O groups. The
configuration tolerates combinations of node and site failures, by using the same flexible
choices of host multipathing driver interoperability that are available for the Lenovo storage
V-series systems. The use of FlashCopy helps maintain a golden image during automatic
resynchronization.
At least two control enclosures are required for HyperSwap. System scalability depends on
the hardware details, as shown in Table 10-12.
So, a V3700 V2, V3700 V2 XP or V5030 HyperSwap cluster is always restricted to a single
control enclosure per site. The IBM Storwize V7000 for Lenovo and SVC can provide more
scalability and offer more flexibility.
The HyperSwap function works with the standard multipathing drivers that are available on
various host types. No additional host support is required to access the highly available
volume. Where multipathing drivers support Asymmetric Logical Unit Access (ALUA), the
storage system informs the multipathing driver about the nodes that are in the same site and
the nodes that need to be used to minimize I/O latency.
The host and Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 site attributes must be
configured to enable this optimization and to enable HyperSwap functionality. A three-site
setup is required. Two sites are used as the main data center to provide two independent
data copies. A quorum disk or an IP-based quorum can be used as a quorum device.
However, the quorum device must be placed in a third, independent site.
The quorum disk must be supported as an “extended quorum device”. The connection can be
implemented by using Fibre Channel, Fibre Channel through wavelength-division
multiplexing (WDM), synchronous digital hierarchy (SDH) and synchronous optical network
(SONET), or FCIP. The minimum bandwidth is 2 MBps.
The IP quorum application substitutes for the active quorum disk’s tiebreaker role.
Redundancy can be implemented by using multiple IP quorum applications, similar to multiple
quorum disks. However, only one application is active at a time. The other applications are
available if the active quorum application fails.
For more information about quorum devices, see the Lenovo Storage V3700 V2, V3700 V2
XP, and V5030 Lenovo Information Center:
http://systemx.lenovofiles.com/help/topic/com.lenovo.storage.v5030.8.1.0.doc/tbrd_
clstrcli_4892pz.html
Because HyperSwap is running as a single cluster that is distributed across two main data
centers, one Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 control enclosure is
required in each site. Both control enclosures must be added to the same cluster. Only the
Lenovo Storage V5030 supports the clustering of two control enclosures, so two Lenovo
Storage V5030 control enclosures are required for HyperSwap. Metro Mirror is used to keep
both data copies in sync.
The host accesses both I/O groups, as shown in Figure 10-156 on page 596. The original
Metro Mirror target ID is not used for host access. Instead, HyperSwap presents the Metro
Mirror source ID for the target volume to the host. From the host perspective, the same
volume is available on both I/O groups, although the Lenovo Storage V3700 V2, V3700 V2
XP, and V5030 volumes are connected through Metro Mirror.
A site attribute must be set for any host, Lenovo Storage V3700 V2, V3700 V2 XP, and V5030
storage systems, and external virtualized storage system. The host uses the local I/O group
(same site attribute) for data access, as shown in Figure 10-157.
The solid blue line shows the default host access path to the volume at the same site.
The dotted blue line shows the non-preferred access path that is used if the preferred access
path is not available. Accessing both I/O groups doubles the number of paths from host to
volume. Take note of the limited number of supported paths for your multipath device driver
and limit the number of paths to an acceptable level.
Data flow
The host reads and writes data to the local I/O group within the same site. The HyperSwap
system sends the data to the remote site by using internal Metro Mirror, as shown in
Figure 10-158 on page 598.
If a host accesses the volume on the Metro Mirror target site, all read and write requests are
forwarded to the I/O group that acts as the Metro Mirror source volume, as shown in
Figure 10-159 on page 599. All host-related traffic must be handled by the remote I/O
group, which increases the long-distance data traffic.
Figure 10-159 Data flow from the “wrong” site
Access to the Metro Mirror target volume is measured and HyperSwap triggers a switch of the
Metro Mirror copy direction, if the workload on the “wrong” site is significantly higher than the
workload on the Metro Mirror source site over any length of time, as shown in Figure 10-160
on page 600. This copy direction switch reduces the additional host-related long-distance
traffic and provides better response time.
The duration until the copy direction switches depends on the load distribution on the volume
in both sites. Although a HyperSwap volume can be accessed concurrently for read and write
I/O from any host in any site, all I/O is forwarded to one I/O group in one site. Usage of the
wrong site increases the long-distance traffic. The HyperSwap cluster monitors the workload
and it can switch the copy direction if the highest workload is arriving on the wrong site.
Applications that drive an equal workload to the same volume through both I/O groups
(for example, Oracle Real Application Clusters (RAC) and VMware vMotion) are not optimal
candidates for HyperSwap.
Figure 10-161 Single node failure in an I/O group
If an I/O group fails, the host can use the second I/O group at the remote site, as shown in
Figure 10-162 on page 602. The remote I/O group handles all volume-related traffic, but
HyperSwap cannot keep both copies in sync anymore because of an inactive I/O group.
As soon as the failed I/O group is back, the system can automatically resynchronize both
copies in the background. Before the resynchronization, the Lenovo Storage V3700 V2,
V3700 V2 XP, and V5030 can perform a FlashCopy on HyperSwap source and target
volumes, as shown in Figure 10-163 on page 603. Each change volume requires two
FlashCopy relationships, one relationship in each direction. So, four FlashCopy relationships
are required for each HyperSwap volume.
To keep the GUI easy to use, these relationships and FlashCopy change volumes are not
shown in the GUI. They are visible and manageable only by using the CLI.
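For example, the underlying active-active relationships and the FlashCopy mappings of the change volumes can be listed from the CLI. The following commands are a minimal sketch; the exact output columns and filter options depend on your code level.

lsrcrelationship      # lists the remote copy relationships, including the HyperSwap (active-active) ones
lsfcmap               # lists the FlashCopy mappings, including those that are used for the change volumes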
Figure 10-163 Resynchronization
After successful resynchronization, the host switches automatically to the I/O group at the
same site for the best performance and limited inter-switch link (ISL) usage, as shown in
Figure 10-164 on page 604.
HyperSwap uses Metro Mirror technology, which enables the usage of Metro Mirror
consistency groups that are described in “Remote Copy Consistency Groups” on page 526.
Hosts that access a HyperSwap volume through Internet Small Computer System
Interface (iSCSI) or serial-attached SCSI (SAS) cannot take advantage of the high
availability function.
FlashCopy usage can be complicated because the Metro Mirror source volume and target
volume can switch during daily operation. Because of this possibility, the identification of
the copy direction is required for a successful FlashCopy.
The Remote Copy relationship must be removed first for a reverse FlashCopy operation.
After a reverse FlashCopy, all HyperSwap functions must be implemented manually again
(Remote Mirror + FlashCopy relationships).
IBM FlashCopy Manager is not supported with HyperSwap volumes.
A distinction must be made between virtualizing external storage and importing existing data
into the Lenovo Storage V5030. Virtualizing external storage means the creation of logical
units with no data on them and the addition of these logical units to storage pools under the
Lenovo Storage V5030 control. In this way, the external storage can benefit from the Lenovo
Storage V5030 features, such as Easy Tier and Copy Services.
When existing data needs to be put under the control of the Lenovo Storage V5030, it must
first be imported as an image mode volume. It is strongly recommended to copy the existing
data onto internal or external storage that is under the control of the Lenovo Storage V5030
instead of leaving the data in an image mode volume, so that the data can benefit from the
Lenovo Storage V5030 features.
Note: External storage virtualization is available on the Lenovo Storage V5030 model only.
It is not available on the Lenovo Storage V3700 V2 or Lenovo Storage V3700 V2 XP. However,
these models can still import data from external storage systems. For more information
about storage migration, see Chapter 7, “Storage migration” on page 323.
The external storage systems that are incorporated into the Lenovo Storage V5030
environment can be new systems or existing systems. Any data on the existing storage
systems can be easily migrated to an environment that is managed by the Lenovo Storage
V5030, as described in Chapter 7, “Storage migration” on page 323.
Migration: If the Lenovo Storage V5030 is used as a general management tool, you must
order the correct External Virtualization licenses. The only exception is if you want to
migrate existing data from external storage systems to Lenovo Storage V5030 internal
storage and then remove the external storage. You can temporarily configure your External
Storage license for a 45-day period. For more than a 45-day migration requirement, the
correct External Virtualization license must be ordered.
You can configure the Lenovo Storage V5030 licenses by clicking the Settings icon and then
System → Licensed Functions. For more information about setting licenses on the Lenovo
Storage V5030, see Chapter 2, “Initial configuration” on page 35.
For assistance with licensing questions or to purchase any of these licenses, contact your
IBM account team or IBM Business Partner.
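If you prefer the CLI, the current license settings can be displayed and changed there as well. The following commands are a sketch only; the number of enclosures to license is an example value and depends on your entitlement.

lslicense                     # display the current licensed function settings
chlicense -virtualization 1   # license external virtualization for one enclosure (example value)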
External storage controllers that are virtualized by the Lenovo Storage V5030 must be
connected through storage area network (SAN) switches. A direct connection between the
Lenovo Storage V5030 and external storage controllers is not supported.
Ensure that the switches or directors are at the firmware levels that are supported by the
Lenovo Storage V5030 and that the port login maximums that are listed in the restriction
document are not exceeded. The configuration restrictions are listed on the IBM Support
home page, which is available at this web page:
https://ibm.biz/BdjGMJ
The suggested SAN configuration is based on a dual-fabric solution. The ports on the external
storage systems and the Lenovo Storage V5030 ports must be evenly split between the two
fabrics to provide redundancy if one of the fabrics goes offline.
After the Lenovo Storage V5030 and external storage systems are connected to the SAN
fabrics, zoning on the switches must be configured. In each fabric, create a zone with the four
Lenovo Storage V5030 worldwide port names (WWPNs), two from each node canister,
together with up to a maximum of eight WWPNs from each external storage system.
Ports: The Lenovo Storage V5030 supports a maximum of 16 ports or WWPNs from an
externally virtualized storage system.
Figure 11-1 shows an example of how to cable devices to the SAN. Refer to this example as
we describe the zoning. For this example, we used an IBM Storwize V3700 for Lenovo as our
external storage.
Create a Lenovo Storage V5030 external storage zone for each storage system to be
virtualized, as shown in the following examples:
Zone the external IBM Storwize V3700 for Lenovo canister 1 port 2 with the Lenovo
Storage V5030 canister 1 port 2 and canister 2 port 2 in the blue fabric.
Zone the external IBM Storwize V3700 for Lenovo canister 2 port 2 with the Lenovo
Storage V5030 canister 1 port 4 and canister 2 port 4 in the blue fabric.
Zone the external IBM Storwize V3700 for Lenovo canister 1 port 3 with the Lenovo
Storage V5030 canister 1 port 1 and canister 2 port 1 in the red fabric.
Zone the external IBM Storwize V3700 for Lenovo canister 2 port 1 with the Lenovo
Storage V5030 canister 1 port 3 and canister 2 port 3 in the red fabric.
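To collect the WWPNs that you need for these zones, you can list the Fibre Channel ports of the Lenovo Storage V5030 from the CLI. The following command is a sketch; the WWPNs of the external storage system must be gathered with that system's own management tools.

lsportfc      # lists the FC ports of each node canister, including the WWPN of each port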
Verify that the storage controllers to be virtualized by the Lenovo Storage V5030 meet the
configuration restrictions, which are listed on the IBM Support home page, at this web page:
https://ibm.biz/BdjGMJ
Ensure that the firmware or microcode levels of the storage controllers to be virtualized are
supported by the Lenovo Storage V5030. See the Interoperability matrix web page for more
details:
https://datacentersupport.lenovo.com/tw/en/products/storage/lenovo-storage/v5030/6536/documentation
The Lenovo Storage V5030 must have exclusive access to the LUNs from the external
storage system that are presented to it. LUNs cannot be shared between the Lenovo Storage
V5030 and other storage virtualization platforms or between the Lenovo Storage V5030 and
hosts. However, different LUNs can be mapped from the same external storage system to the
Lenovo Storage V5030 and other hosts in the SAN through different storage ports.
Ensure that the external storage subsystem LUN masking is configured to map all LUNs to all
of the WWPNs in the Lenovo Storage V5030 storage system.
Ensure that you check the Lenovo Information Center and review the “Configuring and
servicing storage system” topic before you prepare the external storage systems for
discovery from the Lenovo Storage V5030 system. This Information Center topic is at this
web page:
https://ibm.biz/BdjGMJ
Do not leave volumes in image mode. Use image mode only to import or export existing
data into or out of the Lenovo Storage V5030. Migrate data from image mode volumes
and associated MDisks to other storage pools to benefit from storage virtualization and the
enhanced benefits of the Lenovo Storage V5030, such as Easy Tier.
The basic concepts of managing an external storage system are the same as the concepts
for managing internal storage. The Lenovo Storage V5030 discovers LUNs from the external
storage system as one or more MDisks. These MDisks are added to a storage pool in which
volumes are created and mapped to hosts, as needed.
If the external storage does not show up automatically, click Discover storage from the
Actions menu on the External Storage panel, as shown in Figure 11-3.
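The equivalent CLI action is a manual discovery followed by a check for new, unmanaged MDisks. The following commands are a sketch; the filter syntax can vary slightly between code levels.

detectmdisk                              # rescan the Fibre Channel network for new MDisks
lsmdisk -filtervalue mode=unmanaged      # list the MDisks that are not yet assigned to a pool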
7. The MDisks are unassigned and need to be assigned to the correct storage tiers. It is
important to set the tiers correctly if you plan to use the Easy Tier feature. For more
information about storage tiers, see Chapter 9, “Advanced features for storage efficiency”
on page 403.
8. Select the MDisks to assign and either use the Actions drop-down menu or right-click and
select Modify Tier, as shown in Figure 11-4 on page 613.
Figure 11-4 Modify Tier option
9. Ensure that the correct MDisk tier is chosen, as shown in Figure 11-5. Click Modify to
change the tier setting.
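The tier can also be set from the CLI. The following command is a sketch; the MDisk name and the tier value (for example, tier_enterprise or tier_nearline, depending on the code level and drive type) are placeholders.

chmdisk -tier tier_enterprise mdisk5    # assign the enterprise tier to the MDisk named mdisk5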
If the storage pool does not exist, follow the procedure that is outlined in Chapter 4,
“Storage pools” on page 139.
11. Add the MDisks to the pool. Select the pool to which the MDisks are going to be assigned
and click Assign, as shown in Figure 11-7. After the task completes, click Close.
Important: If the external storage volumes to virtualize behind the Lenovo Storage
V5030 contain data and this data needs to be retained, do not use the “Assign to pool”
option to manage the MDisks. This option can destroy the data on the disks. Instead,
use the Import option. For more information, see 11.2.2, “Importing image mode
volumes” on page 615.
Figure 11-7 Selecting the storage pool to assign the MDisks to the pool
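A CLI alternative to the Assign action in step 11 is the addmdisk command. The pool and MDisk names below are examples only, and the same warning applies: do not add MDisks that contain data that you want to keep.

addmdisk -mdisk mdisk5:mdisk6 Pool0     # add mdisk5 and mdisk6 to the storage pool named Pool0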
12.The external MDisks that are assigned to a pool within Lenovo Storage V5030 are
displayed under the MDisks by Pools panel as shown in Figure 11-8 on page 615. Create
volumes from the storage pool and map them to hosts, as needed. See Chapter 6,
“Volume configuration” on page 269 to learn how to create and map volumes to hosts.
Figure 11-8 External MDisks displayed on MDisks by Pools panel
To manually import volumes, they must not be assigned to a storage pool and they must be
unmanaged managed disks (MDisks). Hosts that access data from these external storage
system LUNs can continue to access data, but the hosts must be rezoned and mapped to the
Lenovo Storage V5030 to use these external storage system LUNs after they are presented
through the Lenovo Storage V5030.
Figure 11-9 shows how to import an unmanaged MDisk. Select the unmanaged MDisk and
click Import from the Actions drop-down menu. Multiple MDisks can be selected by using the
Ctrl key.
You can change the default volume names to more meaningful names by editing the Volume
names text boxes.
You can choose between importing the volume to a temporary pool as an image mode
volume, which the Lenovo Storage V5030 can create and name for you, or migrating the
volume to an existing pool.
An image mode volume provides a direct block-for-block translation from the imported MDisk
to the external LUN. Therefore, the existing data is preserved. In this state, the Lenovo Storage
V5030 acts as a proxy, and the image mode volume is simply a “pointer” to the existing
external LUN. Because of the way that virtualization works on the Lenovo Storage V5030, the
external LUN is presented as an MDisk, but an MDisk cannot be mapped directly to a host.
Therefore, the Spectrum Virtualize software must create the image mode volume to allow
hosts to perform the mapping through the Lenovo Storage V5030.
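On the CLI, the same import is performed by creating an image mode volume from the unmanaged MDisk and, optionally, migrating it to another pool afterward. The following sequence is a hedged sketch; the pool, I/O group, MDisk, and volume names are placeholders.

mkvdisk -mdiskgrp MigrationPool_8192 -iogrp 0 -vtype image -mdisk mdisk7 -name imported_vol01
                                        # create an image mode volume that points to the existing LUN
migratevdisk -mdiskgrp Pool0 -vdisk imported_vol01
                                        # optionally move the data onto the MDisks of the target pool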
If you choose a temporary pool, you must first select the extent size for the pool. The default
value for extents is 1 GB. If you plan to migrate this volume to another pool later, ensure that
the extent size matches the extent size of the prospective target pool. For more information
about extent sizes, see Chapter 4, “Storage pools” on page 139.
If an existing storage pool is chosen, the Lenovo Storage V5030 performs a migration task.
The external LUN is imported into a temporary migration pool, and a migration task runs in
the background to copy data to MDisks that are in the target storage pool. At the end of the
migration, the external LUN and its associated MDisk remain in the temporary pool and show
as managed, and they can then be removed from the Lenovo Storage V5030.
Figure 11-11 on page 617 shows how to select an existing pool for volume migration. The
pool must have enough available capacity to store the volumes that are being imported.
Figure 11-11 Import volumes into an existing pool
Select Copy Services if copy services (replication functionality) are used on the external
storage system that hosts the LUN. Click Import to confirm your selections and to start the
import process.
Note: Only pools with sufficient capacity are shown because the import of an MDisk to an
existing storage pool migrates data. This data can be migrated only if sufficient capacity
exists in the target pool to create a copy of the data on its own MDisks. The external MDisk
is imported as an image mode volume into a temporary migration pool, and a volume
migration takes place in the background to create the volume in the target pool.
A migration task starts and can be tracked through the System Migration panel within the
Pools menu, as shown in Figure 11-12. The actual data migration begins after the MDisk is
imported successfully.
When the migration completes, the migration status disappears and the volume is displayed
in the target pool, as shown in Figure 11-13 on page 618.
After the migration completes, the image mode volume is automatically deleted, but the
external LUN exists as a managed MDisk in the temporary storage pool. It is unassigned from
the pool and listed as an unassigned MDisk. Later, you can retire the external LUN and
remove it completely from the Lenovo Storage V5030 by unmapping the volume at the
external storage and by clicking Detect MDisks on the Lenovo Storage V5030. For more
information about removing external storage, see 11.2.4, “Removing external storage” on
page 623.
If you choose to import a volume as an image mode volume, the external LUN appears as an
MDisk with an associated image mode volume name and can be listed as shown in
Figure 11-14.
The volume is also listed in the System Migration panel because the Lenovo Storage V5030
expects you to migrate these volumes later, as shown in Figure 11-15.
You can access the External Storage panel by clicking Pools → External Storage, as shown
in Figure 11-2 on page 612. Extended help information for external storage is available by
clicking the help (?) icon and selecting External Storage, as shown in Figure 11-16.
In the External Storage panel, there are options in the Actions menu that can be applied to
external storage controllers, as shown in Figure 11-18. Select the external controller and click
Actions to display the available options. Alternatively, right-click the external controller.
You can change the name of any external storage system by right-clicking the controller and
selecting Rename. Alternatively, use the Actions drop-down menu and select Rename. In the
Rename Storage System panel, define the storage controller name and click Rename as
shown in Figure 11-19 on page 621.
Figure 11-19 Rename Storage System panel
Click Show Dependent Volumes to display the logical volumes that depend on the selected
external storage system, as shown in Figure 11-20.
From the Volumes Dependent on Storage System panel, multiple volume actions are
available, as shown in Figure 11-21 on page 622.
In the Lenovo Storage V5030 virtualization environment, you can migrate your application
data nondisruptively from one internal or external storage pool to another, simplifying storage
management with reduced risk.
Volume copy is another key feature that you can benefit from by using Lenovo Storage V5030
virtualization. Two copies can be created to enhance availability for a critical application. A
volume copy can be also used to generate test data or for data migration.
For more information about the volume actions of the Lenovo Storage V5030 storage system,
see Chapter 8, “Advanced host and volume administration” on page 349.
In the External Storage panel you can also right-click an MDisk (or use the Actions
drop-down menu) to display the available options for a selected MDisk, as shown in
Figure 11-22 on page 623.
Figure 11-22 MDisk Actions menu in the External Storage window
12
Fault tolerance and a high level of availability are achieved with the following features:
The RAID capabilities of the underlying disk subsystems
The software architecture that is used by the Lenovo Storage V3700 V2, V3700 V2 XP
and V5030 nodes
Auto-restart of nodes that are stopped
Battery units to provide cache memory protection in a site power failure
Host system multipathing and failover support
At the core of the Lenovo Storage V3700 V2, V3700 V2 XP and V5030 is a redundant pair of
node canisters. The two canisters share the load of transmitting and receiving data between
the attached hosts and the disk arrays.
12.2 System components
This section describes each of the components that make up the Lenovo Storage V3700 V2,
V3700 V2 XP and V5030 systems. The components are described in terms of location,
function, and serviceability.
During the basic system configuration, vital product data (VPD) is written to the enclosure
midplane. On a control enclosure midplane, the VPD contains information, such as worldwide
node name (WWNN) 1, WWNN 2, machine type and model, machine part number, and serial
number. On an expansion enclosure midplane, the VPD contains information, such as
machine type and model, machine part number, and serial number.
For information about the midplane replacement process, see the Lenovo Storage V3700 V2,
V3700 V2 XP and V5030 Information Center at:
http://systemx.lenovofiles.com/help/topic/com.lenovo.storage.v5030.8.1.0.doc/tbrd_rmvrplparts_1955wm.html
Figure 12-1 shows the rear view of a fully equipped control enclosure.
Figure 12-1 Rear view of a control enclosure with two node canisters (the Storwize V5020)
Figure 12-2 Node canister USB port (the Lenovo Storage V3700 V2)
The USB flash drive is not required to initialize the system configuration. However, it can be
used for other functions. Using the USB flash drive is required in the following situations:
When you cannot connect to a node canister in a control enclosure by using the service
assistant or the technician port, and you want to see the status of the node or re-enable
the technician port.
When you do not know, or cannot use, the service IP address for the node canister in the
control enclosure and must set the address.
When you have forgotten the superuser password and must reset the password.
Ethernet ports
The Lenovo Storage V3700 V2 and Lenovo Storage V3700 V2 XP node canisters have two
100/1000 Mbps Ethernet ports. Both ports can be used for management, Internet Small
Computer System Interface (iSCSI) traffic, and Internet Protocol (IP) replication. Additionally,
port 2 can be used as a technician port (the white box with “T” in the center of the box) for
system initialization and servicing. After initialization, the technician port is disabled. It is
possible to reactivate the technician port later by using CLI commands.
Figure 12-3 shows the Ethernet ports on the Lenovo Storage V3700 V2.
Figure 12-4 on page 629 shows the Ethernet ports on the Lenovo Storage V3700 V2 XP.
Figure 12-4 Lenovo Storage V3700 V2 XP Ethernet ports
Each Lenovo Storage V5030 node canister has two 1/10 Gbps Ethernet ports and one
Ethernet technician port. Port 1 and 2 can be used for management, iSCSI traffic, and IP
replication. Port T can be used as a technician port for system initialization and service only.
Figure 12-5 shows the Ethernet ports on the Lenovo Storage V5030.
Each port has two LEDs that display the status of its activity. Their meanings are shown in
Table 12-1.
Figure 12-6 on page 630 shows the SAS ports on the Lenovo Storage V3700 V2.
Each Lenovo Storage V3700 V2 XP node canister has three 12 Gbps SAS ports. Port 1 can
be used to connect optional expansion enclosures, and ports 2 and 3 can be used for host
attachment.
Figure 12-7 shows the SAS ports on the Lenovo Storage V3700 V2 XP.
Each Lenovo Storage V5030 node canister has two 12 Gbps SAS ports to connect optional
expansion enclosures. These ports do not support host attachment.
Figure 12-8 shows the SAS ports on the Lenovo Storage V5030.
Each port has two LEDs that display the status of its activity. Their meanings are shown in
Table 12-2 on page 631.
Table 12-2 SAS port status LEDs
Battery status
Each node canister houses a battery, the status of which is displayed by two LEDs on the
back of the unit, as shown in Figure 12-9.
Battery status (left), green LED:
FAST BLINK: The battery is charging. It does not have a sufficient charge to perform a “fire hose” dump.
BLINK: The battery has sufficient charge to perform one “fire hose” dump.
ON: The battery is fully charged and has sufficient charge to perform two “fire hose” dumps.
OFF: The battery is not available for use.
Fault (right), amber LED:
OFF: No known conditions are preventing normal operation, unless the battery status LED is also on.
ON: An active condition or fault could compromise normal operation.
SLOW BLINK: There is a non-critical fault with the battery.
Battery in use, green LED:
OFF: The battery is not being used to power the canister.
FAST BLINK: The battery is currently providing power for a “fire hose” dump.
Canister status
The status of each canister is displayed by three LEDs on the back of the unit, as shown in
Figure 12-10.
Figure 12-10 Node canister status LEDs (the Lenovo Storage V3700 V2 XP)
Power (left), green LED:
OFF: No power is available, or power is coming from the battery.
SLOW BLINK: Power is available, but the main CPU is not running; the system is in standby mode.
FAST BLINK: The system is running a self-test.
ON: Power is available and the system code is running.
Status (middle), green LED:
OFF: Indicates one of the following conditions: no power to the canister, the canister is in standby mode or self-test, or the operating system is loading.
BLINK: The canister is in candidate or service state. It is not performing I/O. It is safe to remove the node.
FAST BLINK: The canister is carrying out a “fire hose” dump.
ON: The canister is active and able to perform I/O, or it is starting. The system is part of a cluster.
Canister fault (right), amber LED:
OFF: The node is in candidate or active state. Any error that has been detected is not severe enough to stop the node from participating in a cluster or performing I/O.
BLINK: The canister is being identified. There might or might not be a fault condition.
ON: The node is in service state, or an error exists that might be stopping the system code from starting (node error 550). The node canister cannot become active in the system until the problem is resolved. The problem is not necessarily related to a hardware component.
Replaceable components
The Lenovo Storage V3700 V2, V3700 V2 XP and V5030 node canister contains the
following field-replaceable (client-replaceable) components:
Host Interface Card
Memory
Battery
Figure 12-11 shows the location of these parts within the node canister.
Note: Because these components are inside the node canister, their replacement leads to
a redundancy loss until the replacement is complete.
Figure 12-13 on page 635 shows the location of the memory modules.
Slot 1 is next to the CPU.
Slot 2 is next to the battery area.
Attention: The battery is a lithium ion battery. To avoid a possible explosion, do not
incinerate the battery. Exchange the battery only with the part that is approved by Lenovo.
Because the Battery Backup Unit (BBU) replacement leads to a redundancy loss until the
replacement is complete, we advise that you replace the BBU only when you are instructed to
do so, and that you follow the Directed Maintenance Procedure (DMP).
During the procedure, while you lift and lower the battery, grasp the blue handle on each end
of the battery and keep the battery parallel to the canister system board, as shown in
Figure 12-15 on page 637.
Figure 12-15 BBU replacement
Important: During the replacement, the battery must be kept parallel to the canister
system board while the battery is removed or replaced. Keep equal force, or pressure, on
each end.
For more information about the BBU replacement process, see the Lenovo Information
Center at this web page:
http://systemx.lenovofiles.com/help/topic/com.lenovo.storage.v5030.8.1.0.doc/v3700_rplc_batt_nodecan.html
http://systemx.lenovofiles.com/help/topic/com.lenovo.storage.v5030.8.1.0.doc/tbrd_rmvrplparts_1955wm.html
Figure 12-16 shows the rear view of a fully equipped expansion enclosure.
Figure 12-16 Rear view of an expansion enclosure with two expansion canisters
Each port has two LEDs that display the status of its activity. Their meanings are shown in
Table 12-5.
Link (right), green LED: Solid: A connection exists on at least one lane (phy).
Canister status
The status of each expansion canister is displayed by three LEDs on the back of the unit, as
shown in Figure 12-18.
Table 12-6 Expansion canister status LEDs
Fault (right), amber LED: Solid: A fault requires part replacement, or the canister is still starting.
The Lenovo Storage V3700 V2 and Lenovo Storage V3700 V2 XP can have one control
enclosure. The Lenovo Storage V5030 can consist of one or two control enclosures.
Each Lenovo Storage V3700 V2 and Lenovo Storage V3700 V2 XP control enclosure can
attach up to 10 expansion enclosures. Each Lenovo Storage V5030 control enclosure can
attach up to 20 expansion enclosures.
SAS cabling
Expansion enclosures are attached to control enclosures and between each other by using
SAS cables.
A set of correctly interconnected enclosures is called a chain. Each chain is made up of two
strands. A strand runs through the canisters that are in the same position in each enclosure in
the chain. Canister 1 of an enclosure is cabled to canister 1 of the downstream enclosure.
Canister 2 of an enclosure is cabled to canister 2 of the downstream enclosure.
Each strand consists of four phys, and each phy operates at 12 Gbps. Therefore, a strand has
a usable speed of 48 Gbps.
A strand starts with a SAS initiator chip inside an Lenovo Storage V3700 V2, V3700 V2 XP
and V5030 node canister and progresses through SAS expanders, which connect to the disk
drives. Each canister contains an expander. Each drive has two ports, each of which is
connected to a different expander and strand. This configuration means that both nodes
directly access each drive, and no single point of failure exists.
The Lenovo Storage V3700 V2 supports one SAS chain for each control enclosure, and up to
10 expansion enclosures can be attached to this chain. The node canister uses SAS port 1
for expansion enclosures.
Figure 12-19 shows the SAS cabling on a Lenovo Storage V3700 V2 with three attached
expansion enclosures.
The Lenovo Storage V3700 V2 XP supports one SAS chain for each control enclosure, and
up to 10 expansion enclosures can be attached to this chain. The node canister uses SAS
port 1 for expansion enclosures.
Figure 12-20 on page 641 shows the SAS cabling on a Lenovo Storage V3700 V2 XP with
three attached expansion enclosures.
Figure 12-20 SAS expansion cabling on the Lenovo Storage V3700 V2 XP
The Lenovo Storage V5030 supports two SAS chains for each control enclosure, and up to
10 expansion enclosures can be attached to each chain. The node canister uses SAS ports 1
and 2 for expansion enclosures.
Figure 12-21 on page 642 shows the SAS cabling on a Lenovo Storage V5030 with six
attached expansion enclosures (three enclosures in each chain).
Important: When a SAS cable is inserted, ensure that the connector is oriented correctly
by confirming that the following conditions are met:
The pull tab must be below the connector.
Insert the connector gently until it clicks into place. If you feel resistance, the connector
is probably oriented the wrong way. Do not force it.
When the connector is inserted correctly, the connector can be removed only by pulling
the tab.
Cabling is done from the control enclosure downward (top to bottom). Bottom-up cabling
is not supported.
Drive slots
The Lenovo Storage V3700 V2, V3700 V2 XP and V5030 have different types of enclosures,
depending on the model, warranty, and number of drive slots.
Table 12-7 shows the drive slots on each enclosure type.
The system automatically performs the drive hardware validation tests and promotes the
drive into the configuration if these tests pass, automatically configuring the inserted drive
as a spare. The status of the drive after the promotion is recorded in the event log, either
as an informational message or as an error if a hardware failure occurs during the system action.
For more information about the drive replacement process, see the Lenovo Storage V3700
V2, V3700 V2 XP and V5030 Information Center at these web pages:
Replacing a 3.5-inch drive assembly
http://systemx.lenovofiles.com/help/topic/com.lenovo.storage.v5030.8.1.0.doc/v3700_rplc_35_drv_assembly.html
Replacing a 2.5-inch drive assembly:
http://systemx.lenovofiles.com/help/topic/com.lenovo.storage.v5030.8.1.0.doc/v3700_rplc_25_drv_assembly.html
Figure 12-22 on page 644 shows a fully equipped control enclosure with two power supply units.
The PSUs are identical between the control and expansion enclosures.
Power supplies in both control and expansion enclosures are hot-swappable and replaceable
without a need to shut down a node or cluster. If the power is interrupted in one node canister
for less than 2.5 seconds, the canister does not perform a fire hose dump and continues
operation from the battery.
PSU status
Each PSU has three LEDs that display the status of its activity. The LEDs are the same for
the control and expansion units.
Table 12-8 PSU status LEDs
http://systemx.lenovofiles.com/help/topic/com.lenovo.storage.v5030.8.1.0.doc/v3700_rplc_pwrsupply.html
The configuration backup file can be downloaded and saved by using the graphical user
interface (GUI) or the command-line interface (CLI). The CLI option requires you to log in to
the system and download the file by using Secure Copy Protocol (SCP). The CLI option is the
preferred practice for an automated backup of the configuration.
Important: Save the configuration files of the Lenovo Storage V3700 V2, V3700 V2 XP
and V5030 regularly. The best approach is to save daily and automate this task. Always
perform the additional manual backup before you perform any critical maintenance task,
such as an update of the microcode or software version.
The backup file is updated by the cluster every day and stored in the /dumps directory. Even
so, it is important to start a manual backup after you change your system configuration.
To successfully perform the configuration backup, follow the prerequisites and requirements:
All nodes must be online.
No independent operations that change the configuration can be running in parallel.
No object name can begin with an underscore.
Important: You can perform an ad hoc backup of the configuration only from the CLI.
However, the output of the command can be downloaded from both the CLI and the GUI.
The svcconfig backup command creates three files that provide information about the
backup process and cluster configuration. These files are created in the /dumps directory on
the configuration node and can be retrieved by using SCP. Use the lsdumps command to list
them, as shown in Example 12-2.
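A minimal CLI sequence, run on the system and then from a management workstation, might look like the following sketch. The cluster IP address and the local target directory are placeholders.

svcconfig backup                                      # create the backup files in /dumps on the configuration node
lsdumps                                               # list the files and confirm the svc.config.backup.* names
scp superuser@<cluster_ip>:/dumps/svc.config.backup.* /local/backup/dir
                                                      # copy the files off the system by using SCP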
The three files that are created by the backup process are described in Table 12-9.
svc.config.backup.xml_<serial>: This file contains the cluster configuration data.
svc.config.backup.sh_<serial>: This file contains the names of the commands that were
issued to create the backup of the cluster.
svc.config.backup.log_<serial>: This file contains details about the backup, including any
error information that might be reported.
To download a configuration backup file by using the GUI, complete the following steps:
1. Browse to Settings → Support → Support Package and select Manual Upload
Instructions, as shown in Figure 12-24 on page 647.
Figure 12-24 Manual Upload Instructions
When you select Manual Upload Instructions, a window opens (Figure 12-25).
Clicking Download Support Package brings you to the next option, where you can select
among the different kinds of support packages. See Figure 12-26 on page 648 for details.
Select Download Existing Package to get a list of all the available log files that are stored on
the configuration node, as shown in Figure 12-27.
2. Search for the files that are named svc.config.backup.xml_*, svc.config.backup.sh_*,
and svc.config.backup.log_*. Select the files, right-click, and select Download, as
shown in Figure 12-28.
Even though the configuration backup files are updated automatically daily, it might be useful
to verify the time stamp of the current file. Open the svc.config.backup.xml_xx file with a text
editor and search for the string timestamp=, which is near the top of the file. Figure 12-29
shows the file and the timestamp information.
The node canister software and the drive firmware are updated separately, so these tasks are
described in different topics.
Note: IBM Storwize V5000 for Lenovo hardware is not supported by V8.1 or later. The
V7.7.1 and V7.8.1 code streams will continue to be updated with critical fixes for this
hardware.
The GUI also shows whether a software update is available and the latest software level
when you navigate to Settings → System → Update System, as shown in Figure 12-30.
Important: Certain levels of code support updates only from specific previous levels. If you
update to more than one level above your current level, you might be required to install an
intermediate level. For information about update compatibility, see this web page:
http://www.ibm.com/support/docview.wss?uid=ssg1S1004336
Table 12-10 Software update tasks
2. Ensure that Common Information Model (CIM) object manager (CIMOM) clients are
working correctly. When necessary, update these clients so that they can support the new
version of the Lenovo Storage V3700 V2, V3700 V2 XP and V5030 code. Examples can be
operating system (OS) versions and options, such as FlashCopy Manager or VMware plug-ins.
3. Ensure that multipathing drivers in the environment are fully redundant. If you experience
failover issues with multipathing driver support, resolve these issues before you start normal
operations.
4. Update other devices in the Lenovo Storage V3700 V2, V3700 V2 XP and V5030
environment. Examples might include updating the hosts and switches to the correct levels.
Important: Ensure that no unfixed errors are in the log and that the system date and time
are correctly set before you start the update.
The amount of time that it takes to perform a node canister update can vary depending on the
amount of preparation work that is required and the size of the environment. Generally, to
update the node software, allow 20 - 40 minutes for each node canister and a single
30-minute wait when the update is halfway complete. One node in each I/O group is upgraded
first, and then the system waits 30 minutes before it upgrades the second node in each I/O
group. The 30-minute wait allows the recently updated node canister to come online and be
confirmed as operational, and it allows time for the host multipath driver to recover.
The software update can be performed concurrently with normal user I/O operations. While
the updating node is unavailable, all I/O operations to that node fail, and the failed I/O
operations are redirected to the partner node of the working pair. Applications do not see any
I/O failures.
The maximum I/O rate that can be sustained by the system might degrade while the code is
uploaded to a node, the update is in progress, the node is rebooted, and the new code is
committed because write caching is disabled during the node canister update process.
Important: Ensure that the multipathing drivers are fully redundant with every available
path and online. You might see errors that are related to the paths, which can go away
(failover) and the error count can increase during the update. When the paths to the nodes
return, the nodes fall back to become a fully redundant system.
When new nodes are added to the system, the upgrade package is automatically
downloaded to the new nodes from the Lenovo Storage V3700 V2, V3700 V2 XP and V5030
systems.
http://systemx.lenovofiles.com/help/topic/com.lenovo.storage.v5030.8.1.0.doc/svc_updatetestutility.html
The software update test utility can be downloaded in advance of the update process, or it
can be downloaded and run directly during the software update, as guided by the update
wizard. You can run the utility multiple times on the same system to perform a readiness
check in preparation for a software update.
The installation and use of this utility is nondisruptive, and it does not require a restart of any
node. Therefore, host I/O is not interrupted. The utility is installed only on the current
configuration node.
System administrators must continue to check whether the version of code that they plan to
install is the latest version.
Alternatively, you can run only the test utility by selecting Test Only.
2. Select the test utility and update package files by clicking the folder icons, as shown in
Figure 12-32. The code levels are entered automatically.
Alternatively, for the Test Only option, upload only the test utility and enter the code level
manually.
3. Select Automatic update and click Next to proceed to the next question, which concerns
pausing the update, as shown in Figure 12-34 on page 654. The Automatic update option
is the default and advised choice.
In the panel that is shown in Figure 12-34 on page 654, you can choose whether you want
to pause the update. The default is Fully automatic. Click Finish to start the update.
4. Wait for the test utility and update package to upload to the system, as shown in
Figure 12-35.
5. After the files upload, the test utility is automatically run, as shown in Figure 12-36. The
test utility verifies that no issues exist with the current system environment, such as failed
components and drive firmware that is not at the latest level.
If the test utility discovers any warnings or errors, a window opens to inform the user, as
shown in Figure 12-37 on page 655. Click Read more to get more information.
Figure 12-37 Warning about the issues that were detected
Figure 12-38 shows that in this example the test utility identified one warning.
Warnings do not prevent the software update from continuing, although the recommended
practice is to fix all issues before you proceed.
Close the window and select either Resume or Cancel, as shown in Figure 12-39 on
page 656. Clicking Resume continues the software update. Clicking Cancel cancels the
software update so that the user can correct any issues.
Selecting Resume prompts the user to confirm the action, as shown in Figure 12-40.
6. Wait for each node to be updated and rebooted, one at a time until the update process is
complete. The GUI displays the overall progress of the update and the current state of
each node, as shown in Figure 12-41.
During the update process, a node fails over and you can temporarily lose connection to
the GUI. After this situation happens, a warning is displayed, as shown in Figure 12-42 on
page 657. Select Yes.
Figure 12-42 Configuration node failover warning
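As an alternative to the GUI wizard, the update can also be started from the CLI after the test utility and the update package are uploaded to the system. The following commands are a sketch; the file names and the target code level are placeholders, and the exact status command depends on the code level.

svcupgradetest -v 8.1.0.1               # run the update test utility against the target level (example level)
applysoftware -file <update_package>    # start the automatic update with the uploaded package
lsupdate                                # display the progress of the update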
Important: We advise that you update the Lenovo Storage V3700 V2, V3700 V2 XP, and
V5030 automatically by following the update wizard. If a manual update is used, ensure
that you do not skip any steps.
Complete the following steps to manually update the software by using the GUI and Service
Assistant Tool (SAT):
1. Browse to Settings → System → Update System and select Update and Test, as
shown in Figure 12-43.
Alternatively, you can run the test utility by selecting Test Only.
2. Select the test utility and update package files by clicking the folder icons, as shown in
Figure 12-44. The code levels are entered automatically.
4. Wait for the test utility and update package to upload to the system, as shown in
Figure 12-46.
5. After the files upload, the test utility is automatically run, as shown in Figure 12-47 on
page 659. The test utility verifies that no issues exist with the current system environment,
such as failed components and drive firmware that is not at the latest level.
Figure 12-47 State while the test utility runs
If the utility identifies no issues, the system is ready for the user to initiate the manual
upgrade, as shown in Figure 12-48.
Figure 12-48 State while you wait for the manual upgrade to start
6. Choose a node to update. Non-configuration nodes must be updated first. Update the
configuration node last. Browse to Monitoring → System and hover over the canisters to
confirm the nodes that are the non-configuration nodes, as shown in Figure 12-49 on
page 660.
7. Right-click the canister that contains the node that you want to update and select
Remove, as shown in Figure 12-50.
8. A warning message appears to ask whether you want to remove the node, as shown in
Figure 12-51 on page 661. Click Yes.
Figure 12-51 Node removal confirmation window
The non-configuration node is removed from the management GUI Update System panel
and is shown as Unconfigured when you hover over the node after you select
Monitoring → System.
9. Open the Service Assistant Tool for the node that you removed. Enter the service IP
address followed by /service in a browser window. Without /service, the browser opens the
management GUI that is associated with this service IP address. No http:// or https:// prefix
is needed.
Example: 172.163.18.34/service
10.In the Service Assistant Tool, ensure that the node that is ready for update is selected.
The node is in the Service state, displays a 690 error, and shows no available cluster
information, as shown in Figure 12-52.
11. In the Service Assistant Tool, select Update Manually, and choose the required node
canister software upgrade file, as shown in Figure 12-53 on page 662.
12.Click Update to start the update process on the first node and wait for the node to finish
updating.
Non-configuration nodes are reintroduced automatically into the system after the
update finishes. Updating and adding the node again can take 20 - 40 minutes.
The management GUI shows the progress of the update, as shown in Figure 12-54.
13.Repeat steps 7 - 12 for the remaining nodes, leaving the configuration node until last.
14.After you remove the configuration node from the cluster, you are asked whether you want
to refresh the panel, as shown in Figure 12-55 on page 663. Select Yes.
Figure 12-55 Configuration node failover warning
Important: The configuration node remains in the Service state when it is added to the
cluster again. Therefore, you need to exit the Service state manually.
15.To exit the Service state, browse to the Home panel of the Service Assistant Tool and
open the Actions menu. Select Exit Service State and click GO, as shown in
Figure 12-56.
Figure 12-56 Exiting the Service state in the Service Assistant Tool
To get the latest drive update package, go to the Supported Drive Types and Firmware Levels
for the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 web page:
https://datacentersupport.lenovo.com/us/en/solutions/ht503947
Note: Find the download link for the current drive firmware at the bottom of the web page.
Select the upgrade package, which was downloaded from the Lenovo Support site, by
clicking the folder icon, and click Upgrade, as shown in Figure 12-58.
The drive firmware update takes about 2 - 3 minutes for each drive.
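As an alternative to the GUI, individual drives can usually also be updated from the CLI. The following sketch assumes the applydrivesoftware command; the file name and drive ID are placeholders, so verify the exact syntax in the CLI reference for your code level:
applydrivesoftware -file <drive_firmware_package> -type firmware -drive <drive_id>
The new firmware level can then be checked in the firmware_level field of the lsdrive <drive_id> output.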
To verify the new firmware level, right-click the drive and select Properties, as shown in
Figure 12-59 on page 665.
Figure 12-59 Individual drive update result
Figure 12-60 shows how to update all drives through the Actions menu in the Internal Storage
panel. Under Drive Class Filter, click All Internal. In the Actions menu, click Upgrade All.
Note: If any drives are selected, the Actions menu displays actions for the selected drives
and the Upgrade All option does not appear. If a drive is selected, deselect it by holding
down the Ctrl key and clicking the drive.
Figure 12-61 Upload the software upgrade package for multiple drives
12.5 Monitoring
Any issue that is reported by your Lenovo Storage V3700 V2, V3700 V2 XP, and V5030
systems must be fixed as soon as possible. Therefore, it is important to configure the system
to send automatic notifications when a new event is reported. You can select the type of event
for which you want to be notified. For example, you can restrict notifications to only events
that require immediate action.
If your system is within warranty, or you have a hardware maintenance agreement, configure
your Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 systems to send email events
directly to IBM if an issue that requires hardware replacement is detected. This mechanism is
known as Call Home. When an event is received, IBM automatically opens a problem report
and, if appropriate, contacts you to verify whether replacement parts are required.
Important: If you set up Call Home to the IBM Support Center, ensure that the contact details that you configured are correct and are kept up-to-date when personnel change.
To configure Call Home and other optional email addresses, complete the following steps:
1. Browse to Settings → Notifications → Email and select Enable Notifications, as
shown in Figure 12-62.
For the correct functionality of email notifications, ensure that Simple Mail Transfer
Protocol (SMTP) is enabled on the management network and not, for example, blocked by
firewalls.
If email notification is not enabled, a warning is displayed periodically, as shown in Figure 12-63.
2. Configure the SMTP servers. You can add several servers by clicking the plus (+) sign, as
shown in Figure 12-64.
3. Figure 12-65 on page 668 shows the entry for Call Home. This email address is preset and cannot be changed.
4. You can add several recipients to receive notifications. Click the plus (+) sign to add a new email address. Figure 12-66 shows one entry.
5. Add an email contact who is responsible for this storage system. Provide the contact information of the system owner, who can be contacted by the IBM Support Center when necessary. Figure 12-67 shows such an entry. Ensure that you always keep this information up-to-date.
6. The system location is also important. Support personnel use this information to send a support representative to the failing system. For minor problems, this information is used to ship customer replaceable unit (CRU) parts to the given address. Figure 12-68 on page 669 shows how the System Location panel should be completed. Ensure that you always keep this information up-to-date.
Figure 12-68 System Location
7. You can include an inventory report in your emails to check the current inventory of your system. Figure 12-69 shows where to select the check box to indicate that you want to receive inventory details. The emails include an inventory report that describes the system hardware and critical configuration information. Object names and other information, such as IP addresses, are not sent. Based on the information that is received, IBM can inform you whether the hardware or software that you are using requires an upgrade because of a known issue.
8. Click Save.
9. To test the Call Home function, select Edit → Call Home → Test, as shown in Figure 12-70.
The same results can be achieved by using the CLI and entering the svctask stopmail and
svctask startmail commands.
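For reference, the email notification settings can also be configured entirely from the CLI. The following sequence is a minimal sketch only; the IP address, contact details, and recipient address are placeholders, and the exact parameters should be checked against the CLI reference for your code level:
mkemailserver -ip <smtp_server_ip>
chemail -reply <reply_address> -contact <contact_name> -primary <phone_number> -location <system_location>
mkemailuser -address <recipient_address> -usertype local
startmail
testemail -all
In this sketch, mkemailserver defines the SMTP server, chemail sets the contact and location details that are used for Call Home, mkemailuser adds a notification recipient, startmail enables email notifications, and testemail sends a test notification to the configured recipients.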
The audit log tracks action commands that are issued through the CLI or the management
GUI. It provides the following entries:
Name of the user who issued the action command
Name of the actionable command
Time stamp of when the actionable command was issued on the configuration node
Parameters that were issued with the actionable command
Failed commands and view commands are not logged in the audit log. Certain service
commands are not logged either. The svcconfig backup, cpdumps, and ping service
commands are not logged.
To access the audit log by using the GUI, browse to Access → Audit Log, as shown in
Figure 12-72.
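The audit log can also be displayed from the CLI. A minimal example, assuming the catauditlog command (check the CLI reference for your code level):
catauditlog -first 10
This command lists the ten most recent action commands together with the user name and the time stamp of each entry.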
Right-clicking any column header opens the option menu in which you can select columns
that are shown or hidden. It is also possible to click the Column icon on the far right of the
column headers to open the option menu.
Figure 12-73 shows all of the possible columns that can be displayed in the audit log view.
An alert is logged when the event requires action. Certain alerts have an associated error
code that defines the service action that is required. The service actions are automated
through the fix procedures. If the alert does not have an error code, the alert represents an
unexpected change in the state. This situation must be investigated to see whether it is
expected or represents a failure. Investigate an alert and resolve it when it is reported.
A message is logged when a change that is expected is reported, for instance, a FlashCopy
operation completes.
To check the event log, browse to Monitoring → Events, as shown in Figure 12-74 on
page 672.
Figure 12-75 on page 673 shows all of the possible columns that can be displayed in the error
log view.
Figure 12-75 Possible event log columns
Important: Check for this filter option if no event is listed. Events might exist that are not
associated with recommended actions.
Figure 12-77 shows an event log with no items when the Recommended Actions filter was
selected, which does not necessarily mean that the event log is clear. To check whether the
log is clear, click Show All.
Figure 12-78 Possible actions on a single event
Important: These actions cannot be undone and might prevent the system from being
analyzed when severe problems occur.
Properties
This option provides more information for the selected event that is shown in the list.
To run the fix procedure for the error with the highest priority, go to the Recommended Action
panel at the top of the Events page and click Run Fix, as shown in Figure 12-79. When you
fix higher-priority events first, the system often can automatically mark lower-priority events
as fixed.
This example shows how faults are represented in the error log, how information about the fault can be gathered, and how the Recommended Action (directed maintenance procedure, or DMP) can be used to fix the error:
Detecting the alert
The Health Status indicator shows a red alert. The Status Alerts indicator (on top of the
GUI) shows one alert. Click the alert to retrieve the specific information, as shown in
Figure 12-80.
Gathering additional information
More details about the event are available by clicking the event and selecting Details.
This information might help you fix a problem or analyze a root cause. Figure 12-81 shows
the properties for the previous event.
Figure 12-83 on page 678 shows how to start the DMP by right-clicking the alert record
and selecting Run Fix Procedure. You can use this option to run a fix procedure that
might not be the recommended action.
The steps and panels of a DMP are specific to the error. When all of the steps of the DMP are processed successfully, the recommended action is complete and the problem is usually fixed. Figure 12-84 shows that the Health Status changed to green and that both the Status Alerts indicator and the Recommended Action box disappeared, implying that no more actions must be taken.
Figure 12-85 Multiple alert events that are displayed in the event log
The Recommended Action function orders the alerts by severity and displays the events with
the highest severity first. If multiple events have the same severity, they are ordered by date
and the oldest event is displayed first.
Events are ordered by severity in the following way, from most severe to least severe:
Unfixed alerts (sorted by error code). The lowest error code has the highest severity.
Unfixed messages.
Monitoring events (sorted by error code). The lowest error code has the highest severity.
Expired events.
Fixed alerts and messages.
The less severe events are often fixed with the resolution of the most severe events.
If you want to enable remote support assistance or use the Assist On-Site tool, you must first configure local support assistance. Note that you cannot enable remote support assistance and use the Assist On-Site tool at the same time.
In addition, a service IP address must be configured before you set up remote support assistance. During system initialization, you can optionally set up a service IP address and remote support assistance. If you did not configure a service IP address, go to Settings → Network and configure a service IP address for each node before you continue.
Use the chsra command to enable remote support assistance on the system. When you enable remote support assistance, a shared token is also generated by the system and sent to the support center. If the system needs support services, support personnel are authenticated onto the system with a challenge-response mechanism. After support personnel obtain the response code, they enter it to gain access to the system. Service personnel have three attempts to enter the correct response code. After three failed attempts, the system generates a new random challenge and support personnel must obtain a new response code.
Support roles
When you enable local support assistance, support personnel are assigned either the Monitor
role or the Restricted Administrator role. The Monitor role can view, collect, and monitor logs
and errors to determine the solution to problems on the system. The Restricted Administrator
role gives support personnel access to administrator tasks to help solve problems on the
system. However, this role restricts these users from deleting volumes or pools, unmapping
hosts, or creating, deleting, or changing users. Roles limit access of the assigned user to
specific tasks on the system. Users with the service role can set the time and date on the
system, delete dump files, add and delete nodes, apply service, and shut down the system.
They can also view objects and system configuration but cannot configure, modify, or
manage the system or its resources. They also cannot read user data.
Both local and remote support assistance use secure connections to protect data exchange between the support center and the system. More access controls can be added by the system administrator. The system supports both local and remote support assistance. Use local support assistance if you have restrictions that require on-site support only. Unlike other authentication methods, you can audit all actions that support personnel conduct on the system when local support assistance is configured. With remote support assistance, support personnel can visit on site, and they can also access the system remotely through a secure connection from the support center. However, before you enable remote support assistance between the system and support, you first need to configure local support assistance.
Support personnel rely on the support package, such as snaps, dumps, and various trace
files, to troubleshoot issues on the system. The management GUI and the command-line
interface support sending this data to the support center securely. Additionally, support
personnel can download new builds, patches, and fixes automatically to the system with your
permission.
If you selected to configure both local and remote support assistance, verify the
pre-configured support centers. Optionally, enter the name, IP address, and port for the proxy
server on the Remote Support Centers page. A proxy server is used in systems where a
firewall is used to protect your internal network or if you want to route traffic from multiple
storage systems to the same place.
Select this option to configure local support assistance. Use this option if your system has
certain restrictions that require on-site maintenance. If you select this option, click Finish to
set up local support assistance.
Under Support Assistance → Start New Session (marked in the figure), you can select the time that the remote support session can be idle before the system disconnects the connection, as shown in Figure 12-90 on page 683.
Figure 12-90 Setting the idle time before the connection is disconnected
Test Connection lets you test the connectivity, as shown in Figure 12-91.
Figure 12-92 shows the pop-up window that is displayed while the connection to the support center is tested.
A new token can be generated by clicking Generate New Token, as shown in Figure 12-94.
When you enable remote support assistance, the system generates a support assistance
token. This shared security token is sent to the support center and is used for authentication
during support assistance sessions. Updating a token is essentially overwriting the existing
token, then sending it securely to the support assistance administration server in an email
message. You specify the email addresses of the support assistance administration servers
when you configure support assistance. If the email is not received in time for a support
incident or cannot be sent for some reason, a service engineer can manually add the token to
the administration server. Before you can update a token, you must enable the support
assistance feature. You can update the token periodically as a security practice, similar to
how you update passwords.
If settings change over time, you can reconfigure them by clicking Reconfigure Settings, as shown in Figure 12-95 on page 685.
Figure 12-95 Reconfigure Settings
Select I want support personnel to access my system both on-site and remotely, as shown in Figure 12-96 on page 686.
Select this option to configure remote support assistance. Use this option to allow support
personnel to access your system through a secure connection from the support center.
Secure remote assistance requires a valid service IP address, call home, and an optional
proxy server if a firewall is used to protect your internal network. If you select this option, click
Next to specify IP addresses for the support center and optional proxy server. See
Figure 12-97 on page 687.
Figure 12-97 Support Centers
Click Next. On the Remote Support Access Settings page, select one of these options to
control when support personnel can access your system to conduct maintenance and fix
problems:
At Any Time — Support personnel can access the system at any time. For this option, a remote support session does not need to be started manually and sessions remain open continuously.
Click Finish. After you configure remote support assistance with permission only, you can
start sessions between the support center and the system. On the Support Assistance page,
select Start New Session and specify the number of hours the session can be idle before the
support user is logged off from the system. See Figure 12-99.
If you plan to use the command-line interface to configure local support assistance, enter the
following command:
chsra -enable
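To also allow remote support assistance from the CLI, the chsra command is used with an additional parameter. The flag that is shown here is an assumption based on the published Spectrum Virtualize documentation and should be verified against the CLI reference for your code level:
chsra -remotesupport enable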
12.8.3 Disable Support Assistance
You can disable support assistance by using the command-line interface (CLI). When you
disable support assistance, the support assistance token is deleted. All active secure remote
access user sessions are closed immediately and a secure email message is sent to the
administration server to indicate that secure remote access is disabled on the system.
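A minimal sketch of the corresponding CLI commands follows, under the assumption that the same chsra command is used with disable parameters (verify against the CLI reference for your code level):
chsra -remotesupport disable
chsra -disable
The first command disables only remote support assistance; the second disables support assistance completely, which also deletes the support assistance token.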
Before automatically uploading a support package, ensure that the following prerequisites are
configured on the system:
Ensure that all of the nodes on the system have internet access.
Ensure that a valid service IP address is configured on each node on the system.
Configure at least one valid DNS server for domain name resolution. To configure a DNS
server on the system, select Settings → System → DNS and specify valid IP addresses
and names for one or more DNS servers. You can also use the mkdnsserver command to configure DNS servers (see the example after this list).
Configure the firewall to allow connections to the following IP addresses on port 443:
129.42.56.189, 129.42.54.189, and 129.42.60.189. To test connections to the support
center, select Settings → Support → Support Assistance. On the Support Assistance
page, select Test Connection to verify connectivity between the system and the support
center.
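The following sketch shows how a DNS server might be configured from the CLI with the mkdnsserver command that is mentioned in the list above; the name and IP address are placeholders:
mkdnsserver -name dns1 -ip 192.168.1.53
You can verify the configured DNS servers afterward with the lsdnsserver command.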
The management GUI supports uploading new or existing support packages to the support center automatically. See Figure 12-101.
When you click Upload Support Package, the selection screen opens, as shown in Figure 12-102 on page 691.
Figure 12-102 Upload Support package
On the Upload Support Package page, enter the Problem Management Report (PMR) number that is associated with the support package that you are uploading. If you do not have a PMR number, click Don't have a PMR? to open the Service Request (SR) tool to generate a PMR. You need an IBM Partner ID to register.
Note: If you are not sure if a PMR exists or do not want to create a new PMR, the package
can still be sent to the support center. The machine serial number and type are used to
route the package to the support center. However, specifying a PMR number can decrease
response time for support personnel. You can call the Lenovo Support Line or use the
Lenovo Support Portal to open a call. Go to the following address:
https://datacentersupport.lenovo.com/us/en/
Specify the type of package that you want to generate and upload to the support center by selecting one of the following options:
Standard logs
This support package contains the most recent logs that were collected from the system.
These logs are most commonly used by the IBM Support Center to diagnose and solve
problems.
Standard logs plus one existing statesave
This support package contains the standard logs from the system and the most recent
statesave from any of the nodes in the system. Statesaves are also known as memory
dumps or live memory dumps.
Standard logs plus the most recent statesave from each node
The support center will let you know which package it needs.
Click Upload. After the new support package is generated, a summary panel displays the
progress of the upload. If the upload is unsuccessful or encounters errors, verify the
connection between the system and the support center and retry the upload.
If you decide that you want to upload the support package later, you can use the function shown in Figure 12-103.
If you click Upload Existing Package, a window opens, as shown in Figure 12-104.
Using the command-line interface
To upload a support package or other file with the command-line interface, complete these
steps:
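A likely form of the command, assuming the supportupload CLI command with the -pmr and -filename parameters that are described next, is the following; treat this syntax as an assumption and verify it against the CLI reference for your code level:
supportupload -pmr <pmr_number> -filename <fullpath/filename>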
where the pmr_number is the number of an existing PMR and fullpath/filename is the full path
and the name of the file that you are uploading. The -pmr and -filename parameters are not
required. If you do not specify a PMR number, the file is uploaded by using the machine serial
and type to route the file to the support center. If you do not specify a file name, the latest
support package is uploaded.
To verify the progress of the upload to the support center, enter the following command:
lscmdstatus
In the results of this command, verify that the supportupload_status is Complete, which
indicates that the upload is successfully completed. Other possible values for this parameter
include Active, Wait, Abort, and Failed. If the upload is Active, you can use the
supportupload_progress_percent parameter to view the progress for the upload.
where the pmr_number is the number of an existing PMR. The command generates a new
support package and uploads it to the support center with the identifying PMR number. If you
do not have a PMR number that corresponds with the support package, then you can use the following command:
The command generates a new support package and uploads it to the support center by
using the machine type and serial to route the package.
To verify the progress of the upload to the support center, enter the following command:
lscmdstatus
In the results of this command, verify that the supportupload_status is Complete, which
indicates that the upload is successfully completed. Other possible values for this parameter
include Active, Wait, Abort, and Failed. If the upload is Active, you can use the
supportupload_progress_percent parameter to view the progress for the upload.
To manually upload a new support package to the support center, complete these steps:
On the Support Package page, expand Manual Upload Instructions, as shown in Figure 12-105.
In the Manual Upload Instructions section, click Download Support Package. See Figure 12-106.
On the Download New Support Package or Log File panel, select one of the types of support packages to download, which are shown in Figure 12-107.
The type to select depends on the event that is being investigated. For example, if you notice
that a node is restarted, capture the snap file with the latest existing statesave. If needed, the
IBM Support Center can notify you of the package that is required.
The following list explains the different upload portals; in each case, a web page opens in your browser.
Blue Diamond
Select the link to log in to the BlueDiamond portal. BlueDiamond provides enhanced
security and support for healthcare clients. You must be a registered BlueDiamond client
to use this option. After you accept the terms of service for the upload, log into the
BlueDiamond portal with your user name and password.
Upload over Browser
Use this option for small files under 200 MB. Select the link to upload the support package
to the support web page through the web browser. On the support web page, complete the
following steps:
Enter a valid PMR number that is associated with this support package. In the Upload is
for field, select Other. Enter a valid email address for the contact for this package.
FTP Transfer
Use this option for larger files. Select the link to send the package to support with file
transfer protocol (FTP). You can send packages to support with standard FTP
(non-secure), secure FTP, or with SFTP, which is FTP over secure shell protocol (SSH).
On the support portal for FTP transfers, select the type of FTP that you want to use and follow the instructions for that method.
where the pmr_number is the number of an existing PMR and fullpath/filename is the full path
and the name of the file that you are uploading. The -pmr and -filename parameters are not
required. If you do not specify a PMR number, the file is uploaded by using the machine serial
and type to route the file to the support center. If you do not specify a file name, the latest
support package is uploaded.
To verify the progress of the upload to the support center, enter the following command:
lscmdstatus
In the results of this command, verify that the supportupload_status is Complete, which
indicates that the upload is successfully completed. Other possible values for this parameter
include Active, Wait, Abort, and Failed. If the upload is Active, you can use the
supportupload_progress_percent parameter to view the progress for the upload.
If you want to generate a new support package, complete these steps:
where the pmr_number is the number of an existing PMR. The command generates a new
support package and uploads it to the support center with the identifying PMR number. If you
do not have a PMR number that corresponds with the support package, then you can use the following command:
The command generates a new support package and uploads it to the support center by
using the machine type and serial to route the package.
To verify the progress of the upload to the support center, enter the following command:
lscmdstatus
In the results of this command, verify that the supportupload_status is Complete, which
indicates that the upload is successfully completed. Other possible values for this parameter
include Active, Wait, Abort, and Failed. If the upload is Active, you can use the
supportupload_progress_percent parameter to view the progress for the upload.
If the package is collected by using the Service Assistant Tool, ensure that the node from
which the logs are collected is the current node, as shown in Figure 12-109.
Figure 12-109 Accessing the Collect Logs panel in the Service Assistance Tool
Support information can be downloaded with or without the latest statesave, as shown in
Figure 12-110 on page 698.
Note: This procedure starts the initialization tool if the node canister that is being serviced
is in the candidate state, if no system details are configured, and if the partner node is not
in the active state.
Note: The Lenovo Storage V5030 system has a dedicated technician port that is
always enabled so this step is unnecessary.
3. Connect an Ethernet cable between the port on the personal computer and the technician
port. The technician port is labeled with a T on the rear of the node canister.
4. Open a supported web browser on the personal computer and browse to the
http://192.168.0.1 URL.
Note: If the cluster is active and you connect to the configuration node, this URL opens
the management GUI. If you want to access the SAT in this case, browse to
http://192.168.0.1/service
SAS port 2 can then be used again to provide extra Ethernet connectivity for system
management, iSCSI, and IP replication.
Important: Never power off your Lenovo Storage V3700 V2, V3700 V2 XP, and V5030
systems by powering off the power supply units (PSUs), removing both PSUs, or removing
both power cables from a running system. It can lead to inconsistency or loss of the data
that is staged in the cache.
You can power off a node canister or the entire system. When you power off only one node
canister for each I/O group, all of the running tasks remain active while the remaining node
takes over.
Powering off the system is typically planned as part of site maintenance (for example, a power outage or building construction) because all components of the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 are redundant and replaceable while the system is running.
Important: If you are powering off the entire system, you lose access to all volumes that
are provided by this system. Powering off the system also powers off all Lenovo Storage
V3700 V2, V3700 V2 XP, and V5030 nodes. All data is flushed to disk before the power is
removed.
Before you power off the system, stop all hosts with volumes that are allocated to this system.
This step can be skipped for hosts with volumes that are provisioned with mirroring
(host-based mirror) from different storage systems. However, skipping this step means that
errors that relate to lost storage paths and disks can be logged on the host error log.
2. Right-click the required canister and select Power Off Canister, as shown in
Figure 12-112.
3. Confirm that you want to power off the canister by entering the confirmation code and
clicking OK, as shown in Figure 12-113 on page 701.
Figure 12-113 Canister power off confirmation window
4. After the node canister is powered off, you can confirm that it is offline in the System
panel, as shown in Figure 12-114.
To power off a node canister by using the CLI, use the command that is shown in
Example 12-4.
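A plausible form of this command, assuming the stopsystem CLI command with the -node parameter, is shown in the following sketch; the node ID is a placeholder that can be determined with the lsnode command, and the syntax should be verified against the CLI reference for your code level:
lsnode
stopsystem -node <node_id>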
2. Confirm that you want to power off the system by entering the confirmation code and
clicking OK, as shown in Figure 12-116. Ensure that all FlashCopy, Metro Mirror, Global
Mirror, data migration operations, and forced deletions are stopped or allowed to complete
before you continue.
To power off the system by using the CLI, use the command that is shown in Example 12-5.
Ensure that all FlashCopy, Metro Mirror, Global Mirror, data migration operations, and forced
deletions are stopped or allowed to complete before you continue.
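Again, a plausible form of this command, assuming the stopsystem CLI command without the -node parameter so that the whole clustered system is shut down, is the following sketch (verify it against the CLI reference for your code level):
stopsystem
Run this command only after host I/O is stopped and the copy services operations that are listed above are stopped or allowed to complete.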
Wait for the power LED on the node canisters to blink slowly, which indicates that the power
off operation completed.
Note: When you power off a Lenovo Storage V3700 V2, V3700 V2 XP, or V5030, it does not restart automatically. You must manually restart the system by removing and reapplying power (for example, by reseating the power cords).
There are multiple drivers for an organization to implement data at-rest encryption. These can be internal, such as protection of confidential company data and ease of storage sanitization, or external, such as compliance with legal requirements or contractual obligations.
Therefore, before configuring encryption on the storage, the organization should define its
needs and, if it is decided that data at-rest encryption is a required measure, include it in the
security policy. Without defining the purpose of the particular implementation of data at-rest
encryption, it would be difficult or impossible to choose the best approach to implementing
encryption and verifying if the implementation meets the set goals.
The following items may be worth considering during the design of a solution that includes data at-rest encryption:
Legal requirements
Contractual obligations
Organization's security policy
Attack vectors
Expected resources of an attacker
Encryption key management
Physical security
There are multiple regulations that mandate data at-rest encryption, from processing of
Sensitive Personal Information to guidelines of the Payment Card Industry. If there are any
regulatory or contractual obligations that govern the data which will be held on the storage
system, they often provide a wide and detailed range of requirements and characteristics that
need to be realized by that system. Apart from mandating data at-rest encryption, these
documents may contain requirements concerning encryption key management.
Another document that should be consulted when planning data at-rest encryption is the organization's security policy.
The final outcome of a data at-rest encryption planning session should be replies to three
questions:
1. What are the goals that the organization wants to realize using data at-rest encryption?
2. How will data at-rest encryption be implemented?
3. How can it be demonstrated that the proposed solution realizes the set goals?
Encryption of data at-rest complies with the Federal Information Processing Standard 140
(FIPS-140) standard, but is not certified.
Ciphertext stealing XTS-AES-256 is used for data encryption.
AES 256 is used for master access keys.
The algorithm is public. The only secrets are the keys.
A symmetric key algorithm is used. The same key is used to encrypt and decrypt data.
The encryption of system data and metadata is not required, so they are not encrypted.
Encryption is enabled at a system level and all of the following prerequisites must be met
before you can use encryption:
You must purchase an encryption license before you activate the function.
If you did not purchase a license, contact a Lenovo sales representative or Lenovo
Business Partner to purchase an encryption license.
At least three USB flash drives are required if you plan not to use a key management server. They are available as a feature code from Lenovo (see the note on page 722).
You must activate the license that you purchased.
Encryption must be enabled.
Note: Only data at-rest is encrypted. Host to storage communication and data sent over
links used for Remote Mirroring are not encrypted.
Figure 13-1 shows an encryption example. Encrypted disks and encrypted data paths are
marked in blue. Unencrypted disks and data paths are marked in red. In this example the
server sends unencrypted data to a SAN Volume Controller 2145-DH8 system, which stores
hardware-encrypted data on internal disks. The data is mirrored to a remote IBM Storwize
V5000 for Lenovo system using Remote Copy. The data flowing through the Remote Copy
link is not encrypted. Because the IBM Storwize V5000 for Lenovo is unable to perform any
encryption activities, data on the IBM Storwize V5000 for Lenovo is not encrypted.
Figure 13-1 Encryption example: 2145-DH8 with SAS hardware encryption (2145-24F expansions) connected by Remote Copy to an unencrypted 2077-24C (2077-24E expansions)
To enable encryption of both data copies, the IBM Storwize V5000 for Lenovo must be
replaced by an encryption capable system, as shown in Figure 13-2 on page 708. After such
replacement both copies of data are encrypted, but the Remote Copy communication
between both sites remains unencrypted.
Figure 13-2 Encryption example: 2145-DH8 and 2077-324, both with SAS hardware encryption (2145-24F and 2077-24F expansions), connected by unencrypted Remote Copy
Figure 13-3 shows an example configuration that uses both software and hardware
encryption. Software encryption is used to encrypt an external virtualized storage system.
Hardware encryption is used for internal, SAS-attached disk drives.
Figure 13-3 Example of software and hardware encryption: 2145-SV1 with software encryption of FC-attached external storage (2077-324) and hardware encryption of SAS-attached expansions (2145-24F)
The placement of hardware encryption and software encryption in the Lenovo Storage code stack is shown in Figure 13-4 on page 709. The functions that are implemented in software are shown in blue. The external storage system is shown in yellow. The hardware encryption on the SAS chip is marked in pink. Compression is performed before encryption. Therefore, it is possible to realize the benefits of compression for the encrypted data.
Figure 13-4 Encryption software stack
Each volume copy can use different encryption methods (hardware, software). It is also
allowed to have volume copies with different encryption status (encrypted versus
unencrypted). The encryption method depends only on the pool that is used for the specific
copy. You can migrate data between different encryption methods by using volume migration
or volume mirroring.
Note: The design for encryption is based on the concept that a system should either be
encrypted or not encrypted. Encryption implementation is intended to encourage solutions
that contain only encrypted volumes or only unencrypted volumes. For example, once
encryption is enabled on the system, all new objects (e.g. pools) are by default created as
encrypted.
Important: If all master access key copies are lost and the system must cold reboot, all
encrypted data is gone. No method exists, even for Lenovo, to decrypt the data without the
keys. If encryption is enabled and the system cannot access the master access key, all
SAS hardware is offline, including unencrypted arrays.
No trial licenses for encryption exist, because when the trial ran out, access to the data would be lost. Therefore, you must purchase an encryption license before you activate encryption. Licenses are generated by IBM Data storage feature activation (DSFA) based on the serial number (S/N) and the machine type and model number (MTM) of the nodes.
You can activate an encryption license during the initial system setup (on the Encryption
screen of the initial setup wizard) or later on, in the running environment.
Activation of the license can be performed in one of two ways: Automatically or manually.
Both methods are available during the initial system setup and on the running system.
When you purchase a license, you should receive a function authorization document with an
authorization code printed on it. This code allows you to proceed using the automatic
activation process.
If the automatic activation process fails or if you prefer using the manual activation process,
use this page to retrieve your license keys:
https://www.ibm.com/storage/dsfa/storwize/selectMachine.wss
See 13.3.5, “Activate the license manually” on page 717 for instructions about how to retrieve
the machine signature of a node.
2. The Encryption window displays information about your storage system, as shown in
Figure 13-6 on page 713.
Figure 13-6 Information about the storage system during initial system setup
3. Right-clicking the node opens a context menu with two license activation options
(Activate License Automatically and Activate License Manually), as shown in
Figure 13-7. Use either option to activate encryption. See 13.3.4, “Activate the license
automatically” on page 715 for instructions about how to complete an automatic activation
process. See “Activate the license manually” on page 717 for instructions on how to
complete a manual activation process.
4. After either activation process is complete, you can see a green check mark in the column
labeled Licensed next to a node for which the license was enabled and you can proceed
with the initial system setup by clicking Next, as shown in Figure 13-8 on page 714.
Note: Every enclosure needs an active encryption license before you can enable
encryption on the system.
Figure 13-9 Expanding Encryption Licenses section on the Licensed Functions view
2. The Encryption Licenses window displays information about your nodes. Right-click the node on which you want to install an encryption license. This opens a context menu
with two license activation options (Activate License Automatically and Activate
License Manually), as shown in Figure 13-10 on page 715. Use either option to activate
encryption. See 13.3.4, “Activate the license automatically” on page 715 for instructions
on how to complete an automatic activation process. See 13.3.5, “Activate the license
manually” on page 717 for instructions on how to complete a manual activation process.
Figure 13-10 Select the node on which you want to enable the encryption
3. After either activation process is complete, you can see a green check mark in the column
labeled Licensed for the node, as shown in Figure 13-11.
Important: To perform this operation, the personal computer that is used to connect to the
GUI and activate the license must be able to connect to the Internet.
To activate the encryption license for a node automatically, follow this procedure:
1. Select Activate License Automatically; the Activate License Automatically window opens, as shown in Figure 13-12 on page 716.
2. Enter the authorization code that is specific to the node that you selected, as shown in
Figure 13-13. You can now click Activate.
3. The system connects to Lenovo to verify the authorization code and retrieve the license
key. Figure 13-14 shows a window which is displayed during this connection. If everything
works correctly, the procedure takes less than a minute.
4. After the license key has been retrieved, it is automatically applied as shown in
Figure 13-15 on page 717.
Figure 13-15 Successful encryption license activation
Check whether the personal computer that is used to connect to the Lenovo Storage V3700
V2, V3700 V2 XP, and V5030 GUI and activate the license can access the internet. If you are
unable to complete the automatic activation procedure, try to use the manual activation
procedure that is described in 13.3.5, “Activate the license manually” on page 717.
Although authorization codes and encryption license keys use the same format (four groups
of four hexadecimal digits), you can only use each of them in the appropriate activation
process. If you use a license key when the system expects an authorization code, the system
will display an error message, as shown in Figure 13-16.
2. If you have not done so already, you need to obtain the encryption license for the node.
The information required to obtain the encryption license is displayed in the Manual
Activation window. Use this data to follow the instructions in 13.3.1, “Obtaining an
encryption license” on page 711.
3. You can enter the license key either by typing it, by using cut or copy and paste, or by clicking the folder icon and uploading the license key file that was downloaded from DSFA to the storage system. In Figure 13-18, the sample key is already entered. You can now click Activate.
4. Once the task completes successfully, the GUI shows that encryption is licensed for the
given node, as shown in Figure 13-19 on page 719.
Figure 13-19 Successful encryption license activation
Key server support is available in controller firmware code V7.8 and later. Additionally, controller firmware V8.1 introduces the ability to define up to four encryption key servers, which is a recommended configuration because it increases key provider availability.
Support for simultaneous use of both USB flash drives and a key server is available in
controller firmware V8.1 and later. Organizations that use encryption key management
servers might consider parallel use of USB flash drives as a backup solution. During normal
operation such drives could be disconnected and stored in a secure location. However, in the
event of a catastrophic loss of encryption servers, the USB drives could still be used to unlock
the encrypted storage.
You can also click Settings → Security → Encryption and click Enable Encryption, as
shown in Figure 13-22 on page 721.
Figure 13-22 Enable Encryption from the Security panel
The Enable Encryption wizard starts by asking which encryption key provider to use for
storing the encryption keys, as shown in Figure 13-23. You can enable either or both
providers.
The next section presents a scenario in which both encryption key providers are enabled at the same time. See 13.4.2, “Enabling encryption using USB flash drives” on page 722 for instructions on how to enable encryption using only the USB flash drive provider. See 13.4.3, “Enabling encryption using key servers” on page 726 for instructions on how to enable encryption using only the key server provider.
Note: The system needs at least three USB flash drives to be present before you can
enable encryption using this encryption key provider. Lenovo USB flash drives are
recommended, although other flash drives might work. You can use any USB ports in any
node of the cluster. After creating the USB flash drives you can copy them if you need
more than four.
Using USB flash drives as the encryption key provider requires a minimum of three USB flash
drives to store the generated encryption keys. Because the system will attempt to write the
encryption keys to any USB key inserted into a node port, it is critical to maintain physical
security of the system during this procedure.
While the system enables encryption, you are prompted to insert USB flash drives into the
system. The system generates and copies the encryption keys to all available USB flash
drives.
Ensure that each copy of the encryption key is valid before you write any user data to the
system. The system validates any key material on a USB flash drive when it is inserted into
the canister. If the key material is not valid, the system logs an error. If the USB flash drive is
unusable or fails, the system does not display it as output. Figure 13-79 on page 757 shows
an example where the system detected and validated three USB flash drives.
If your system is in a secure location with controlled access, one USB flash drive for each
canister may remain inserted in the system. If there is a risk of unauthorized access, then all
USB flash drives with the master access keys must be removed from the system and stored
in a secure place.
Securely store all copies of the encryption key. For example, any USB flash drives holding an
encryption key copy, that are not left plugged into the system, can be locked in a safe. Similar
precautions must be taken to protect any other copies of the encryption key that are stored on
other media.
Notes: Generally, create at least one additional copy on another USB flash drive for
storage in a secure location. You can also copy the encryption key from the USB drive and
store the data on other media, which may provide additional resilience and mitigate risk
that the USB drives used to store the encryption key come from a faulty batch.
Every encryption key copy must be stored securely to maintain confidentiality of the
encrypted data.
A minimum of one USB flash drive with the correct master access key is required to unlock
access to encrypted data after a system restart such as a system-wide reboot or power loss.
No USB flash drive is required during a warm reboot, such as a node exiting service mode or
a single node reboot. The data center power-on procedure needs to ensure that USB flash
drives containing encryption keys are plugged into the storage system before it is powered
on.
During power-on, insert USB flash drives into the USB ports on two supported canisters to
safeguard against failure of a node, node’s USB port, or USB flash drive during the power-on
procedure.
To enable encryption using USB flash drives as the only encryption key provider follow these
steps:
1. In the Enable Encryption wizard Welcome tab, select USB flash drives and click Next, as
shown in Figure 13-24.
Figure 13-24 Selecting USB flash drives in the Enable Encryption wizard
2. If there are fewer than 3 USB flash drives inserted into the system, you will be prompted to
insert additional drives, as shown in Figure 13-25 on page 724. The system will report how
many additional drives need to be inserted.
Note: The Next option remains disabled and the status at the bottom is kept at 0 until at
least three USB flash drives are detected.
3. Insert the USB flash drives into the USB ports as requested.
4. After the minimum required number of drives is detected, the encryption keys are
automatically copied on the USB flash drives, as shown in Figure 13-26.
Figure 13-26 Writing the master access key to USB flash drives
You can keep adding USB flash drives or replacing the ones already plugged in to create
new copies. When done, click Next.
5. The number of keys that were created is shown in the Summary tab, as shown in
Figure 13-27. Click Finish to finalize the encryption enablement.
6. You receive a message confirming that the encryption is now enabled on the system, as
shown in Figure 13-28.
7. You can confirm that encryption is enabled, as well as verify which key providers are in
use, by going to Settings → Security → Encryption, as shown in Figure 13-29 on
page 726.
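The encryption state can also be checked from the CLI. A minimal sketch, assuming the lsencryption command (verify the command and its output fields against the CLI reference for your code level):
lsencryption
The output indicates whether encryption is enabled and reflects the state of the configured key providers.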
The Lenovo Storage V series systems support the use of an IBM Security Key Lifecycle Manager (SKLM) key server as an encryption key provider. SKLM supports the Key Management Interoperability Protocol (KMIP), which is a standard for the management of cryptographic keys.
Note: Make sure that the key management server functionality is fully independent from storage that is provided by systems that use a key server for encryption key management. Failure to observe this requirement may create an encryption deadlock. An encryption deadlock is a situation in which none of the key servers in the given environment can become operational because some critical part of the data in each server is stored on a storage system that depends on one of the key servers to unlock access to the data.
Controller firmware code V8.1 and later supports up to 4 key server objects defined in
parallel.
Before you can create a key server object in the storage system, the key server must be
configured. Ensure that you complete the following tasks on the SKLM server before you
enable encryption on the storage system:
Configure the SKLM server to use Transport Layer Security version 2 (TLSv2). The default
setting is TLSv1, but controller firmware supports only version 2.
Ensure that the database service is started automatically on startup.
Ensure that there is at least one Secure Sockets Layer (SSL) certificate for browser
access.
Create a SPECTRUM_VIRT device group for Spectrum Virtualize systems.
For more information about completing these tasks, see SKLM documentation at IBM
Knowledge Center at:
https://www.ibm.com/support/knowledgecenter/SSWPVP
Access to the key server storing the correct master access key is required to enable
encryption for the cluster after a system restart such as a system-wide reboot or power loss.
Access to the key server is not required during a warm reboot, such as a node exiting service
mode or a single node reboot. The data center power-on procedure must ensure key server availability before the storage system that uses encryption is booted.
Figure 13-30 Selecting Key server as the only provider in the Enable Encryption wizard
3. The wizard moves to the Key Servers tab, as shown in Figure 13-42 on page 735. Enter
the name and IP address of the key servers. Note that the first key server specified must
be the primary SKLM key server.
Note: The supported versions of Security Key Lifecycle Manager (up to V2.7, which
was the latest code version available at the time of writing) differentiate between the
primary and secondary key server role. The Primary SKLM server as defined on Key
Servers screen of Enable Encryption wizard must be the server defined as the primary
by SKLM administrators.
The key server name serves only as a label; only the provided IP address is used to contact the server. If the key server’s TCP port number differs from the default value for the KMIP protocol (5696), then enter the port number. An example of a complete primary SKLM configuration is shown in Figure 13-31 on page 728.
4. If you want to add additional, secondary SKLM servers, click the plus (+) sign and enter the data for the secondary SKLM servers, as shown in Figure 13-32. You can define up to four SKLM servers. Click Next when you are done.
5. The next page in the wizard is a reminder that a SPECTRUM_VIRT device group dedicated to controller firmware systems must exist on the SKLM key servers. Make sure that this device group exists and click Next to continue, as shown in Figure 13-33 on page 729.
Figure 13-33 Checking key server device group
6. The next step is to enable secure communication between the controller firmware system
and the SKLM key servers. This can be done by either uploading the public certificate of
the certificate authority (CA) used to sign all the SKLM key server certificates, or by
uploading the public SSL certificate of each key server directly. Figure 13-34 shows the
case when an organization's CA certificate is used. Click Next to proceed to the next step.
Figure 13-34 Uploading the key server or certification authority SSL certificate
7. Subsequently, configure the SKLM key server to trust the SSL certificate of the controller firmware system. You can download the controller firmware system public SSL certificate by clicking Export Public Key and install it in the SPECTRUM_VIRT device group on the SKLM key servers.
8. When the Spectrum Virtualize system SSL certificate has been installed on the SKLM key
server, acknowledge this by selecting the box indicated in Figure 13-36 and click Next to
proceed to the next step.
9. The key server configuration is shown in the Summary tab, as shown in Figure 13-37.
Click Finish to create the key server object and finalize the encryption enablement.
Figure 13-37 Finish enabling encryption using a key server
10.If there are no errors while creating the key server object, you receive a message that
confirms that the encryption is now enabled on the system, as shown in Figure 13-38.
Figure 13-39 Encryption enabled with only key servers as encryption key providers
Controller firmware supports enabling encryption using a Security Key Lifecycle Manager
(SKLM) key server. SKLM supports Key Management Interoperability Protocol (KMIP), which
is a standard for the management of cryptographic keys.
Note: Make sure that the key management server functionality is fully independent from storage that is provided by systems that use a key server for encryption key management. Failure to observe this requirement may create an encryption deadlock. An encryption deadlock is a situation in which none of the key servers in the given environment can become operational because some critical part of the data in each server is stored on an encrypted storage system that depends on one of the key servers to unlock access to the data.
Controller firmware code V8.1 and later supports up to four key server objects defined in
parallel.
Before you can create the key server object in a storage system, the key server must be
configured. Ensure that you complete the following tasks on the SKLM server before you
enable encryption on the storage system:
Configure the SKLM server to use Transport Layer Security version 2 (TLSv2). The default
setting is TLSv1, but controller firmware supports only version 2.
Ensure that the database service is started automatically on startup.
Ensure that there is at least one Secure Sockets Layer (SSL) certificate for browser
access.
Create a SPECTRUM_VIRT device group for controller firmware systems. A device group
allows for restricted management of subsets of devices within a larger pool.
For more information about completing these tasks, see the SKLM documentation at IBM Knowledge Center at:
https://www.ibm.com/support/knowledgecenter/SSWPVP
Access to the key server storing the correct master access key is required to enable
encryption for the cluster after a system restart such as a system-wide reboot or power loss.
Access to the key server is not required during a warm reboot, such as a node exiting service
mode or a single node reboot. The data center power-on procedure must ensure key server availability before the storage system that uses encryption is booted.
Figure 13-40 Selecting Key servers in the Enable Encryption wizard
3. The wizard moves to the Key Servers tab, as shown in Figure 13-42 on page 735. Enter
the name and IP address of the key servers. Note that the first key server specified must
be the primary SKLM key server.
4. If you want to add additional, secondary SKLM servers, click the plus (+) sign and enter the data for the subsequent SKLM servers, as shown in Figure 13-42 on page 735. You can define up to four SKLM servers. Click Next when you are done.
Figure 13-42 Configuring multiple SKLM servers
5. The next page in the wizard is a reminder that a SPECTRUM_VIRT device group dedicated to Spectrum Virtualize systems must exist on the SKLM key servers. Make sure that this device group exists and click Next to continue, as shown in Figure 13-43.
6. The next step is to enable secure communication between the Spectrum Virtualize system and the SKLM key servers. This can be done either by uploading the public certificate of the certificate authority that was used to sign all the SKLM key server certificates, or by uploading the public SSL certificate of each key server directly. Figure 13-44 on page 736 shows this step.
Figure 13-44 Uploading the key server or certification authority SSL certificate
7. Subsequently, configure the SKLM key server to trust the SSL certificate of the controller
firmware system. You can download the controller firmware system public SSL certificate
by clicking Export Public Key, as shown in Figure 13-45. You should install this certificate
in the SKLM key servers in the SPECTRUM_VIRT device group.
8. When the controller firmware system SSL certificate has been installed on the SKLM key
server, acknowledge this by selecting the box indicated in Figure 13-46 and click Next to
proceed to the next step.
9. The next step in the wizard is to store copies of the master encryption key on USB flash drives. If fewer than three drives are detected, the system requests that you plug in additional USB flash drives, as shown in Figure 13-47. You cannot proceed until the required minimum number of USB flash drives is detected by the system.
Figure 13-47 At least 3 USB flash drives are required to configure USB flash drive key provider
Figure 13-48 Master Access Key successfully copied to USB flash drives
11. The next screen presents the summary of the configuration that will be implemented on the system, as shown in Figure 13-49. Click Finish to create the key server object and finalize the encryption enablement.
12.If there are no errors while creating the key server object, the system displays a screen
that confirms that the encryption is now enabled on the system, and that both encryption
key providers are enabled (see Figure 13-50).
Figure 13-50 Encryption enabled message using both encryption key providers
13.You can confirm that encryption is enabled, as well as verify which key providers are in use, by going to Settings → Security → Encryption, as shown in Figure 13-51. Note the four green check marks confirming that the master access key is available on all four SKLM servers.
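You can also check the encryption state from the CLI. As an illustration only (the exact output fields can vary by code level), run the lsencryption command:
lsencryption
The output includes a status field that should report encryption as enabled when at least one key provider is configured.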
Note: If you set up encryption of your storage system when it was running controller firmware code earlier than V7.8.0, then when you upgrade to code version V8.1 you have to rekey the master encryption key before you can enable a second encryption provider.
2. Subsequently, follow the steps required to configure the key server provider, as described in 13.4.3, “Enabling encryption using key servers” on page 726. One difference from the process described in that section is that the wizard gives you an option to migrate from the USB flash drive provider to the key server provider. Select No to enable both encryption key providers, as shown in Figure 13-53.
Figure 13-53 Do not disable USB flash drive encryption key provider
3. This choice is confirmed on the summary screen before the configuration is committed, as
shown in Figure 13-54 on page 741.
Figure 13-54 Configuration summary before committing
4. After you click Finish, the system configures the SKLM servers as a second encryption key provider. Successful completion of the task is confirmed by a message, as shown in Figure 13-55.
5. You can confirm that encryption is enabled, as well as verify which key providers are in use, by going to Settings → Security → Encryption, as shown in Figure 13-56 on page 742. Note the four green check marks confirming that the master access key is available on all four SKLM servers.
Figure 13-57 Enable USB flash drives as a second encryption key provider
2. After you click Configure, you are presented with a wizard similar to the one described in 13.4.2, “Enabling encryption using USB flash drives” on page 722. Note that you are not given an option to disable the SKLM provider during this process. After successful completion of the process, you are presented with a message confirming that both encryption key providers are enabled, as shown in Figure 13-58 on page 743.
Figure 13-58 Confirmation of successful configuration of two encryption key providers
3. You can confirm that encryption is enabled, as well as verify which key providers are in use, by going to Settings → Security → Encryption, as shown in Figure 13-59. Note the four green check marks indicating that the master access key is available on all four SKLM servers.
13.6.1 Migration from USB flash drive provider to encryption key server
The system is designed to facilitate migration from the USB flash drive encryption key provider to the encryption key server provider. Follow the steps described in 13.5.1, “Adding SKLM as a second provider” on page 739, but when you run procedure step 2 on page 740, select Yes instead of No (see Figure 13-60 on page 744). This deactivates the USB flash drive provider, and the procedure completes with a single active encryption key provider: the SKLM server(s).
13.6.2 Migration from encryption key server to USB flash drive provider
Migration in the other direction, that is, from the encryption key server provider to the USB flash drive provider, is not possible using only the GUI.
To perform the migration, add USB flash drives as a second provider. You can do this by
following steps described in 13.5.2, “Adding USB flash drives as a second provider” on
page 742. Then, in the CLI, run the following command to make sure that the USB flash drives contain the correct master access key:
chencryption -usb validate
Next, disable the encryption key server provider by running the following command:
chencryption -keyserver disable
This disables the encryption key server provider, effectively migrating your system from the encryption key server provider to the USB flash drive provider.
If you have lost access to the encryption key server provider, then run the command:
If you have lost access to the USB flash drives provider, then run the command
If you want to restore the configuration with both encryption key providers, then follow the
instructions in 13.5, “Configuring additional providers” on page 739.
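The exact commands for the two lost-access cases are not reproduced above. As a sketch based on the chencryption syntax shown earlier in this section (verify against your code level before use), disabling the provider that you can no longer access might look like this:
chencryption -keyserver disable
chencryption -usb disable
The first command removes the key server provider when its key servers are unreachable; the second removes the USB flash drive provider when the USB copies of the key are lost.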
Note: If you lose access to all encryption key providers defined in the system, then there is
no method to recover access to the data protected by the master access key.
Notes: Encryption support for Distributed RAID is available in controller firmware code
V7.7 and later.
You must decide whether to encrypt an object when it is created; you cannot change this setting later. To change the encryption state of stored data, you have to migrate it from an encrypted object (for example, a pool) to an unencrypted one, or vice versa. Volume migration is the only way to encrypt any volumes that were created before enabling encryption on the system.
You can customize the Pools view in the management GUI to show the pool encryption status. Click Pools → Pools, and then click Actions → Customize Columns → Encryption, as shown in Figure 13-62.
If you create an unencrypted pool, but you add only encrypted arrays or self-encrypting MDisks to the pool, then the pool is reported as encrypted, because all extents in the pool are encrypted. The pool reverts to the unencrypted state if you add an unencrypted array or MDisk.
Further information about how to add encrypted storage to encrypted pools is provided in the following sections. You can mix and match storage encryption types in a pool. Figure 13-63 on page 747 shows an example of an encrypted pool that contains storage using different encryption methods.
Figure 13-63 Mix and match encryption in a pool
However, if you want to create encrypted child pools from an unencrypted storage pool containing a mix of internal arrays and external MDisks, the following restrictions apply:
The parent pool must not contain any unencrypted internal arrays.
All Lenovo Storage canisters in the system must support software encryption and have the encryption license activated.
If you modify the Pools view as described earlier in this section, you can see the encryption status of child pools, as shown in Figure 13-65. The example shows an encrypted child pool with a non-encrypted parent pool.
Note: To create an unencrypted array when encryption is enabled, use the command-line interface (CLI) to run the mkarray -encrypt no command. However, you cannot add unencrypted arrays to an encrypted pool.
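For illustration only, a minimal sketch of creating an unencrypted array from the CLI might look like the following. The drive IDs and pool name are hypothetical, and the supported -level values depend on your drive configuration:
mkarray -level raid5 -drive 0:1:2:3:4 -encrypt no Pool0
The detailed lsarray view of the new array then shows its encrypt property.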
You can customize the MDisks by Pools view to show the array encryption status. Click Pools → MDisks by Pools, and then click Actions → Customize Columns → Encryption, as shown in Figure 13-66.
You can also check the encryption state of an array by looking at its drives in Pools →
Internal Storage view. The internal drives associated with an encrypted array are assigned
an encrypted property that can be seen by clicking an icon at the right edge of the table
header row and selecting the Encrypted option from the menu, as shown in Figure 13-67.
The user interface gives no method to see which extents contain encrypted data and which
do not. However, if a volume is created in a correctly configured encrypted pool, then all data
written to this volume will be encrypted.
The extents can contain stale unencrypted data if the MDisk was earlier used to store unencrypted data. This is because deleting data only marks the disk space as free; the data is not actually removed from the storage. So, if an MDisk that is not self-encrypting was part of an unencrypted pool and is later moved to an encrypted pool, it still contains stale data from its previous use. Another failure mode is to misconfigure an external MDisk as self-encrypting when in reality it is not. In that case, the system does not encrypt data written to the MDisk because it assumes the back end does, and the back end does not encrypt it either, so you end up with unencrypted data on an extent in an encrypted pool.
However, all data written to any MDisk that is part of a correctly configured encrypted pool is encrypted.
You can customize the MDisks by Pools view to show the object encryption state by clicking Pools → MDisks by Pools, selecting the menu bar, right-clicking it, and selecting the Encryption Key icon. Figure 13-68 on page 750 shows a case where a self-encrypting MDisk is in an unencrypted pool.
Self-encrypting MDisks
When adding external storage to a pool, be exceptionally diligent when declaring an MDisk as self-encrypting. Correctly declaring an MDisk as self-encrypting avoids wasting resources, such as CPU time. However, an incorrect declaration can lead to unencrypted data at rest.
Spectrum Virtualize products can detect that an MDisk is self-encrypting by using the SCSI
Inquiry page C2. MDisks provided by other Spectrum Virtualize products will report this page
correctly. For these MDisks, the Externally encrypted box shown in Figure 13-69 will not be
selected. However, when added, they are still considered as self-encrypting.
Note: You can override the external encryption setting of an MDisk detected as self-encrypting and configure it as unencrypted by using the CLI command chmdisk -encrypt no. However, you should only do so if you plan to decrypt the data on the back end or if the back end uses inadequate data encryption.
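As a sketch with a hypothetical MDisk name, overriding the detected setting and then checking the result could look like this:
chmdisk -encrypt no mdisk7
lsmdisk mdisk7
The detailed lsmdisk view includes an encrypt attribute that should reflect the change.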
To check whether an MDisk has been detected or declared as self-encrypting, click Pools →
MDisk by Pools and customize the view to show the encryption state by selecting the menu
bar, right-clicking it, and selecting the Encryption Key icon, as shown in Figure 13-70.
Note that the value shown in the Encryption column reflects the property of the object in the respective row. That means that in the configuration shown in Figure 13-70, Pool1 is encrypted, so every volume created from this pool will be encrypted. However, that pool is backed by three MDisks, of which two are self-encrypting and one is not. Therefore, a value of “no” next to mdisk7 does not imply that the encryption of Pool1 is in any way compromised. It only indicates that encryption of the data placed on mdisk7 is done by software encryption, while data placed on mdisk2 and mdisk8 is encrypted by the back-end storage providing these MDisks.
Note: You can change the self-encrypting attribute of an MDisk that is unmanaged or is a
part of an unencrypted pool. However, you cannot change the self-encrypting attribute of
an MDisk after it has been added to an encrypted pool.
You can modify the Volumes view to show whether a given volume is encrypted. Click Volumes → Volumes, then click Actions → Customize Columns → Encryption to customize the view to show the volume encryption status, as shown in Figure 13-71 on page 752.
Note that a volume is reported as encrypted only if all the volume copies are encrypted, as
shown in Figure 13-72.
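You can also check this from the CLI. As a minimal sketch with a hypothetical volume name, the detailed lsvdisk view lists each volume copy together with its encrypt attribute:
lsvdisk vol0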
When creating volumes, make sure to select encrypted pools to create encrypted volumes, as shown in Figure 13-73 on page 753.
Figure 13-73 Create an encrypted volume by selecting an encrypted pool
For more information about either method, see Chapter 6, “Volume configuration” on
page 269.
13.8.6 Restrictions
The following restrictions apply to encryption:
Image mode volumes cannot be in encrypted pools.
You cannot add external non self-encrypting MDisks to encrypted pools unless all nodes in
the cluster support encryption.
Nodes that cannot perform software encryption cannot be added to systems with
encrypted pools that contain external MDisks that are not self-encrypting.
Important: Before you create a master access key, ensure that all nodes are online and
that the current master access key is accessible.
Note: There is no method to directly change data encryption keys. If you need to change the data encryption key used to encrypt given data, then the only available method is to migrate that data to a new encrypted object (for example, an encrypted child pool). Because data encryption keys are defined per encrypted object, such a migration forces a change of the key used to encrypt that data.
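For example, a sketch of such a migration using the migratevdisk command, with hypothetical volume and pool names, might look like the following. The target pool (or encrypted child pool) must already exist and be encrypted:
migratevdisk -vdisk vol0 -mdiskgrp EncryptedChildPool0
When the migration completes, the volume extents reside in the new pool and are protected by that pool's data encryption key.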
To rekey the master access key kept on the key server provider, complete these steps:
1. Click Settings → Security → Encryption and ensure that Encryption Keys shows that all configured SKLM servers are reported as Accessible, as shown in Figure 13-74. Click the Key Servers section label to expand the section.
Figure 13-75 Start rekey on SKLM key server
3. Click Yes in the next window to confirm the rekey operation, as shown in Figure 13-76.
Note: The rekey operation is performed only on the primary key server configured in
the system. If you have additional key servers configured apart from the primary one,
they will not hold the updated encryption key until they obtain it from the primary key
server. To restore encryption key provider redundancy after a rekey operation, replicate
the encryption key from the primary key server to the secondary key servers.
You receive a message confirming the rekey operation was successful, as shown in
Figure 13-77 on page 756.
After the rekey operation is complete, update all other copies of the encryption key, including
copies stored on other media. Take the same precautions to securely store all copies of the
new encryption key as when you were enabling encryption for the first time.
To rekey the master access key located on USB flash drives provider, complete these steps:
1. Click Settings → Security → Encryption. Click on the USB Flash Drives section label to
expand the section as shown in Figure 13-78.
Figure 13-78 Locate USB Flash Drive section in the Encryption view
2. Verify that all USB flash drives plugged into the system are detected and show as Validated, as shown in Figure 13-79 on page 757. You need at least three USB flash drives, with at least one reported as Validated, to proceed with the rekey.
Figure 13-79 Start rekey on USB flash drives provider
3. If the system detects a validated USB flash drive and at least three available USB flash drives, new encryption keys are automatically copied onto the USB flash drives, as shown in Figure 13-80. Click Commit to finalize the rekey operation.
4. You should receive a message confirming the rekey operation was successful, as shown in
Figure 13-81 on page 758.
If you only have the USB key provider enabled, and you choose to enable the key server,
then the GUI gives you an option to disable the USB key provider during key server
configuration. Follow the procedure as described in 13.4.3, “Enabling encryption using key
servers” on page 726. During the key server provider configuration the wizard will ask if the
USB flash drives provider should be disabled, as shown in Figure 13-82.
Figure 13-82 Disable the USB provider via encryption key server configuration wizard
Select Yes and continue with the procedure to migrate from USB to SKLM provider.
It is also possible to migrate from key server provider to USB provider or, if you have both
providers enabled, to disable either of them. However, these operations are possible only via
the CLI.
13.11 Disabling encryption
You are prevented from disabling encryption if there are any encrypted objects defined apart from self-encrypting MDisks. You can disable encryption in the same way whether you use USB flash drives, a key server, or both providers.
2. You receive a message confirming encryption has been disabled. Figure 13-84 shows the
message when using a key server.
Detailed CLI information is available at the Lenovo Storage V3700 V2, V3700 V2 XP, and
V5030 web page under the command-line section, which is at the following address:
http://systemx.lenovofiles.com/help/topic/com.lenovo.storage.v5030.8.1.0.doc/tbrd_
clstrcli_4892pz.html
Basic setup
In the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 GUI, authentication is performed
by using a user name and password. The CLI uses a Secure Shell (SSH) to connect from the
host to the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 system. Either a private and public key pair or a user name and password combination is necessary. The following steps are required to enable CLI access with SSH keys:
A public key and a private key are generated together as a pair.
A public key is uploaded to the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030
systems through the GUI.
A client SSH tool must be configured to authenticate with the private key.
A secure connection can be established between the client and the Lenovo Storage
V3700 V2, V3700 V2 XP, and V5030.
Secure Shell is the communication vehicle between the management workstation and the
Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 systems. The SSH client provides a
secure environment from which to connect to a remote machine. It uses the principles of
public and private keys for authentication. The system supports up to 32 interactive SSH
sessions on the management IP address simultaneously. SSH interactive sessions time out after a fixed period of 1 hour, which means that the SSH session is automatically closed. This session timeout limit is not configurable.
SSH keys are generated by the SSH client software. The SSH keys include a public key,
which is uploaded and maintained by the clustered system, and a private key, which is kept
private on the workstation that is running the SSH client. These keys authorize specific users
to access the administration and service functions on the system. Each key pair is associated
with a user-defined ID string that consists of up to 30 characters. Up to 100 keys can be
stored on the system. New IDs and keys can be added, and unwanted IDs and keys can be
deleted. To use the CLI, an SSH client must be installed on that system, the SSH key pair
must be generated on the client system, and the client’s SSH public key must be stored on
the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030.
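If you use an OpenSSH client instead of PuTTY, a minimal equivalent sketch for generating a key pair and connecting is shown below. It assumes that the resulting public key (sshkey.pub) has been uploaded for the user and that <management_ip> is the cluster management address:
ssh-keygen -t rsa -b 2048 -f sshkey
ssh -i sshkey superuser@<management_ip>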
The SSH client that is used in this book is PuTTY. Also, a PuTTY key generator can be used
to generate the private and public key pair. The PuTTY client can be downloaded from the
following address at no cost:
http://www.chiark.greenend.org.uk
Download the following tools:
PuTTY SSH client: putty.exe
PuTTY key generator: puttygen.exe
To generate keys: The blank area that is indicated by the message is the large blank
rectangle on the GUI inside the section of the GUI that is labeled Key. Continue to move
the mouse pointer over the blank area until the progress bar reaches the far right. This
action generates random characters to create a unique key pair.
3. After the keys are generated, save them for later use. Click Save public key (Figure A-3).
4. You are prompted for a name (for example, pubkey) and a location for the public key (for
example, C:\Support Utils\PuTTY). Click Save.
Ensure that you record the name and location because the name and location of this SSH
public key must be specified later.
Public key extension: By default, the PuTTY key generator saves the public key with
no extension. Use the string “pub” for naming the public key, for example,
superuser.pub, to easily differentiate the SSH public key from the SSH private key.
6. You are prompted with a warning message (Figure A-5). Click Yes to save the private key
without a passphrase.
7. When you are prompted, enter a name (for example, icat), select a secure place as the
location, and click Save.
Key generator: The PuTTY key generator saves the private key with the PPK
extension.
Uploading the SSH public key to the storage
After you create your SSH key pair, upload your SSH public key onto the Lenovo Storage
V3700 V2, V3700 V2 XP, and V5030 systems. Complete the following steps:
1. On the System Overview, click the Access functional icon and select Users in the GUI
menu (Figure A-6).
2. Under User Groups, select All Users. Right-click the user name for which you want to
upload the key and click Properties (Figure A-7).
5. Click OK, as shown in Figure A-10 on page 769. The key is uploaded.
Figure A-10 Select the public key
6. Check in the GUI whether the SSH key was successfully imported. See Figure A-11.
In the right pane, select SSH as the connection type. Under the “Close window on exit” section, select Only on clean exit, which ensures that if any connection errors occur, they are displayed on the user’s window, as shown in Figure A-13.
2. In the Category pane, on the left side of the PuTTY Configuration window (Figure A-14), click Connection → SSH to open the Options controlling SSH connections view of the PuTTY Configuration window.
6. Click Save to save the new session (Figure A-17).
7. Figure A-18 shows the saved PuTTY session. Select the new session and click Open.
8. If a PuTTY Security Alert window opens, confirm it by clicking Yes (Figure A-19 on page 774).
9. PuTTY now connects to the system and prompts you for a user name to log in as. Enter superuser as the user name (Example A-1) and press Enter.
The tasks to configure the CLI for the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030
administration are complete.
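After the session works interactively, you can also run single commands non-interactively with the PuTTY command-line client plink. As a sketch, assuming the private key file icat.ppk that was saved earlier and a hypothetical management IP address:
plink -i icat.ppk superuser@<management_ip> lssystem
The lssystem command returns the clustered system properties and is a convenient way to verify that key-based authentication works.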
SAN Boot
The Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 support SAN Boot for Microsoft
Windows, VMware, and many other operating systems. SAN Boot support can change, so
regularly check the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 interoperability
matrix at this address:
https://datacentersupport.lenovo.com/us/en/products/storage/lenovo-storage/v3700v2
/6535/documentation
https://datacentersupport.lenovo.com/us/en/products/storage/lenovo-storage/v5030/6
536/documentation
More information about SAN Boot is also available in the Multipath Subsystem Device Driver
User’s Guide, which is available at the following address:
http://www.ibm.com/support/docview.wss?rs=503&context=HW26L&uid=ssg1S7000303
Enabling SAN Boot for Windows
Complete the following procedure if you want to install a Windows host by using SAN Boot:
1. Configure the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 systems so that only the boot volume is mapped to the host (see the CLI sketch after this procedure).
2. Configure the Fibre Channel storage area network (SAN) so that the host sees only one Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 node port. Multiple paths during installation are not supported.
3. Configure and enable the host bus adapter (HBA) BIOS.
4. Install the operating system by using the normal procedure, selecting the volume as the
partition on which to install.
HBAs: You might need to load an additional HBA device driver during installation,
depending on your Windows version and the HBA type.
5. Install Subsystem Device Driver Device Specific Module (SDDDSM) after the installation
completes.
6. Modify your SAN zoning to allow multiple paths.
7. Check your host to see whether all paths are available.
8. Set redundant boot devices in the HBA BIOS to enable the host to boot when its original
path fails.
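As referenced in step 1, a sketch of mapping only the boot volume to the host from the CLI, using hypothetical host and volume names, might look like this:
mkvdiskhostmap -host WinHost1 -scsi 0 BootVol
Using SCSI ID 0 for the boot volume keeps the LUN numbering consistent with what the HBA BIOS expects; map additional data volumes only after the operating system and SDDDSM are installed.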
HBAs: You might need to load an additional HBA device driver during installation,
depending on your ESX level and the HBA type.
Complete the following steps to migrate your existing SAN Boot images:
1. If the existing SAN Boot images are controlled by a Lenovo storage controller that uses
the Lenovo Subsystem Device Driver (SDD) as the multipathing driver, you must use SDD
V1.6 or later. Run the SDD datapath set bootdiskmigrate 2077 command to prepare the
host for image migration. For more information, see the Multipath Subsystem Device
Driver documentation.
2. Shut down the host.
3. Complete the following configuration changes on the storage controller:
a. Write down the Small Computer System Interface (SCSI) logical unit number (LUN) ID that each volume is using, for example, boot LUN SCSI ID 0, Swap LUN SCSI ID 1, and Database LUN SCSI ID 2.
b. Remove all of the image-to-host mappings from the storage controller.
c. Map the existing SAN Boot image and any other disks to the Lenovo Storage V3700
V2, V3700 V2 XP, and V5030 systems.
4. Change the zoning so that the host can see the Lenovo Storage V3700 V2, V3700 V2 XP,
and V5030 I/O group for the target image mode volume.
5. Complete the following configuration changes on the Lenovo Storage V3700 V2, V3700
V2 XP, and V5030 systems:
a. Create an image mode volume for the managed disk (MDisk) that contains the SAN Boot image. Use the MDisk unique identifier to specify the correct MDisk (see the sketch after this procedure).
b. Create a host object and assign the host HBA ports.
c. Map the image mode volume to the host by using the same SCSI ID as before. For
example, you might map the boot disk to the host with SCSI LUN ID 0.
d. Map the swap disk to the host, if required. For example, you might map the swap disk
to the host with SCSI LUN ID 1.
6. Change the boot address of the host by completing the following steps:
a. Restart the host and open the HBA BIOS utility of the host during the booting process.
b. Set the BIOS settings on the host to find the boot image at the worldwide port name
(WWPN) of the node that is zoned to the HBA port.
7. If SDD V1.6 or later is installed and you run bootdiskmigrate in step 1, reboot your host,
update SDDDSM to the current level, and go to step 14 on page 777. If SDD V1.6 is not
installed, go to step 8.
8. Modify the SAN zoning so that the host sees only one path to the Lenovo Storage V3700
V2, V3700 V2 XP, and V5030.
9. Boot the host in single-path mode.
10.Uninstall any multipathing driver that is not supported for the Lenovo Storage V3700 V2,
V3700 V2 XP, and V5030 systems hosts that run the applicable Windows Server
operating system.
11. Install SDDDSM.
12.Restart the host in single-path mode. Ensure that SDDDSM was correctly installed.
13.Modify the SAN zoning to enable multipathing.
14.Rescan the drives on your host and check that all paths are available.
15.Reboot your host and enter the HBA BIOS.
16.Configure the HBA settings on the host. Ensure that all HBA ports are boot-enabled and
that they can see both nodes in the I/O group that contains the SAN Boot image.
Configure the HBA ports for redundant paths.
17.Exit the BIOS utility and finish starting the host.
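As referenced in step 5, a sketch of creating the image mode volume and remapping it with its original SCSI ID, using hypothetical object names and a hypothetical WWPN, might look like the following:
mkvdisk -mdiskgrp MigrationPool -iogrp 0 -vtype image -mdisk mdisk10 -name BootImageVol
mkhost -name WinHost1 -fcwwpn 10000000C9123456
mkvdiskhostmap -host WinHost1 -scsi 0 BootImageVol
The SCSI ID on the mapping must match the ID that you recorded in step 3a so that the host finds its boot LUN at the same address.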
Appendix B. Terminology
This appendix summarizes the controller firmware and Lenovo Storage V3700 V2, V3700 V2
XP, and V5030 terms that are commonly used in this book.
To see the complete set of terms that relate to the Lenovo Storage V3700 V2, V3700 V2 XP,
and V5030, see Lenovo Information Center at:
http://systemx.lenovofiles.com/help/topic/com.lenovo.storage.v5030.8.1.0.doc/lenov
o_vseries.html
Array
An ordered collection, or group, of physical devices (disk drive modules) that are used to
define logical volumes or devices. An array is a group of drives designated to be managed
with a Redundant Array of Independent Disks (RAID).
Asymmetric virtualization
Asymmetric virtualization is a virtualization technique in which the virtualization engine is
outside the data path and performs a metadata-style service. The metadata server contains
all the mapping and locking tables, and the storage devices contain only data. See also
“Symmetric virtualization” on page 793
Asynchronous replication
Asynchronous replication is a type of replication in which control is given back to the
application as soon as the write operation is made to the source volume. Later, the write
operation is made to the target volume. See also “Synchronous replication” on page 793.
Back end
See “Front end and back end” on page 785.
Call home
Call home is a communication link that is established between a product and a service
provider. The product can use this link to call IBM or another service provider when the
product requires service. With access to the machine, service personnel can perform service
tasks, such as viewing error and problem logs or initiating trace and dump retrievals.
Canister
A canister is a single processing unit within a storage system.
Capacity licensing
Capacity licensing is a licensing model that licenses features with a price-per-terabyte model.
Licensed features are FlashCopy, Metro Mirror, Global Mirror, and virtualization. See also
“FlashCopy” on page 784, “Metro Mirror” on page 788, and “Virtualization” on page 794.
Chain
A set of enclosures that are attached to provide redundant access to the drives inside the
enclosures. Each control enclosure can have one or more chains.
Channel extender
A channel extender is a device that is used for long-distance communication that connects
other storage area network (SAN) fabric components. Generally, channel extenders can
involve protocol conversion to asynchronous transfer mode (ATM), Internet Protocol (IP), or
another long-distance communication protocol.
Child pool
Administrators can use child pools to control capacity allocation for volumes that are used for
specific purposes. Rather than being created directly from managed disks (MDisks), child
pools are created from existing capacity that is allocated to a parent pool. As with parent
pools, volumes can be created that specifically use the capacity that is allocated to the child
pool. Child pools are similar to parent pools with similar properties. Child pools can be used
for volume copy operations. See also “Parent pool” on page 789.
Cloud Container
Cloud Container is a virtual object that includes all of the elements, components, or data that are common to a specific application or data.
Cloud Tenant
Cloud Tenant is a group or an instance that provides common access, with specific privileges, to an object, software, or data source.
Clustered system (Lenovo Storage V3700 V2, V3700 V2 XP, and V5030)
A clustered system, formerly known as a cluster, is a group of up to four Lenovo Storage
V3700 V2, V3700 V2 XP, and V5030 canisters (two in each system) that presents a single
configuration, management, and service interface to the user.
Cold extent
A cold extent is an extent of a volume that does not get any performance benefit if it is moved
from a hard disk drive (HDD) to a Flash disk. A cold extent also refers to an extent that needs
to be migrated onto an HDD if it is on a Flash disk drive.
Compression
Compression is a function that removes repetitive characters, spaces, strings of characters,
or binary data from the data that is being processed and replaces characters with control
characters. Compression reduces the amount of storage space that is required for data. See
also “RACE engine” on page 790.
Compression accelerator
A compression accelerator is hardware onto which the work of compression is off-loaded
from the microprocessor.
Configuration node
While the cluster is operational, a single node in the cluster is appointed to provide
configuration and service functions over the network interface. This node is termed the
configuration node. This configuration node manages the data that describes the
clustered-system configuration and provides a focal point for configuration commands. If the
configuration node fails, another node in the cluster transparently assumes that role.
Consistency Group
A Consistency Group is a group of copy relationships between virtual volumes or data sets
that are maintained with the same time reference so that all copies are consistent in time. A
Consistency Group can be managed as a single entity.
Container
A container is a software object that holds or organizes other software objects or entities.
Contingency capacity
For thin-provisioned volumes that are configured to automatically expand, the unused real
capacity that is maintained. For thin-provisioned volumes that are not configured to
automatically expand, the difference between the used capacity and the new real capacity.
Copied state
Copied is a FlashCopy state that indicates that a copy was triggered after the copy
relationship was created. The Copied state indicates that the copy process is complete and
the target disk has no further dependency on the source disk. The time of the last trigger
event is normally displayed with this status.
Cross-volume consistency
A consistency group property that ensures consistency between volumes when an
application issues dependent write operations that span multiple volumes.
Data consistency
Data consistency is a characteristic of the data at the target site where the dependent write
order is maintained to ensure the recoverability of applications.
Data migration
Data migration is the movement of data from one physical location to another physical
location without the disruption of application I/O operations.
Discovery
The automatic detection of a network topology change, for example, new and deleted nodes
or links.
Disk tier
MDisks (logical unit numbers (LUNs)) that are presented to the Lenovo Storage V3700 V2,
V3700 V2 XP, and V5030 likely have different performance attributes because of the type of
disk or RAID array on which they are installed. MDisks can be on 15,000 revolutions per
minute (RPM) Fibre Channel (FC) or serial-attached SCSI (SAS) disk, Nearline SAS, or
Serial Advanced Technology Attachment (SATA), or even Flash Disks. Therefore, a storage
tier attribute is assigned to each MDisk, and the default is generic_hdd.
Easy Tier
Easy Tier is a volume performance function within the Lenovo Storage V series family that
provides automatic data placement of a volume’s extents in a multitiered storage pool. The
pool normally contains a mix of Flash Disks and HDDs. Easy Tier measures host I/O activity
on the volume’s extents and migrates hot extents onto the Flash Disks to ensure the
maximum performance.
Encryption key
The encryption key, also known as master access key, is created and stored on USB flash
drives or on a key server when encryption is enabled. The master access key is used to
decrypt the data encryption key.
Evaluation mode
The evaluation mode is an Easy Tier operating mode in which the host activity on all the
volume extents in a pool is “measured” only. No automatic extent migration is performed.
Event (error)
An event is an occurrence of significance to a task or system. Events can include the
completion or failure of an operation, user action, or a change in the state of a process.
Event code
An event code is a value that is used to identify an event condition to a user. This value might
map to one or more event IDs or to values that are presented on the service panel. This value
is used to report error conditions to Lenovo and to provide an entry point into the service
guide.
Event ID
An event ID is a value that is used to identify a unique error condition that was detected by the
Lenovo Storage V3700 V2, V3700 V2 XP, and V5030. An event ID is used internally in the
cluster to identify the error.
Excluded condition
The excluded condition is a status condition. It describes an MDisk that the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 have decided is no longer sufficiently reliable to be part of the clustered system.
Extent
An extent is a fixed-size unit of data that is used to manage the mapping of data between
MDisks and volumes. The size of an extent can range from 16 MB to 8 GB.
External storage
External storage refers to managed disks (MDisks) that are SCSI logical units that are
presented by storage systems that are attached to and managed by the clustered system.
Failback
Failback is the restoration of an appliance to its initial configuration after the detection and
repair of a failed network or component.
Failover
Failover is an automatic operation that switches to a redundant or standby system or node in the event of a software, hardware, or network interruption. See also Failback.
Field-replaceable unit
Field-replaceable units (FRUs) are individual parts that are replaced entirely when any one of
the unit’s components fails. They are held as spares by the Lenovo service organization.
FlashCopy
FlashCopy refers to a point-in-time copy where a virtual copy of a volume is created. The
target volume maintains the contents of the volume at the point in time when the copy was
established. Any subsequent write operations to the source volume are not reflected on the
target volume.
FlashCopy mapping
A FlashCopy mapping defines the relationship between a source volume and a target volume.
FlashCopy relationship
See FlashCopy mapping.
FlashCopy service
FlashCopy service is a copy service that duplicates the contents of a source volume on a
target volume. In the process, the original contents of the target volume are lost. See also
“Point-in-time copy” on page 789.
Flash drive
A data storage device that uses solid-state memory to store persistent data.
Flash module
A modular hardware unit containing flash memory, one or more flash controllers, and
associated electronics.
Global Mirror
Global Mirror (GM) is a method of asynchronous replication that maintains data consistency
across multiple volumes within or across multiple systems. Global Mirror is generally used
where distances between the source site and target site cause increased latency beyond
what the application can accept.
Grain
A grain is the unit of data that is represented by a single bit in a FlashCopy bitmap (64 KiB or
256 KiB) in the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030. A grain is also the unit
to extend the real size of a thin-provisioned volume (32 KiB, 64 KiB, 128 KiB, or 256 KiB).
Hop
One segment of a transmission path between adjacent nodes in a routed network.
Host ID
A host ID is a numeric identifier that is assigned to a group of host FC ports or Internet Small
Computer System Interface (iSCSI) host names for LUN mapping. For each host ID, SCSI
IDs are mapped to volumes separately. The intent is to have a one-to-one relationship
between hosts and host IDs, although this relationship cannot be policed.
Host mapping
Host mapping refers to the process of controlling which hosts have access to specific
volumes within a cluster (host mapping is equivalent to LUN masking).
HyperSwap
Pertaining to a function that provides continuous, transparent availability against storage
errors and site failures, and is based on synchronous replication.
Image mode
Image mode is an access mode that establishes a one-to-one mapping of extents in the
storage pool (existing LUN or (image mode) MDisk) with the extents in the volume.
Image volume
An image volume is a volume in which a direct block-for-block translation exists from the
managed disk (MDisk) to the volume.
I/O Group
Each pair of Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 canisters is known as an
input/output (I/O) Group. An I/O Group has a set of volumes that are associated with it that
are presented to host systems. Each Lenovo Storage V3700 V2, V3700 V2 XP, or V5030
canister is associated with exactly one I/O Group. The canisters in an I/O Group provide a failover and failback function for each other. A Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 cluster consists of up to two I/O Groups.
Internal storage
Internal storage refers to an array of managed disks (MDisks) and drives that are held in
Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 enclosures.
Interswitch link hop
In a Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 environment, the number of interswitch link (ISL) hops is counted on the shortest route (measured in ISL connections) between the pair of canisters that are farthest apart. The Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 support a maximum of three ISL hops.
Input/output group
A collection of volumes and canister relationships that present a common interface to host
systems. Each pair of canisters is known as an input/output (I/O) group.
iSCSI initiator
An initiator functions as an iSCSI client. An initiator typically serves the same purpose to a
computer as a SCSI bus adapter would, except that, instead of physically cabling SCSI
devices (like hard drives and tape changers), an iSCSI initiator sends SCSI commands over
an IP network.
iSCSI session
An iSCSI initiator and an iSCSI target talk with each other, and this conversation is called an iSCSI session.
iSCSI target
An iSCSI target is a storage resource located on an Internet Small Computer System
Interface (iSCSI) server.
Latency
The time interval between the initiation of a send operation by a source task and the
completion of the matching receive operation by the target task. More generally, latency is the
time between a task initiating data transfer and the time that transfer is recognized as
complete at the data destination.
Licensed capacity
The amount of capacity on a storage system that a user is entitled to configure.
License key
An alphanumeric code that activates a licensed function on a product.
Local fabric
The local fabric is composed of SAN components (switches, cables, and so on) that connect
the components (nodes, hosts, and switches) of the local cluster together.
Machine signature
A string of characters that identifies a system. A machine signature might be required to
obtain a license key.
Managed disk
A managed disk (MDisk) is a SCSI disk that is presented by a RAID controller and managed
by Lenovo Storage V3700 V2, V3700 V2 XP, and V5030. The MDisk is not visible to host
systems on the SAN.
Metro Mirror
Metro Mirror (MM) is a method of synchronous replication that maintains data consistency
across multiple volumes within the system. Metro Mirror is generally used when the write
latency that is caused by the distance between the source site and target site is acceptable to
application performance.
Mirrored volume
A mirrored volume is a single virtual volume that has two physical volume copies. The primary
physical copy is known within the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 as
copy 0 and the secondary copy is known within the Lenovo Storage V3700 V2, V3700 V2 XP,
and V5030 as copy 1.
Node canister
A node canister is a hardware unit that includes the node hardware, fabric and service
interfaces, and serial-attached SCSI (SAS) expansion ports. Node canisters are specifically
recognized on Lenovo Storage V series products. In SVC all these components are spread
within the whole system chassis, so we usually do not consider node canisters in SVC, but
just the node as a whole.
Node rescue
The process by which a node that has no valid software installed on its hard disk drive can
copy software from another node connected to the same Fibre Channel fabric.
NPIV
NPIV or N_Port ID Virtualization is a Fibre Channel feature whereby multiple Fibre Channel
node port (N_Port) IDs can share a single physical N_Port.
Object Storage
Object storage is a general term that refers to the entity in which Cloud Object Storage (COS) organizes, manages, and stores data as units of storage, or objects.
Oversubscription
Oversubscription refers to the ratio of the sum of the traffic on the initiator N-port connections
to the traffic on the most heavily loaded ISLs, where more than one connection is used
between these switches. Oversubscription assumes a symmetrical network, and a specific
workload that is applied equally from all initiators and sent equally to all targets. A
symmetrical network means that all the initiators are connected at the same level, and all the
controllers are connected at the same level.
Parent pool
Parent pools receive their capacity from MDisks. All MDisks in a pool are split into extents of
the same size. Volumes are created from the extents that are available in the pool. You can
add MDisks to a pool at any time either to increase the number of extents that are available
for new volume copies or to expand existing volume copies. The system automatically
balances volume extents between the MDisks to provide the best performance to the
volumes. See also “Child pool” on page 780.
Partnership
In Metro Mirror or Global Mirror operations, the relationship between two clustered systems.
In a clustered-system partnership, one system is defined as the local system and the other
system as the remote system.
Point-in-time copy
A point-in-time copy is the instantaneous copy that the FlashCopy service makes of the
source volume. See also “FlashCopy service” on page 784.
Preparing phase
Before you start the FlashCopy process, you must prepare a FlashCopy mapping. The
preparing phase flushes a volume’s data from cache in preparation for the FlashCopy
operation.
Primary volume
In a stand-alone Metro Mirror or Global Mirror relationship, the target of write operations
issued by the host application.
Public fabric
Configure one SAN per fabric so that it is dedicated for host attachment, storage system
attachment, and remote copy operations. This SAN is referred to as a public SAN. You can
configure the public SAN to allow Lenovo Storage V series family node-to-node
communication also. You can optionally use the -localportfcmask parameter of the chsystem
command to constrain the node-to-node communication to use only the private SAN.
Quorum disk
A disk that contains a reserved area that is used exclusively for system management. The
quorum disk is accessed when it is necessary to determine which half of the clustered system
continues to read and write data. Quorum disks can either be MDisks or drives.
Quorum index
The quorum index is the pointer that indicates the order that is used to resolve a tie. Nodes
attempt to lock the first quorum disk (index 0), followed by the next disk (index 1), and finally
the last disk (index 2). The tie is broken by the node that locks them first.
RACE engine
The RACE engine compresses data on volumes in real time with minimal effect on
performance. See “Compression” on page 781 or “Real-time Compression”.
Real capacity
Real capacity is the amount of storage that is allocated to a volume copy from a storage pool.
Real-time Compression
Real-time Compression is an IBM integrated software function for storage space efficiency.
The RACE engine compresses data on volumes in real time with minimal effect on
performance. See also “RACE engine”.
RAID 0
RAID 0 is a data striping technique that is used across an array and no data protection is
provided.
RAID 1
RAID 1 is a mirroring technique that is used on a storage array in which two or more identical
copies of data are maintained on separate mirrored disks.
RAID 10
RAID 10 is a combination of a RAID 0 stripe that is mirrored (RAID 1). Therefore, two identical
copies of striped data exist; no parity exists.
RAID 5
RAID 5 is an array that has a data stripe, which includes a single logical parity drive. The
parity check data is distributed across all the disks of the array.
RAID 6
RAID 6 is a RAID level that has two logical parity drives per stripe, which are calculated with
different algorithms. Therefore, this level can continue to process read and write requests to
all of the array’s virtual disks in the presence of two concurrent disk failures.
Rebuild area
Reserved capacity that is distributed across all drives in a redundant array of drives. If a drive
in the array fails, the lost array data is systematically restored into the reserved capacity,
returning redundancy to the array. The duration of the restoration process is minimized
because all drive members simultaneously participate in restoring the data. See also
“Distributed RAID or DRAID” on page 783.
Relationship
In Metro Mirror or Global Mirror, a relationship is the association between a master volume
and an auxiliary volume. These volumes also have the attributes of a primary or secondary
volume.
Reliability, availability, and serviceability
Reliability is the degree to which the hardware remains free of faults. Availability is the ability
of the system to continue operating despite predicted or experienced faults. Serviceability is
how efficiently and nondisruptively broken hardware can be fixed.
Remote fabric
The remote fabric is composed of SAN components (switches, cables, and so on) that
connect the components (nodes, hosts, and switches) of the remote cluster together.
Significant distances can exist between the components in the local cluster and those
components in the remote cluster.
Secondary volume
Pertinent to remote copy, the volume in a relationship that contains a copy of data written by
the host application to the primary volume.
Serial-attached SCSI
Serial-attached Small Computer System Interface (SAS) is a method that is used in
accessing computer peripheral devices that employs a serial (one bit at a time) means of
digital data transfer over thin cables. The method is specified in the American National
Standard Institute standard called SAS. In the business enterprise, SAS is useful for access
to mass storage devices, particularly external hard disk drives.
Snapshot
A snapshot is an image backup type that consists of a point-in-time view of a volume.
Solid-state disk
A solid-state disk (SSD) or Flash Disk is a disk that is made from solid-state memory and
therefore has no moving parts. Most SSDs use NAND-based flash memory technology. It is
defined to the Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 as a disk tier
generic_ssd.
Space efficient
See “Thin provisioning” on page 794.
Spare
An extra storage component, such as a drive or tape, that is predesignated for use as a
replacement for a failed component.
Spare goal
The optimal number of spares that are needed to protect the drives in the array from failures.
The system logs a warning event when the number of spares that protect the array drops
below this number.
Space-efficient volume
For more information about a space-efficient volume, see “Thin-provisioned volume” on
page 794.
Stand-alone relationship
In FlashCopy, Metro Mirror, and Global Mirror, relationships that do not belong to a
consistency group and that have a null consistency-group attribute.
Statesave
Binary data collection that is used for a problem determination by Lenovo service support.
Striped
Pertaining to a volume that is created from multiple managed disks (MDisks) that are in the
storage pool. Extents are allocated on the MDisks in the order specified.
Support Assistant
A function that is used to provide support personnel access to the system to complete
troubleshooting and maintenance tasks.
Symmetric virtualization
Symmetric virtualization is a virtualization technique in which the physical storage, in the form
of a Redundant Array of Independent Disks (RAID), is split into smaller chunks of storage
known as extents. These extents are then concatenated, by using various policies, to make
volumes. See also “Asymmetric virtualization” on page 779.
Synchronous replication
Synchronous replication is a type of replication in which the application write operation is
made to both the source volume and target volume before control is given back to the
application. See also “Asynchronous replication” on page 779.
Thin provisioning
Thin provisioning refers to the ability to define storage, usually a storage pool or volume, with
a “logical” capacity size that is larger than the actual physical capacity that is assigned to that
pool or volume. Therefore, a thin-provisioned volume is a volume with a virtual capacity that
differs from its real capacity.
Throttles
Throttling is a mechanism to control the amount of resources that are used when the system
is processing I/Os on supported objects. The system supports throttles on hosts, host
clusters, volumes, copy offload operations, and storage pools. If a throttle limit is defined, the
system either processes the I/O for that object, or delays the processing of the I/O to free
resources for more critical I/O operations.
T10 DIF
T10 DIF is a Data Integrity Field (DIF) extension to SCSI to enable end-to-end protection of
data from host application to physical media.
Unique identifier
A unique identifier (UID) is an identifier that is assigned to storage-system logical units when
they are created. It is used to identify the logical unit regardless of the logical unit number
(LUN), the status of the logical unit, or whether alternate paths exist to the same device.
Typically, a UID is used only once.
Virtualization
In the storage industry, virtualization is a concept in which a pool of storage is created from
the capacity of several storage systems, which can be from various vendors. The pool can be
split into volumes that are visible to the host systems that use them. See also
“Capacity licensing” on page 780.
Virtualized storage
Virtualized storage is physical storage that has virtualization techniques applied to it by a
virtualization engine.
Vital product data
Vital product data (VPD) is information that uniquely defines system, hardware,
software, and microcode elements of a processing system.
Volume
A volume is a Lenovo Storage V3700 V2, V3700 V2 XP, and V5030 logical device that
appears to host systems that are attached to the SAN as a SCSI disk. Each volume is
associated with exactly one I/O Group. A volume has a preferred node within the I/O Group.
Volume copy
A volume copy is a physical copy of the data that is stored on a volume. Mirrored volumes
have two copies. Non-mirrored volumes have one copy.
Volume protection
To prevent active volumes or host mappings from inadvertent deletion, the system supports a
global setting that prevents these objects from being deleted if the system detects recent I/O
activity on them. When you delete a volume, the system checks whether it is part of a host
mapping, FlashCopy mapping, or remote-copy relationship. If it is, the system fails to delete
the volume unless the -force parameter is specified. Using the -force parameter can lead to
the unintentional deletion of volumes that are still active. Active means that the system
detected recent I/O activity to the volume from any host.
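As a hedged sketch (the volume name and protection time are assumptions), volume protection
is enabled system-wide and then takes effect when a volume is deleted:
chsystem -vdiskprotectionenabled yes -vdiskprotectiontime 60
rmvdisk vol_old01
rmvdisk -force vol_old01
With protection enabled, the first rmvdisk fails if the volume received I/O within the
configured 60-minute window or is still part of a mapping or relationship; adding -force
overrides the check and deletes the volume.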
Write-through mode
Write-through mode is a process in which data is written to a storage device at the same time
that the data is cached.
Lenovo may not offer the products, services, or features discussed in this document in all countries. Consult
your local Lenovo representative for information on the products and services currently available in your area.
Any reference to a Lenovo product, program, or service is not intended to state or imply that only that Lenovo
product, program, or service may be used. Any functionally equivalent product, program, or service that does
not infringe any Lenovo intellectual property right may be used instead. However, it is the user's responsibility
to evaluate and verify the operation of any other product, program, or service.
Lenovo may have patents or pending patent applications covering subject matter described in this document.
The furnishing of this document does not give you any license to these patents. You can send license
inquiries, in writing, to:
LENOVO PROVIDES THIS PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some
jurisdictions do not allow disclaimer of express or implied warranties in certain transactions, therefore, this
statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. Lenovo may
make improvements and/or changes in the product(s) and/or the program(s) described in this publication at
any time without notice.
The products described in this document are not intended for use in implantation or other life support
applications where malfunction may result in injury or death to persons. The information contained in this
document does not affect or change Lenovo product specifications or warranties. Nothing in this document
shall operate as an express or implied license or indemnity under the intellectual property rights of Lenovo or
third parties. All information contained in this document was obtained in specific environments and is
presented as an illustration. The result obtained in other operating environments may vary.
Lenovo may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.
Any references in this publication to non-Lenovo Web sites are provided for convenience only and do not in
any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this Lenovo product, and use of those Web sites is at your own risk.
Any performance data contained herein was determined in a controlled environment. Therefore, the result
obtained in other operating environments may vary significantly. Some measurements may have been made
on development-level systems and there is no guarantee that these measurements will be the same on
generally available systems. Furthermore, some measurements may have been estimated through
extrapolation. Actual results may vary. Users of this document should verify the applicable data for their
specific environment.
Note: This document is based on an IBM Redbooks publication. The content was used with permission.
The following terms are trademarks of Lenovo in the United States, other countries, or both:
Lenovo(logo)® Lenovo®
Celeron, Xeon, and the Intel logo are trademarks or registered trademarks of Intel Corporation or its
subsidiaries in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Hyper-V, Internet Explorer, Microsoft, Microsoft Edge, SQL Server, Windows, Windows Server, and the
Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.
Back cover
lenovopress.com REDP-1234-00