RAID
RAID 0
RAID 0, also referred to as drive striping, provides the highest storage performance of all RAID
levels. In a RAID 0 array, data is distributed across multiple drives, which allows data requests to be
processed on the drives concurrently. Each drive processes the requests that involve the data stored
on it. This concurrent processing makes full use of the bus bandwidth and improves the overall drive
read/write performance. However, RAID 0 provides no redundancy or fault tolerance: the failure of
one drive causes the entire array to fail. RAID 0 therefore applies only to scenarios that require a
high I/O rate but low data security.
Working Principle
Figure 1-1 shows how data is distributed across three drives in a RAID 0 array for concurrent
processing.
RAID 0 allows the original sequential data to be processed on the three physical drives at the same
time.
Theoretically, the concurrent execution of these operations increases the drive read/write speed
threefold. Although the actual read/write speed is affected by factors such as the bus bandwidth and
is lower than the theoretical value, concurrent transmission is still considerably faster than serial
transmission for large volumes of data.
Figure 1-1 RAID 0 data processing
RAID 1
RAID 1 is also referred to as mirroring. In a RAID 1 array, each operating drive has a mirrored drive.
Data is written to and read from the operating and mirrored drives simultaneously. After a failed
drive is replaced, its data can be rebuilt from the mirrored drive. RAID 1 provides high reliability, but
only half of the total capacity is available. It applies to scenarios that require high fault tolerance,
such as finance.
Working Principle
Figure 1-2 shows how data is processed on a RAID 1 array consisting of two physical hard drives.
When data is written to drive 0, the system automatically copies it to drive 1.
When data is read, the system obtains it from drive 0 and drive 1 simultaneously.
Figure 1-2 RAID 1 data processing
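The mirrored write and read behavior described above can be sketched as follows. This is a minimal conceptual model, not the controller's actual implementation; the Raid1 class, the dict-based drives, and the block-level interface are assumptions made for illustration.

```python
# Minimal RAID 1 sketch: every write goes to both drives, a read can be
# served by either drive, so one drive failure loses no data.
class Raid1:
    def __init__(self, drive0, drive1):
        self.drives = [drive0, drive1]   # operating drive and its mirror

    def write(self, block_no, data):
        # Mirroring: each drive holds a full copy of the block.
        for drive in self.drives:
            drive[block_no] = data

    def read(self, block_no):
        # Fall back to the mirror if the first drive no longer has the block.
        for drive in self.drives:
            if block_no in drive:
                return drive[block_no]
        raise IOError("block lost on both drives")

# Usage: two dicts stand in for the physical drives.
array = Raid1({}, {})
array.write(0, b"data block A")
print(array.read(0))  # b'data block A'
```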
RAID 1 ADM
In a RAID 1 ADM array, each operating drive has two mirrored drives. Data is written to and read
from the operating and mirrored drives simultaneously. After a failed drive is replaced, its data can be
rebuilt from a mirrored drive. RAID 1 ADM provides higher reliability than RAID 1, but only one third
of the total capacity is available. It applies to scenarios that require high fault tolerance, such as
finance.
Working Principle
Figure 1-3 shows how a RAID 1 ADM array works.
When data is written to drive 0, the system automatically copies it to drives 1 and 2.
When data is read, the system obtains it from drives 0, 1, and 2 simultaneously.
Figure 1-3 RAID 1 ADM data processing
RAID 5
RAID 5 is a storage solution that balances storage performance, data security, and storage cost. To
ensure data reliability, RAID 5 uses distributed parity, spreading parity data across the member
drives. If a drive in a RAID 5 array fails, the data on the failed drive can be rebuilt from the data on
the other member drives in the array. RAID 5 suits both large and small amounts of data and features
high speed, large capacity, and distributed fault tolerance.
Working Principle
Figure 1-4 shows how a RAID 5 array works. As shown in the figure, PA is the parity information of
A0, A1, and A2; PB is the parity information of B0, B1, and B2; and so on.
RAID 5 does not store a full backup of the data. Instead, data and its parity information are stored on
different member drives in the array. If the data on a member drive is damaged, RAID 5 uses the
remaining data and the corresponding parity information to restore the damaged data.
RAID 5 can be regarded as a compromise between RAID 0 and RAID 1.
RAID 5 provides a lower data security level and lower storage costs than RAID 1.
RAID 5 provides a slightly lower data read/write speed than RAID 0, but its read performance is
higher than the write performance of a single drive.
Figure 1-4 RAID 5 data processing
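The parity relationship described above can be illustrated with byte-wise XOR. The following sketch, assuming a stripe of three data blocks as in Figure 1-4, shows how PA is computed and how a lost block is rebuilt from the surviving blocks; it is a conceptual model only, not the controller's actual algorithm.

```python
from functools import reduce

def xor_parity(blocks):
    # Parity block = byte-wise XOR of all blocks in the stripe.
    return bytes(reduce(lambda x, y: x ^ y, column) for column in zip(*blocks))

# Stripe with three data blocks (A0, A1, A2), as in Figure 1-4.
a0, a1, a2 = b"\x01\x02", b"\x10\x20", b"\x0f\x0f"
pa = xor_parity([a0, a1, a2])

# If the drive holding A1 fails, A1 is rebuilt from the surviving blocks
# and the parity: A1 = A0 xor A2 xor PA.
rebuilt_a1 = xor_parity([a0, a2, pa])
assert rebuilt_a1 == a1
```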
RAID 6
Compared with RAID 5, RAID 6 adds a second independent parity block. The two independent parity
systems use different algorithms to ensure high reliability, so data processing is not affected even if
two drives fail at the same time. However, RAID 6 requires more drive space for parity information
and suffers a more severe "write hole" effect than RAID 5, so it provides lower write performance
than RAID 5.
Working Principle
Figure 1-5 shows how data is stored on a RAID 6 array consisting of five drives. PA and QA are the
first and second parity blocks for data blocks A0, A1, and A2; PB and QB are the first and second
parity blocks for data blocks B0, B1, and B2; and so on.
Data blocks and parity blocks are distributed to each RAID 6 member drive. If one or two member
drives fail, the controller card restores or regenerates the lost data by using data on other normal
member drives.
Figure 1-5 RAID 6 data processing
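One common way to realize two independent parity blocks is plain XOR for P and a Reed-Solomon style weighted sum over GF(2^8) for Q. The sketch below is only an assumed illustration of that scheme (the actual algorithms are controller specific) and omits the linear algebra needed to recover from a double failure.

```python
def gf_mul(a, b, poly=0x11d):
    # Multiplication in GF(2^8), used to weight blocks for the Q parity.
    result = 0
    for _ in range(8):
        if b & 1:
            result ^= a
        high = a & 0x80
        a = (a << 1) & 0xff
        if high:
            a ^= poly & 0xff
        b >>= 1
    return result

def pq_parity(blocks):
    # P is plain XOR (as in RAID 5); Q weights each block by a distinct
    # power of the generator so two failures can still be solved for.
    # Assumes all blocks in the stripe have the same length.
    p = [0] * len(blocks[0])
    q = [0] * len(blocks[0])
    coeff = 1
    for block in blocks:
        for j, byte in enumerate(block):
            p[j] ^= byte
            q[j] ^= gf_mul(coeff, byte)
        coeff = gf_mul(coeff, 2)  # next power of the generator
    return bytes(p), bytes(q)

# P and Q parity for a stripe of three data blocks, as in Figure 1-5.
pa, qa = pq_parity([b"\x01\x02", b"\x10\x20", b"\x0f\x0f"])
```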
RAID 10
RAID 10 is a combination of RAID 1 and RAID 0: drives are mirrored (RAID 1) and then striped
(RAID 0). RAID 10 provides data security equal to that of RAID 1 and storage performance similar to
that of RAID 0.
Working Principle
In Figure 1-6, drives 0 and 1 form span 0, drives 2 and 3 form span 1, and drives in the same span
are mirrors of each other.
When I/O requests are sent to the RAID 10 array, sequential data requests are distributed to the two
spans for processing (RAID 0 mode). At the same time, in RAID 1 mode, when data is written to
drive 0, a copy is created on drive 1; when data is written to drive 2, a copy is created on drive 3.
Figure 1-6 RAID 10 data processing
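A controller might route writes in such a layout roughly as sketched below. This is an assumed, simplified model: two mirrored drives per span, round-robin striping across spans, and dicts standing in for the physical drives.

```python
# Minimal RAID 10 routing sketch: stripe across spans (RAID 0),
# mirror inside each span (RAID 1).
spans = [({}, {}), ({}, {})]          # span 0: drives 0/1, span 1: drives 2/3

def write_block(block_no, data):
    primary, mirror = spans[block_no % len(spans)]  # RAID 0: pick a span
    primary[block_no] = data                        # RAID 1: write both copies
    mirror[block_no] = data

for i, chunk in enumerate([b"A", b"B", b"C", b"D"]):
    write_block(i, chunk)
# Blocks A and C land on span 0, B and D on span 1, each mirrored.
```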
RAID 10 ADM
For MSCC SmartRAID 3152-8i whose firmware version is later than 4.72, RAID 10 ADM is renamed
RAID 10 Triple.
RAID 10 ADM is a combination of RAID 1 ADM and RAID 0: drives are mirrored (RAID 1 ADM) and
then striped (RAID 0). RAID 10 ADM provides data security equal to that of RAID 1 ADM and storage
performance similar to that of RAID 0.
Working Principle
In Figure 1-7, drives 0, 1, and 2 form span 0, drives 3, 4, and 5 form span 1, and drives in the same
span are mirrors of each other.
When the system sends I/O requests to the array, sequential data requests are distributed to the two
spans for processing (RAID 0 mode). At the same time, in RAID 1 ADM mode, when data is written
to drive 0, a copy is created on drives 1 and 2; when data is written to drive 3, a copy is created on
drives 4 and 5.
Figure 1-7 RAID 10 ADM data storage
RAID 50
RAID 50 is a combination of RAID 5 and RAID 0. RAID 0 allows data to be striped and written to
multiple drives simultaneously, and RAID 5 ensures data security by using parity data evenly
distributed across the drives.
Working Principle
Figure 1-9 shows how a RAID 50 array works. As shown in the figure, PA is the parity information of
A0, A1, and A2; PB is the parity information of B0, B1, and B2; and so on.
As a combination of RAID 5 and RAID 0, a RAID 50 array consists of multiple RAID 5 spans across
which data is striped and accessed in RAID 0 mode. With the redundancy provided by RAID 5,
RAID 50 keeps running and quickly restores data if a member drive in a span fails, and member
drives can be replaced without affecting services. RAID 50 can tolerate one failed drive in each of
several spans at the same time, which RAID 5 alone cannot. Moreover, because data is distributed
across multiple spans, RAID 50 provides high read/write performance.
Figure 1-9 RAID 50 data processing
RAID 60
RAID 60 is a combination of RAID 6 and RAID 0. RAID 0 allows data to be striped and written to
multiple drives simultaneously. RAID 6 ensures data security by using two parity blocks distributed
evenly on drives.
Working Principle
In Figure 1-10, PA and QA are respectively the first and second parity information of A0, A1, and A2;
PB and QB are respectively the first and second parity information of B0, B1, and B2; and so on.
As a combination of RAID 6 and RAID 0, a RAID 60 array consists of multiple RAID 6 spans across
which data is striped and accessed in RAID 0 mode. With the redundancy provided by RAID 6,
RAID 60 keeps running and quickly restores data even if two member drives in a span fail, and
member drives can be replaced without affecting services.
Figure 1-10 RAID 60 data processing
I/O Performance
A RAID array can be used as an independent storage unit or multiple virtual units. The I/O read/write
speed for a RAID array is higher than that for a regular drive because a RAID array allows
concurrent access to multiple drives.
RAID 0 provides excellent performance. In RAID 0, data is divided into smaller data
blocks and written into different drives. RAID 0 improves I/O performance because it
allows concurrent read and write of multiple drives.
RAID 1 consists of two drives working in parallel. Because data must be written to both drives,
write operations take more time and resources, which affects performance.
RAID 5 provides high data throughput. Each member drive stores both common data and parity
data, so each member drive can be read or written independently. In addition, RAID 5 adopts a
comprehensive cache algorithm. These features make RAID 5 ideal for many scenarios.
RAID 6 is ideal for scenarios that demand high reliability, response rate, and transmission
rate. It provides high data throughput, redundancy, and I/O performance. However, each write
operation requires two sets of parity data to be updated, which lowers write performance.
RAID 10 provides a high data transmission rate because of the RAID 0 span. In addition,
RAID 10 provides excellent data storage capabilities. As the number of spans increases,
the I/O performance improves.
RAID 50 delivers the best performance in scenarios that require high reliability, response
rate, and transmission rate. As the number of spans increases, the I/O performance
improves.
RAID 60 applies to scenarios similar to those of RAID 50. However, RAID 60 is not suited to
write-intensive tasks because each write requires two sets of parity data to be updated, which
lowers write performance.
Storage Capacity
Storage capacity is an important factor for RAID level selection.
RAID 0
For a given group of drives, RAID 0 provides the maximum storage capacity.
Available storage capacity = Minimum member drive capacity x Number of member
drives
RAID 1
When data is written to a drive, it must also be written to the other drive in the mirrored
pair, so half of the total capacity is consumed by the copy.
Available storage capacity = Minimum capacity of all member drives
RAID 5
Parity data is stored separately from common data and occupies the equivalent capacity of
one member drive.
Available storage capacity = Minimum capacity of a member drive x (Number of member
drives – 1)
RAID 6
The two sets of parity data are stored separately from common data and occupy the
equivalent capacity of two member drives.
Available storage capacity = Minimum capacity of a member drive x (Number of member
drives – 2)
RAID 10
Available storage capacity = Sum of the available capacities of all RAID 1 spans
RAID 50
Available storage capacity = Sum of the available capacities of all RAID 5 spans
RAID 60
Available storage capacity = Sum of the available capacities of all RAID 6 spans
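The capacity rules above can be condensed into a small helper. This is a sketch under my own assumptions: the function name is illustrative, and drives are assumed to be split evenly across spans for RAID 10, 50, and 60.

```python
def usable_capacity(raid_level, drive_sizes, spans=1):
    """Usable capacity in the same unit as drive_sizes (e.g. GB)."""
    n, smallest = len(drive_sizes), min(drive_sizes)
    if raid_level == 0:
        return smallest * n
    if raid_level == 1:
        return smallest                      # only one copy is usable
    if raid_level == 5:
        return smallest * (n - 1)            # one drive's worth of parity
    if raid_level == 6:
        return smallest * (n - 2)            # two drives' worth of parity
    if raid_level in (10, 50, 60):
        per_span = n // spans                # assume even split across spans
        inner = {10: 1, 50: 5, 60: 6}[raid_level]
        return usable_capacity(inner, [smallest] * per_span) * spans
    raise ValueError(f"unsupported RAID level: {raid_level}")

print(usable_capacity(5, [4000, 4000, 4000, 4000]))   # 12000
print(usable_capacity(10, [4000] * 4, spans=2))        # 8000
```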
Common Functions
Consistency Check
For RAID arrays with redundancy (RAID 1, 5, 6, 10, 50, and 60), the RAID controller card can check
the data consistency of the member drives: it reads the drive data, computes the expected redundant
data, and compares the two. If any inconsistency is found, the RAID controller card automatically
attempts to recover the data and records error information.
For RAID arrays without redundancy (RAID 0), consistency checks are not supported.
Hot Spares
The RAID controller card supports two types of hot spares: hot spare drives and emergency spares.
Emergency Spare
If no hot spare drive is specified for a RAID array with redundancy, emergency spare allows an idle
drive managed by the RAID controller card to automatically replace a failed member drive and
rebuild data to avoid data loss.
The idle drive used as an emergency spare must be of the same medium type as the failed member
drive and have a capacity no less than that of the failed member drive.
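The two eligibility conditions above can be expressed as a simple check. The field names and dict representation used here are illustrative assumptions, not the controller's actual data model.

```python
# Minimal sketch of the emergency-spare eligibility rule: same medium type,
# and capacity at least equal to that of the failed member drive.
def can_act_as_emergency_spare(idle_drive, failed_drive):
    return (idle_drive["medium"] == failed_drive["medium"]
            and idle_drive["capacity_gb"] >= failed_drive["capacity_gb"])

failed = {"medium": "SSD", "capacity_gb": 960}
idle = {"medium": "SSD", "capacity_gb": 1920}
print(can_act_as_emergency_spare(idle, failed))  # True
```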
RAID Rebuild
If a member drive of a RAID array becomes faulty, you can use the data rebuild function of the RAID
controller card to rebuild its data onto a new drive. The data rebuild function applies only to
redundant arrays (RAID 1, 5, 6, 10, 50, and 60).
The RAID controller card provides the function of automatically rebuilding data on a hot spare drive
for a faulty member drive. If the RAID array is configured with an available hot spare drive, and one
of its member drives becomes faulty, the hot spare drive automatically replaces the faulty drive and
rebuilds data. If the RAID array has no available hot spare drive, data can be rebuilt only after the
faulty drive is replaced with a new drive. When the hot spare drive starts rebuilding data, the faulty
member drive enters the removable state. If the system is powered off during the data rebuild, the
RAID controller card continues the data rebuild task after the system restart.
The data rebuild rate indicates the proportion of CPU resources that a data rebuild task may occupy,
and can be set to any value from 0% to 100%. The value 0% indicates that the RAID controller card
starts the data rebuild task only when no other task is running in the system. The value 100%
indicates that the data rebuild task can occupy all CPU resources. You can customize the data
rebuild rate; set it to an appropriate value based on site requirements.
Drive States
Table 1-2 describes the states of physical drives managed by the RAID controller card.
Table 1-2 Physical drive states
State Description
Available (AVL) The physical drive may not be ready, and is not suitable for use in a
logical drive or hot spare pool.
Degraded (DGD) The physical drive is a part of the logical drive and is in the degraded
state.
Inactive The drive is a member of an inactive logical drive and can be used only
after the logical drive is activated.
Missing (MIS) When a drive in the Online state is removed, the drive enters
the Missing state.
Offline The drive is a member drive of a virtual drive. It cannot work properly
and is offline.
Online (ONL) The drive is a member drive of a virtual drive. It is online and is working
properly.
Optimal The physical drive is in the normal state and is a part of the logical drive.
Out of Sync (OSY) As a part of the IR logical drive, the physical drive is not synchronized
with other physical drives in the IR logical drive.
Predictive Failure The drive is about to fail. Back up the data on the drive and replace the
drive.
Ready (RDY) This state applies to the RAID/Mixed mode of the RAID controller card.
The drives can be used to configure a RAID array. In RAID mode, the
drives in the Ready state are not reported to the operating system (OS). In
Mixed mode, the drives in the Ready state are reported to the OS.
Rebuild/Rebuilding Data is being reconstructed on the drive to ensure data redundancy and
(RBLD) integrity of the virtual drive. In this case, the performance of the virtual
drive is affected to a certain extent.
Reconstructing Data is being reconstructed on the drive to ensure data redundancy and
integrity of the virtual drive. In this case, the performance of the virtual
drive is affected to a certain extent.
Spare The drive is a hot spare drive and is in the normal state.
Shield State This is a temporary state when a physical drive is being diagnosed.
Unconfigured Good The drive is in a normal state but is not a member drive of a virtual drive
(ugood/ucfggood) or a hot spare drive.
Unconfigured Bad (ubad) If an unrecoverable error occurs on a drive in the Unconfigured Good or
uninitialized state, the drive enters the Unconfigured Bad state.
Table 1-3 describes the states of virtual drives created under the RAID controller card.
Table 1-3 Virtual drive states
State Description
Degrade/Degraded (DGD) The virtual drive is available but abnormal, and certain member drives are
faulty or offline. User data is not protected.
Inactive The virtual drive is inactive and can be used only after being activated.
Inc RAID The virtual drive does not support the SMART or SATA expansion command.
Interim Recovery The RAID array is degraded because it contains faulty drives. As a result, the
running performance of the RAID array deteriorates and data may be lost. To
solve this problem, check whether the drive is correctly connected to the device
or replace the faulty drive.
Max Dsks The number of drives in the RAID array has reached the upper limit. The drive
cannot be added to the RAID array.
Not Syncd The data on the physical drive is not synchronized with the data on other
physical drives in the logical drive.
Normal The virtual drive is in the normal state, and all member drives are online.
Offline If the number of faulty or offline physical drives in a RAID array exceeds the
maximum number of faulty drives supported by the RAID array, the RAID array
will be displayed as offline.
Optimal The virtual drive is in a sound state, and all member drives are online.
Okay (OKY) The virtual drive is active and the physical drives are running properly. If the
current RAID level provides data protection, user data is protected.
Partial Degraded If the number of faulty or offline physical drives in a RAID array does not
exceed the maximum number of faulty drives allowed by the RAID array, the
RAID array will be displayed as partially degraded.
Primary The drive is the primary drive in RAID 1 and is in normal state.
Secondary The drive is the secondary drive in RAID 1 and is in normal state.
Too Small The drive capacity is insufficient, so the drive cannot be used as a hot spare for the
current RAID array.
Wrg Intfc The drive interface is different from that in the current RAID array.
Wrg Type The drive cannot be used as a member drive of the RAID array. The drive may
be incompatible or faulty.
Drive Striping
Multiple processes accessing a drive at the same time may cause drive conflicts. Most drives are
specified with thresholds for the access count (I/O operations per second) and data transmission
rate (data volume transmitted per second). If the thresholds are reached, new access requests will
be suspended.
Striping technology evenly distributes I/O loads across multiple physical drives. It divides continuous
data into multiple blocks and saves them to different drives. This allows multiple processes to access
these data blocks concurrently without causing any drive conflicts. Striping also optimizes concurrent
processing performance in sequential access to data.
Drive striping divides the space of each drive into multiple strips of a specified strip size.
When data is written to the array, it is divided into data blocks based on the strip size.
For example, assume a RAID 0 array consists of four member drives. The first data block is written
to the first member drive, the second data block to the second member drive, and so on, as shown
in Figure 1-11. In this way, multiple drives are written concurrently, significantly improving system
performance. However, drive striping provides no data redundancy.
Figure 1-11 Drive striping example
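The mapping from a logical offset to a physical drive and strip can be sketched as below, assuming the four-drive RAID 0 example from Figure 1-11 and a configurable strip size; the 64 KiB value and the function name are assumptions for illustration only.

```python
STRIP_SIZE = 64 * 1024          # assumed strip size in bytes
NUM_DRIVES = 4                  # four-drive RAID 0, as in Figure 1-11

def locate(logical_offset):
    """Map a logical byte offset to (drive index, strip on that drive, offset in strip)."""
    strip_no = logical_offset // STRIP_SIZE
    drive = strip_no % NUM_DRIVES             # round-robin across member drives
    strip_on_drive = strip_no // NUM_DRIVES   # which strip on that drive
    return drive, strip_on_drive, logical_offset % STRIP_SIZE

# Consecutive strips land on consecutive drives, so large sequential
# requests are served by several drives at once.
print(locate(0))               # (0, 0, 0)
print(locate(64 * 1024))       # (1, 0, 0)
print(locate(5 * 64 * 1024))   # (1, 1, 0)
```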
Drive Mirroring
Drive mirroring, used by RAID 1 and RAID 10, writes the same data to two drives simultaneously to
achieve 100% data redundancy. Because the same data is kept on both drives, no data is lost and
the data flow is not interrupted if one drive becomes faulty.
However, drive mirroring is expensive because it requires a backup drive for each drive, as shown
in Figure 1-12.
Figure 1-12 Drive mirroring example
Foreign Configuration
A foreign configuration is a RAID configuration that does not belong to the current RAID controller
card; it is displayed as Foreign Configuration on the configuration screen.
Generally, foreign configuration is involved in the following scenarios:
If RAID configuration exists on a physical drive newly installed on a server, the RAID
controller card identifies the RAID configuration as foreign configuration.
After the RAID controller card of a server is replaced, the new RAID controller card
identifies the existing RAID configuration as foreign configuration.
After the hot swap of a member drive in a RAID array, the member drive is identified as
containing foreign configuration.
You can process detected foreign configuration as required. For example, if the RAID configuration
existing on the newly installed drive does not meet requirements, you can delete it. If you want to
use the RAID configuration of a RAID controller card that has been replaced, you can import the
RAID configuration and make it take effect on the new RAID controller card.
Drive Passthrough
Drive passthrough (JBOD), also called instruction-based transparent transmission, is a data
transmission method in which data passes through the transmission devices without being
processed; the devices are responsible only for ensuring transmission quality.
With this function, the RAID controller card allows user commands to be directly transmitted to
connected drives, facilitating drive access and control by upper-layer services or management
software.
For example, with passthrough enabled you can install an OS directly on the drives attached to a
RAID controller card. If the RAID controller card does not support passthrough, the OS can be
installed only on the virtual drives configured on the RAID controller card.
Name Meaning
Table 2-1 describes the meanings of suffixes in RAID controller card names.
Table 2-1 Meanings of suffixes in RAID controller card names