
RAID 0

RAID 0, also referred to as drive striping, provides the highest storage performance of all RAID levels. In a RAID 0 array, data is stored on multiple drives, which allows data requests to be processed concurrently: each drive handles the requests for the data stored on it. This concurrent processing makes the best use of the bus bandwidth and improves overall read/write performance. However, RAID 0 provides no redundancy or fault tolerance; the failure of one drive causes the entire array to fail. RAID 0 applies only to scenarios that require a high I/O rate but do not require data security.

Working Principle
Figure 1-1 shows how data is distributed across three drives in a RAID 0 array for concurrent processing.
RAID 0 allows the original sequential data to be processed on the three physical drives at the same time. Theoretically, this concurrency triples the read/write speed. Although the actual speed is lower than the theoretical value because of factors such as bus bandwidth, concurrent transmission is still faster than serial transmission for large volumes of data.
Figure 1-1 RAID 0 data processing
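The striping described above can be sketched in a few lines of code. The following Python snippet is a simplified, illustrative model only (the drive count, block size, and function name are assumptions rather than the behavior of any particular controller card): it splits sequential data into blocks and distributes them round-robin across the member drives, which is what lets the drives serve a large request concurrently.

```python
# Simplified model of RAID 0 striping: sequential data is split into
# fixed-size blocks and distributed round-robin across the member drives.
def stripe_raid0(data: bytes, num_drives: int = 3, block_size: int = 4) -> list:
    """Return a per-drive list of the blocks each member drive would hold."""
    drives = [[] for _ in range(num_drives)]
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    for index, block in enumerate(blocks):
        drives[index % num_drives].append(block)  # D0, D1, D2, D0, ...
    return drives

layout = stripe_raid0(b"ABCDEFGHIJKL", num_drives=3, block_size=4)
# Drive 0 holds b"ABCD", drive 1 holds b"EFGH", drive 2 holds b"IJKL",
# so a read of the whole range can be served by all three drives at once.
for i, blocks in enumerate(layout):
    print(f"drive {i}: {blocks}")
```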

RAID 1
RAID 1 is also referred to as mirroring. In a RAID 1 array, each operating drive has a mirrored drive.
Data is written to and read from both operating and mirrored drives simultaneously. After a failed
drive is replaced, data can be rebuilt from the mirrored drive. RAID 1 provides high reliability, but
only half of the total capacity is available. It applies to scenarios where high fault tolerance is
required, such as finance.

Working Principle
Figure 1-2 shows how data is processed on a RAID 1 array consisting of two physical hard drives.
 When writing data into drive 0, the system also automatically copies the data to drive 1.
 When reading data, the system obtains data from drive 0 and drive 1 simultaneously.
Figure 1-2 RAID 1 data processing

RAID 1 ADM
In a RAID 1 ADM array, each operating drive has two mirrored drives. Data is written to and read from the operating and mirrored drives simultaneously. After a failed drive is replaced, data can be rebuilt from the mirrored drives. RAID 1 ADM provides higher reliability than RAID 1, but only one third of the total capacity is available. It applies to scenarios where high fault tolerance is required, such as finance.

Working Principle
Figure 1-3 shows how a RAID 1 ADM works.
 When writing data into drive 0, the system also automatically copies the data to drives 1
and 2.
 When reading data, the system obtains data from drives 0, 1, and 2 simultaneously.
Figure 1-3 RAID 1 ADM data processing

RAID 5
RAID 5 is a storage solution that balances storage performance, data security, and storage cost. To ensure data reliability, RAID 5 uses distributed parity and spreads the parity data across the member drives. If a drive in a RAID 5 array fails, the data on the failed drive can be rebuilt from the data on the other member drives in the array. RAID 5 suits both large and small data volumes and features high speed, large capacity, and distributed fault tolerance.

Working Principle
Figure 1-4 shows how a RAID 5 array works. As shown in the figure, PA is the parity information of
A0, A1, and A2; PB is the parity information of B0, B1, and B2; and so on.
RAID 5 does not keep a full backup of the stored data. Instead, data and its parity information are stored on different member drives in the array. If the data on a member drive is damaged, it can be restored from the remaining data and the corresponding parity information.
RAID 5 can be considered a compromise between RAID 0 and RAID 1.
 RAID 5 provides a lower data security level but a lower storage cost than RAID 1.
 RAID 5 provides a slightly lower read/write speed than RAID 0, but its read performance is still higher than that of a single drive.
Figure 1-4 RAID 5 data processing
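As a minimal sketch of how distributed parity enables rebuild, the following snippet computes an XOR parity block for one stripe and then restores a lost data block from the surviving blocks and the parity. The block contents and sizes are made-up illustrative values; a real controller card also rotates the parity position across member drives and works on full strips rather than two-byte blocks.

```python
# Minimal XOR-parity sketch for one RAID 5 stripe (illustrative values only).
def xor_blocks(*blocks: bytes) -> bytes:
    """XOR equal-length byte blocks together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

# Data blocks A0, A1, A2 on three member drives; parity block PA on a fourth.
a0, a1, a2 = b"\x11\x22", b"\x33\x44", b"\x55\x66"
pa = xor_blocks(a0, a1, a2)

# If the drive holding A1 fails, A1 is rebuilt from the survivors and PA.
rebuilt_a1 = xor_blocks(a0, a2, pa)
assert rebuilt_a1 == a1
```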

RAID 6
Compared with RAID 5, RAID 6 adds a second, independent parity block. In RAID 6, two independent parity systems use different algorithms to ensure high reliability, and data access is not affected even if two drives fail at the same time. However, RAID 6 requires more drive space for storing parity information and has a more severe "write hole" effect than RAID 5. Therefore, RAID 6 provides lower write performance than RAID 5.

Working Principle
Figure 1-5 shows how data is stored on a RAID 6 array consisting of five drives. PA and QA are the
first and second parity blocks for data blocks A0, A1, and A2; PB and QB are the first and second
parity blocks for data blocks B0, B1, and B2; and so on.
Data blocks and parity blocks are distributed to each RAID 6 member drive. If one or two member
drives fail, the controller card restores or regenerates the lost data by using data on other normal
member drives.
Figure 1-5 RAID 6 data processing
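The second parity block is what distinguishes RAID 6. The sketch below uses one common textbook construction, assuming that P is a plain XOR and Q is a weighted sum over GF(2^8) with the 0x11D field polynomial and generator 2; actual controller firmware may use different algorithms, so treat this only as an illustration of why two independent equations let two simultaneous drive failures be tolerated.

```python
# Illustrative RAID 6 dual parity: P is plain XOR, Q is a weighted sum over
# GF(2^8). The field polynomial (0x11D) and generator (2) are textbook
# choices and are assumptions, not any specific controller's algorithm.
def gf_mul(a: int, b: int) -> int:
    """Multiply two bytes in GF(2^8) with reduction polynomial 0x11D."""
    result = 0
    for _ in range(8):
        if b & 1:
            result ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1D
        b >>= 1
    return result

def pq_parity(data_bytes):
    """Return the P and Q parity bytes for one byte position of a stripe."""
    p, q, coeff = 0, 0, 1
    for d in data_bytes:
        p ^= d
        q ^= gf_mul(coeff, d)
        coeff = gf_mul(coeff, 2)   # next power of the generator g = 2
    return p, q

p, q = pq_parity([0x11, 0x22, 0x33])
# P and Q give two independent equations, so any two lost data bytes in the
# stripe can be solved for - which is why RAID 6 survives two drive failures.
```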

RAID 10
RAID 10 is a combination of RAID 1 and RAID 0. It allows drives to be mirrored (RAID 1) and then striped (RAID 0). RAID 10 is a solution that provides data security (same as RAID 1) as well as storage performance similar to RAID 0.

Working Principle
In Figure 1-6, drives 0 and 1 form span 0, drives 2 and 3 form span 1, and drives in the same span
are mirrors of each other.
If I/O requests are sent to drives in RAID 10, the sequential data requests are distributed to the two
spans for processing (RAID 0 mode). At the same time, in RAID 1 mode, when data is written to
drive 0, a copy is created on drive 1; when data is written to drive 2, a copy is created on drive 3.
Figure 1-6 RAID 10 data processing
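A compact way to picture the two layers is the following sketch, which stripes sequential blocks across the two spans (RAID 0) and duplicates each block on the two drives inside its span (RAID 1). The span layout follows Figure 1-6; the data values and function name are illustrative assumptions.

```python
# Simplified RAID 10 placement following Figure 1-6: two spans, each a pair
# of mirrored drives. Block contents and names are illustrative only.
SPANS = [(0, 1), (2, 3)]   # (drive, mirror) pairs: span 0 and span 1

def place_raid10(blocks):
    """Map each data block to the set of drives that receive a copy of it."""
    placement = {}
    for index, block in enumerate(blocks):
        drive, mirror = SPANS[index % len(SPANS)]      # stripe across spans
        placement[index] = {drive: block, mirror: block}  # mirror within span
    return placement

layout = place_raid10([b"A", b"B", b"C", b"D"])
# Block 0 -> drives 0 and 1, block 1 -> drives 2 and 3, block 2 -> 0 and 1, ...
print(layout)
```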

RAID 10 ADM

For MSCC SmartRAID 3152-8i whose firmware version is later than 4.72, RAID 10 ADM is renamed
RAID 10 Triple.
RAID 10 ADM is a combination of RAID 1 ADM and RAID 0. It allows drives to be mirrored (RAID 1 ADM) and then striped (RAID 0). RAID 10 ADM is a solution that provides data security (same as RAID 1 ADM) as well as storage performance similar to RAID 0.

Working Principle
In Figure 1-7, drives 0, 1, and 2 form span 0, drives 3, 4, and 5 form span 1, and drives in the same
span are mirrors of each other.
If the system sends I/O requests to drives, the sequential data requests are distributed to the two
spans for processing (RAID 0 mode). At the same time, in RAID 1 ADM mode, when data is written
into drive 0, a copy is created on drives 1 and 2; when data is written into drive 3, a copy is created
on drives 4 and 5.
Figure 1-7 RAID 10 ADM data storage

RAID 50
RAID 50 is a combination of RAID 5 and RAID 0. RAID 0 allows data to be striped and written to multiple drives simultaneously, and RAID 5 ensures data security by using parity data evenly distributed across the drives.

Working Principle
Figure 1-9 shows how a RAID 50 array works. As shown in the figure, PA is the parity information of
A0, A1, and A2; PB is the parity information of B0, B1, and B2; and so on.
As a combination of RAID 5 and RAID 0, a RAID 50 array consists of multiple RAID 5 spans, across which data is stored and accessed in RAID 0 mode. With the redundancy provided by RAID 5, RAID 50 ensures continued operation and rapid data restoration if a member drive in a span fails, and the replacement of member drives does not affect services. RAID 50 tolerates one failed drive in each span simultaneously, which a single RAID 5 array cannot do. Moreover, because data is distributed across multiple spans, RAID 50 provides high read/write performance.
Figure 1-9 RAID 50 data processing

RAID 60
RAID 60 is a combination of RAID 6 and RAID 0. RAID 0 allows data to be striped and written to
multiple drives simultaneously. RAID 6 ensures data security by using two parity blocks distributed
evenly on drives.

Working Principle
In Figure 1-10, PA and QA are respectively the first and second parity information of A0, A1, and A2;
PB and QB are respectively the first and second parity information of B0, B1, and B2; and so on.
As a combination of RAID 6 and RAID 0, a RAID 60 array consists of multiple RAID 6 spans, across which data is stored and accessed in RAID 0 mode. With the redundancy provided by RAID 6, RAID 60 ensures continued operation and rapid data restoration even if two member drives in a span fail. In addition, the replacement of member drives does not affect services.
Figure 1-10 RAID 60 data processing

Fault Tolerance Capabilities


 RAID 0
RAID 0 does not provide fault tolerance. Data is written to multiple member drives through striping, so the failure of any member drive causes data loss. RAID 0 is ideal for scenarios that demand high performance and do not require fault tolerance.
 RAID 1
RAID 1 provides 100% data redundancy. If a member drive fails, the data on its partner drive in the same RAID array can be used to keep the system running and to rebuild the failed drive. Because the data on a member drive is also completely written to the other drive, the failure of one member drive does not cause data loss. Member drives in a pair always contain the same data. RAID 1 is ideal for scenarios that demand maximum fault tolerance but have modest capacity requirements.
 RAID 5
RAID 5 combines distributed parity check and drive striping. With parity check, the RAID
array provides redundancy for one drive without the need to back up all data on the drive.
If a member drive fails, the RAID controller card uses parity check data to rebuild all lost
data. RAID 5 provides sufficient fault tolerance capability using a small amount of system
overhead.
 RAID 6
RAID 6 combines distributed parity check and drive striping. With parity check, the RAID
array provides redundancy for two drives without the need to back up data on all drives. If
a member drive fails, the RAID controller card uses parity check data to rebuild all lost
data. RAID 6 provides sufficient fault tolerance capability using a small amount of system
overhead.
 RAID 10
RAID 10 uses multiple RAID 1 arrays to provide comprehensive data redundancy
capabilities. RAID 10 is applicable to all scenarios that require 100% redundancy based
on mirror drive groups.
 RAID 50
RAID 50 provides data redundancy based on the distributed parity of multiple RAID 5 arrays. Each RAID 5 span tolerates one failed member drive while data integrity is maintained.
 RAID 60
RAID 60 provides data redundancy based on the distributed parity of multiple RAID 6 arrays. Each RAID 6 span tolerates two failed member drives while data integrity is maintained.

I/O Performance
A RAID array can be used as an independent storage unit or multiple virtual units. The I/O read/write
speed for a RAID array is higher than that for a regular drive because a RAID array allows
concurrent access to multiple drives.
 RAID 0 provides excellent performance. In RAID 0, data is divided into smaller data
blocks and written into different drives. RAID 0 improves I/O performance because it
allows concurrent read and write of multiple drives.
 RAID 1 consists of two drives working in parallel. Every write must go to both drives, which requires more time and resources and reduces write performance.
 RAID 5 provides higher data throughput. Each member drive stores both common data
and parity data concurrently. Therefore, each member drive can be read or written
separately. In addition, RAID 5 adopts a comprehensive cache algorithm. All these
features make RAID 5 ideal for many scenarios.
 RAID 6 is ideal for scenarios that demand high reliability, response rate, and transmission
rate. It provides high data throughput, redundancy, and I/O performance. However, RAID
6 requires two sets of parity data to be written into each member drive, leading to low
performance during write operations.
 RAID 10 provides a high data transmission rate because of the RAID 0 span. In addition,
RAID 10 provides excellent data storage capabilities. As the number of spans increases,
the I/O performance improves.
 RAID 50 delivers the best performance in scenarios that require high reliability, response
rate, and transmission rate. As the number of spans increases, the I/O performance
improves.
 RAID 60 applies to scenarios similar to those of RAID 50. However, RAID 60 is not suited
for large-volume write tasks because it requires two sets of parity data to be written into
each member drive, which affects performance during write operations.
Storage Capacity
Storage capacity is an important factor for RAID level selection.
 RAID 0
For a given group of drives, RAID 0 provides the maximum storage capacity.
Available storage capacity = Minimum member drive capacity x Number of member
drives
 RAID 1
When data is written to a drive, the data must also be written to the other drive in the same pair, which consumes storage space.
Available storage capacity = Minimum member drive capacity
 RAID 5
Parity data is distributed among the member drives and occupies the equivalent capacity of one member drive.
Available storage capacity = Minimum capacity of a member drive x (Number of member
drives – 1)
 RAID 6
Two sets of parity data are distributed among the member drives and occupy the equivalent capacity of two member drives.
Available storage capacity = Minimum capacity of a member drive x (Number of member
drives – 2)
 RAID 10
Available storage capacity = Sum of the available capacities of all spans
 RAID 50
Available storage capacity = Sum of the available capacities of all spans
 RAID 60
Available storage capacity = Sum of the available capacities of all spans
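The formulas above can be checked with a short calculation. The sketch below computes the usable capacity for the basic levels and treats the span-based levels (RAID 10, 50, and 60) as the sum of their spans' usable capacities; the drive sizes are made-up example values.

```python
# Usable-capacity check for the formulas above (drive sizes in GB are
# made-up example values, not recommendations).
def usable_capacity(level: int, drive_sizes_gb: list) -> int:
    """Return the usable capacity in GB of a single RAID 0/1/5/6 array."""
    n, smallest = len(drive_sizes_gb), min(drive_sizes_gb)
    if level == 0:
        return smallest * n           # striping only, no redundancy
    if level == 1:
        return smallest               # one drive's worth of mirrored data
    if level == 5:
        return smallest * (n - 1)     # one drive's worth of parity
    if level == 6:
        return smallest * (n - 2)     # two drives' worth of parity
    raise ValueError("for span-based levels, sum the usable capacity of each span")

print(usable_capacity(5, [4000, 4000, 4000, 4000]))   # 12000 GB
# A RAID 50 array built from two such RAID 5 spans: 2 x 12000 = 24000 GB usable.
```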

Common Functions

Consistency Check
For RAID arrays with redundancy (RAID 1, 5, 6, 10, 50, and 60), the RAID controller card can check data consistency across the drives in the array: it reads and computes the drive data and compares it with the corresponding redundant (mirror or parity) data. If any inconsistency is found, the RAID controller card automatically attempts to recover the data and saves the error information.
For RAID arrays without redundancy (RAID 0), consistency checks are not supported.

Hot Spares
The RAID controller card supports two types of hot spares: hot spare drives and emergency spare.

Hot Spare Drives


A hot spare drive is an independent drive in the drive system. When a member drive of a RAID array becomes faulty, the hot spare drive automatically replaces it as the new member drive, and the data of the faulty drive is reconstructed on the new member drive.
On the RAID controller card management screens or command-line interface (CLI), you can specify an idle drive as the hot spare drive of a RAID array. The idle drive must have at least the capacity of the member drive and must be of the same medium type and interface type as the member drive.
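The eligibility rule in the preceding paragraph can be expressed as a small check. The sketch below is illustrative only; the Drive type and its field names are assumptions, not part of any actual management interface.

```python
# Illustrative hot spare eligibility check based on the rule above: same
# medium type, same interface type, and at least the member drive's capacity.
from dataclasses import dataclass

@dataclass
class Drive:
    capacity_gb: int
    medium: str        # e.g. "HDD" or "SSD"
    interface: str     # e.g. "SAS" or "SATA"

def can_serve_as_hot_spare(candidate: Drive, member: Drive) -> bool:
    """Return True if the idle drive may be used as a hot spare for member."""
    return (candidate.medium == member.medium
            and candidate.interface == member.interface
            and candidate.capacity_gb >= member.capacity_gb)

print(can_serve_as_hot_spare(Drive(8000, "HDD", "SAS"), Drive(4000, "HDD", "SAS")))  # True
```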
The RAID controller card supports two types of hot spare drives:
 Global hot spare drive: shared by all RAID arrays of a RAID controller card. A RAID controller card can be configured with one or more global hot spare drives, and a global hot spare drive automatically replaces a faulty drive in any of its RAID arrays.
 Dedicated hot spare drive: replaces a failed drive only in a specified RAID array. A RAID array can be configured with one or more dedicated hot spare drives, and a dedicated hot spare drive automatically takes over the services of a faulty member drive in that array.
Hot spare drives feature the following:
 They are applicable only to RAID arrays with redundancy, for example, RAID 1, 5, 6, 10,
50, and 60.
 They can replace only failed drives that are managed by the same RAID controller card
as themselves.

Emergency Spare
If no hot spare drive is specified for a RAID array with redundancy, emergency spare allows an idle
drive managed by the RAID controller card to automatically replace a failed member drive and
rebuild data to avoid data loss.
The idle drive used for emergency spare must be of the same medium type as that of the failed
member drive and have at least the capacity of the failed member drive.

RAID Rebuild
If a member drive of a RAID array becomes faulty, you can use the data rebuild function of the RAID controller card to rebuild the faulty drive's data on a new drive. The data rebuild function applies only to RAID levels with redundancy: RAID 1, 5, 6, 10, 50, and 60.
The RAID controller card provides the function of automatically rebuilding data on a hot spare drive
for a faulty member drive. If the RAID array is configured with an available hot spare drive, and one
of its member drives becomes faulty, the hot spare drive automatically replaces the faulty drive and
rebuilds data. If the RAID array has no available hot spare drive, data can be rebuilt only after the
faulty drive is replaced with a new drive. When the hot spare drive starts rebuilding data, the faulty
member drive enters the removable state. If the system is powered off during the data rebuild, the
RAID controller card continues the data rebuild task after the system restart.
The data rebuild rate indicates the proportion of CPU resources occupied by a data rebuild task to
the overall CPU resources. The data rebuild rate can be set to 0%–100%. The value 0% indicates
that the RAID controller card starts the data rebuild task only when no other task is running in the
system. The value 100% indicates that the data rebuild task occupies all CPU resources. You can
customize the data rebuild rate. It is recommended that you set it to an appropriate value based on
the site requirements.

Drive States
Table 1-2 describes the states of physical drives managed by the RAID controller card.
Table 1-2 Physical drive states

State Description

Available (AVL) The physical drive may not be ready, and is not suitable for use in a
logical drive or hot spare pool.

Cfgincompat Drive configuration is incompatible.

Copyback/Copybacking The new drive is replacing the hot spare drive.

Diagnosing The drive is being diagnosed.

Degraded (DGD) The physical drive is a part of the logical drive and is in the degraded
state.

Erase In Progress The drive is being erased.

Failed (FLD) If an unrecoverable error occurs on a drive in the Online or Hot Spare state, the drive enters the Failed state.

Fault The drive is faulty.

Foreign The drive contains a foreign configuration.

Hot Spare (HSP) The drive is configured as a hot spare drive.

Inactive The drive is a member of an inactive logical drive and can be used only
after the logical drive is activated.

JBOD Drive works in passthrough mode.

Missing (MIS) When a drive in the Online state is removed, the drive enters
the Missing state.

Offline The drive is a member drive of a virtual drive. It cannot work properly
and is offline.

Online (ONL) The drive is a member drive of a virtual drive. It is online and is working
properly.

Optimal The physical drive is in the normal state and is a part of the logical drive.

Out of Sync (OSY) As a part of the IR logical drive, the physical drive is not synchronized
with other physical drives in the IR logical drive.

Predictive Failure The drive is about to fail. Back up the data on the drive and replace the drive.

Raw (Pass Through) Pass-through drive in HBA mode.

Raw Drive The drive is used as a raw drive.

Ready (RDY) This state applies to the RAID/Mixed mode of the RAID controller card.
The drives can be used to configure a RAID array. In RAID mode, the
drives in the Ready state are not reported to the operating system (OS). In
Mixed mode, the drives in the Ready state are reported to the OS.

Rebuild/Rebuilding (RBLD) Data is being reconstructed on the drive to ensure data redundancy and integrity of the virtual drive. In this case, the performance of the virtual drive is affected to a certain extent.

Reconstructing Data is being reconstructed on the drive to ensure data redundancy and integrity of the virtual drive. In this case, the performance of the virtual drive is affected to a certain extent.

Spare The drive is a hot spare drive and is in the normal state.

Shield State This is a temporary state when a physical drive is being diagnosed.

Standby (SBY) The device is not a drive.

Unconfigured Good (ugood/ucfggood) The drive is in a normal state but is not a member drive of a virtual drive or a hot spare drive.

Unconfigured Bad (ubad) If an unrecoverable error occurs on a drive in the Unconfigured Good or
uninitialized state, the drive enters the Unconfigured Bad state.

Unknown The drive state is unknown.

Unsupport The specifications of the drive exceed the specifications of the RAID controller card.

Table 1-3 describes the states of virtual drives created under the RAID controller card.
Table 1-3 Virtual drive states

State Description

Degrade/Degraded (DGD) The virtual drive is available but abnormal, and certain member drives are faulty or offline. User data is not protected.

Delete fail Deletion failed.

Deleting The virtual drive is being deleted.

Failed (FLD) The virtual drive is faulty.

Fault The virtual drive is faulty.

Formatting The virtual drive is being formatted.

Inactive The virtual drive is inactive and can be used only after being activated.

Inc RAID The virtual drive does not support the SMART or SATA expansion command.

Initializing (INIT) The virtual drive is being initialized.

Initialize fail Initialization failed.

Interim Recovery The RAID array is degraded because it contains faulty drives. As a result, the
running performance of the RAID array deteriorates and data may be lost. To
solve this problem, check whether the drive is correctly connected to the device
or replace the faulty drive.

Max Dsks The number of drives in the RAID array has reached the upper limit. The drive
cannot be added to the RAID array.

Missing (MIS) The virtual drive is lost.

Not formatted The drive is not formatted.

Not Syncd The data on the physical drive is not synchronized with the data on other
physical drives in the logical drive.

Normal The virtual drive is in the normal state, and all member drives are online.

Offline If the number of faulty or offline physical drives in a RAID array exceeds the
maximum number of faulty drives supported by the RAID array, the RAID array
will be displayed as offline.

Online (ONL) The virtual drive is online.

Optimal The virtual drive is in a sound state, and all member drives are online.

Okay (OKY) The virtual drive is active and the physical drives are running properly. If the
current RAID level provides data protection, user data is protected.

Partial Degraded If the number of faulty or offline physical drives in a RAID array does not
exceed the maximum number of faulty drives allowed by the RAID array, the
RAID array will be displayed as partially degraded.

Primary The drive is the primary drive in RAID 1 and is in the normal state.

Ready for Rebuild The array is ready for rebuilding.

Sanitizing Data on the virtual drive is being erased.



Secondary The drive is the secondary drive in RAID 1 and is in the normal state.

Too Small The drive capacity is insufficient, so the drive cannot be used as the hot spare of the current RAID array.

Wrg Intfc The drive interface is different from that in the current RAID array.

Wrg Type The drive cannot be used as a member drive of the RAID array. The drive may
be incompatible or faulty.

Write_protect The virtual drive is write-protected.

Virtual Drive Read and Write Policies
When creating a virtual drive, you need to define its data read and write policies, which govern subsequent read and write operations on the virtual drive.

Data Read Policy


Generally, this parameter is displayed as Read Policy on the configuration screen. The RAID
controller card supports the following data read policies:
 Read-ahead policy: This policy is usually specified as Always Read Ahead, Read
Ahead, or Ahead on the configuration screen. If this policy is used, the RAID controller
card caches the data that follows the data being read for faster access. This policy
reduces drive seeks and shortens read time.
The read-ahead policy is applicable only when the RAID controller card supports power failure protection for data. If the supercapacitor of a RAID controller card that uses the read-ahead policy is abnormal, data may be lost.
 Non-read-ahead policy: If this policy is used, the RAID controller card does not read data
ahead. Instead, it reads data from a virtual drive only when it receives a data read
command.

Data Write Policy


Generally, this parameter is displayed as Write Policy on the configuration screen. The RAID
controller card supports the following data write policies:
 Write-back: This policy is usually specified as Write Back on the configuration screen. If
this policy is used, data is directly written to the cache. The RAID controller card then
updates accumulated cache data to drives in batches. The overall data write speed is
higher since writing to the cache is faster than writing to the drive. When the cache
receives all data, the RAID controller card signals the host that the data transmission is
complete.
The write-back policy is applicable only when the RAID controller card supports power failure protection for data. If the supercapacitor of a RAID controller card that uses the write-back policy is abnormal, data may be lost.
 Write-through: This policy is usually specified as Write Through on the configuration
screen. If this policy is used, the RAID controller card writes data directly into a virtual
drive, without passing through the cache. When the drive subsystem receives all data that
needs to be transmitted, the RAID controller card signals the host that the data
transmission is complete.
This mode does not require the RAID controller card to support power failure protection.
Even if the supercapacitor is faulty, services are not affected. The disadvantage of this
write policy is a low write speed.
 BBU-related write-back: This policy is usually specified as Write Back with BBU on the
configuration screen. This policy involves two scenarios: If the BBU of the RAID controller
card is present and normal, the write operation from the RAID controller card to a virtual
drive will go through the cache (write-back mode). If the BBU of the RAID controller card
is absent or faulty, the write operation from the RAID controller card to a virtual drive does
not go through the cache (write-through mode).
 Forcible write-back: This policy is usually specified as Write Back Enforce or Always Write Back on the configuration screen. Even if the RAID controller card has no supercapacitor or the supercapacitor is damaged, the write-back mode is still used.
If the server is powered off unexpectedly while the supercapacitor is not installed or is being charged, the data written to the DDR cache of the RAID controller card will be lost. This mode is not recommended.
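The write policies above can be summarized as a small decision rule: the cache mode actually used depends on the configured policy and, for the BBU-related policy, on the state of the supercapacitor or battery. The sketch below is illustrative only; the policy strings follow the configuration-screen labels mentioned above, and the bbu_healthy flag is an assumption for the example.

```python
# Illustrative decision rule for the write policies described above.
def effective_write_mode(policy: str, bbu_healthy: bool) -> str:
    """Return the cache mode actually used for host writes."""
    if policy == "Write Through":
        return "write-through"            # bypass the cache, write drives directly
    if policy == "Write Back with BBU":
        return "write-back" if bbu_healthy else "write-through"
    if policy in ("Write Back", "Write Back Enforce", "Always Write Back"):
        return "write-back"               # cache is used even without a healthy BBU
    raise ValueError(f"unknown policy: {policy}")

print(effective_write_mode("Write Back with BBU", bbu_healthy=False))  # write-through
print(effective_write_mode("Always Write Back", bbu_healthy=False))    # write-back
```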

Drive Striping
Multiple processes accessing a drive at the same time may cause drive conflicts. Most drives are
specified with thresholds for the access count (I/O operations per second) and data transmission
rate (data volume transmitted per second). If the thresholds are reached, new access requests will
be suspended.
Striping technology evenly distributes I/O loads across multiple physical drives. It divides continuous
data into multiple blocks and saves them to different drives. This allows multiple processes to access
these data blocks concurrently without causing any drive conflicts. Striping also optimizes concurrent
processing performance in sequential access to data.
Drive striping divides the space of a drive into multiple strips of a specified strip size. When data is written to the drive group, it is divided into data blocks based on the strip size.
For example, consider a RAID 0 array consisting of four member drives. The first data block is written to the first member drive, the second data block to the second member drive, and so on, as shown in Figure 1-11. In this way, multiple drives are written concurrently, significantly improving system performance. However, drive striping does not provide data redundancy.
Figure 1-11 Drive striping example

Drive striping involves the following concepts:


 Strip width: indicates the number of drives in a drive group used for striping. For example,
for a drive group consisting of four member drives, the strip width is 4.
 Strip size of a drive group: indicates the total size of data blocks that the RAID controller
card concurrently writes into all drives in a drive group.
 Strip size of a drive: indicates the size of the data block written by the RAID controller
card to each drive.
For example, if a 1 MB data strip is written into a drive group, a 64 KB data block is allocated to each
member drive. Then, the strip size of the drive group is 1 MB, and the strip size of a drive is 64 KB.
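To make the relationship between strip size and strip width concrete, the sketch below maps a logical byte offset to a member drive and an offset on that drive. The 64 KB strip size and four-drive strip width are illustrative assumptions, not values required by any controller card.

```python
# Mapping a logical offset to (member drive, offset on that drive) for a
# striped drive group. Strip size and strip width are illustrative values.
STRIP_SIZE = 64 * 1024      # strip size of a drive: 64 KB
STRIP_WIDTH = 4             # number of member drives in the drive group

def locate(logical_offset: int):
    """Return (drive_index, drive_offset) for a logical byte offset."""
    strip_index = logical_offset // STRIP_SIZE
    drive_index = strip_index % STRIP_WIDTH
    row = strip_index // STRIP_WIDTH          # full stripes already written
    drive_offset = row * STRIP_SIZE + logical_offset % STRIP_SIZE
    return drive_index, drive_offset

# The strip size of the drive group here is STRIP_WIDTH x STRIP_SIZE = 256 KB.
print(locate(0))            # (0, 0)
print(locate(64 * 1024))    # (1, 0) -> the second strip lands on drive 1
print(locate(300 * 1024))   # (0, 110592) -> wraps back to drive 0, second row
```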

Drive Mirroring
Applicable to RAID 1 and RAID 10, drive mirroring writes the same data simultaneously to two drives to achieve 100% data redundancy. Because the same data exists on both drives, the data is not lost and the data flow is not interrupted if one drive becomes faulty.
However, drive mirroring is expensive because it requires a backup drive for each drive, as shown
in Figure 1-12.
Figure 1-12 Drive mirroring example
Foreign Configuration
Foreign configuration is RAID configuration that does not belong to the current RAID controller card. It is displayed as Foreign Configuration on the configuration screen.
Generally, foreign configuration is involved in the following scenarios:
 If RAID configuration exists on a physical drive newly installed on a server, the RAID
controller card identifies the RAID configuration as foreign configuration.
 After the RAID controller card of a server is replaced, the new RAID controller card
identifies the existing RAID configuration as foreign configuration.
 After the hot swap of a member drive in a RAID array, the member drive is identified as
containing foreign configuration.
You can process detected foreign configuration as required. For example, if the RAID configuration
existing on the newly installed drive does not meet requirements, you can delete it. If you want to
use the RAID configuration of a RAID controller card that has been replaced, you can import the
RAID configuration and make it take effect on the new RAID controller card.

Drive Power Saving


RAID controller cards provide the power saving feature for drives. This feature allows drives to be
stopped based on drive configurations and I/O activities. All SAS and SATA HDDs support this
function.
Enabling this feature puts drives that are in the idle state, as well as idle hot spare drives, into the power saving state. Operations such as RAID array creation, hot spare drive creation, dynamic capacity expansion, and rebuild wake the drives from the power saving state.

Drive Passthrough
Drive passthrough (JBOD), also called instruction-based transparent transmission, allows data to be transmitted without being processed by the transmission device; the device only ensures the transmission quality.
With this function, the RAID controller card allows user commands to be directly transmitted to
connected drives, facilitating drive access and control by upper-layer services or management
software.
For example, with passthrough, you can install an OS directly on the drives attached to a RAID controller card. If the RAID controller card does not support the passthrough feature, you can install the OS only on virtual drives configured on the RAID controller card.
Name Meaning
Table 2-1 describes the meanings of suffixes in RAID controller card names.
Table 2-1 Meanings of suffixes in RAID controller card names

Suffix | RAID Array | Out-of-Band Management | Supercapacitor/Battery | Cache

IT | Not supported | Supported | Not supported | Not supported

IR | Supported | Not supported | Not supported | Not supported

iMR | Supported | Supported | Not supported | Not supported

MR | Supported | Supported | Supported | Supported
