Managing Software RAID
This section discusses software RAID configuration and management after the installation, and covers the following topics:
- Reviewing existing software RAID configuration.
- Creating a new RAID device.
- Replacing a faulty device in an array.
- Adding a new device to an existing array.
- Deactivating and removing an existing RAID device.
- Saving the configuration.
All examples in this section use the software RAID configuration from the previous section.
Reviewing RAID Configuration

When a software RAID is in use, basic information about all presently active RAID devices is stored in the /proc/mdstat special file. To list these devices, display the content of this file by typing the following at a shell prompt:

cat /proc/mdstat

To determine whether a certain device is a RAID device or a component device, run the command in the following form as root:

mdadm --query device…

In order to examine a RAID device in more detail, use the following command:

mdadm --detail raid_device…

Similarly, to examine a component device, type:

mdadm --examine component_device…

While the mdadm --detail command displays information about a RAID device, mdadm --examine only relays information about a RAID device as it relates to a given component device. This distinction is particularly important when working with a RAID device that itself is a component of another RAID device.

The mdadm --query command, as well as both the mdadm --detail and mdadm --examine commands, allow you to specify multiple devices at once.

Example 5.1. Reviewing RAID configuration

Assume the system uses the configuration from Figure 5.7, "Sample RAID Configuration". You can verify that /dev/md0 is a RAID device by typing the following at a shell prompt:

~]# mdadm --query /dev/md0
/dev/md0: 125.38MiB raid1 2 devices, 0 spares. Use mdadm --detail for more detail.
/dev/md0: No md super block found, not an md component.

As you can see, the above command produces only a brief overview of the RAID device and its configuration. To display more detailed information, use the following command instead:

~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Tue Jun 28 16:05:49 2011
     Raid Level : raid1
     Array Size : 128384 (125.40 MiB 131.47 MB)
  Used Dev Size : 128384 (125.40 MiB 131.47 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Thu Jun 30 17:06:34 2011
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : 49c5ac74:c2b79501:5c28cb9c:16a6dd9f
         Events : 0.6

    Number   Major   Minor   RaidDevice State
       0       3        1        0      active sync   /dev/hda1
       1       3       65        1      active sync   /dev/hdb1

Finally, to list all presently active RAID devices, type:

~]$ cat /proc/mdstat
Personalities : [raid0] [raid1]
md0 : active raid1 hdb1[1] hda1[0]
      128384 blocks [2/2] [UU]

md1 : active raid0 hdb2[1] hda2[0]
      1573888 blocks 256k chunks

md2 : active raid0 hdb3[1] hda3[0]
      19132928 blocks 256k chunks

unused devices: <none>

Creating a New RAID Device

To create a new RAID device, use the command in the following form as root:

mdadm --create raid_device --level=level --raid-devices=number component_device…

This is the simplest way to create a RAID array. There are many more options that allow you to specify the number of spare devices, the block size of a stripe array, whether the array has a write-intent bitmap, and much more. All these options can have a significant impact on performance, but are beyond the scope of this document. For more detailed information, refer to the CREATE MODE section of the mdadm(8) manual page.

Example 5.2. Creating a new RAID device

Assume that the system has two unused SCSI disk drives available, and that each of these devices has exactly one partition of the same size:

~]# ls /dev/sd*
/dev/sda  /dev/sda1  /dev/sdb  /dev/sdb1

To create /dev/md3 as a new RAID level 1 array from /dev/sda1 and /dev/sdb1, run the following command:

~]# mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm: array /dev/md3 started.

Replacing a Faulty Device

To replace a particular device in a software RAID, first make sure it is marked as faulty by running the following command as root:

mdadm raid_device --fail component_device

Then remove the faulty device from the array by using the command in the following form:

mdadm raid_device --remove component_device

Once the device is operational again, you can re-add it to the array:

mdadm raid_device --add component_device

Example 5.3. Replacing a faulty device

Assume the system has an active RAID device, /dev/md3, with the following layout (that is, the RAID device created in Example 5.2, "Creating a new RAID device"):

~]# mdadm --detail /dev/md3 | tail -n 3
    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1

Imagine the second disk drive fails and needs to be replaced. To do so, first mark the /dev/sdb1 device as faulty:

~]# mdadm /dev/md3 --fail /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md3

Then remove it from the RAID device:

~]# mdadm /dev/md3 --remove /dev/sdb1
mdadm: hot removed /dev/sdb1

As soon as the hardware is replaced, you can add the device back to the array by using the following command:

~]# mdadm /dev/md3 --add /dev/sdb1
mdadm: added /dev/sdb1

Extending a RAID Device

To add a new device to an existing array, use the command in the following form as root:

mdadm raid_device --add component_device

This will add the device as a spare device. To grow the array to use this device actively, type the following at a shell prompt:

mdadm --grow raid_device --raid-devices=number

Example 5.4. Extending a RAID device

Assume the system has an active RAID device, /dev/md3, with the following layout (that is, the RAID device created in Example 5.2, "Creating a new RAID device"):

~]# mdadm --detail /dev/md3 | tail -n 3
    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1

Also assume that a new SCSI disk drive, /dev/sdc, has been added and has exactly one partition. To add it to the /dev/md3 array, type the following at a shell prompt:

~]# mdadm /dev/md3 --add /dev/sdc1
mdadm: added /dev/sdc1

This will add /dev/sdc1 as a spare device. To change the size of the array to actually use it, type:

~]# mdadm --grow /dev/md3 --raid-devices=3

Removing a RAID Device

To remove an existing RAID device, first deactivate it by running the following command as root:

mdadm --stop raid_device

Once deactivated, remove the RAID device itself:

mdadm --remove raid_device

Finally, zero superblocks on all devices that were associated with the particular array:

mdadm --zero-superblock component_device…

Example 5.5. Removing a RAID device

Assume the system has an active RAID device, /dev/md3, with the following layout (that is, the RAID device created in Example 5.4, "Extending a RAID device"):

~]# mdadm --detail /dev/md3 | tail -n 4
    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1

In order to remove this device, first stop it by typing the following at a shell prompt:

~]# mdadm --stop /dev/md3
mdadm: stopped /dev/md3

Once stopped, you can remove the /dev/md3 device by running the following command:

~]# mdadm --remove /dev/md3

Finally, to remove the superblocks from all associated devices, type:

~]# mdadm --zero-superblock /dev/sda1 /dev/sdb1 /dev/sdc1

Preserving the Configuration

By default, changes made by the mdadm command only apply to the current session, and will not survive a system restart. At boot time, the mdmonitor service reads the content of the /etc/mdadm.conf configuration file to see which RAID devices to start. If the software RAID was configured during the graphical installation process, this file contains the directives listed in Table 5.1, "Common mdadm.conf directives" by default.

Table 5.1. Common mdadm.conf directives

Option      Description
ARRAY       Allows you to identify a particular array.
DEVICE      Allows you to specify a list of devices to scan for a RAID component (for example, "/dev/hda1"). You can also use the keyword partitions to use all partitions listed in /proc/partitions, or containers to specify an array container.
MAILADDR    Allows you to specify an email address to use in case of an alert.

To list which ARRAY lines are presently in use regardless of the configuration, run the following command as root:

mdadm --detail --scan

Use the output of this command to determine which lines to add to the /etc/mdadm.conf file. You can also display the ARRAY line for a particular device:

mdadm --detail --brief raid_device

By redirecting the output of this command, you can add such a line to the configuration file with a single command:

mdadm --detail --brief raid_device >> /etc/mdadm.conf

Example 5.6. Preserving the configuration

By default, the /etc/mdadm.conf file contains the software RAID configuration created during the system installation:

# mdadm.conf written out by anaconda
DEVICE partitions
MAILADDR root
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=49c5ac74:c2b79501:5c28cb9c:16a6dd9f
ARRAY /dev/md1 level=raid0 num-devices=2 UUID=76914c11:5bfa2c00:dc6097d1:a1f4506d
ARRAY /dev/md2 level=raid0 num-devices=2 UUID=2b5d38d0:aea898bf:92be20e2:f9d893c5

Assuming you have created the /dev/md3 device as shown in Example 5.2, "Creating a new RAID device", you can make it persistent by running the following command:

~]# mdadm --detail --brief /dev/md3 >> /etc/mdadm.conf
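
The /proc/mdstat listing used throughout this section is plain text and is easy to process in scripts. The following is a minimal sketch that extracts the device name, RAID level, and state of each active array; the sample content stored in the mdstat variable is a hypothetical copy for illustration, and on a real system you would read /proc/mdstat directly:

```shell
# Hypothetical sample of /proc/mdstat content (on a real system,
# use: mdstat=$(cat /proc/mdstat) instead).
mdstat='Personalities : [raid0] [raid1]
md0 : active raid1 hdb1[1] hda1[0]
      128384 blocks [2/2] [UU]

md1 : active raid0 hdb2[1] hda2[0]
      1573888 blocks 256k chunks

unused devices: <none>'

# Each array summary line has the form:
#   mdN : <state> <level> <member>[n] ...
# so field 1 is the device, field 3 the state, field 4 the level.
printf '%s\n' "$mdstat" | awk '/^md[0-9]+ :/ { print $1, $4, $3 }'
```

This prints one line per array (for the sample above, "md0 raid1 active" and "md1 raid0 active"), which is convenient as input to monitoring scripts.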
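
In the /proc/mdstat output, a status field such as [UU] indicates that all members of an array are up; a failed or missing member appears as an underscore (for example, [U_]), which is the situation handled in "Replacing a Faulty Device". The following sketch flags degraded arrays; the sample content is hypothetical, and on a real system you would read /proc/mdstat directly:

```shell
# Hypothetical sample of /proc/mdstat content with one degraded
# raid1 array (md0 has lost a member, shown by the "_" in [U_]).
mdstat='md0 : active raid1 hda1[0]
      128384 blocks [2/1] [U_]

md1 : active raid0 hdb2[1] hda2[0]
      1573888 blocks 256k chunks'

# Remember the most recent array name, then print it whenever a
# status field containing "_" (a down member) is seen.
degraded=$(printf '%s\n' "$mdstat" | awk '
  /^md[0-9]+ :/               { dev = $1 }
  $NF ~ /^\[[U_]*_[U_]*\]$/   { print dev }')
echo "$degraded"
```

For the sample above this prints only "md0"; an empty result means no array is degraded, which makes the snippet usable as a simple health check alongside the MAILADDR alerting described in Table 5.1.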
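
Note that redirecting mdadm --detail --brief output with >> appends unconditionally, so running it twice for the same device leaves a duplicate ARRAY line in /etc/mdadm.conf. The following sketch makes the append idempotent by checking for an existing entry first; the configuration file, ARRAY line, and UUID here are made up for illustration (in real use, line would come from mdadm --detail --brief and conf would be /etc/mdadm.conf):

```shell
# Build a throwaway config file standing in for /etc/mdadm.conf.
conf=$(mktemp)
printf 'DEVICE partitions\nMAILADDR root\n' > "$conf"

# Hypothetical `mdadm --detail --brief /dev/md3` output.
line='ARRAY /dev/md3 level=raid1 num-devices=2 UUID=00000000:00000000:00000000:00000000'

# Extract the device name ("/dev/md3") from the ARRAY line.
dev=${line#ARRAY }
dev=${dev%% *}

# Append only if no ARRAY line for this device exists yet;
# the second call is therefore a no-op.
grep -q "^ARRAY $dev " "$conf" || printf '%s\n' "$line" >> "$conf"
grep -q "^ARRAY $dev " "$conf" || printf '%s\n' "$line" >> "$conf"

count=$(grep -c "^ARRAY $dev " "$conf")
echo "$count"
rm -f "$conf"
```

Despite two append attempts, the file ends up with exactly one ARRAY line for the device, so the snippet can safely be rerun after every configuration change.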