Software Raid (mdadm)
http://www.devil-linux.org/documentation/1.0.x/ch01s05.html
Setting up RAID devices and config files
Prepare /etc/mdadm.conf
echo 'DEVICE /dev/hd* /dev/sd*' > /etc/mdadm.conf
Preparing the hard disks
For now we assume that we want to create either a RAID-0 or a RAID-1 array; for RAID-5 you just have to add more partitions (and therefore hard disks). Create one partition of maximal size on each disk. fdisk -l should then show you something like this:
# fdisk -l
Disk /dev/hda: 16 heads, 63 sectors, 79780 cylinders
Units = cylinders of 1008 * 512 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/hda1             1     79780  40209088+  fd  Linux raid autodetect

Disk /dev/hdc: 16 heads, 63 sectors, 79780 cylinders
Units = cylinders of 1008 * 512 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/hdc1             1     79780  40209088+  fd  Linux raid autodetect
[Note]
As Devil-Linux does not include RAID autodetect (there is really no need for it; read the linux-raid mailing list!), we just use the partition type "fd, Linux raid autodetect" to mark those partitions for ourselves. You can of course use the standard partition type "83, Linux" instead, but hey, someone might format it 😉
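If you prefer not to walk through fdisk interactively, the same single full-size "fd" partition can usually be created with sfdisk. This is only a sketch, assuming your disks really are /dev/hda and /dev/hdc as above and that sfdisk is installed; it overwrites any existing partition table on those disks:

# echo ",,fd" | sfdisk /dev/hda
# echo ",,fd" | sfdisk /dev/hdc

Each line creates one partition starting at the default offset, spanning the whole disk, with partition id "fd" (Linux raid autodetect).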
RAID-0 (no redundancy!!)
Use mdadm to create a RAID-0 device:
mdadm --create /dev/md0 --chunk=64 --level=raid0 --raid-devices=2 /dev/hda1 /dev/hdc1
Instead of /dev/md0, use any other md device if /dev/md0 is already in use by another array. You might also want to experiment with the chunk size (e.g. 8, 16, 32, 64, 128); use a disk benchmark to check, or stay with the default chunk size of 64k. You will probably have to change the device names given here to the ones that reflect the setup of your system.
# cat /proc/mdstat
Personalities : [raid0]
read_ahead 1024 sectors
md0 : active raid0 hdc1[1] hda1[0]
      80418048 blocks 128k chunks
Ok, that looks fine.
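If you want to compare chunk sizes as suggested above, a rough sequential-read test is usually enough to spot large differences. A minimal sketch, assuming the array is /dev/md0 and nothing important is using it yet; the figures are only indicative since caching and concurrent I/O skew them:

# dd if=/dev/md0 of=/dev/null bs=1M count=1024

If hdparm is installed, "hdparm -t /dev/md0" gives a similar quick number. Recreate the array with a different --chunk value and repeat the test to compare.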
RAID-1 (with redundancy!!)
Use mdadm to create a RAID-1 device:
mdadm --create /dev/md0 --chunk=64 --level=raid1 --raid-devices=2 /dev/hda1 /dev/hdc1
Instead of /dev/md0, use any other md device if /dev/md0 is already in use by another array. You might also want to experiment with the chunk size (e.g. 8, 16, 32, 64, 128); use a disk benchmark to check, or stay with the default chunk size of 64k. You will probably have to change the device names given here to the ones that reflect the setup of your system.
# cat /proc/mdstat
Personalities : [raid0] [raid1]
read_ahead 1024 sectors
md0 : active raid1 hdc1[1] hda1[0]
      40209024 blocks [2/2] [UU]
      [>....................]  resync =  0.7% (301120/40209024) finish=17.6min speed=37640K/sec
Ok, that looks fine.
[Note]
Before rebooting you have to wait until this resync is done; otherwise it will start all over again after the system is back up.
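One way to block until the resync has finished is to simply poll /proc/mdstat. This is just a sketch, checking once a minute:

# while grep -q resync /proc/mdstat; do sleep 60; done

Newer mdadm versions also offer "mdadm --wait /dev/md0", which returns once the array is idle; check whether your version supports it.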
Save the information about the newly created array(s)
# mdadm --detail --scan >> /etc/mdadm.conf
# cat /etc/mdadm.conf
DEVICE /dev/hd* /dev/sd*
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=d876333b:694e852b:e9a6f40f:0beb90f9
Looks good too!
Now you can put LVM on top of the newly created arrays to facilitate the auto-mounting of logical volumes in the devil-linux volume group.
Don't forget to run a final save-config!
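The LVM layering mentioned above amounts to turning the array into a physical volume and creating (or extending) the volume group on it. A minimal sketch, assuming the array is /dev/md0, the volume group is called devil-linux as in the documentation, and the logical volume name and size are only placeholders:

# pvcreate /dev/md0
# vgcreate devil-linux /dev/md0
# lvcreate -L 10G -n data devil-linux
# mke2fs /dev/devil-linux/data

If the devil-linux volume group already exists, use "vgextend devil-linux /dev/md0" instead of vgcreate.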
Gathering information about RAID devices and disks
Show current status of raid devices
# cat /proc/mdstat
Output for a currently degraded RAID-1 with a failed disk:
Personalities : [raid0] [raid1]
read_ahead 1024 sectors
md0 : active raid1 hdc1[2](F) hda1[0]
      40209024 blocks [2/1] [U_]
unused devices: <none>
Output for a currently degraded RAID-1 with the faulty disk removed:
Personalities : [raid0] [raid1]
read_ahead 1024 sectors
md0 : active raid1 hda1[0]
      40209024 blocks [2/1] [U_]
unused devices: <none>
Output for a currently rebuilding RAID-1:
Personalities : [raid0] [raid1]
read_ahead 1024 sectors
md0 : active raid1 hdc1[2] hda1[0]
      40209024 blocks [2/1] [U_]
      [=======>.............]  recovery = 37.1% (14934592/40209024) finish=11.7min speed=35928K/sec
unused devices: <none>
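To follow a resync or rebuild without re-running the command by hand, watch can refresh the view. A small sketch, assuming the watch utility (from procps) is available:

# watch -n 5 cat /proc/mdstat

This reprints /proc/mdstat every 5 seconds; exit with Ctrl-C.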
Get more detailed info about RAID devices
# mdadm --query /dev/md0
/dev/md0: 38.35GiB raid1 2 devices, 3 spares. Use mdadm --detail for more detail.
/dev/md0: No md super block found, not an md component.

# mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.00
  Creation Time : Mon Jan 20 22:53:28 2003
     Raid Level : raid1
     Array Size : 40209024 (38.35 GiB 41.22 GB)
    Device Size : 40209024 (38.35 GiB 41.22 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Tue Jan 21 00:49:47 2003
          State : dirty, no-errors
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

    Number   Major   Minor   RaidDevice State
       0       3        1        0      active sync   /dev/hda1
       1      22        1        1      active sync   /dev/hdc1
           UUID : d876333b:694e852b:e9a6f40f:0beb90f9
Get more info about disks
# mdadm --query /dev/hda1
/dev/hda1: is not an md array
/dev/hda1: device 0 in 2 device active raid1 md0 ...

# mdadm --query /dev/hdc1
/dev/hdc1: is not an md array
/dev/hdc1: device 1 in 2 device active raid1 md0 ...
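mdadm --query only gives a one-line summary; for the full md superblock stored on a component partition there is mdadm --examine. A sketch, assuming /dev/hda1 is a raid member as above:

# mdadm --examine /dev/hda1

This prints the superblock recorded on the partition itself (array UUID, raid level, device role and so on), which is handy when identifying a disk pulled from an unknown array.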
Managing RAID devices (RAID-1 and up!!)
Setting a disk faulty/failed:
# mdadm --fail /dev/md0 /dev/hdc1
[Caution]
DO NOT ever run this on a raid0 or linear device, or your data is toast!
Removing a faulty disk from an array:
# mdadm --remove /dev/md0 /dev/hdc1
Clearing any previous raid info on a disk (e.g. when reusing a disk from another, decommissioned raid array)
# mdadm --zero-superblock /dev/hdc1
Adding a disk to an array
# mdadm --add /dev/md0 /dev/hdc1
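Putting these commands together, replacing a failed mirror half typically looks roughly like this. This is only a sketch, assuming /dev/hdc1 is the failed member of /dev/md0 and the replacement disk shows up under the same name with an identical partition layout:

# mdadm --fail /dev/md0 /dev/hdc1
(skip the --fail if the kernel has already marked the disk as failed)
# mdadm --remove /dev/md0 /dev/hdc1
(power down, swap the disk, and partition it like the surviving one)
# mdadm --zero-superblock /dev/hdc1
(only needed if the replacement disk was used in another array before)
# mdadm --add /dev/md0 /dev/hdc1

The resync starts automatically after the --add; watch its progress in /proc/mdstat.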
Commands
# cat /proc/mdstat
# mdadm --detail /dev/md0
# mdadm --add /dev/md0 /dev/hda1
Rebuilding a failed Linux software RAID with raidhotadd
Recently I had a hard drive fail. It was part of a Linux software RAID 1 (mirrored drives), so we lost no data and just needed to replace hardware. However, the raid does require rebuilding. A hardware array would usually rebuild automatically upon drive replacement, but this one needed some help.
When you look at a "normal" array, you see something like this:
# cat /proc/mdstat
Personalities : [raid1]
read_ahead 1024 sectors
md2 : active raid1 hda3[1] hdb3[0]
262016 blocks [2/2] [UU]
md1 : active raid1 hda2[1] hdb2[0]
119684160 blocks [2/2] [UU]
md0 : active raid1 hda1[1] hdb1[0]
102208 blocks [2/2] [UU]
unused devices: <none>
That's the normal state – what you want it to look like. When a drive has failed and been replaced, it looks like this:
Personalities : [raid1]
read_ahead 1024 sectors
md0 : active raid1 hda1[1]
102208 blocks [2/1] [_U]
md2 : active raid1 hda3[1]
262016 blocks [2/1] [_U]
md1 : active raid1 hda2[1]
119684160 blocks [2/1] [_U]
unused devices: <none>
Notice that it doesn't list the failed drive parts, and that an underscore appears beside each U. This shows that only one drive is active in these arrays – we have no mirror.
Another command that will show us the state of the raid drives is "mdadm":
# mdadm -D /dev/md0
/dev/md0:
        Version : 00.90.00
  Creation Time : Thu Aug 21 12:22:43 2003
     Raid Level : raid1
     Array Size : 102208 (99.81 MiB 104.66 MB)
    Device Size : 102208 (99.81 MiB 104.66 MB)
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Fri Oct 15 06:25:45 2004
          State : dirty, no-errors
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

    Number   Major   Minor   RaidDevice State
       0       0        0        0      faulty removed
       1       3        1        1      active sync   /dev/hda1
           UUID : f9401842:995dc86c:b4102b57:f2996278
As this shows, we presently only have one drive in the array.
Although I already knew that /dev/hdb was the other part of the raid array, you can look at /etc/raidtab to see how the raid was defined:
raiddev /dev/md1
    raid-level              1
    nr-raid-disks           2
    chunk-size              64k
    persistent-superblock   1
    nr-spare-disks          0
    device                  /dev/hda2
    raid-disk               0
    device                  /dev/hdb2
    raid-disk               1

raiddev /dev/md0
    raid-level              1
    nr-raid-disks           2
    chunk-size              64k
    persistent-superblock   1
    nr-spare-disks          0
    device                  /dev/hda1
    raid-disk               0
    device                  /dev/hdb1
    raid-disk               1

raiddev /dev/md2
    raid-level              1
    nr-raid-disks           2
    chunk-size              64k
    persistent-superblock   1
    nr-spare-disks          0
    device                  /dev/hda3
    raid-disk               0
    device                  /dev/hdb3
    raid-disk               1
To get the mirrored drives working properly again, we need to run fdisk to see what partitions are on the working drive:
# fdisk /dev/hda
Command (m for help): p
Disk /dev/hda: 255 heads, 63 sectors, 14946 cylinders
Units = cylinders of 16065 * 512 bytes

   Device Boot    Start       End      Blocks   Id  System
/dev/hda1   *         1        13      104391   fd  Linux raid autodetect
/dev/hda2            14     14913   119684250   fd  Linux raid autodetect
/dev/hda3         14914     14946     265072+   fd  Linux raid autodetect
Duplicate that layout on /dev/hdb. Use "n" to create the partitions and "t" to change their type to "fd" to match (a one-step sfdisk alternative is sketched after the raidhotadd commands below). Once this is done, use "raidhotadd":
# raidhotadd /dev/md0 /dev/hdb1
# raidhotadd /dev/md1 /dev/hdb2
# raidhotadd /dev/md2 /dev/hdb3
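Instead of re-creating the partitions by hand in fdisk, the partition table can usually be copied over in a single step before running raidhotadd. A sketch, assuming sfdisk is installed and /dev/hdb really is the blank replacement disk (this overwrites its partition table):

# sfdisk -d /dev/hda | sfdisk /dev/hdb

sfdisk -d dumps /dev/hda's partition table in a reusable format, and the second sfdisk writes the same layout, including the "fd" partition types, to /dev/hdb.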
The rebuilding can be seen in /proc/mdstat:
# cat /proc/mdstat
Personalities : [raid1]
read_ahead 1024 sectors
md0 : active raid1 hdb1[0] hda1[1]
102208 blocks [2/2] [UU]
md2 : active raid1 hda3[1]
262016 blocks [2/1] [_U]
md1 : active raid1 hdb2[2] hda2[1]
      119684160 blocks [2/1] [_U]
      [>....................]  recovery =  0.2% (250108/119684160) finish=198.8min speed=10004K/sec
unused devices: <none>
The md0, a small array, has already completed rebuilding (UU), while md1 has only begun. After it finishes, it will show:
# mdadm -D /dev/md1
/dev/md1:
        Version : 00.90.00
  Creation Time : Thu Aug 21 12:21:21 2003
     Raid Level : raid1
     Array Size : 119684160 (114.13 GiB 122.55 GB)
    Device Size : 119684160 (114.13 GiB 122.55 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Fri Oct 15 13:19:11 2004
          State : dirty, no-errors
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

    Number   Major   Minor   RaidDevice State
       0       3       66        0      active sync   /dev/hdb2
       1       3        2        1      active sync   /dev/hda2
           UUID : ede70f08:0fdf752d:b408d85a:ada8922b
I was a little surprised that this process wasn't entirely automatic. There's no reason it couldn't be. This is an older Linux install; I don't know if more modern versions will just automatically rebuild.