{"id":46,"date":"2013-03-10T11:39:18","date_gmt":"2013-03-10T11:39:18","guid":{"rendered":"https:\/\/notiz.comanet.xyz\/?p=46"},"modified":"2019-03-03T19:42:32","modified_gmt":"2019-03-03T18:42:32","slug":"linux-raid","status":"publish","type":"post","link":"https:\/\/notiz.comanet.xyz\/?p=46","title":{"rendered":"Linux Raid"},"content":{"rendered":"<h2> \t<span class=\"mw-headline\" id=\"Software_Raid_.28mdadm.29\">Software Raid <b>(mdadm)<\/b> <\/span><\/h2>\n<p> \t<a class=\"external free\" href=\"http:\/\/www.devil-linux.org\/documentation\/1.0.x\/ch01s05.html\" rel=\"nofollow\">http:\/\/www.devil-linux.org\/documentation\/1.0.x\/ch01s05.html<\/a><\/p>\n<p> \t&nbsp;<\/p>\n<h3> \t<span class=\"mw-headline\" id=\"Setting_up_RAID_devices_and_config_files\">Setting up RAID devices and config files <\/span><\/h3>\n<p> \t<b>Prepare \/etc\/mdadm.conf<\/b><\/p>\n<pre>echo &#39;DEVICE \/dev\/hd* \/dev\/sd*&#39; &gt; \/etc\/mdadm.conf <\/pre>\n<p> \t<b>Preparing the harddisks<\/b><\/p>\n<p> \tFor now we assume that we want to create either a RAID-0 or a RAID-1 array. For RAID-5 you just have to add more partitions (and therefore harddisks). Create a partition of maximal size on each disk. fdisk -l should then show something like this:<\/p>\n<pre># fdisk -l\n\nDisk \/dev\/hda: 16 heads, 63 sectors, 79780 cylinders\nUnits = cylinders of 1008 * 512 bytes\n\n   Device Boot    Start       End     Blocks   Id  System\n\/dev\/hda1            1     79780  40209088+   fd  Linux raid autodetect\n\nDisk \/dev\/hdc: 16 heads, 63 sectors, 79780 cylinders\nUnits = cylinders of 1008 * 512 bytes\n\n   Device Boot    Start       End     Blocks   Id  System\n\/dev\/hdc1            1     79780  40209088+   fd  Linux raid autodetect <\/pre>\n<p> \t<b>[Note]<\/b><\/p>\n<p> \tAs Devil-Linux does not include raid autodetect &#8211; there&#39;s really no need for it (read the linux-raid mailing list!) &#8211; we just use the partition type &quot;fd, Linux raid autodetect&quot; to mark those partitions for ourselves. 
You can of course use the standard partition type &quot;83, Linux&quot; &#8211; but hey, someone might format it&nbsp;\ud83d\ude09<\/p>\n<h3> \t<span class=\"mw-headline\" id=\"RAID-0_.28no_redundancy.21.21.29\">RAID-0 (no redundancy!!) <\/span><\/h3>\n<p> \t<b>Use mdadm to create a RAID-0 device:<\/b><\/p>\n<pre>mdadm --create \/dev\/md0 --chunk=64 --level=raid0 --raid-devices=2 \/dev\/hda1 \/dev\/hdc1 <\/pre>\n<p> \tInstead of \/dev\/md0, use any other md device if \/dev\/md0 is already in use by another array. You might also want to experiment with the chunk size (e.g. 8, 16, 32, 64, 128); use a harddisk benchmark to check, or stay with the default chunk size of 64k. You will probably have to change the device names given here to the ones that reflect the setup of your system.<\/p>\n<pre># cat \/proc\/mdstat\nPersonalities : [raid0]\nread_ahead 1024 sectors\nmd0 : active raid0 hdc1[1] hda1[0]\n     80418048 blocks 128k chunks <\/pre>\n<p> \tOk, that looks fine.<\/p>\n<h3> \t<span class=\"mw-headline\" id=\"RAID-1_.28with_redundancy.21.21.29\">RAID-1 (with redundancy!!) <\/span><\/h3>\n<p> \t<b>Use mdadm to create a RAID-1 device:<\/b><\/p>\n<pre>mdadm --create \/dev\/md0 --chunk=64 --level=raid1 --raid-devices=2 \/dev\/hda1 \/dev\/hdc1 <\/pre>\n<p> \tAs with RAID-0 above: pick a free md device, adjust the device names to your system, and experiment with the chunk size if you like.<\/p>\n<pre># cat \/proc\/mdstat\nPersonalities : [raid0] [raid1]\nread_ahead 1024 sectors\nmd0 : active raid1 hdc1[1] hda1[0]\n     40209024 blocks [2\/2] [UU]\n     [&gt;....................]  
resync =  0.7% (301120\/40209024) finish=17.6min speed=37640K\/sec <\/pre>\n<p> \tOk, that looks fine.<\/p>\n<p> \t<b>[Note]<\/b><\/p>\n<p> \tBefore rebooting, wait until this resync is done; otherwise it will start over after the system is up again.<\/p>\n<p> \t<b>Save the information about the just created array(s)<\/b><\/p>\n<pre># mdadm --detail --scan &gt;&gt; \/etc\/mdadm.conf <\/pre>\n<pre># cat \/etc\/mdadm.conf\nDEVICE \/dev\/hd* \/dev\/sd*\nARRAY \/dev\/md0 level=raid1 num-devices=2 UUID=d876333b:694e852b:e9a6f40f:0beb90f9 <\/pre>\n<p> \tLooks good too!<\/p>\n<p> \tNow you can put LVM on top of the just created arrays to facilitate the auto-mounting of logical volumes in the devil-linux volume group.<\/p>\n<p> \tDon&#39;t forget to run a final save-config!<\/p>\n<h3> \t<span class=\"mw-headline\" id=\"Gathering_information_about_RAID_devices_and_disks\">Gathering information about RAID devices and disks <\/span><\/h3>\n<p> \t<b>Show current status of raid devices<\/b><\/p>\n<pre>cat \/proc\/mdstat <\/pre>\n<p> \tOutput for a currently degraded RAID-1 with a failed disk:<\/p>\n<pre>Personalities : [raid0] [raid1]\nread_ahead 1024 sectors\nmd0 : active raid1 hdc1[2](F) hda1[0]\n     40209024 blocks [2\/1] [U_]\nunused devices: &lt;none&gt;<\/pre>\n<p> \tOutput for a currently degraded RAID-1 with the faulty disk removed:<\/p>\n<pre>Personalities : [raid0] [raid1]\nread_ahead 1024 sectors\nmd0 : active raid1 hda1[0]\n     40209024 blocks [2\/1] [U_]\nunused devices: &lt;none&gt;<\/pre>\n<p> \tOutput for a currently rebuilding RAID-1:<\/p>\n<pre>Personalities : [raid0] [raid1]\nread_ahead 1024 sectors\nmd0 : active raid1 hdc1[2] hda1[0]\n     40209024 blocks [2\/1] [U_]\n     [=======&gt;.............]  
recovery = 37.1% (14934592\/40209024) finish=11.7min speed=35928K\/sec\nunused devices: &lt;none&gt;<\/pre>\n<p> \t<b>Get more detailed info about RAID devices<\/b><\/p>\n<pre># mdadm --query \/dev\/md0\n\/dev\/md0: 38.35GiB raid1 2 devices, 3 spares. Use mdadm --detail for more detail.\n\/dev\/md0: No md super block found, not an md component. <\/pre>\n<pre># mdadm --detail \/dev\/md0\n\/dev\/md0:\n        Version : 00.90.00\n  Creation Time : Mon Jan 20 22:53:28 2003\n     Raid Level : raid1\n     Array Size : 40209024 (38.35 GiB 41.22 GB)\n    Device Size : 40209024 (38.35 GiB 41.22 GB)\n   Raid Devices : 2\n  Total Devices : 2\nPreferred Minor : 0\n    Persistence : Superblock is persistent\n\n    Update Time : Tue Jan 21 00:49:47 2003\n          State : dirty, no-errors\n Active Devices : 2\nWorking Devices : 2\n Failed Devices : 0\n  Spare Devices : 0\n\n    Number   Major   Minor   RaidDevice State\n       0       3        1        0      active sync   \/dev\/hda1\n       1      22        1        1      active sync   \/dev\/hdc1\n           UUID : d876333b:694e852b:e9a6f40f:0beb90f9 <\/pre>\n<p> \t<b>Get more info about disks<\/b><\/p>\n<pre># mdadm --query \/dev\/hda1\n\/dev\/hda1: is not an md array\n\/dev\/hda1: device 0 in 2 device active raid1 md0&#8230; <\/pre>\n<pre># mdadm --query \/dev\/hdc1\n\/dev\/hdc1: is not an md array\n\/dev\/hdc1: device 1 in 2 device active raid1 md0&#8230; <\/pre>\n<p> \t&nbsp;<\/p>\n<h3> \t<span class=\"mw-headline\" id=\"Managing_RAID_devices_.28RAID-1_and_up.21.21.29\">Managing RAID devices (RAID-1 and up!!) 
<\/span><\/h3>\n<p> \t<b>Setting a disk faulty\/failed:<\/b><\/p>\n<pre># mdadm --fail \/dev\/md0 \/dev\/hdc1 <\/pre>\n<p> \t<b>[Caution]<\/b><\/p>\n<p> \tDO NOT ever run this on a raid0 or linear device, or your data is toast!<\/p>\n<p> \t<b>Removing a faulty disk from an array:<\/b><\/p>\n<pre># mdadm --remove \/dev\/md0 \/dev\/hdc1 <\/pre>\n<p> \t<b>Clearing any previous raid info on a disk<\/b> (e.g. reusing a disk from another decommissioned raid array)<\/p>\n<pre># mdadm --zero-superblock \/dev\/hdc1 <\/pre>\n<p> \t<b>Adding a disk to an array<\/b><\/p>\n<pre># mdadm --add \/dev\/md0 \/dev\/hdc1 <\/pre>\n<p> \t&nbsp;<\/p>\n<h3> \t<span class=\"mw-headline\" id=\"Commands\">Commands <\/span><\/h3>\n<pre># cat \/proc\/mdstat\n# mdadm --detail \/dev\/md0\n# mdadm --add \/dev\/md0 \/dev\/hda1 <\/pre>\n<hr \/>\n<h2> \t<span class=\"mw-headline\" id=\"Rebuilding_failed_Linux_software_RAID_raidhotadd\">Rebuilding failed Linux software RAID <b>raidhotadd<\/b> <\/span><\/h2>\n<p> \tRecently I had a hard drive fail. It was part of a Linux software RAID 1 (mirrored drives), so we lost no data and just needed to replace hardware. However, the raid does require rebuilding. A hardware array would usually rebuild automatically upon drive replacement, but this one needed some help.<\/p>\n<p> \tWhen you look at a &quot;normal&quot; array, you see something like this:<\/p>\n<pre># cat \/proc\/mdstat\nPersonalities : [raid1]\nread_ahead 1024 sectors\nmd2 : active raid1 hda3[1] hdb3[0]\n    262016 blocks [2\/2] [UU]\nmd1 : active raid1 hda2[1] hdb2[0]\n    119684160 blocks [2\/2] [UU]\nmd0 : active raid1 hda1[1] hdb1[0]\n    102208 blocks [2\/2] [UU]\nunused devices: &lt;none&gt;<\/pre>\n<p> \tThat&#39;s the normal state &#8211; what you want it to look like. 
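<\/p>\n<p> \tAs an aside (my addition, not from the original article): the [UU]\/[_U] status fields shown above are easy to check from a script. A minimal sketch, assuming the usual \/proc\/mdstat layout; the helper name check_degraded is made up:<\/p>

```shell
# check_degraded: print the name of every md array whose status field
# contains an underscore (a missing member), e.g. [_U] or [U_].
# Reads /proc/mdstat by default; pass a file argument for testing.
check_degraded() {
  awk '
    /^md/    { dev = $1 }                    # remember the current array name
    /blocks/ { if ($0 ~ /\[[U_]*_[U_]*\]/)   # status field with a hole in it
                 print dev }
  ' ${1:-/proc/mdstat}
}
```

<p> \t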
When a drive has failed and been replaced, it looks like this:<\/p>\n<pre>Personalities : [raid1]\nread_ahead 1024 sectors\nmd0 : active raid1 hda1[1]\n    102208 blocks [2\/1] [_U]\nmd2 : active raid1 hda3[1]\n    262016 blocks [2\/1] [_U]\nmd1 : active raid1 hda2[1]\n    119684160 blocks [2\/1] [_U]\nunused devices: &lt;none&gt;<\/pre>\n<p> \tNotice that it doesn&#39;t list the failed drive parts, and that an underscore appears beside each U. This shows that only one drive is active in these arrays &#8211; we have no mirror.<\/p>\n<p> \tAnother command that will show us the state of the raid drives is &quot;mdadm&quot;:<\/p>\n<pre># mdadm -D \/dev\/md0\n\/dev\/md0:\n        Version : 00.90.00\n  Creation Time : Thu Aug 21 12:22:43 2003\n     Raid Level : raid1\n     Array Size : 102208 (99.81 MiB 104.66 MB)\n    Device Size : 102208 (99.81 MiB 104.66 MB)\n   Raid Devices : 2\n  Total Devices : 1\nPreferred Minor : 0\n    Persistence : Superblock is persistent\n\n    Update Time : Fri Oct 15 06:25:45 2004\n          State : dirty, no-errors\n Active Devices : 1\nWorking Devices : 1\n Failed Devices : 0\n  Spare Devices : 0\n\n    Number   Major   Minor   RaidDevice State\n       0       0        0        0      faulty removed\n       1       3        1        1      active sync   \/dev\/hda1\n           UUID : f9401842:995dc86c:b4102b57:f2996278 <\/pre>\n<p> \tAs this shows, we presently only have one drive in the array.<\/p>\n<p> \tAlthough I already knew that \/dev\/hdb was the other part of the raid array, you can look at \/etc\/raidtab to see how the raid was defined:<\/p>\n<pre>raiddev             \/dev\/md1\n        raid-level              1\n        nr-raid-disks           2\n        chunk-size              64k\n        persistent-superblock   1\n        nr-spare-disks          0\n        device          \/dev\/hda2\n        raid-disk       0\n        device          \/dev\/hdb2\n        raid-disk       1\n\nraiddev             \/dev\/md0\n        raid-level              1\n        nr-raid-disks           2\n        chunk-size              64k\n        persistent-superblock   1\n        nr-spare-disks          0\n        device          \/dev\/hda1\n        raid-disk       0\n        device          \/dev\/hdb1\n        raid-disk       1\n\nraiddev             \/dev\/md2\n        raid-level              1\n        nr-raid-disks           2\n        chunk-size              64k\n        persistent-superblock   1\n        nr-spare-disks          0\n        device          \/dev\/hda3\n        raid-disk       0\n        device          \/dev\/hdb3\n        raid-disk       1 <\/pre>\n<p> \tTo get the mirrored drives working properly again, we need to run fdisk to see what partitions are on the working drive:<\/p>\n<pre># fdisk \/dev\/hda\n\nCommand (m for help): p\n\nDisk \/dev\/hda: 255 heads, 63 sectors, 14946 cylinders\nUnits = cylinders of 16065 * 512 bytes\n\n   Device Boot    Start       End     Blocks   Id  System\n\/dev\/hda1   *        1        13     104391   fd  Linux raid autodetect\n\/dev\/hda2           14     14913  119684250   fd  Linux raid autodetect\n\/dev\/hda3        14914     14946    265072+   fd  Linux raid autodetect <\/pre>\n<p> \tDuplicate that on \/dev\/hdb. Use &quot;n&quot; to create the partitions, and &quot;t&quot; to change their type to &quot;fd&quot; to match. 
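<\/p>\n<p> \tAn aside (my addition, not from the original article): instead of re-creating each partition by hand, the whole partition table can usually be copied in one step with sfdisk. A sketch only &#8211; double-check the source and target devices first, because this overwrites \/dev\/hdb&#39;s partition table:<\/p>

```shell
# Dump /dev/hda's partition table and replay it onto /dev/hdb.
# DANGEROUS: overwrites the partition table of /dev/hdb.
sfdisk -d /dev/hda | sfdisk /dev/hdb
```

<p> \t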
Once this is done, use &quot;raidhotadd&quot;:<\/p>\n<pre># raidhotadd \/dev\/md0 \/dev\/hdb1\n# raidhotadd \/dev\/md1 \/dev\/hdb2\n# raidhotadd \/dev\/md2 \/dev\/hdb3 <\/pre>\n<p> \tThe rebuilding can be seen in \/proc\/mdstat:<\/p>\n<pre># cat \/proc\/mdstat\nPersonalities : [raid1]\nread_ahead 1024 sectors\nmd0 : active raid1 hdb1[0] hda1[1]\n     102208 blocks [2\/2] [UU]\nmd2 : active raid1 hda3[1]\n     262016 blocks [2\/1] [_U]\nmd1 : active raid1 hdb2[2] hda2[1]\n     119684160 blocks [2\/1] [_U]\n     [&gt;....................]  recovery =  0.2% (250108\/119684160) finish=198.8min speed=10004K\/sec\nunused devices: &lt;none&gt;<\/pre>\n<p> \tmd0, a small array, has already completed rebuilding (UU), while md1 has only begun. After it finishes, it will show:<\/p>\n<pre># mdadm -D \/dev\/md1\n\/dev\/md1:\n        Version : 00.90.00\n  Creation Time : Thu Aug 21 12:21:21 2003\n     Raid Level : raid1\n     Array Size : 119684160 (114.13 GiB 122.55 GB)\n    Device Size : 119684160 (114.13 GiB 122.55 GB)\n   Raid Devices : 2\n  Total Devices : 2\nPreferred Minor : 1\n    Persistence : Superblock is persistent\n\n    Update Time : Fri Oct 15 13:19:11 2004\n          State : dirty, no-errors\n Active Devices : 2\nWorking Devices : 2\n Failed Devices : 0\n  Spare Devices : 0\n\n    Number   Major   Minor   RaidDevice State\n       0       3       66        0      active sync   \/dev\/hdb2\n       1       3        2        1      active sync   \/dev\/hda2\n           UUID : ede70f08:0fdf752d:b408d85a:ada8922b <\/pre>\n<p> \tI was a little surprised that this process wasn&#39;t entirely automatic. 
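<\/p>\n<p> \tAs an aside (my addition): the finish estimate in the recovery line above is just the remaining blocks divided by the current speed, which is easy to verify with the numbers shown:<\/p>

```shell
# Reproduce the kernel's finish estimate from the mdstat recovery line:
# remaining kibibyte blocks / speed (K/sec) -> seconds left.
total_kb=119684160   # array size in blocks (KiB)
done_kb=250108       # blocks already rebuilt
speed=10004          # rebuild speed in K/sec
eta_min=$(( (total_kb - done_kb) / speed / 60 ))
echo about $eta_min minutes left   # ~198, matching finish=198.8min
```

<p> \t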
There&#39;s no reason it couldn&#39;t be. This is an older Linux install; I don&#39;t know if more modern versions will just automatically rebuild.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Software Raid (mdadm) http:\/\/www.devil-linux.org\/documentation\/1.0.x\/ch01s05.html &nbsp; Setting up RAID devices and config files Prepare \/etc\/mdadm.conf echo &#39;DEVICE \/dev\/hd* \/dev\/sd*&#39; &gt; \/etc\/mdadm.conf Preparing the harddisk For now we assume that we either want to create a RAID-0 or RAID-1 array. For RAID-5 you just have to add more partitions (and therefor harddisks). Create a partition on each&#8230;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[6],"tags":[],"class_list":["post-46","post","type-post","status-publish","format-standard","hentry","category-linux"],"_links":{"self":[{"href":"https:\/\/notiz.comanet.xyz\/index.php?rest_route=\/wp\/v2\/posts\/46","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/notiz.comanet.xyz\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/notiz.comanet.xyz\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/notiz.comanet.xyz\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/notiz.comanet.xyz\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=46"}],"version-history":[{"count":1,"href":"https:\/\/notiz.comanet.xyz\/index.php?rest_route=\/wp\/v2\/posts\/46\/revisions"}],"predecessor-version":[{"id":470,"href":"https:\/\/notiz.comanet.xyz\/index.php?rest_route=\/wp\/v2\/posts\/46\/revisions\/470"}],"wp:attachment":[{"href":"https:\/\/notiz.comanet.xyz\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=46"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/notiz.comanet.xyz\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=46"},{"taxo
nomy":"post_tag","embeddable":true,"href":"https:\/\/notiz.comanet.xyz\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=46"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}