
Raid


Regenerate /etc/mdadm/mdadm.conf

Edit /etc/mdadm/mdadm.conf and delete everything except the top line:

DEVICE /dev/sda* /dev/sdb*

Then run:

mdadm --examine --scan >> /etc/mdadm/mdadm.conf

Your final file should look like:

DEVICE /dev/sda* /dev/sdb*
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=4d58ade4:dd80faa9:19f447f8:23d355e3
  devices=/dev/sda1,/dev/sdb1
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=3f1bdce2:c55460b0:9262fd47:3c94b6ab
  devices=/dev/sda5,/dev/sdb5
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=7dfd7fcb:d65245d6:f9da98db:f670d7b6
  devices=/dev/sdb6,/dev/sda6

Note: you need both the top DEVICE line and the devices= lines under each ARRAY entry.
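
To double-check the result, compare the new file against what the kernel is actually running (a quick sanity check; the array names follow the example above):

mdadm --detail --scan

The ARRAY lines it prints should carry the same UUIDs as the entries now in /etc/mdadm/mdadm.conf.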

Set up the monitoring daemon

Just run:

dpkg-reconfigure mdadm
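
Before trusting it, you can make the monitor send a test alert for every array it knows about (this assumes a MAILADDR line ended up in /etc/mdadm/mdadm.conf, or that you gave dpkg-reconfigure a mail address):

mdadm --monitor --scan --oneshot --test

This sends one TestMessage mail per array and exits, which is a quick way to confirm that mail delivery works.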

Test, Test, and Test

Test boot from both drives

Kill a drive and see if you get an email about the event.
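
If you would rather not pull a cable, you can fake the failure in software (a sketch using the device names from this page; pick a real array and partition on your system):

mdadm /dev/md1 --fail /dev/sda2

The monitor daemon should mail you a Fail event for md1. Afterwards, remove and re-add the partition as shown in the Re-adding a Faulted Drive section below so the array rebuilds.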

Write up a step-by-step procedure to restore from a drive outage (and send a copy this way for this page!).
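
As a starting point, a bare-bones replacement procedure for the two-disk RAID1 layout above might look like this (an untested sketch; it assumes /dev/sda is the new, blank disk and /dev/sdb is the surviving one, so swap the names to suit):

Copy the partition table from the good disk to the new one:

sfdisk -d /dev/sdb | sfdisk /dev/sda

Add each new partition back into its array:

mdadm /dev/md0 -a /dev/sda1
mdadm /dev/md1 -a /dev/sda5
mdadm /dev/md2 -a /dev/sda6

Make the new disk bootable:

grub-install /dev/sda

Then watch cat /proc/mdstat until every array shows [2/2] [UU] again.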

You should be all finished!


Special thanks to Onni Koskinen of Finland, whose gentle yet expert emails removed several glaring errors on this page and resulted in a vastly improved document.

Growing Raid1 Arrays

Moved to its own page: Growing_Partitions_and_file_systems

Re-adding a Faulted Drive

First, look at proc:

cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sda2[2](F) sdb2[1]
      70645760 blocks [2/1] [_U]
md0 : active raid1 sda1[0] sdb1[1]
      9767424 blocks [2/2] [UU]
unused devices: <none>


This shows that array md1 has partition sda2 marked as failed, as indicated by the (F) flag and the [_U] status.

To re-add it, hot-remove the failed partition and then add it back:

# mdadm /dev/md1 -r /dev/sda2
 mdadm: hot removed /dev/sda2
# mdadm /dev/md1 -a /dev/sda2
 mdadm: re-added /dev/sda2

Now you will see it regenerate in mdstat:

Personalities : [raid1]
md1 : active raid1 sda2[2] sdb2[1]
      70645760 blocks [2/1] [_U]
      [>....................]  recovery =  0.3% (268800/70645760) finish=21.8min speed=53760K/sec
md0 : active raid1 sda1[0] sdb1[1]
      9767424 blocks [2/2] [UU]
unused devices: <none>


If you have to re-add the same drive more than once, you need to find out why.
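
A reasonable first step (assuming the smartmontools package is installed; sda is just the example device from above) is to check the drive's SMART data and the kernel log for the underlying cause:

smartctl -a /dev/sda
dmesg | grep -i sda

Look for reallocated or pending sectors in the SMART output and for I/O errors in the kernel log. A drive that keeps dropping out of the array should be replaced, not re-added yet again.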

Q & Answers

Q: If one drive dies, will the machine still boot?

A: Yes, IF you install Grub on both drives and your BIOS will roll over to the first bootable drive.
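
A minimal way to do that (a sketch using the device names from the examples above) is to install the boot loader onto the MBR of both disks when you set up the arrays:

grub-install /dev/sda
grub-install /dev/sdb

Then, if the first disk dies, the BIOS can fall back to the second disk and still find a working boot loader.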

To look inside an old-style initrd image (a loop-mountable filesystem image), mount a copy of it:

mount -o loop /tmp/myinitrd /mnt/myinitrd

