Convert a single drive system to RAID
You already have a fully functional system set up on a single drive, but you would like to add some redundancy by using RAID-1 to mirror your data across two drives. This guide makes the required changes, without losing data, in the following steps:
- Create a single-disk RAID-1 array with the new disk
- Move all your data from the old disk to the new RAID-1 array
- Verify the data move was successful
- Wipe the old disk and add it to the new RAID-1 array
Assumptions
- I will assume for the sake of the guide that the disk currently in your system is /dev/sda and your new disk is /dev/sdb.
- We will create the following configuration:
- 1 x RAID-1 array for the filesystem (using 2 partitions, 1 on each disk)
- 2 x swap partitions (1 on each disk)
The swap partitions will not be in a RAID array as having swap on RAID serves no purpose. Refer to this article for reasons why.
- To minimize the risk of the data on disk changing in the middle of the process, I suggest you drop to single-user mode before you start, using the init 1 command.
- You will need to be the root user for the entire process.
Create New RAID Array
First we need to create a single-disk RAID array using the new disk.
Partition the Disk
Use fdisk or your partitioning program of choice to set up 2 primary partitions on your new disk. Make the swap partition half the size of the total swap you want (the other half will go on the other disk).
Drop to single user mode:
init 1
To see the current partitions:
fdisk -l
To partition the new disk:
fdisk /dev/sdb
Then enter the following fdisk commands to partition the new disk. Note that everything after the "#" is an explanation of what the command is doing:
n            # new
p            # primary
1            # first partition
1            # start at first cylinder
101          # end cylinder, 0.1% of the disk. Note: update this number as appropriate for your disk.
n            # new
p            # primary
2            # second partition
press enter  # uses the default: start from the end of the first partition
press enter  # uses the default: use all the remaining space on the disk
t            # set the partition type
1            # ... for partition number 1
82           # ... and set it to be swap
t            # set the partition type
2            # ... for partition number 2
fd           # ... and set it to be "linux raid auto"
p            # print what the partition table will look like
w            # now write all of the above changes to disk
At the end of partitioning, your partitions should look something like this:
[root@arch ~]# fdisk -l /dev/sdb

Disk /dev/sdb: 80.0 GB, 80025280000 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1          66      530113+  82  Linux swap / Solaris
/dev/sdb2              67        9729    77618047+  fd  Linux raid autodetect
Make sure your partition types are set correctly. "Linux swap" is type 82 and "Linux raid autodetect" is type FD.
Create the RAID Device
Next, create the single-disk RAID-1 array. Note the "missing" keyword is specified as one of our devices.
[root@arch ~]# mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb2
mdadm: array /dev/md0 started.
Note: if the above command causes mdadm to say "no such device /dev/sdb2", then reboot, and run the command again.
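If you would rather avoid the reboot and the parted package is installed, asking the kernel to re-read the partition table may be enough (a convenience sketch; partprobe ships with parted and may not be present on a minimal install):
# ask the kernel to re-read the new disk's partition table
partprobe /dev/sdb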
Make sure the array has been created correctly by checking /proc/mdstat:
[root@arch ~]# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]
md0 : active raid1 sdb2[1]
      40064 blocks [2/1] [_U]

unused devices: <none>
The array is active, but in a degraded state, because it is missing half of its devices.
Make File Systems
Use whatever filesystem is your preference here. I'll use ext3 for this guide.
[root@arch ~]# mkfs -t ext3 -j -L RAID-ONE /dev/md0
mke2fs 1.38 (30-Jun-2005)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
10027008 inodes, 20027008 blocks
1001350 blocks (5.00%) reserved for the super user
First data block=0
612 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
        2654208, 4096000, 7962624, 11239424

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 25 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
Initialize swap on the swap partition:
[root@arch ~]# mkswap /dev/sdb1 -L NEW-SWAP
Setting up swapspace version 1, size = 271314 kB
no label, UUID=9d746813-2d6b-4706-a56a-ecfd108f3fe9
Copy Data
The new RAID-1 array is ready to start accepting data! Now we need to mount the array and copy everything from the old system to the new one.
Mount the Array
[root@arch ~]# mkdir /mnt/new-raid
[root@arch ~]# mount /dev/md0 /mnt/new-raid
Copy the Data
[root@arch ~]# cd /mnt/new-raid
[root@arch new-raid]# rsync -avH --delete --progress -x / /mnt/new-raid
Alternatively, you can use tar instead of the above rsync command; however, rsync will be quicker if you are only copying over changes. The tar command (run from within /mnt/new-raid) is:
tar -C / -clspf - . | tar -xlspvf -
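Before moving on, a quick way to confirm the copy is complete is to repeat the rsync in dry-run mode; it should list little or nothing left to transfer. This is only a sanity-check sketch using the same options as above plus -n (dry run):
# dry run: report what would still be copied, without changing anything
rsync -avHn --delete -x / /mnt/new-raid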
Update GRUB
Use your preferred text editor to open /mnt/new-raid/boot/grub/menu.lst.
--- SNIP ---
default 0
color light-blue/black light-cyan/blue

## fallback
fallback 1

# (0) Arch Linux
title  Arch Linux - Original Disc
root   (hd0,0)
kernel /vmlinuz26 root=/dev/sda1

# (1) Arch Linux
title  Arch Linux - New RAID
root   (hd1,0)
#kernel /vmlinuz26 root=/dev/sda1 ro
kernel /vmlinuz26 root=/dev/md0
--- SNIP ---
Notice we added the fallback line and duplicated the Arch Linux entry with a different root directive on the kernel line.
Also update the "kopt" and "groot" sections, as shown below, if they are present in your /mnt/new-raid/boot/grub/menu.lst file; this will make applying distribution kernel updates easier:
- # kopt=root=UUID=fbafab1a-18f5-4bb9-9e66-a71c1b00977e ro
+ # kopt=root=/dev/md0 ro

## default grub root device
## e.g. groot=(hd0,0)
- # groot=(hd0,0)
+ # groot=(hd0,1)
Alter fstab
You need to update the fstab on the new disk to point at the new devices. It is better to use UUIDs here, since they do not change even if the partition detection order changes or a drive is removed.
To find the UUID to use:
[root@arch ~]# blkid
/dev/sda1: TYPE="swap" UUID="34656682b-34ad-8ed5-9233-dfab42272212"
/dev/sdb1: UUID="9ff5682b-d5a1-4ed5-8d63-d1df911e0142" TYPE="swap" LABEL="NEW-SWAP"
/dev/md0: UUID="6f2ea3d3-d7be-4c9d-adfa-dbeeedaf128e" SEC_TYPE="ext2" TYPE="ext3" LABEL="RAID-ONE"
/dev/sda2: UUID="13dd2227-6592-403b-931a-7f3e14a23e1f" TYPE="ext2"
/dev/sdb2: UUID="b28813e7-15fc-d4aa-dc8a-e2c1de641df1" TYPE="mdraid"
Look for the partition labeled "NEW-SWAP" on /dev/sdb1, which we created above. Copy your swap partition's UUID into the new fstab, as shown below. Of course, we also add /dev/md0 as our root mount point.
[root@arch ~]# cat /mnt/new-raid/etc/fstab
/dev/md0                                    /     ext3  defaults  0  1
UUID=9ff5682b-d5a1-4ed5-8d63-d1df911e0142   none  swap  sw        0  0
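If you prefer not to copy the UUID by hand, blkid can print just the value on its own; -s selects the tag and -o value prints only its contents (a small convenience sketch):
# print only the UUID of the new swap partition
blkid -s UUID -o value /dev/sdb1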
Rebuild initcpio or initramfs
[root@arch ~]# mount --bind /sys /mnt/new-raid/sys
[root@arch ~]# mount --bind /proc /mnt/new-raid/proc
[root@arch ~]# mount --bind /dev /mnt/new-raid/dev
[root@arch ~]# chroot /mnt/new-raid/
[root ~]#
You are now chrooted in what will become the root of your RAID-1 system. Complete the appropriate section below for your distribution (almost every other step is identical, irrespective of the Linux variant).
For Arch linux: Rebuild initcpio
Edit /etc/mkinitcpio.conf to include raid in the HOOKS array. Place it before autodetect but after sata, scsi and pata (whichever is appropriate for your hardware).
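For illustration only, on a SATA-only machine the resulting HOOKS line might look something like the following; this is a sketch, so keep whatever other hooks your existing mkinitcpio.conf already contains and only add raid in the position described above:
HOOKS="base udev sata raid autodetect filesystems"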
[root ~]# mkinitcpio -g /boot/kernel26.img
[root ~]# exit
For Ubuntu or Debian: Rebuild initramfs
nano /etc/mdadm/mdadm.conf
... and change the "MAILADDR" line to be your email address, if you want emailed alerts of problems with the RAID-1.
Then save the array configuration with UUIDs to make it easier for the system to find /dev/md0 on bootup. If you don't do this, you can get an "ALERT! /dev/md0 does not exist" error when booting:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
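The appended line will look something like the following (the UUID shown is a placeholder; yours will differ):
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx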
Then rebuild initramfs, incorporating the above two changes:
update-initramfs -k `uname -r` -c -t
This rebuilds the initramfs for the currently running kernel version. To rebuild the others, first list the installed kernel versions (on Ubuntu):
ls /boot/ | perl -lne "/^[A-z\.\-]+/m && print $'" | egrep -e 'openvz$|generic$|server$' | sort -u
Then run update-initramfs for each of those versions as well, substituting each version for `uname -r` in the command above, so that every installed kernel is usable with the new RAID setup.
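As a sketch, a loop like the following would cover every kernel version present in /boot (it assumes the kernels are named vmlinuz-<version>, as on stock Ubuntu and Debian installs):
# rebuild the initramfs for every installed kernel version
for v in $(ls /boot | sed -n 's/^vmlinuz-//p'); do
    update-initramfs -k "$v" -c -t
done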
Install GRUB on the RAID Array
Start grub:
[root@arch /]# grub --no-floppy
Then we find our two boot partitions: the current one, (hd0,0) (i.e. first disk, first partition), and (hd1,1) (i.e. the RAID partition we created above, the second partition of the second drive). Check that you get two results here:
grub> find /boot/grub/stage1
 (hd0,0)
 (hd1,1)
Then we tell GRUB to assume the new second drive is (hd0), i.e. the first disk in the system, even though that is not currently the case. However, if your first disk fails and you remove it, or you change the order in which disks are detected in the BIOS so that you can boot from your second disk, then your second disk becomes the first disk in the system. The MBR we are about to write will then be correct, and you will be able to boot from this disk.
grub> device (hd0) /dev/sdb
Then we install GRUB onto the MBR of our new second drive. Check that the "partition type" is detected as "0xfd", as shown below, to make sure you have the right partition:
grub> root (hd0,1)
 Filesystem type is ext2fs, partition type 0xfd

grub> setup (hd0)
 Checking if "/boot/grub/stage1" exists... yes
 Checking if "/boot/grub/stage2" exists... yes
 Checking if "/boot/grub/e2fs_stage1_5" exists... yes
 Running "embed /boot/grub/e2fs_stage1_5 (hd0)"... 16 sectors are embedded.
succeeded
 Running "install /boot/grub/stage1 (hd0) (hd0)1+16 p (hd0,1)/boot/grub/stage2 /boot/grub/grub.conf"... succeeded
Done

grub> quit
Verify Success
Reboot your computer, making sure it boots from the new RAID disk (/dev/sdb) and not the original disk (/dev/sda). You may need to change the boot device priorities in your BIOS to do this.
Once the GRUB on the new disk loads, make sure you select to boot the new entry you created in menu.lst earlier.
Verify you have booted from the RAID array by looking at the output of mount. You should have a line similar to the following in the output:
[root@arch ~]# mount
/dev/md0 on / type ext3 (rw)
Also check swapon -s:
[root@arch ~]# swapon -s
Filename       Type        Size     Used  Priority
/dev/sdb1      partition   4000144  16    -1
Note that it is the swap partition on sdb that is in use, and nothing from sda.
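For a further check of the array itself, mdadm can report the member state directly; at this point one slot should show as removed, because the original disk has not been added yet:
# show array detail; expect /dev/sdb2 active and the other slot missing
mdadm --detail /dev/md0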
If the system boots fine and the output of the above commands is correct, then congratulations! You are now running off the degraded RAID array. We can now add the original disk to the array to restore full redundancy.
Add Original Disk to Array
Partition Original Disk
Take the output of fdisk -l on your new disk, and make the partitions on your original disk look the same.
[root@arch ~]# fdisk -l /dev/sda

Disk /dev/sda: 80.0 GB, 80025280000 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1          66      530113+  82  Linux swap / Solaris
/dev/sda2              67        9729    77618047+  fd  Linux raid autodetect
Another way: dump the partition layout of the new disk (/dev/sdb) to a file, then use it as input for partitioning the original disk (/dev/sda). Use --force if needed:
[root@arch ~]# sfdisk -d /dev/sdb > raidinfo-partitions.sdb
[root@arch ~]# sfdisk /dev/sda < raidinfo-partitions.sdb
[root@arch ~]# sfdisk --force /dev/sda < raidinfo-partitions.sdb
To verify that the partitioning is identical:
[root@arch ~]# fdisk -l
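If you want a stricter comparison than eyeballing the fdisk output, you can diff the two sfdisk dumps after normalizing the device names (a sketch that assumes a bash shell for the process substitution):
# compare the partition tables, ignoring the differing device names
diff <(sfdisk -d /dev/sda | sed 's|/dev/sda|DISK|g') \
     <(sfdisk -d /dev/sdb | sed 's|/dev/sdb|DISK|g')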
Note
If you get an error when attempting to add the partition to the array:
mdadm: /dev/sda2 not large enough to join array
You might have seen a warning when partitioning this disk that the kernel is still using the old partition table. A reboot ought to fix this; then try adding the partition to the array again.
Add Disk Partition to Array
[root@arch ~]# mdadm /dev/md0 -a /dev/sda2
mdadm: hot added /dev/sda2
Verify that the RAID array is being rebuilt by looking at /proc/mdstat:
[root@arch ~]# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]
md0 : active raid1 sda2[2] sdb2[1]
      80108032 blocks [2/1] [_U]
      [>....................]  recovery =  1.2% (1002176/80108032) finish=42.0min speed=31318K/sec

unused devices: <none>
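You can leave the rebuild running and keep an eye on its progress, for example with watch (from the procps package):
# refresh the rebuild status every 2 seconds; press Ctrl+C to exit
watch -n 2 cat /proc/mdstat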