Debian Squeeze RAID boot

This document gives an example of how to set up a clean Linux system that can boot directly from a software RAID device.

The procedure uses Debian 6.0 (Squeeze 6.0.0, released on 2011-02-06), and it's quite straightforward, as it's mostly guided by the Debian installer itself. The GRUB-2 boot loader provided is version 1.98, which supports md levels 1, 5 and 6 as boot devices, as well as the new 1.x md superblock formats.

The target test system is a virtual machine with six "factory new" 1 GB SCSI disks. The system uses 3 RAID devices, with version 1.2 superblocks:

Device  RAID level  Mount point
md0     RAID-1      /boot
md1     RAID-6      swap
md2     RAID-6      /

The RAID-1 device is mounted as /boot, in order to allow a safe boot even if the first disk fails or is removed. The root filesystem and the swap partition use RAID-6 devices. It's possible to use RAID-5 instead of RAID-6, but in production environments it's safer to use RAID-6, as it guarantees a higher level of redundancy.

The original goal of this test was to use a RAID-6 array as boot device. Unfortunately the system fails to boot if the RAID-6 array gets degraded or has a missing drive, which makes it unreliable. I did not test it directly, but I suspect that booting from RAID-5 behaves the same way. GRUB-2 v. 1.99 should resolve this issue; for now it's safer to use a RAID-1 array as boot device.


Install

Proceed with the usual text mode installation of the Debian system (select language, locale, keyboard layout, root password, etc.), up to the "Partition disks" form.

In the "Partition disks" form select "Manual":

The list of all detected disks is shown:

This example assumes that the disks used are factory new and contain no existing partitions or data (if you use disks that already contain data, be sure to back them up first: all existing data will be destroyed). To create a new partition table, select the disk and press Enter:

Repeat the operation on each empty disk.

Once all disks have an empty partition table, start creating the partitions which will be used to assemble the RAID arrays (partitions of type "Physical volume for RAID", 0xFD). This test configuration uses 1.1 GB disks, and creates the same 3 partitions on each disk:

Primary/Logical  Size    Bootable  Type
Primary          250 MB  YES       Physical volume for RAID
Primary          250 MB  NO        Physical volume for RAID
Primary          570 MB  NO        Physical volume for RAID
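
For reference, roughly the same layout can be created from a shell with sfdisk. This is only a sketch of what the installer does through its forms; the device name and the megabyte units option are assumptions for this test setup:

# Partition one disk: two 250 MB partitions plus one filling the
# remaining space, all of type 0xFD (Linux raid autodetect);
# the first partition is marked bootable (*).
sfdisk -uM /dev/sda << EOF
,250,fd,*
,250,fd
,,fd
EOF

Repeat for each of the six disks (/dev/sda ... /dev/sdf).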

For example, the first partition of the first disk should be configured as follows:

The following screenshot shows the (partial) list of all created partitions:

The next step is to create the RAID arrays from the "Configure software RAID" menu. The installer first asks for confirmation before writing the changes to the partition tables:

The three RAID arrays in this configuration use 6 disks and no spares. The first array uses RAID-1, the others use RAID-6:
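
Behind the scenes the installer assembles the arrays with mdadm. The equivalent shell commands would look roughly like this (a sketch assuming the partition scheme above, with the first partition of each disk forming md0, the second md1 and the third md2):

# RAID-1 over the first partition of each disk (future /boot)
mdadm --create /dev/md0 --metadata=1.2 --level=1 --raid-devices=6 /dev/sd[a-f]1
# RAID-6 over the second partitions (future swap)
mdadm --create /dev/md1 --metadata=1.2 --level=6 --raid-devices=6 /dev/sd[a-f]2
# RAID-6 over the third partitions (future /)
mdadm --create /dev/md2 --metadata=1.2 --level=6 --raid-devices=6 /dev/sd[a-f]3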

The partition list now shows the created RAID devices:

The construction of the RAID arrays has already started. It's possible to watch it proceed by pressing Alt-F2 to go to the console window:

The RAID arrays in this example are a few GB each, and complete construction in a few minutes. Real arrays with big disks may take many hours to complete. It's still possible to use the arrays while they are being built, but it's safer to wait until the RAID-1 array, to be used as /boot, is fully constructed.
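
The construction progress can also be checked at any time by reading /proc/mdstat, which shows a progress bar for each array still being built:

# Show the state of all md arrays, including resync/build progress
cat /proc/mdstat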

Now it's possible to create the filesystems for /boot, /, and the swap space on the RAID devices:

Device  RAID level  Filesystem  Mount point
md0     RAID-1      ext3        /boot
md1     RAID-6      swap        swap
md2     RAID-6      ext3        /
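
Again, the installer takes care of this step; the equivalent shell commands are simply (a sketch assuming the md numbering above):

# Create the filesystems and the swap area on the arrays
mkfs.ext3 /dev/md0
mkswap /dev/md1
mkfs.ext3 /dev/md2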

The partitioning is now complete, and it's possible to apply the changes and proceed with the rest of the installation as usual.

At the end of the installation, the installer has to set up the GRUB boot loader:

Select to install GRUB to the master boot record:

The Debian installer does not seem to recognize that GRUB should be installed on each disk participating in the RAID-1 array, and it just installs it to the first disk, /dev/sda. It's possible to install it manually on the other disks with the grub-install command (see below):

The installation is now complete, and the system can do its first boot:


First boot

The first boot shows the GRUB menu:

Then GRUB starts loading the initrd:

Once logged in, it's possible to verify that the system actually booted from the RAID-1 array:
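
A quick way to check (a sketch; the exact output depends on the system):

# /proc/mdstat lists the active arrays and their member disks
cat /proc/mdstat
# The mounted filesystems should be md devices, not plain partitions
mount | grep /dev/md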


Install GRUB to all disks

The Debian installer installs GRUB just to the first disk, /dev/sda. To allow booting from the other disks in the array, GRUB must be manually installed to each disk with the grub-install command:

root@debian:~# grub-install /dev/sdb
Installation finished. No error reported.
root@debian:~# grub-install /dev/sdc
Installation finished. No error reported.
root@debian:~# grub-install /dev/sdd
Installation finished. No error reported.
root@debian:~# grub-install /dev/sde
Installation finished. No error reported.
root@debian:~# grub-install /dev/sdf
Installation finished. No error reported.

Having GRUB installed on each disk allows the system to boot even if /dev/sda fails or if the disks are swapped.

These commands must be reissued after every kernel or GRUB package update.
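
A small loop avoids repeating the command by hand (assuming the six-disk /dev/sda ... /dev/sdf layout of this test):

# Reinstall GRUB to the MBR of every disk in the array
for disk in /dev/sd[a-f]; do
    grub-install "$disk"
done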


Testing

To test the system it's possible to swap some disks and remove up to 2 disks from the arrays: the system still boots. For example, after removing the first two disks, the system boots even though the RAID arrays are degraded:
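
Disk failures can also be simulated in software with mdadm, without powering off the machine. A sketch, to be used on a test system only, as it really degrades the arrays:

# Mark a component of md0 as faulty, then remove it from the array
mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1
# Check the (now degraded) state of the array
mdadm --detail /dev/md0
# Re-add the component: the array starts rebuilding
mdadm /dev/md0 --add /dev/sda1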