Software RAID on OpenBSD

This document describes how to set up RAID mirroring on OpenBSD with the RAIDframe driver built into the kernel.

Note: The OpenBSD project has deprecated RAIDframe (see their Disk Setup page); new installations should use softraid(4) instead. Having not yet used softraid, I cannot comment on it.

Overview

This procedure assumes OpenBSD 3.7, the i386 architecture, and two 80GB IDE disks, wd0 and wd1. The steps involved are:

  • Install OpenBSD on wd0
  • Compile a kernel that supports RAID
  • Reboot into RAID-enabled kernel
  • Configure RAID partitions on wd1 as half of a broken mirror
  • Copy all files onto the RAID partitions
  • Reboot into broken mirror
  • Reallocate wd0, hot-add it, and reconstruct the broken mirror
  • Reboot into complete RAID environment

OpenBSD will not boot from a RAIDframe RAID set at present, so wd0a and wd1a will be set aside as partitions to boot from. Kernel RAID autoconfiguration will prepare the RAID set at boot time, and the RAID partitions will be mounted as per /etc/fstab.
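
For reference, here is a sketch of what /etc/fstab will look like once everything lives on the RAID set. The partition letters match the layout chosen later in this document, and the swap entry for raid0b is an assumption; adjust to suit:

---Begin---
/dev/raid0a /     ffs  rw 1 1
/dev/raid0b none  swap sw 0 0
/dev/raid0d /usr  ffs  rw 1 2
/dev/raid0e /tmp  ffs  rw 1 2
/dev/raid0f /var  ffs  rw 1 2
/dev/raid0g /home ffs  rw 1 2
----End----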


Disk Preparation

I like to start with completely clean disks. If they have been used before, I wipe them before doing anything else, using dd(1). Boot from the OpenBSD distribution CD to get a shell, and execute:


# dd if=/dev/zero of=/dev/rwd0c bs=1024000 &
# dd if=/dev/zero of=/dev/rwd1c bs=1024000 &

This will take a while. To keep busy, you can set up the Master Boot Record and the BSD disklabel on each disk.
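
To check on the wipes without disturbing them, dd(1) reports its progress when it receives a SIGINFO signal; something like the following should work:


# pkill -INFO dd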

wd0 will get a temporary installation of OpenBSD, but we will only keep the first partition when we are done. We will use this installation to configure and compile a kernel with RAID support. The following sizes will work:


Part   Size  FS Type  Purpose
----  -----  -------  -------------------------------
 a:    512M  4.2BSD   /  (becomes boot partition)
 c:   -----  unused   Entire drive
 d:   1024M  4.2BSD   /usr
 e:    512M  4.2BSD   /tmp
 f:   1024M  4.2BSD   /var
 g:    512M  4.2BSD   /home

On wd1, we need just two partitions, a boot partition and a partition to contain the RAID set:


Part   Size  FS Type  Purpose
----  -----  -------  -------------------------------
 a:    512M  4.2BSD   Boot partition
 c:   -----  unused   Entire drive
 d:       *  RAID     Everything except the boot partition

Use the following commands to implement this:


# fdisk -i wd0
# disklabel -E wd0
...

# fdisk -i wd1
# disklabel -E wd1
...
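
The interactive editor in disklabel(8) is driven by single-letter commands: a adds a partition, p prints the label, w writes it, and q quits. As a rough sketch only (the prompts and default values shown are approximate, and the offsets and sector counts are hypothetical, depending on your disk geometry), adding the 512MB boot partition looks something like this:


> a a
offset: [63]
size: [156301425] 512m
FS type: [4.2BSD]
> w
> q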

Reboot from the installation media, and follow the normal installation procedure to install OpenBSD onto wd0. If you followed the steps above, all partitions will already be set up; simply quit disklabel(8) and move on to assigning partitions to their mount points. If you started here instead, use the partition layouts above while in disklabel.


Make a RAID-Capable Kernel

To build a kernel, we need the source code and a configuration. The source code is on the CD:


# mount /dev/cd0a /mnt
# cd /usr/src
# tar xzpf /mnt/src.tar.gz
# cd sys/arch/i386/conf

The configuration is the same as GENERIC, with two additions. Create a file called GENERIC.RAID in /usr/src/sys/arch/i386/conf with the following contents:

---Begin---
#
#	GENERIC.RAID - Add kernelized RAIDframe driver
#

include "arch/i386/conf/GENERIC"

option		RAID_AUTOCONFIG	# Automatically configure RAID at boot

pseudo-device	raid		4	# RAIDframe disk driver

----End----
The following steps will build and install the new kernel:


# config GENERIC.RAID
# cd ../compile/GENERIC.RAID
# make depend && make
# mv /bsd /bsd.original
# cp bsd /bsd
# chmod 644 /bsd

Now reboot into the RAID-enabled kernel.
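
After the reboot, confirm that the new kernel is the one running; the version string should name GENERIC.RAID (the output below is illustrative):


# sysctl kern.version
kern.version=OpenBSD 3.7 (GENERIC.RAID) #0: ...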


Second Disk Setup - Make A Broken Mirror

To prepare the second disk, we need to accomplish the following:

  • Make it bootable
  • Create a RAID configuration
  • Apply the configuration to the disk
  • Create a disklabel for the RAID device
  • Create filesystems in the RAID device partitions
  • Make the RAID device autoconfigurable
  • Copy our installation onto the new partitions
  • Update fstab to refer to the RAID device
  • Reboot into the RAID environment

Make the second disk (wd1) bootable:


# newfs /dev/rwd1a
# mount /dev/wd1a /mnt
# cp /bsd /mnt/bsd
# cp /usr/mdec/boot /mnt/boot
# /usr/mdec/installboot -v /mnt/boot /usr/mdec/biosboot wd1
# 

Create /etc/raid0.conf, which should look like this:

---Begin---
START array
# rows (must be 1), columns, spare disks
1 2 0

START disks
/dev/wd2d
/dev/wd1d

START layout
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
128 1 1 1

START queue
# queue mode, outstanding request count
fifo 100

----End----
Notice that under disks, a (non-existent) wd2 is specified. When the RAID set is created, this serves as a placeholder, and will appear as failed. Once we are ready, we will hot-add wd0 as a spare and reconstruct the mirror onto it; when that is done, this configuration file will be updated to reflect the actual disk assignments. In the layout section, 128 sectors per stripe unit (64KB) is a sensible default, and the final value of 1 selects RAID level 1 (mirroring).

Now that you have this configuration file, implement it with:


# raidctl -C /etc/raid0.conf raid0
# raidctl -I 20050900 raid0

The argument to -I is a unique identifier that permits RAIDframe to determine which disks belong to which RAID sets; I used the year and month, plus '00' to indicate the first RAID set.
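
At this point the set exists with only one working member. Asking raidctl for status should show the placeholder as a failed component (named component0, the name we will refer to later when reconstructing) and wd1d as optimal; the output below is illustrative:


# raidctl -s raid0
raid0 Components:
          component0: failed
           /dev/wd1d: optimal
No spares.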

It is time to define the partitions on the RAID device. For my purposes, I used the following:


Part   Size  FS Type  Purpose
----  -----  -------  -------------------------------
 a:    512M  RAID     /
 b:   1024M  swap     swap
 c:   -----  unused   Entire drive
 d:      5G  RAID     /usr
 e:      5G  RAID     /tmp
 f:     42G  RAID     /var
 g:       *  RAID     /home

Again, we use disklabel(8) to put this into effect. Once that is done, create filesystems in the partitions that need them, and make the RAID set auto-configurable as a root filesystem:


# disklabel -E raid0
...

# newfs /dev/rraid0a
# newfs /dev/rraid0d
# newfs /dev/rraid0e
# newfs /dev/rraid0f
# newfs /dev/rraid0g
# raidctl -A root raid0

Copy the data from the initial installation over to the RAID partitions, making certain to preserve permissions:


# mount /dev/raid0a /mnt
# (cd /; tar -Xcpf - .) | (cd /mnt; tar -xpf -)
tar: Ustar cannot archive a socket ./dev/log
# umount /mnt

# mount /dev/raid0d /mnt
# (cd /usr; tar -cpf - .) | (cd /mnt; tar -xpf -)
# umount /mnt

# mount /dev/raid0f /mnt
# (cd /var; tar -cpf - .) | (cd /mnt; tar -xpf -)
tar: Ustar cannot archive a socket ./cron/tabs/.sock
tar: Ustar cannot archive a socket ./empty/dev/log
# umount /mnt

# mount /dev/raid0g /mnt
# (cd /home; tar -cpf - .) | (cd /mnt; tar -xpf -)
# umount /mnt

(You can ignore the tar errors; the sockets will be recreated at boot. Note the -X flag on the root copy: it tells tar(1) not to cross mount points, so only the root filesystem itself is archived.)

Update /etc/fstab on the broken mirror to point to the raid0 partitions instead of the old wd0 partitions, and reboot:


# mount /dev/raid0a /mnt
# sed 's/wd0/raid0/g' /mnt/etc/fstab > /mnt/etc/fstab.tmp
# mv /mnt/etc/fstab.tmp /mnt/etc/fstab
# umount /mnt
# reboot

...

boot> boot wd1a:/bsd
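
Once the system comes back up, confirm that root is now on the RAID device before going any further; the output below is illustrative:


# mount
/dev/raid0a on / type ffs (local)
...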


Complete the Mirror Pair

The last step is to integrate the first disk into the mirror pair. We need to update the wd0 disklabel to match wd1, which is easily done by copying:


# disklabel wd1 >disklabel.wd1
# disklabel -R wd0 disklabel.wd1

There is no need to put a disklabel into the RAID partition, as this will be taken care of when the mirror is reconstructed.

To reconstruct the array, hot-add wd0 as a new spare, and then start reconstruction. When reconstruction is complete, rebuild the parity.

Note: Reconstruction and parity rewrite can take a long time. Do not reboot while this is in progress, or you may lose everything and need to start over.


# raidctl -a /dev/wd0d raid0
# raidctl -vF component0 raid0
# raidctl -vP raid0
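
Progress can also be checked from another terminal while these run; raidctl's -S option reports the status of reconstruction, parity rewriting, and copyback:


# raidctl -S raid0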

One last step remains. The RAID configuration file must be updated to remove the bogus wd2 entry and replace it with an entry for wd0:


# sed 's/wd2/wd0/g' /etc/raid0.conf > /etc/raid0.conf.tmp
# mv /etc/raid0.conf.tmp /etc/raid0.conf

Reboot, and check the RAID status:


# reboot

...

# raidctl -s raid0
raid0 Components:
           /dev/wd0d: optimal
           /dev/wd1d: optimal
No spares.
Parity status: clean
Reconstruction is 100% complete.
Parity Re-write is 100% complete.
Copyback is 100% complete.
#



Researched and written by Marcus Redivo.
Permission to use this document for any purpose is hereby granted, provided that the copyright information and this disclaimer are retained. The author accepts no responsibility for any consequences arising from the use of this information.

