Subject: Re: practical RAIDframe questions
To: Geert Hendrickx <ghen@netbsd.org>
From: Manuel Bouyer <bouyer@antioche.eu.org>
List: netbsd-users
Date: 01/26/2006 21:10:02
On Thu, Jan 26, 2006 at 12:20:21PM +0100, Geert Hendrickx wrote:
> Hello,
>
> I'm planning to move our mail+file server to a software RAID-1. I've been
> reading about RAIDframe, and even toyed with it in qemu[*], but I have no
> "real life" experience with it, so I still have a few questions:
>
> - Partitions. Some people divide their physical disks (wd0, wd1, ...) into
> multiple partitions, create multiple raid* devices on them, and then put
> one (or more) filesystem partition(s) on each. Others just create one
> big partition on each physical drive, building one big raid0 device, and
> put all their filesystem partitions on that (so raid0a, raid0b, raid0e,
> ...).  Are there any specific advantages to either setup?  The only thing
> I could think of is that you'll have more work recovering in the former
> situation (more raid sets to rebuild).
I set up 2 raid sets: one for root+swap and one for the rest. This way,
after an unclean reboot only the root+swap raid1 will be dirty (because of
the swap partition) and have its parity rebuilt, but as it's small it's not
an issue.
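A minimal sketch of that two-set layout, assuming wd0/wd1 each carry a small
"a" partition (root+swap) and an "e" partition (everything else) -- the
partition letters and file names here are illustrative, not prescriptive:

```shell
# /etc/raid0.conf -- small set: root + swap
START array
1 2 0

START disks
/dev/wd0a
/dev/wd1a

START layout
128 1 1 1

START queue
fifo 100

# /etc/raid1.conf -- big set: everything else, same shape
# (disks section would list /dev/wd0e and /dev/wd1e)
```

After a crash, only raid0's parity needs rebuilding, which finishes quickly
because the set is small; raid1 is marked clean at shutdown as usual.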
>
> - Swap. Should I swap onto raid0b, or onto wd0b and wd1b? In case of a
> disk failure, swap on raid0b will keep working, whereas swap on wd?b will
> not. But I've read about problems with swap-on-raid in the past.
> (I know I should set swapoff=YES when swapping on raid, and I know how to
> setup crash dumps onto a physical partition.)
For reliability I recommend swap on raid. Using a small raid for swap makes
the parity issue a non-issue :)
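The swap-on-raid pieces mentioned above would look roughly like this; the
raid0b device name is an assumption matching the small root+swap set:

```shell
# /etc/rc.conf -- remove swap devices before raid shutdown
swapoff=YES

# /etc/fstab -- swap on the raid1 mirror, not on wd0b/wd1b,
# so swap survives a single-disk failure
/dev/raid0b  none  swap  sw  0 0
```

Crash dumps still need a real physical partition (e.g. one of the wd?b
partitions), configured separately, since dumps can't go to a raid device.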
>
> - Configuration. I've been using the configuration from the NetBSD guide:
> > START array
> > 1 2 0
> >
> > START disks
> > /dev/wd0a
> > /dev/wd1a
> >
> > START layout
> > 128 1 1 1
> >
> > START queue
> > fifo 100
>
> Is this ok? I'm not sure whether/how the "layout" or "queue" sections
> could be optimized.
Nothing for layout, and as you have IDE disks, nothing for queues.
With SCSI disks and a smart SCSI controller, you can bump the queue to 255 to
have as many queued commands as possible at the drive level.
--
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference