Subject: RAID, ccd, and vinum.
To: None <netbsd-help@netbsd.org>
From: Richard Rauch <rkr@olib.org>
List: netbsd-help
Date: 12/20/2004 07:56:18
I've been playing around with RAID and ccd---and made a pass at vinum,
but vinum seems to require some kernel option that I couldn't identify.
(I had built a kernel with the one vinum option in GENERIC enabled, but
I still got "not configured" errors from /dev/vinum/control or whatever;
the device node itself exists.  Unless I somehow rebooted with the wrong
kernel, I figured that something in that under-cooked a state was not a
pressing need.  (^&)
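
(For anyone following along, the GENERIC entries I mean for ccd and
RAIDframe look like this, quoted from memory, so check against your
own config file:

    pseudo-device   ccd     4       # concatenated/striped disk devices
    pseudo-device   raid    8       # RAIDframe disk driver

The vinum line is the one I evidently still haven't got right.)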

ccd and RAID both look like they are serviceable.  I did have two questions
about them:


1) With ccd, I *always* got kernel messages.  At first, I left only 63
blocks at the front for the disklabel, as "real" disks use.  Then,
after re-reading the documentation and googling around, I decided to try
using the more or less fictional cylinder size reported by disklabel
(1008).  Either I couldn't do arithmetic (possible) or that was still
not enough, so I bumped it up to a larger, slightly rounder number.
Now ccdconfig and friends no longer complain in most ways, except for
one curious warning: it now tells me that my disklabel is not using the
entire disk.  (Well, duh.  It threw a hissy fit when I used the whole
disk!  (^&  Besides, as someone observed in a previous mailing list
message, one may want to use a portion of a disk in a ccd, and mount
other parts as more conventional filesystems.)
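
To make the layout concrete, this is roughly the shape of the label
and the ccdconfig invocation I'm describing.  The disk names, sizes,
and partition letter are illustrative, not my exact setup:

    # In "disklabel -e wd1", start the ccd component one "cylinder" in:
    #
    #  e: 19999152   1008   ccd    # size, offset (sectors), fstype

    # Two-disk stripe with an interleave of 1008 sectors, no flags:
    ccdconfig ccd0 1008 none /dev/wd1e /dev/wd2e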

Other than changing the fstype to RAID (from ccd), I now have it labeled
the same way.  raidctl does not raise any warnings, nor do I see warnings
during newfs or mount.  So it appears that raidctl is "cleaner" with the
same disklabel.
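
For comparison, the RAIDframe side follows the raidctl(8) examples.
This is a sketch rather than a paste of my actual config; the device
names are again illustrative, and the serial number handed to -I is
just a made-up date stamp:

    # /etc/raid0.conf: two-component RAID 0
    START array
    # numRow numCol numSpare
    1 2 0

    START disks
    /dev/wd1e
    /dev/wd2e

    START layout
    # sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
    1008 1 1 0

    START queue
    fifo 100

Then:

    raidctl -C /etc/raid0.conf raid0    # force the initial configuration
    raidctl -I 2004122001 raid0         # write component labels w/ serial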

Because I get no complaints under RAID, and because I double-checked
my arithmetic (even triple-checked), I think that my partitions are
okay.  But maybe they aren't, and the ccd warning is simply worded
very poorly.

So, question #1:

  Should I worry about those warnings with ccd---or their absence in RAID?


2) I would have thought that a RAID 0 and a ccd, with the same
stripe size on the same partitions of the same disks, would perform
very nearly identically.  Yet with ccd and a stripe size of about
1000, I was getting (according to bonnie++) a solid 250 seeks per
second (in the 248 to 268 range, I think).  With the same stripe size
on a RAID 0, I was getting 200 or so (upper 190s to 220 in the best
configuration; more routinely around 160 in others).  And RAID 0
computes no parity, so parity overhead can't account for the difference.
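
(For concreteness, assuming 512-byte sectors, a 1008-sector stripe
unit works out the same under both drivers:

    1008 sectors * 512 bytes/sector = 516096 bytes = 504 KB

so the on-disk striping should be comparable.)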

So, question #2:

  Why is there such a disparity (ahem) between the two benchmarks?

The disks were newfsed the same, using ffs.  No use of tunefs was made.
(I tried lfs, for giggles, but given the comments about instability
once lfs gets around 70% full or so, I will stick with ffs.)
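
In case the exact commands matter: each run was essentially a newfs,
a mount, and a bonnie++ pass.  The mount point and bonnie++ flags
below are from memory, so treat them as approximate:

    newfs /dev/rccd0a        # or /dev/rraid0a for the RAIDframe runs
    mount /dev/ccd0a /mnt
    bonnie++ -d /mnt -u root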


If there is interest, I can post the bonnie++ results from 25 to 30
runs, including notes about the configuration of the disks.  It's not
a huge pool of samples, and few runs were repeated on a single
configuration.  But it may be of interest.  Or not.  (^&

-- 
  "I probably don't know what I'm talking about."  http://www.olib.org/~rkr/