tech-kern archive
changing raid level?
Okay, I've got a 4.0.1 machine with a RAID 0 configured:
# raidctl -G raid1
# raidctl config file for /dev/rraid1d
START array
# numRow numCol numSpare
1 2 0
START disks
/dev/raid2e
/dev/raid3e
START layout
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_0
128 1 1 0
START queue
fifo 100
#
I want, for reasons irrelevant here, to change this to a RAID 1 [%].
So I create a conf file just like the above, except the layout section
says "128 1 1 1". I run "raidctl -u raid1" to unconfigure the old raid
and "raidctl -C new-config-file raid1" to configure the new.
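For concreteness, the new config file handed to -C is identical to the -G output above except for the last field of the layout line (a sketch, reconstructed from the -G output; only the RAID level changes from 0 to 1):

```
START array
# numRow numCol numSpare
1 2 0
START disks
/dev/raid2e
/dev/raid3e
START layout
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_1
128 1 1 1
START queue
fifo 100
```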
But, when I check, the newly-configured raid1 is still raid level 0,
not the level 1 I want! Apparently the RAIDframe administrative data
on the disks overrides the config file. This seems wrong to me; with
-c, I'd expect this to be a fatal error, and with -C, the config file
to override. (I tried using -c to configure the new raid1 too, and it
works the same as -C.)
Is there some reason it was done this way, or should I send-pr it, or
what? For the moment, I'll destroy the first meg or so of raid2e and
raid3e, which I expect will make -C work the way I want....
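The wipe step would look something like this (a sketch, assuming the RAIDframe component labels live within the first megabyte of each component; the block count is an assumption, and note the raw devices -- double-check device names before running anything this destructive):

```sh
# zero out roughly the first megabyte of each component,
# clobbering any old RAIDframe component labels
dd if=/dev/zero of=/dev/rraid2e bs=64k count=16
dd if=/dev/zero of=/dev/rraid3e bs=64k count=16
```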
[%] Because of other changes, redundancy becomes more important than
space. (The use of RAID as the underlying units is for easy disk
replacement and protection against disk reordering; raid2 and raid3
are one-member RAID1s, set autoconfiguring.)
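To illustrate the underlying units: a one-member autoconfiguring RAID 1 such as raid2 can be built with a config along these lines (a sketch; the component name /dev/wd1e and the serial number are hypothetical, and the missing second member is declared with the "absent" keyword):

```
START array
# numRow numCol numSpare
1 2 0
START disks
/dev/wd1e
absent
START layout
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_1
128 1 1 1
START queue
fifo 100
```

followed by something like "raidctl -C raid2.conf raid2", "raidctl -I 2 raid2" to initialize the component labels, and "raidctl -A yes raid2" to turn on autoconfiguration.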
/~\ The ASCII Mouse
\ / Ribbon Campaign
X Against HTML mouse%rodents-montreal.org@localhost
/ \ Email! 7D C8 61 52 5D E7 2D 39 4E F1 31 3E E8 B3 27 4B