Subject: Marking failed RAID 0 drive as not failed without replacement?
To: None <port-alpha@netbsd.org>
From: Paul Mather <paul@gromit.dlib.vt.edu>
List: port-alpha
Date: 09/21/2002 15:13:21
I have a CCD array that is made up of a RAID 0 (consisting of three
drives) plus two other miscellaneous drives. On Friday, one of the
RAID 0 drives apparently failed. I say "apparently" because the CCD
(and RAID) continues to work, seemingly without error.
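(For reference, the layout is roughly like the following; the interleave and the sd9/sd10 device names below are stand-ins, not the exact values from my /etc/ccd.conf:)
>>>>>
# ccd   ileave  flags   component devices
ccd0    128     none    /dev/raid0e /dev/sd9e /dev/sd10e
<<<<<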
Here's what appeared in my logs:
>>>>>
sd8(asc1:0:6:0): Check Condition on CDB: 0x28 00 01 0a 8d 40 00 00 40 00
SENSE KEY: Media Error
INFO FIELD: 17468771
ASC/ASCQ: Unrecovered Read Error
raid0: IO Error. Marking /dev/sd8c as failed.
raid0: node (R ) returned fail, rolling backward
raid0: DAG failure: r addr 0x31fa7c0 (52406208) nblk 0x80 (128) buf 0xfffffe0001f84000
<<<<<
When I do "raidctl -s raid0", part of the output contains the line
"/dev/sd8c status is: failed. Skipping label." However, between the
time it failed and the time I noticed the failure (via a logcheck
e-mail), data have been written to and read from the RAID 0. Indeed,
I just checked several large files on it and their MD5 checksums
verify okay.
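(Nothing fancy; just something along these lines, with a made-up
filename, compared against sums recorded before the failure:)
>>>>>
$ md5 /scratch/bigfile.tar.gz
MD5 (/scratch/bigfile.tar.gz) = <same sum as recorded earlier>
<<<<<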
So, I'm confused and a bit worried. The confusion is that I thought
that if any component of a RAID 0 failed, the whole RAID would be
rendered inoperable by the driver (i.e., all subsequent read and
write calls would return a hard error such as EIO). Obviously, this
hasn't happened, and I'm not sure why.
I'm a bit worried because I'm not sure how consistent the data on
that RAID are now (although things seem to be fine). I don't want to
mess around too much in case I *DO* break it before I can get the
data on there off to another machine. (The space is a kind of
"scratch" area, but a hassle to restore.) For example, I don't know
whether doing a "raidctl -g" will actually prod the driver into
taking the array offline for good. I also figure that if I reboot
the system, the RAID will refuse to configure again because of the
component marked as failed.
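(To be concrete, the invocation I'm leery of trying is just the
component label query, e.g.:)
>>>>>
# raidctl -g /dev/sd8c raid0
<<<<<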
So, aside from wondering why the array *DIDN'T* go offline when the
drive failed, I'm wondering whether it is possible to mark that drive
as optimal again without physically replacing it. All the docs
regarding disk failures appear to point to adding a spare,
reconstructing onto that first, and then copying back to the original
disk. I don't have a spare drive to do this. I know I can just
rebuild the array from scratch, but, given its apparent superhuman
robustness thus far :-), it would be nice to see if I could just fake
the component label back to optimal and cut out all that rebuilding
time.
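(What I'm imagining, purely as a guess from reading raidctl(8) and
with no idea whether it's safe, is something like unconfiguring,
force-configuring past the label mismatch, and then rewriting the
component labels; the config file path and serial number below are
just examples:)
>>>>>
# raidctl -u raid0
# raidctl -C /etc/raid0.conf raid0
# raidctl -I 2002092101 raid0
<<<<<
Whether the data would survive that, or whether -I touches anything
beyond the labels, is exactly what I don't know.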
Is it possible to do this on NetBSD?
FWIW, the system is: NetBSD 1.6H (CHUMBY) #3: Tue Sep 10 20:18:34 EDT
2002. Userland was also completely rebuilt shortly after that date.
(In the long term, if this error happens again, I'll probably remove
the failed disk and rebuild the array without it, but for now I'd
like to carry on as-is.)
Cheers,
Paul.
e-mail: paul@gromit.dlib.vt.edu
"Without music to decorate it, time is just a bunch of boring production
deadlines or dates by which bills must be paid."
--- Frank Vincent Zappa