raidframe performance question
This is my first time playing with raidframe, so I've probably gone and
done something totally stupid. However...
I'm using a 5.99.51 kernel built from sources checked out on 2011-05-05
at 01:36:47 UTC, so I think I've picked up most of the recent
mutex-related changes in raidframe(4).
I have two identical disks on wd4 and wd5:
...
ahcisata0 at pci0 dev 17 function 0: vendor 0x1002 product 0x4391
ahcisata0: interrupting at ioapic0 pin 22
ahcisata0: 64-bit DMA
ahcisata0: AHCI revision 1.1, 6 ports, 32 command slots, features 0xf722e080
...
atabus4 at ahcisata0 channel 4
atabus5 at ahcisata0 channel 5
...
ahcisata0 port 4: device present, speed: 3.0Gb/s
wd4 at atabus4 drive 0
wd4: <WDC WD5000AADS-00M2B0>
wd4: drive supports 16-sector PIO transfers, LBA48 addressing
wd4: 465 GB, 969021 cyl, 16 head, 63 sec, 512 bytes/sect x 976773168 sectors
wd4: drive supports PIO mode 4, DMA mode 2, Ultra-DMA mode 6 (Ultra/133)
wd4(ahcisata0:4:0): using PIO mode 4, DMA mode 2, Ultra-DMA mode 6 (Ultra/133)
(using DMA)
ahcisata0 port 5: device present, speed: 3.0Gb/s
wd5 at atabus5 drive 0
wd5: <WDC WD5000AADS-00M2B0>
wd5: drive supports 16-sector PIO transfers, LBA48 addressing
wd5: 465 GB, 969021 cyl, 16 head, 63 sec, 512 bytes/sect x 976773168 sectors
wd5: drive supports PIO mode 4, DMA mode 2, Ultra-DMA mode 6 (Ultra/133)
wd5(ahcisata0:5:0): using PIO mode 4, DMA mode 2, Ultra-DMA mode 6 (Ultra/133)
(using DMA)
I used disklabel(1) to label both drives identically:
5 partitions:
#        size    offset     fstype [fsize bsize cpg/sgs]
 c: 976768002        63     unused      0     0        # (Cyl.      0*- 969015*)
 d: 976773168         0     unused      0     0        # (Cyl.      0 - 969020)
 e: 976768002        63       RAID                     # (Cyl.      0*- 969015*)
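For anyone following along at home, a quick way to get identical labels
onto both drives is to dump the label from one and restore it onto the
other, along these lines (assuming the wd4 label had already been edited
with disklabel -e; the proto file name is arbitrary):

  disklabel wd4 > /tmp/wd4.label        # dump wd4's label to a proto file
  disklabel -R -r wd5 /tmp/wd4.label    # write the same label onto wd5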
I created an /etc/raid1.conf:
START array
#numrow numcol numspare
1 2 0
# Identify physical disks
START disks
/dev/wd4e
/dev/wd5e
# Layout is simple - 64 sectors per stripe unit
START layout
#Sect/StripeUnit StripeUnit/ParityUnit StripeUnit/ReconUnit RaidLevel
64 1 1 1
# No spares
# START spare
# Command queueing
START queue
fifo 100
Then I used raidctl -C to create the raidset and raidctl -I, and it all
looked good.
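For completeness, those two steps were along the lines of (the serial
number handed to -I is just an arbitrary example value):

  raidctl -C /etc/raid1.conf raid1    # force the initial configuration of the set
  raidctl -I 2011050601 raid1         # stamp the component labels with a serial number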
The next step was to initialize the raidset parity, so I used raidctl -i
That was almost 10 hours ago, and so far it has done only 26% of the
job! It appears that the writes are taking ~forever to complete!
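For reference, the parity initialization and a progress check look like
this (the -v and the separate -S status check are optional extras):

  raidctl -iv raid1    # initialize (re-write) the parity; -v shows progress
  raidctl -S raid1     # report parity re-write / reconstruction progress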
systat(1) shows that wd5 (I'm assuming that's the "parity" drive being
written) has twice the transfer ops and twice the bytes transferred of
wd4, and that drive is 100% busy while wd4 sits at only 2.2%!
Disks:  seeks  xfers   bytes  %busy
  cd0
  wd2
 nfs0
 nfs1
  wd0
  wd5            165   5286K    100
  wd4             83   2643K    2.2
raid1
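A plain iostat(8) shows much the same per-disk picture, e.g.:

  iostat -w 5 wd4 wd5    # KB/t, xfr/s and MB/s for each drive, every 5 seconds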
Does anyone have a clue why the drive might be so busy? Is this normal?
At this rate it will take another day to initialize a 465GB mirror!
With a SATA-II link speed of 3Gbit/sec (~300MBytes/sec) it really
should be a little bit faster. :)
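Back of the envelope: 26% in roughly 10 hours is about 2.6%/hour, so the
whole re-write would take 100 / 2.6 ~ 38 hours - which works out to an
average of only about 465GB / 38h ~ 3.5 MBytes/sec across the array,
nowhere near what even one of these drives should manage sequentially.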
Oh, one more thing - I am running smartd, and smartd doesn't show any
errors or problems.
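For a manual cross-check, atactl(8) can query the drives' SMART data
directly, e.g.:

  atactl wd5 smart status    # print SMART health/attribute data for wd5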
-------------------------------------------------------------------------
| Paul Goyette     | PGP Key fingerprint:     | E-mail addresses:       |
| Customer Service | FA29 0E3B 35AF E8AE 6651 | paul at whooppee.com    |
| Network Engineer | 0786 F758 55DE 53BA 7731 | pgoyette at juniper.net |
| Kernel Developer |                          | pgoyette at netbsd.org  |
-------------------------------------------------------------------------