tech-kern archive
RAIDframe level 5 write performance (was: tstile lockup)
There seems to be a fundamental problem with writing to a level 5 RAIDframe set,
at least to the block device.
I've created five small wedges in the spared-out region of my 3TB SAS discs.
In case it matters, they are connected to an mpt(4) controller.
Then I configured a 5-component, 32-SpSU, level 5 RAID set.
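For reference, the config file looks roughly like this (the dk names are
placeholders for my actual wedges):

    START array
    # numRow numCol numSpare
    1 5 0

    START disks
    /dev/dk10
    /dev/dk11
    /dev/dk12
    /dev/dk13
    /dev/dk14

    START layout
    # sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
    32 1 1 5

    START queue
    fifo 100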
Unless I've gone mad, 32 sectors per SU equal 16k per SU, i.e. 64k of data
per full stripe (four data SUs plus one parity SU).
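Spelled out (assuming 512-byte sectors):

    32 sectors/SU * 512 bytes/sector = 16k per SU
    (5 - 1) data SUs * 16k/SU        = 64k of data per full stripe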
Writing to that RAID's block device (raid2d) in 64k blocks gives me a dazzling
throughput of 2.4MB/s, with dd mostly waiting in vnode.
Writing to the raw device (rraid2d) gives 240MB/s, i.e. two orders of
magnitude faster.
Reading is at 22MB/s for the block device and 315MB/s for the raw device.
Writing in 16k chunks to the block device drops the rate to 530kB/s.
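To be concrete, the tests are dd runs of this general shape (the counts are
illustrative; the rates are the ones quoted above):

    # 64k writes, block vs. raw device: 2.4MB/s vs. 240MB/s
    dd if=/dev/zero of=/dev/raid2d bs=64k count=16384
    dd if=/dev/zero of=/dev/rraid2d bs=64k count=16384

    # 64k reads, block vs. raw device: 22MB/s vs. 315MB/s
    dd if=/dev/raid2d of=/dev/null bs=64k count=16384
    dd if=/dev/rraid2d of=/dev/null bs=64k count=16384

    # 16k writes to the block device: 530kB/s
    dd if=/dev/zero of=/dev/raid2d bs=16k count=65536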
To make sure there's nothing wrong with my discs, I then configured a two-
component level 1 array with 128 SpSU (again giving a 64k stripe size, I hope).
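The mirror's config is the same shape (again with placeholder dk names):

    START array
    # numRow numCol numSpare
    1 2 0

    START disks
    /dev/dk15
    /dev/dk16

    START layout
    # sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
    128 1 1 1

    START queue
    fifo 100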
With that, I write 57MB/s to the block device and 85MB/s to the raw device.
Reading is 21MB/s for the block device and 87MB/s for the raw device.
So there's nothing wrong with my discs, I think; there's something wrong with
level 5 RAIDframe. My impression is that NetBSD 4 didn't have this issue, but
I can't test because the discs on that machine are too large.
How can I analyze/debug this? Performance is so insanely bad I cannot possibly
put this file server into production.