Re: poor write performance with RAID 5 Raidframe
On Sat, Nov 29, 2014 at 06:29:29PM -0500, Atticus wrote:
> Regardless of hardware/software configuration, RAID 5 is going to have
> horrific performance.
Horsepuckey. I can only assume you expect RAID 6 would perform even worse.
Here's 80GB of data written in 5 interleaved streams of 16k writes to a
RAID 6 set on a 5-year-old controller:
# date ; for i in 0 1 2 3 4; do dd if=/dev/zero of=test$i bs=16k count=1048576 &done ; wait ; date
Mon Dec 1 08:24:42 UTC 2014
1048576+0 records in
1048576+0 records out
17179869184 bytes transferred in 488.377 secs (35177473 bytes/sec)
1048576+0 records in
1048576+0 records out
17179869184 bytes transferred in 489.194 secs (35118724 bytes/sec)
1048576+0 records in
1048576+0 records out
17179869184 bytes transferred in 493.197 secs (34833685 bytes/sec)
1048576+0 records in
1048576+0 records out
17179869184 bytes transferred in 493.270 secs (34828530 bytes/sec)
1048576+0 records in
1048576+0 records out
17179869184 bytes transferred in 493.378 secs (34820906 bytes/sec)
Mon Dec 1 08:32:56 UTC 2014
[1] Done dd if=/dev/zero of=test${i} bs=16k count=1048576
[2] Done dd if=/dev/zero of=test${i} bs=16k count=1048576
[3] Done dd if=/dev/zero of=test${i} bs=16k count=1048576
[4] Done dd if=/dev/zero of=test${i} bs=16k count=1048576
[5] Done dd if=/dev/zero of=test${i} bs=16k count=1048576
That's about 33MB/sec per stream, or roughly 165MB/sec aggregate. Doesn't
look too shabby to me, particularly given the deliberately small size of
each write.
The problem with parity RAID configurations is lining everything up neatly
to the stripe size and properly buffering data on write. Good hardware
controllers (like, say, Areca) do this well even when confronted with
many simultaneous streams of small writes; bad ones (like, say, LSI) do it
only when given a benchmarketing workload of a single stream of large
writes or a giant nonvolatile cache to use for coalescing. In software it
all depends on the workload, but there are well-known write-gathering
approaches like LFS for using parity RAID effectively (it's not really all
that surprising that LFS and RAID came from the same project at Berkeley),
and they can produce hardware-like results when applied correctly.
Unfortunately, one thing a highly tunable system like RAIDframe, coupled
with small maximum I/O sizes and old filesystems, lets you see is just how
wrong it can all get when you don't configure everything just right.
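For the record, here's roughly what "lining everything up" means on
RAIDframe, as a sketch only -- the device names, stripe-unit size, and
newfs parameters below are illustrative assumptions, not tested numbers
for anyone's particular box. With 5 components at level 5 you have 4 data
columns, so a 16k stripe unit (32 sectors) gives a 64k full data stripe,
which you can then match with the FFS block size:

  # /etc/raid0.conf -- 5-component RAID 5, 16k stripe units (illustrative)
  START array
  # numRow numCol numSpare
  1 5 0

  START disks
  /dev/wd1e
  /dev/wd2e
  /dev/wd3e
  /dev/wd4e
  /dev/wd5e

  START layout
  # sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
  32 1 1 5

  START queue
  fifo 100

  # raidctl -C /etc/raid0.conf raid0     (first-time configuration)
  # raidctl -I 2014120101 raid0          (write component labels; serial is arbitrary)
  # raidctl -iv raid0                    (rewrite parity)
  # newfs -b 65536 -f 8192 /dev/rraid0a  (FFS block size = one full data stripe)

Get those numbers out of step with one another and writes that could have
been full-stripe writes degrade into partial-stripe read-modify-write
cycles, which is exactly the behavior people then blame on RAID 5 itself.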
> It is also nearly useless in terms of redundancy.
Also horsepuckey. The RAID 6 set I ran the above test on has 12 drives;
it's had two disk failures in three years, and no double failures. In
other words, a RAID 5 set would have provided 100% availability on that
system despite the failure of 1/6 of its spindles. To believe that RAID 5
is "nearly useless in terms of redundancy" you have to either know nothing
about probability or believe that all, or almost all, disk failures are
highly correlated. It's just not so.
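A quick back-of-the-envelope, with purely illustrative numbers (say a 5%
annual failure rate per drive and a 12-hour rebuild window): after one
drive in a 12-disk set dies, the chance that one of the remaining 11 fails
before the rebuild finishes is roughly

  11 * 0.05/yr * (12 h / 8766 h/yr) ~= 0.00075, i.e. about 0.08% per rebuild

Quibble with the inputs all you like; you need strongly correlated
failures to get from numbers like that to "nearly useless".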
> If you're looking for good performance and redundancy, RAID 10 is the way to
> go.
Sure, if you've got enough space and power for twice as much raw storage
as you actually get to use -- and do try not to buy one of those
controllers that builds mirrors-of-stripes instead of stripes-of-mirrors...
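If you do go that way with RAIDframe, the stripes-of-mirrors layering
looks roughly like this -- a sketch with illustrative device names, so
adjust to taste: build two level 1 sets first, then stripe a level 0 set
across partitions on them, so that one dead disk only degrades a single
mirror instead of taking out half the stripe:

  # raid0.conf (and a matching raid1.conf) -- two 2-disk mirrors
  START array
  1 2 0
  START disks
  /dev/wd1e
  /dev/wd2e
  START layout
  128 1 1 1
  START queue
  fifo 100

  # raid2.conf -- level 0 stripe across the two mirrors
  START array
  1 2 0
  START disks
  /dev/raid0e
  /dev/raid1e
  START layout
  128 1 1 0
  START queue
  fifo 100

Do it the other way around (mirroring two stripes) and one failure in each
half kills the whole thing; with the mirrors at the bottom you have to
lose both disks of the same mirror.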
Real world problems seldom have one-size-fits-all solutions.
Thor