I have a newly minted & commissioned NetBSD 6.1.5 server, featuring a
3.6 TiB RAIDframe RAID5 mounted as /home. I decided to check I/O
performance as follows:
[wam@4256EE1, ~, 5:48:00pm] 408 % time dd if=/dev/zero of=./testfile bs=16k count=32768
32768+0 records in
32768+0 records out
536870912 bytes transferred in 45.772 secs (11729243 bytes/sec)
whew !!!! that took (0.096 cpu + 2.585 sys) sec., 0:45.77 elapsed time tot, 5.8% CPU efficiency
(0 text, 0 data, 0 max) KB, (0+25) io, 0 pfs + 0 swaps
[wam@4256EE1, ~, 5:48:48pm] 409 %
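For what it's worth, my next step is probably to re-run the test with a
larger block size and a read-back pass, roughly along these lines
(bs=64k is just a guess at something nearer the stripe width, not a
tuned value, and ./testfile2 is an arbitrary name):

time dd if=/dev/zero of=./testfile2 bs=64k count=8192
time dd if=./testfile2 of=/dev/null bs=64k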
The RAID in question has 4 active drives, 1 parity drive & 1 spare,
created from identical ~900 GiB partitions on each of 6 7200 RPM 1 TB
SATA3 HDDs. Those drives purportedly have platter I/O speeds of around
120 MiB/s (observed on other boxen). With 4 drives in parallel, that
would be 480-ish MiB/s sustainable, under ideal conditions. I see
about 11 MiB/s above. That implies somewhat non-ideal conditions,
which might not be surprising :-/. I *thought* I set up the RAID for
reasonably optimum performance during provisioning of the machine, as
breath-takingly/tediously documented on-list. What sort of online
diagnostics can I do (dumpfs, etc.) on the mounted filesystem to
assess where I might reconfigure/tune the RAID for better performance?
(A few candidate commands are sketched in the P.S. below.)
I logged in using SSH as my regular user, then su'ed to root, so I
(think I) need to keep the fs mounted. Any clues appreciated. TIA &
have a good one.
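
P.S. The sort of thing I had in mind, in case it helps frame an answer
(raid0 is just an assumption here; I'd substitute the actual RAIDframe
device, and /home is the mounted filesystem):

raidctl -s raid0          (component & parity status)
raidctl -G raid0          (regenerate the current config, incl. the stripe layout)
dumpfs /home | head -n 25 (fs block/frag sizes, to compare against the stripe size)
mount -v | grep home      (current mount options)

As far as I know all of those are safe to run against a live, mounted
filesystem.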