tech-kern archive
Re: Where is the component queue depth actually used in the raidframe system?
On Thu, 14 Mar 2013 10:32:26 -0400
Thor Lancelot Simon <tls%panix.com@localhost> wrote:
> On Wed, Mar 13, 2013 at 09:36:07PM -0400, Thor Lancelot Simon wrote:
> > On Wed, Mar 13, 2013 at 03:32:02PM -0700, Brian Buhrow wrote:
> > > Hello.  What I'm seeing is that the underlying disks
> > > under both a raid1 set and a raid5 set never see any more
> > > than 8 active requests at once across the entire bus of disks.
> > > This leaves a lot of disk bandwidth unused, not to mention less
> > > than stellar disk performance.  I see that RAIDOUTSTANDING
> > > defaults to 6 if not otherwise defined, and this suggests that
> > > it is the limiting factor, rather than the actual number of
> > > requests allowed to be sent to a component's queue.
> >
> > It should be the sum of the number of openings on the underlying
> > components, divided by the number of data disks in the set. Well,
> > roughly. Getting it just right is a little harder than that, but I
> > think it's obvious how.
>
> Actually, I think the simplest correct answer is that it should be the
> minimum number of openings presented by any individual underlying
> component.  I cannot see any good reason why it should be either more
> or less than that value.
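Roughly, the two quoted heuristics amount to something like the sketch
below.  None of this is actual RAIDframe code; the structure and field
names are invented purely for illustration.

/*
 * Sketch only -- invented names, not RAIDframe internals.
 */
struct sketch_component {
	int openings;	/* queue depth advertised by the underlying disk */
};

/* First suggestion: total openings spread across the data disks. */
static int
outstanding_by_sum(const struct sketch_component *comp, int ncomp, int ndata)
{
	int i, sum = 0;

	for (i = 0; i < ncomp; i++)
		sum += comp[i].openings;
	return sum / ndata;
}

/* Second suggestion: the smallest per-component queue sets the limit. */
static int
outstanding_by_min(const struct sketch_component *comp, int ncomp)
{
	int i, min = comp[0].openings;

	for (i = 1; i < ncomp; i++)
		if (comp[i].openings < min)
			min = comp[i].openings;
	return min;
}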
Consider the case where a read spans two stripes... Unfortunately, the
two stripe reads will be issued independently, requiring two IOs to a
given disk even though there is only one logical request.
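For instance, with the minimum-openings rule and components that each
advertise 8 openings, 8 outstanding stripe-spanning reads could want up
to 16 slots in a single component's queue, so the RAIDframe-level limit
and the per-disk queue occupancy are not in a simple 1:1 relationship.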
The reason '6' was picked back in the day was that it seemed to offer
reasonable performance while not requiring a huge amount of memory to
be reserved for the kernel. And part of the issue there was that
RAIDframe had no way to stop new requests from coming in and consuming
all kernel resources :( '6' is probably a reasonable hack for older
machines, but if we can come up with something self-tuning I'm all for
it... (Having this be self-tuning is going to be even more critical when
MAXPHYS gets sent to the bitbucket and the amount of memory needed for
a given IO increases...)
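Purely as a sketch of what a self-tuning limit might look like (again,
none of this is existing RAIDframe code; min_openings, maxio and
mem_budget are stand-ins for whatever the kernel really exposes):

#include <stddef.h>	/* size_t */

/*
 * Pick the outstanding-request limit at configure time from the
 * weakest component's queue depth, capped by how much kernel memory
 * we are willing to tie up in in-flight IO buffers.
 */
static int
tune_outstanding(int min_openings, size_t maxio, size_t mem_budget)
{
	int by_memory = (int)(mem_budget / maxio);	/* IOs we can afford to buffer */
	int limit = (min_openings < by_memory) ? min_openings : by_memory;

	return (limit > 0 ? limit : 1);	/* never configure a zero limit */
}

The memory cap is the part that matters once MAXPHYS goes away: as the
maximum size of a single IO grows, the number of in-flight IOs we can
afford to buffer shrinks, and the limit should shrink with it.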
Later...
Greg Oster