Subject: Re: Multiprocessor with NetBSD ?
To: NetBSD-current <current-users@netbsd.org>
From: mike stone <bsdusr@yawp.com>
List: current-users
Date: 06/05/2001 23:20:37
> This notwithstanding, if we *don't* attempt SMP, I think we're going
> to get completely buried. I'm intent on picking up a dual-proc or
> quad-proc box at some point, and I'd like to be able to use BSD --
> NetBSD, specifically -- once I do.
*grin*
that brings us right back around to the original subject of this thread,
namely the various (and confusing) uses of the term 'SMP'.
it would be entirely possible to build a shared-nothing system that can
run on a box with any number of CPUs. the main point is that each
CPU would have sole authority over any resources it touches, and that
any other CPU would have to go through the authoritative CPU for access
to those resources.
that would entail some kind of peer-to-peer arbitration protocol, and
we might see some amount of performance hit because CPUs would have to
work through the current SOA (the CPU with authority over the resource)
for access to locked resources.
i don't think the penalty would be all that bad in practice, though,
because pure (shared-everything) SMP systems tend to migrate to
that kind of arrangement in operation.
if CPU-1 controls some resource, CPUs 2-N will find other things to do
while it works. as they do, each CPU will lock the resources necessary
for its own job. when CPU-1 finishes its job, and is ready to release
its resource, there are good odds that CPUs 2-N will still be working at
that exact moment. when CPU-1 goes to get its next job, it can't take
anything that requires a resource locked by CPUs 2-N, but it *can* take
any job that requires the resource it just used. and even if another
CPU is free at the same moment, CPU-1 can take jobs for its own resource
faster than any other CPU can. so over time, pure shared-everything
SMP systems drift into a nearly pure shared-nothing arrangement, because
that works best statistically at the per-job level.
the other advantage of having shared-nothing arbitration protocols built
into the kernel is that you can define a 'virtual CPU' that acts as the
gateway to resources on other machines. with suitable tweaking, a
process could migrate from one machine to another exactly the same way
it would migrate from one CPU to another. granted, such migrations
would probably be rare, because a process would run *much* faster on the
machine whose physical memory holds its context. OTOH, if you had a
networked swap space, a process might swap out on a busy machine and
swap back in on an idle one.
i believe SGI uses (or used) that kind of arrangement in its MP
hardware.
mike