True enough. But that would require a Qbus backplane with at least
eight PMI slots (four for CPUs and four for RAM), which as far as I
know does not exist. Of course one could be built.
Very NUMA! I suppose for DEC's designed-for use case it was fine.
Yes. It would also appear to be the only reasonable way to do it
within the restrictions of the Qbus.
Well...the uV2 already ignores the Qbus's restrictions in at least one
respect, with its private memory interconnect. There's no inherent
reason the interconnect couldn't have been designed to accept multiple
CPUs instead of just one, in which case the only limit on memory
sharing would be the 16MB addressing limit.
More interesting to go with the NUMA approach, and have each machine
have its own memory space. Consider each processor's memory as just a
part of the total physical memory that exists. Cross-processor memory
access would be costly, but that's what NUMA is all about.
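A minimal sketch in C of what that could look like: each CPU's local
RAM becomes one window in a single global physical address space, and
asking which node owns an address tells you whether an access is cheap
or has to cross the interconnect. The names, the even carve-up of the
16MB space, and build_memory_map() itself are all made up for
illustration, not taken from any real VAX or NetBSD code.

#include <stdint.h>

#define MAX_NODES  4
#define TOTAL_MEM  (16u << 20)   /* the 16MB addressing limit */

struct numa_node {
    uint32_t base;   /* start of this node's window in the global map */
    uint32_t size;   /* bytes of RAM local to this node */
    int      cpu;    /* owning CPU; local access is the cheap case */
};

static struct numa_node nodes[MAX_NODES];
static uint32_t node_mem;        /* per-node window size */

/* Lay the per-CPU memories end to end inside the one 16MB space. */
static void build_memory_map(int ncpus)
{
    node_mem = TOTAL_MEM / (uint32_t)ncpus;
    for (int i = 0; i < ncpus; i++) {
        nodes[i].base = (uint32_t)i * node_mem;
        nodes[i].size = node_mem;
        nodes[i].cpu  = i;
    }
}

/* Which node owns a physical address?  A remote answer means the
 * access crosses the interconnect, which is the NUMA cost. */
static int addr_to_node(uint32_t pa)
{
    return (int)(pa / node_mem);
}

Splitting the space evenly is just the simplest possible layout; real
boards could size the windows however their RAM happens to be
populated.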
Indeed. It might make a good NUMA research platform _because_
interprocessor access is so costly.
And while only one processor will deal with interrupts, DMA could
potentially go to any memory, and you could even let any processor
initiate I/O. All you need is proper synchronization. :-)
Just a SMP...er, SMOP. :-)
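For what it's worth, the simplest form that "proper synchronization"
could take is a single lock serializing I/O initiation, so any CPU may
start a transfer but only one programs the bus at a time. Here's a
hedged sketch using C11 atomics as a stand-in for the VAX's interlocked
instructions (BBSSI and friends); start_dma() and the controller it
pretends to program are hypothetical.

#include <stdatomic.h>
#include <stdint.h>
#include <stddef.h>

/* One lock serializing I/O initiation: any CPU may start a transfer,
 * but only one at a time gets to touch the bus. */
static atomic_flag io_lock = ATOMIC_FLAG_INIT;

static void io_acquire(void)
{
    while (atomic_flag_test_and_set_explicit(&io_lock,
                                             memory_order_acquire))
        ;  /* spin; on this hardware every spin is remote traffic */
}

static void io_release(void)
{
    atomic_flag_clear_explicit(&io_lock, memory_order_release);
}

/* Hypothetical entry point: any CPU calls this to kick off DMA to
 * any physical address; the lock makes the "any CPU" part safe. */
static void start_dma(uint32_t pa, size_t len)
{
    io_acquire();
    /* ... program the (imaginary) controller with pa and len ... */
    (void)pa; (void)len;
    io_release();
}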
And then implement a way for memory contents to migrate from one CPU
to another, if needed.
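At the page level, migration could be as simple in outline as copying
the page into the requester's local RAM and then repointing the
translations. In this hypothetical C sketch, remap_page() is a stub:
the pmap and TLB work it stands for is where all the real difficulty
lives.

#include <stdint.h>
#include <stddef.h>

#define PAGE_SIZE 512u   /* the VAX hardware page size */

/* Placeholder: swing every translation of old_pa over to new_pa.
 * In a real kernel this pmap/TLB work is the hard part. */
static void remap_page(uint32_t old_pa, uint32_t new_pa)
{
    (void)old_pa;
    (void)new_pa;        /* left as the proverbial SMOP */
}

/* Copy a page from its current (remote) home into local RAM, then
 * repoint the mappings so future accesses take the fast path. */
static void migrate_page(uint32_t src_pa, uint32_t dst_pa)
{
    volatile uint8_t *src = (volatile uint8_t *)(uintptr_t)src_pa;
    volatile uint8_t *dst = (volatile uint8_t *)(uintptr_t)dst_pa;

    for (size_t i = 0; i < PAGE_SIZE; i++)
        dst[i] = src[i]; /* each remote read is the slow path */

    remap_page(src_pa, dst_pa);
}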
I've been toying with the idea of experimenting with that already, but
with Ethernet as the interconnect rather than Qbus. Perhaps the uV2,
where you don't have to worry about CPUs (and associated resources)
appearing and disappearing at runtime, would be a good first step.
But I don't know whether, or how well, NetBSD can deal with NUMA
architectures?
Neither do I. But then, I know very little about how NetBSD deals with
multiprocessing of any sort.