On Tue, 27 Sep 2011 18:57:06 -0400, David Howland wrote:
> On 9/27/2011 5:01 PM, Jean-Yves Migeon wrote:
>> It might be possible, but you won't get far anyway: there is minimal work to get done (mostly code refactoring) before PV drivers can be used under a HVM domain.
>
> Minimal you say? How much might it cost? I haven't the time to learn Xen internals and do it myself, but I have been planning to donate some money...
Alas, I can't give any sort of "money" estimate. Let's say that for someone familiar with kernel work (not necessarily NetBSD), it's about a week of work, including testing:

- all the PV drivers are there; they need to be made more portable by using the proper abstractions (a game of 's/foo/bar/'),
- write the code that checks for Xen's presence at boot and maps the shared_info struct in memory (you have an example of it in x86/xen/enlighten.c in Linux; avoid copy/pasting though, it's GPLed :) ), see the sketch after this list,
- once the shared_info struct is available, the attach routines from autoconf(9) need very small adaptations, and autoconf will take care of everything for us.
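To give an idea of the detection step, here is a minimal user-land sketch (not the NetBSD code; the function name xen_hvm_present() is made up for the example) of how an HVM guest can recognize Xen through the hypervisor CPUID leaves:

/*
 * Minimal sketch, assuming x86 with GNU inline asm: Xen fills
 * ebx/ecx/edx of CPUID leaf 0x40000000 with the signature
 * "XenVMMXenVMM".
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static int
xen_hvm_present(void)
{
	uint32_t eax, ebx, ecx, edx;
	char sig[13];

	/* CPUID leaf 0x40000000: hypervisor identification. */
	__asm__ volatile("cpuid"
	    : "=a" (eax), "=b" (ebx), "=c" (ecx), "=d" (edx)
	    : "a" (0x40000000U));

	/* The 12-byte signature is split across ebx, ecx and edx. */
	memcpy(sig + 0, &ebx, 4);
	memcpy(sig + 4, &ecx, 4);
	memcpy(sig + 8, &edx, 4);
	sig[12] = '\0';

	return strcmp(sig, "XenVMMXenVMM") == 0;
}

int
main(void)
{
	printf("Xen HVM: %s\n", xen_hvm_present() ? "yes" : "no");
	return 0;
}

Note that in a real kernel the base leaf can be shifted by multiples of 0x100 (e.g. when Viridian extensions are enabled), so you would scan upwards from 0x40000000; and once Xen is detected, an HVM guest typically maps shared_info with the XENMEM_add_to_physmap hypercall (XENMAPSPACE_shared_info) rather than getting it from the PV start_info page.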
>> This is the kind of stuff you do when you want to have the "best of both worlds": memory is managed at the hardware level via HVM, and devices use PV to avoid emulation overhead. As NetBSD got PV support very early, there wasn't much push to mix HVM and PV; we will get there eventually though.
>
> Totally. In my case I want to use it for builds, so I want fast IO _and_ MP. I think I will try a test tonight... build the kernel in both environments, and see which one is faster.
For mono-CPU work, I don't think you will see much difference between HVM and PV, except for I/O (PV should clearly win). For multi-CPU number-crunching stuff, things will be different (although this depends on the situation): sometimes the mono-CPU job is _faster_ than the two-CPU one. Lock overhead and contention, maybe.
Up-to-date results would be interesting. Thanks in advance if you have some!
--
Jean-Yves Migeon
jeanyves.migeon%free.fr@localhost