Re: Lightweight virtualization - the rump approach
On Sat May 15 2010 at 12:44:11 +0200, Jean-Yves Migeon wrote:
> >>- the basic architecture; how did/do you achieve such functionality?
> >>Adding an extra layer of indirection within the kernel?
> >
> >There's no artificial extra layer of indirection in the code (and,
> >compared to other virtualization technologies, there's one less at
> >runtime). It's mostly about plugging into key places and understanding
> >what you can use directly from the host (e.g. locking, which is of
> >course very intertwined with scheduling) and what you really don't want
> >to be emulating in userspace at all (e.g. virtual memory). Due to some
> >magical mystery reason, even code which was written 20 years ago tends
> >to allow for easy separation.
>
> "Magical mystery reason?" Could you elaborate on that one? Is it due to
> architecture decisions made in the past to build up operating systems,
> or is it rather specific to the code base you are working with?
If I could, it wouldn't be a magical mystery reason ;)
But what I meant is that (good) code tends to naturally be structured so
that a rump-like construct where pieces are cherry-picked is possible.
Seriously, how many people would have believed 2 years ago that you
can boot and run a NetBSD kernel without vfs? Yet, it just works.
I find that quite magical and mysterious.
> >>Suppose we have
> >>improvements in one part of it, like TCP, IP stacks, could it directly
> >>benefit the rumpnet component, or any service sitting above it?
> >
> >Could you elaborate this question?
>
> Just giving examples:
> - suppose that TCPCT support is added to NetBSD's TCP stack. Is it
> trivial to port/adapt/use such improvements within rump kernels and systems?
> - can extensions to the USB stack (bringing in USBv3, or the UVC class)
> within the kernel be "easily" used (or ported) through rump?
Hmm, I think there is some confusion as to how rump works. The best
way to find out is probably by running some programs and seeing what
they do and what routines in what files they call (easy now thanks to
Paul Koning fixing our forever-broken gdb).
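For instance, something along these lines is enough to poke at a rump
kernel from a plain userland program (the link set, roughly -lrump
-lrumpuser -lpthread, is from memory, so treat this as a sketch):
=== snip ===
/*
 * Minimal rump client sketch: boot a rump kernel inside this
 * process and make one system call that it services.
 */
#include <rump/rump.h>
#include <rump/rump_syscalls.h>

#include <err.h>
#include <stdio.h>

int
main(void)
{

	/* bootstrap the kernel living inside this process */
	if (rump_init() != 0)
		errx(1, "rump_init failed");

	/* serviced by the rump kernel, not the host kernel */
	printf("rump kernel pid: %d\n", (int)rump_sys_getpid());
	return 0;
}
=== snip ===
Breaking on the rump_sys_* call in gdb and stepping in shows exactly
which kernel routines in which files end up running.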
When you get above rumpkern (and some code in the faction libs, like
vm support routines in rumpvfs), the only code not from the
regular kernel is (off the top of my head):
a) devices specific to rump (like virtif already mentioned in the
previous email)
b) some module improvements (supporting dynamic attach where the kernel
code is not yet module-ready, e.g. most usb drivers)
c) things dealing with the lack of devfs. A regular system can assume
   MAKEDEV has been run; a rump kernel cannot, so drivers create the
   necessary device nodes when they attach (if the device node files
   cannot be encoded with my recent conf/majors stuff)
"b" and "c" will go away when NetBSD is fixed. So there's really nothing
rump-specific left.
So to answer your original question, for TCPCT it's probably a matter of
adding a source file to SRCS (a Makefile sketch follows below the config
example). OTOH, I already have most of the support done to write rump
Makefiles in terms of config, e.g. ffs looks like this:
=== snip ===
include "conf/files"
file-system FFS
options WAPBL
options APPLE_UFS
options UFS_DIRHASH
options FFS_EI
options QUOTA
=== snip ===
(That can and should be used for the kernel module too, of course.
Full exploitation requires some of the stuff uebs was writing about in
his tech-kern marathon 2 months ago.)
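For comparison, a component Makefile in the current SRCS style looks
roughly like the following; the paths and file list are abbreviated
from memory, and tcp_tcpct.c is a made-up name for the new TCPCT code:
=== snip ===
# Illustrative only; the real Makefiles live under src/sys/rump.
.PATH:	${.CURDIR}/../../../../netinet

LIB=	rumpnet_netinet
SRCS=	ip_input.c ip_output.c tcp_input.c tcp_output.c # ...
SRCS+=	tcp_tcpct.c	# hypothetical new TCPCT source file

.include <bsd.lib.mk>
=== snip ===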
For USB it depends on what USB3 demands from the host controller.
You might need to modify ugenhc. Notably, that doesn't have anything to
do with running the USB stack in a rump kernel, "only" with accessing the
devices. ... I guess the USB stack without devices is a little boring.
Another more generic but maybe less flexible (in terms of runtime) way
would be to use IOMMUs and run the hardware host controller driver in
a rump kernel.
> On a more general perspective, consider that a specific driver, file
> system, network protocol... is ported within NetBSD. Would that make it
> easy to port to another OS via rump? Does the code have to be written
> following specific guidelines, so it can take advantage of the
> "faction" layer?
I'm not really sure what you mean, but I think "port" is the wrong word.
The portability of rump and the drivers within is defined in terms
of rumpuser.
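To give a flavor of what "defined in terms of rumpuser" means: all of
the rump kernel's host dependencies are funneled through a small set of
hypercalls, so hosting rump somewhere reduces to implementing that set.
The prototypes below are paraphrased from memory; see
sys/rump/include/rump/rumpuser.h for the authoritative versions:
=== snip ===
/* Paraphrased flavor of the rumpuser hypercalls; not verbatim. */
void	*rumpuser_malloc(size_t len, int canfail);
int	 rumpuser_open(const char *path, int flags, int *error);
ssize_t	 rumpuser_read(int fd, void *buf, size_t len, int *error);
=== snip ===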
> - smaller code size.
> - smaller components, so you reach higher assurance. Running different
> processes under one general-purpose OS represents tons of state.
> - having bugs within processes in a dom0 could potentially compromise
> the underlying OS, and as a consequence, the rest of the host, including
> domUs, due to dom0 privileges.
>
> > The only reason I can think of
> >is that you don't trust your dom0 OS enough, but then again you can't
> >really trust Xen guests either?
>
> Well, the dom0 should be minimal, for security reasons. In a "perfect"
> world, all devices would be exported to "semi privileged domUs" and
> protected from others with different mechanisms, like IOMMU. The dom0
> would just be an OS running the XenStore, and some simple backends.
Well, maybe. That does make some sense if you are already capable
of hosting domUs. I never found Xen very convenient for my use case
(occasional testing) because it needs a special dom0 kernel. Why isn't
that stuff in GENERIC again? IIRC there were some device support issues
years ago, but do they still remain?
That's not to say I wouldn't love to see a rump process vs. Xen hypercall
hosting comparison if someone wants to play with it.
Not fully related, but it came to mind here, perhaps due to the security
concerns: one thing I've been thinking of is that it would be quite nice
to run a separate "web search" Firefox instance with a rump networking
stack with its own IP, so it would be less trivial for web search
providers to be evil. A sketch of such a process-private stack follows.
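Roughly, it means all sockets come from a TCP/IP stack that exists only
inside the process (link set from memory, something like
-lrumpnet_netinet -lrumpnet_net -lrumpnet -lrump -lrumpuser, so again a
sketch rather than a recipe):
=== snip ===
/*
 * Sketch: sockets from a TCP/IP stack private to this process.
 * The socket below lives in the rump kernel's network stack,
 * not the host's.
 */
#include <sys/types.h>
#include <sys/socket.h>

#include <rump/rump.h>
#include <rump/rump_syscalls.h>

#include <err.h>
#include <stdio.h>

int
main(void)
{
	int s;

	if (rump_init() != 0)
		errx(1, "rump_init failed");

	s = rump_sys_socket(PF_INET, SOCK_STREAM, 0);
	if (s == -1)
		err(1, "rump socket");
	printf("fd %d comes from the process-private stack\n", s);

	/*
	 * For real traffic you would create a virtif interface
	 * (backed by /dev/tap on the host) and assign it its own
	 * address with the usual ioctls via rump_sys_ioctl().
	 */
	return 0;
}
=== snip ===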
> >>- If rump could be used both for lightweight virtualization (like rump
> >>fs servers) and for more heavyweight cases (netbsd-usermode...)?
> >
> >Usermode = rump+more, although paradoxically rump = usermode+more also
> >applies (where more != 0 and usermode != rump ;).
> >
> >I'm no longer really sure why someone would want to use a usermode
> >operating system on top of POSIX at all, since scheduling in them seems
> >quite inefficient in my experience. But if someone does, the overlap is
> >that a lot of the drivers written for rump can be used, for example ugenhc
> >which accesses host USB devices or virtif which shovels network traffic
> >to/from /dev/tap. rumpfs could probably be extended into a full-blown
> >hostfs, etc ...
>
> Wild guess: from an admin PoV (not a developer), compared to other
> approaches like capabilities, or rump, the containers/zones approach is
> familiar; you "deprivilege" a full blown kernel in an
> implementation-defined namespace protected by the host kernel; then, you
> configure the guest in the same way as you would do it with a
> traditional general OS sitting above.
Oh, I wasn't arguing against full virtualization; that is definitely
useful. Just that the usermode way of doing it doesn't seem to fit
any bill. It's not particularly fast (vs. e.g. Xen) and it's not
particularly convenient due to limited hostability (vs. e.g. qemu).
For these reasons, I don't find it better than the alternatives. Plus,
it has the full-virtualization admin drag where you need to set up and
maintain an entire system regardless of how little you are actually
interested in virtualizing. Sure, it's "familiar" maintenance, but
maintenance nonetheless.