NetBSD-Users archive
Re: nvmm users - experience
I haven’t done any testing of NVMM under 10.x or -current, so my observations may be a bit dated. I’m also making an assumption, maybe an incorrect one, that KVM in Linux is very similar to NVMM in NetBSD. Based on that I’d expect similar responsiveness and support in both, which isn’t what I’ve observed. All my testing was done on the same machine, and I’ve tried to use the same setups in both cases, i.e. the only basic change was which type of acceleration was used.
(BTW, assuming KVM and NVMM are indeed quite similar, a test of NVMM’s completeness would be whether it can install, run, and offer the same level of performance as KVM on identical HW running the same guest systems. The acid test seems to be OS X, which generally installs and runs under KVM but won’t under NVMM, although most people are probably not interested in those systems. One interesting observation in this regard: OS X 10.6 (Snow Leopard) installs and runs under KVM but won’t install under NVMM. However, the KVM-installed OS will run under NVMM.)
For those systems which ran in both setups I’d generally see poorer mouse and keyboard response under NVMM than under KVM. In some cases I had to change the mouse/keyboard emulation from PS/2 to USB or vice versa to get them to work or be a bit more responsive. I’m not sure if this is a problem with NVMM, the way it interfaces with the kernel, or the NetBSD kernel itself, though. In all my tests I was interacting with the guest OS from the actual keyboard on my system, not over network access.
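If anyone wants to reproduce that switch, it’s just a change in the QEMU input devices; a minimal sketch (the disk image name is made up, and QEMU emulates PS/2 input by default):

    # PS/2 mouse/keyboard (QEMU's default), NVMM acceleration
    qemu-system-x86_64 -accel nvmm -m 2048 -hda guest.img

    # USB variant: add a USB controller plus tablet/keyboard devices,
    # which sometimes gives smoother pointer tracking in the guest
    qemu-system-x86_64 -accel nvmm -m 2048 -hda guest.img \
        -usb -device usb-tablet -device usb-kbd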
Commenting on your follow-up about Xen startup and how I’d do this with NVMM: I have a rudimentary script that I use to define/create guest systems, which also includes some hooks for starting guests up. I haven’t found any way in NVMM to send shutdown commands or instructions to a guest, though, which does seem to be available in Xen and KVM. I did put hooks in my script to back up and restore guest systems.
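The startup hooks don’t amount to much more than launching one qemu process per guest in the background; a trimmed-down sketch of the idea (paths, guest names, and sizes here are made up for illustration):

    #!/bin/sh
    # start_guests.sh - start each defined guest under qemu with NVMM
    GUESTDIR=/vm

    for guest in net app mail; do
        qemu-system-x86_64 \
            -accel nvmm \
            -m 1024 \
            -hda "$GUESTDIR/$guest.img" \
            -display none -daemonize \
            -pidfile "$GUESTDIR/$guest.pid"
    done

For shutdown the best I can do is signal the qemu process from its pidfile, which is more like pulling the plug than a clean guest shutdown.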
-bob
On May 23, 2023, at 8:11 AM, Mathew, Cherry G.* <c%bow.st@localhost> wrote:
> Hi Robert, Matthias,
>
> (taking current-users@ off Cc:)
>
>
> Thank you so much for your respective replies. Replying further inline
> below.
>
>>>>>> "bob" == Robert Nestor <rnestor%mac.com@localhost> writes:
>
> bob> My experience with nvmm is limited, mainly from trying to use
> bob> it on 9.x, but I have the feeling that development on it has
> bob> pretty much ceased and there’s very little interest in
> bob> improving it. I’ve been running a comparison experiment seeing
> bob> what it takes to get as many systems as possible running in
> bob> various environments - NVMM, plain qemu, Linux KVM and
> bob> eventually Xen. For the time being I’ve given up on NVMM and
> bob> have been concentrating on Linux KVM as I’ve found a number of
> bob> systems that seem to install and run fine there which don’t
> bob> under NVMM.
>
> Ok, so it looks like there might be some emulation completeness issues
> (related to your variety of workloads) - but I was wondering if you saw
> any difference in the "responsiveness" of the guest OS - for eg: when
> you login using ssh, have you ever noticed jitter on your keystrokes, or
> intermittent freezes, for eg: - I'm specifically asking about the
> NetBSD/nvmm case.
>
>
> [...]
>
>
>>>>>> "MP" == Matthias Petermann <mp%petermann-it.de@localhost> writes:
>
>
> [...]
>
>
> MP> I came across Qemu/NVMM more or less out of necessity, as I had
> MP> been struggling for some time to set up a proper Xen
> MP> configuration on newer NUCs (UEFI only). The issue I encountered
> MP> was with the graphics output on the virtual host, meaning that
> MP> the screen remained black after switching from Xen to NetBSD
> MP> DOM0. Since the device I had at my disposal lacked a serial
> MP> console or a management engine with Serial over LAN
> MP> capabilities, I had to look for alternatives and therefore got
> MP> somewhat involved in this topic.
>
> MP> I'm using the combination of NetBSD 9.3_STABLE + Qemu/NVMM on
> MP> small low-end servers (Intel NUC7CJYHN), primarily for classic
> MP> virtualization, which involves running multiple independent
> MP> virtual servers on a physical server. The setup I have come up
> MP> with works stably and with acceptable performance.
>
> I have a follow-on question about this - Xen has some config tooling
> related to startup - so you can say something like
>
> 'xendomains="dom1 dom2"' in /etc/rc.conf, and these domains will be
> started during bootup.
>
> If you did want that for nvmm, what do you use ?
>
> Regarding the hardware issues, I think I saw some discussion on
> port-xen@ so will leave it for there.
>
>
> MP> Scenario:
>
> MP> I have a small root filesystem with FFS on the built-in SSD, and
> MP> the backing store for the VMs is provided through ZFS ZVOLs. The
> MP> ZVOLs are replicated alternately every night (full and
> MP> incremental) to an external USB hard drive.
>
> Are these 'zfs send' style backups ? or is the state on the backup USB
> hard drive ready for swapping, if the primary fails for eg ?
>
> I have been using a spindisk as a mirror component with NVMe - bad idea!
> It slows down the entire pool.
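FWIW, the incremental "zfs send" style replication being asked about here typically looks something like this (pool and dataset names are made up):

    # First run: full replication of a ZVOL to the USB pool
    zfs snapshot tank/vm/net@mon
    zfs send tank/vm/net@mon | zfs receive -F usbpool/vm/net

    # Subsequent nights: incremental from the previous snapshot
    zfs snapshot tank/vm/net@tue
    zfs send -i @mon tank/vm/net@tue | zfs receive usbpool/vm/net

Whether the copy on the USB side is ready for swapping in depends on how the receiving pool is laid out, which only Matthias can answer.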
>
> MP> There are a total of 5 VMs:
>
> MP> net (DHCP server, NFS and SMB server, DNS server)
> MP> app (Apache/PHP-FPM/PostgreSQL hosting some low-traffic web apps)
> MP> comm (ZNC)
> MP> iot (Grafana, InfluxDB for data collection from two smart meters
> MP> every 10 seconds)
> MP> mail (Postfix/Cyrus IMAP for a handful of mailboxes)
>
> MP> Most of the time, the CPU usage of the host with this
> MP> "load" is around 20%. The provided services consistently respond
> MP> quickly.
>
> Ok - and these are accounted as the container qemu processes' quota
> scheduling time, I assume ? What about RAM ? Have you had a situation
> where the host OS has to swap out ? Does this cause trouble ? Or does
> qemu/nvmm only use pinned memory ?
>
> MP> However, I have noticed that depending on the load, the clocks
> MP> of the VMs can deviate significantly. This can be compensated
> MP> for by using a higher HZ in the host kernel (HZ=1000) and
> MP> tolerant ntpd configuration in the guests. I have also tried
> MP> various settings with schedctl, especially with the FIFO
> MP> scheduler, which helped in certain scenarios with high I/O
> MP> load. However, this came at the expense of stability.
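The two knobs being described here are the host kernel's HZ option and a more forgiving ntpd in the guests; roughly (the "tinker" line is a common VM-guest workaround I'm assuming, not something from the original mail):

    # Host: NetBSD kernel config option for a 1000 Hz clock
    options HZ=1000

    # Guest /etc/ntp.conf: keep ntpd running despite large offsets
    tinker panic 0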
>
> I assume this is only *within* your VMs, right ? Do you see this across
> guest Operating Systems, or just specific ones ?
>
> MP> Furthermore, in my system configuration, granting a guest more
> MP> than one CPU core does not seem to provide any
> MP> advantage. Particularly in the VMs where I am concerned about
> MP> performance (net with Samba/NFS), my impression is that
> MP> allocating more CPU cores actually decreases performance even
> MP> further. I should measure this more precisely someday...
>
> ic - this is interesting - are you able to run some tests to nail this
> down more precisely ?
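One way to nail it down might be to boot the same guest image with different -smp values and run an identical benchmark each time; a rough, illustrative sketch (the image name, sleep interval, and the dd "benchmark" are placeholders):

    #!/bin/sh
    # Compare guest responsiveness at different vCPU counts
    for n in 1 2 4; do
        qemu-system-x86_64 -accel nvmm -m 1024 -smp "$n" \
            -hda net.img -display none -daemonize -pidfile vm.pid
        sleep 60    # crude: give the guest time to boot its services
        time dd if=/nfs/testfile of=/dev/null bs=1m count=1024
        kill "$(cat vm.pid)"    # hard stop before the next run
    done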
>
>
> [...]
>
>
> MP> If you have specific questions or need assistance, feel free to
> MP> reach out. I have documented everything quite well, as I
> MP> intended to contribute it to the wiki someday. By the way, I am
> MP> currently working on a second identical system where I plan to
> MP> test the combination of NetBSD 10.0_BETA and Xen 4.15.
>
> There are quite a few goodies wrt Xen in 10.0 - mainly you can now run
> accelerated as a Xen guest (hvm with the PV drivers active).
>
> Thanks again for both of your feedback!
>
> --
> ~cherry