NetBSD-Users archive
Re: nvmm users - experience
Hi Robert, Matthias,
(taking current-users@ off Cc:)
Thank you so much for your respective replies. Replying further inline
below.
>>>>> "bob" == Robert Nestor <rnestor%mac.com@localhost> writes:
bob> My experience with nvmm is limited and was mainly trying to use
bob> it on 9.x, but I have the feeling that development on it has
bob> pretty much ceased and there’s very little interest in
bob> improving it. I’ve been running a comparison experiment seeing
bob> what it takes to get as many systems as possible running in
bob> various environments - NVMM, plain qemu, Linux KVM and
bob> eventually Xen. For the time being I’ve given up on NVMM and
bob> have been concentrating on Linux KVM as I’ve found a number of
bob> systems that seem to install and run fine there which don’t
bob> under NVMM.
OK, so it looks like there might be some emulation completeness issues
(related to your variety of workloads) - but I was wondering if you saw
any difference in the "responsiveness" of the guest OS. For example,
when you log in over ssh, have you ever noticed jitter on your
keystrokes, or intermittent freezes? I'm specifically asking about the
NetBSD/nvmm case.
[...]
>>>>> "MP" == Matthias Petermann <mp%petermann-it.de@localhost> writes:
[...]
MP> I came across Qemu/NVMM more or less out of necessity, as I had
MP> been struggling for some time to set up a proper Xen
MP> configuration on newer NUCs (UEFI only). The issue I encountered
MP> was with the graphics output on the virtual host, meaning that
MP> the screen remained black after switching from Xen to NetBSD
MP> DOM0. Since the device I had at my disposal lacked a serial
MP> console or a management engine with Serial over LAN
MP> capabilities, I had to look for alternatives and therefore got
MP> somewhat involved in this topic.
MP> I'm using the combination of NetBSD 9.3_STABLE + Qemu/NVMM on
MP> small low-end servers (Intel NUC7CJYHN), primarily for classic
MP> virtualization, which involves running multiple independent
MP> virtual servers on a physical server. The setup I have come up
MP> with works stably and with acceptable performance.
I have a follow-on question about this - Xen has some rc.d tooling for
startup, so you can say something like xendomains="dom1 dom2" in
/etc/rc.conf and those domains will be started during bootup.
If you wanted the same for your nvmm guests, what do you use?
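Concretely: do you roll your own, say a line per guest in /etc/rc.local
along these lines (paths and names below are purely made up):

    # hypothetical /etc/rc.local entry to start one guest at boot
    qemu-system-x86_64 -accel nvmm -m 1024 \
        -drive file=/dev/zvol/rdsk/tank/net,if=virtio,format=raw \
        -display none -daemonize -pidfile /var/run/qemu-net.pid

or do you have a proper rc.d script per VM?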
Regarding the hardware issues, I think I saw some discussion on
port-xen@ so I'll leave that for there.
MP> Scenario:
MP> I have a small root filesystem with FFS on the built-in SSD, and
MP> the backing store for the VMs is provided through ZFS ZVOLs. The
MP> ZVOLs are replicated alternately every night (full and
MP> incremental) to an external USB hard drive.
Are these 'zfs send'-style backups? Or is the state on the backup USB
hard drive ready to be swapped in if the primary fails, for example?
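By 'zfs send'-style I mean roughly the following, with made-up dataset
names:

    # first run: full stream to the USB pool
    zfs snapshot tank/vm/net@2024-06-01
    zfs send tank/vm/net@2024-06-01 | zfs receive backup/vm/net
    # subsequent runs: incremental relative to the previous snapshot
    zfs snapshot tank/vm/net@2024-06-02
    zfs send -i @2024-06-01 tank/vm/net@2024-06-02 | zfs receive -F backup/vm/net

i.e. the USB pool ends up holding a ZVOL you could in principle attach
a VM to directly.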
I have been using a spinning disk as a mirror component alongside an
NVMe device - bad idea! It slows down the entire pool.
MP> There are a total of 5 VMs:
MP>   net  (DHCP server, NFS and SMB server, DNS server)
MP>   app  (Apache/PHP-FPM/PostgreSQL hosting some low-traffic web apps)
MP>   comm (ZNC)
MP>   iot  (Grafana, InfluxDB for data collection from two smart meters
MP>         every 10 seconds)
MP>   mail (Postfix/Cyrus IMAP for a handful of mailboxes)
MP> Most of the time, the host's CPU usage with this "load" is around
MP> 20%. The provided services consistently respond quickly.
OK - and I assume this shows up as CPU time accounted to the
corresponding qemu processes? What about RAM? Have you had a situation
where the host OS has to swap out? Does this cause trouble, or does
qemu/nvmm only use pinned memory?
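For context: with other accelerators one can at least ask qemu to wire
the guest RAM, something like the following (assuming qemu's generic
-overcommit option behaves the same with nvmm - I haven't verified):

    # hypothetical: ask qemu to mlock() the guest's memory
    qemu-system-x86_64 -accel nvmm -m 2048 -overcommit mem-lock=on ...

but I don't know whether the nvmm backend does anything along those
lines by itself.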
MP> However, I have noticed that depending on the load, the clocks
MP> of the VMs can deviate significantly. This can be compensated
MP> for by using a higher HZ in the host kernel (HZ=1000) and
MP> tolerant ntpd configuration in the guests. I have also tried
MP> various settings with schedctl, especially with the FIFO
MP> scheduler, which helped in certain scenarios with high I/O
MP> load. However, this came at the expense of stability.
I assume this is only *within* your VMs, right? Do you see this across
guest operating systems, or just specific ones?
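Also, just so I understand the knobs you mean: on the host side, a
custom kernel with something like

    # host kernel config fragment (custom kernel derived from GENERIC)
    options         HZ=1000

and on the guest side an ntp.conf that tolerates large offsets, e.g.
(the exact directives are a guess on my part):

    # guest /etc/ntp.conf additions
    tinker panic 0                  # don't bail out on a huge offset
    server 0.netbsd.pool.ntp.org iburst

Is that roughly what you did?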
MP> Furthermore, in my system configuration, granting a guest more
MP> than one CPU core does not seem to provide any
MP> advantage. Particularly in the VMs where I am concerned about
MP> performance (net with Samba/NFS), my impression is that
MP> allocating more CPU cores actually decreases performance even
MP> further. I should measure this more precisely someday...
I see - this is interesting - are you able to run some tests to nail
this down more precisely?
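Even something crude would be telling - e.g. from an NFS client of the
"net" VM, once with -smp 1 and once with -smp 2 (the mount point below
is made up):

    # crude sequential write, then read, over the NFS mount
    dd if=/dev/zero of=/mnt/net/ddtest bs=1m count=1024
    dd if=/mnt/net/ddtest of=/dev/null bs=1m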
[...]
MP> If you have specific questions or need assistance, feel free to
MP> reach out. I have documented everything quite well, as I
MP> intended to contribute it to the wiki someday. By the way, I am
MP> currently working on a second identical system where I plan to
MP> test the combination of NetBSD 10.0_BETA and Xen 4.15.
There are quite a few goodies wrt Xen in 10.0 - mainly, you can now run
accelerated as a Xen guest (HVM with the PV drivers active).
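So for the 10.0_BETA + Xen 4.15 experiment, a plain HVM domU config
should be enough for the guest to pick up the PV drivers; an untested
sketch from memory, with names and paths made up:

    # xl domU config sketch (hypothetical)
    name   = "netbsd10"
    type   = "hvm"
    memory = 1024
    vcpus  = 2
    disk   = [ 'phy:/dev/zvol/dsk/tank/netbsd10,hda,w' ]
    vif    = [ 'bridge=bridge0' ]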
Thanks again to both of you for your feedback!
--
~~cherry