CVS commit: pkgsrc/emulators/qemu
Module Name: pkgsrc
Committed By: ryoon
Date: Mon May 24 14:22:08 UTC 2021
Modified Files:
pkgsrc/emulators/qemu: Makefile PLIST distinfo
pkgsrc/emulators/qemu/patches: patch-configure
patch-hw_mips_meson.build patch-include_sysemu_nvmm.h
patch-meson.build patch-meson__options.txt patch-qemu-options.hx
patch-target_i386_meson.build
Added Files:
pkgsrc/emulators/qemu/patches: patch-accel_Kconfig
patch-include_sysemu_hw__accel.h patch-nvmm-accel-ops.c
patch-nvmm-accel-ops.h patch-nvmm-all.c
patch-target_i386_nvmm_meson.build
patch-target_i386_nvmm_nvmm-accel-ops.c
patch-target_i386_nvmm_nvmm-accel-ops.h
patch-target_i386_nvmm_nvmm-all.c
Removed Files:
pkgsrc/emulators/qemu/patches: patch-accel_stubs_nvmm-stub.c
patch-contrib_ivshmem-client_ivshmem-client.c
patch-contrib_ivshmem-server_ivshmem-server.c
patch-include_sysemu_hw_accel.h patch-target_i386_helper.c
patch-target_i386_kvm-stub.c patch-target_i386_nvmm_all.c
patch-target_i386_nvmm_cpus.c patch-target_i386_nvmm_cpus.h
Log Message:
qemu: Update to 6.0.0
* Add zstd dependency.
Changelog:
== System emulation ==
=== Incompatible changes ===
Consult the [https://qemu-project.gitlab.io/qemu/system/removed-features.html 'Removed features'] page for details of suggested replacement functionality.
* The deprecated ''pc-1.0'', ''pc-1.1'', ''pc-1.2'' and ''pc-1.3'' machine types have been removed (they likely could not be used for live migration from old QEMU versions anymore anyway). Use a
newer ''pc-i440fx-...'' machine type instead.
* TileGX emulation has been removed without replacement
* The ''-show-cursor'' option has been removed. Use ''-display sdl,show-cursor=on'' instead.
* The ''-realtime'' option has been removed. Use ''-overcommit mem-lock=on|off'' instead (see the example after this list).
* The ''-tb-size'' option has been removed. Use ''-accel tcg,tb-size=...'' instead.
* The configure script --enable/disable-git-update args have been replaced with --with-git-submodules
* The ''-usbdevice audio'' option has been removed. Use ''-device usb-audio'' instead.
* The ''-usbdevice ccid'' option has been removed with no replacement
* The ''-vnc'' parameter ''acl'' option, and ''acl_*'' monitor commands have been removed.
* The ''pretty'' option is no longer accepted when used with the human monitor
* The ''change'' QMP command has been removed. Use ''blockdev-change-medium'' or ''change-vnc-password'' instead.
* The ''query-events'' QMP command has been removed
* The ''migrate_set_speed'', ''migrate_set_downtime'' and ''migrate-set-cache-size'' QMP/HMP commands have been removed.
* The ''query-cpus'' QMP command has been removed
* The ''arch'' field in the ''query-cpus-fast'' command has been removed
* The ''-chardev'' parameter ''wait'' option is no longer accepted for socket clients
* The ''ide-drive'' device type has been removed
* The ''scsi-disk'' device type has been removed
* The ''encryption_key_missing'' field has been removed from block device info data
* The ''status'' field has been removed from dirty bitmap info
* The ''dirty-bitmaps'' field has been removed from the ''BlockInfo'' struct
* The ''file'' block driver no longer permits use with block devices
* The use of ''-global'' to set floppy controllers is removed. Use ''-device floppy,...'' instead.
* The ''-drive'' option must now use ''if=none'' for drives the onboard device does not pick up.
* The ''object-add'' QMP command member ''props'' has been removed. Its contents may be used with less nesting instead.
* The mips ''fulong2e'' machine alias has been removed. Use ''fuloong2e'' instead.
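For instance, a command line that used the removed ''-show-cursor'', ''-realtime mlock=on'' and ''-tb-size'' options might be rewritten with the suggested replacements roughly as follows (guest binary and values are illustrative, not taken from this changelog):
  qemu-system-x86_64 -display sdl,show-cursor=on \
        -overcommit mem-lock=on \
        -accel tcg,tb-size=512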
=== New deprecated options and features ===
Consult the [https://www.qemu.org/docs/master/system/deprecated.html "Deprecated Features"] chapter of the QEMU System Emulation User's Guide for further details of the deprecations and their
suggested replacements.
* The --enable-fips option has been deprecated. Consumers wishing to have FIPS compliance must build QEMU with libgcrypt or gnutls, NOT nettle.
* The ''-writeconfig'' option has been deprecated. The functionality of ''-writeconfig'' is limited and the code does not even try to detect cases where it prints incorrect syntax (for example if
values have a quote in them). It will be removed without replacement.
* Boolean parameters such as ''share=on'' / ''share=off'' could be written in short form as ''share'' and ''noshare''. This is now deprecated and will cause a warning.
* ''-chardev'' backend aliases ''tty'' and ''parport'' are aliases that will be removed. Instead, the actual backend names ''serial'' and ''parallel'' should be used.
* The ''delay'' option for socket character devices is now deprecated.
* Userspace local APIC with KVM (''-M kernel-irqchip=off'')
* hexadecimal sizes with scaling multipliers (e.g. ''0x20M'')
* ''-spice password=string'' is now deprecated. Use the ''password-secret'' option instead (see the example after this list).
* ''opened'' property of ''rng-*'' objects
* ''loaded'' property of ''secret'' and ''secret_keyring''
* MIPS ''Trap-and-Emulate'' KVM support
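As an illustration of the ''password-secret'' replacement for ''-spice password=...'' (IDs, password and port below are made up for the example):
  qemu-system-x86_64 -object secret,id=spice0,data=mypassword \
        -spice port=5900,password-secret=spice0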
=== 68k ===
* Add a new machine, virt, based on virtio devices
=== Alpha ===
=== Arm ===
* QEMU now supports emulation of the Armv8.1-M architecture and the Cortex-M55 CPU
* Emulation of the ARMv8.4-TTST extension is now supported
* Emulation of the ARMv8.4-SEL2 extension is now supported
* Emulation of the FEAT_SSBS extension is now supported
* Emulation of the PAuth extension now supports an optional IMPDEF pauth algorithm which is not cryptographically secure but is much faster to compute
* Emulation of the ARMv8.4-DIT extension is now supported. (Note that QEMU's implementation does not in fact provide any timing guarantees; emulation of the extension is purely to support guests
which query its presence and work with the PSTATE.DIT bit.)
* Emulation of the ARMv8.5-MemTag extension is now supported for linux-user. (It was already supported for system emulation.)
* xlnx-zynqmp boards now support the Xilinx ZynqMP CAN controllers
* the sbsa-ref board now supports Cortex-A53/57/72 cpus
* the xlnx-versal board now has USB support, and a model of the XRAMs and the XRAM controller
* the sabrelite board emulation has been improved and it can now run U-Boot
* the npcm7xx boards support more devices: ADC, PWM, SMBus, EMC, MFT
* the gdbstub's representation of SVE registers allows GDB to properly handle aliasing
* the 'virt' board now provides a mechanism for secure (EL3) firmware to power down or reset the system
* documentation for vexpress/versatile has been updated with example kernel configuration/command lines
* A new board model mps3-an524 (using Cortex-M33) is now implemented
* A new board model mps3-an547 (using Cortex-M55) is now implemented
=== AVR ===
=== Hexagon ===
* QEMU can now emulate Qualcomm's Hexagon DSP units.
=== HPPA ===
=== Microblaze ===
=== MIPS ===
* Loongson-3 "virt" machine added
=== Nios2 ===
=== OpenRISC ===
=== PowerPC ===
* Deprecated 'compat' property of server class POWER cpus removed (use the 'max-cpu-compat' machine option instead)
* You can now explicitly choose 'kvm_type=auto' rather than only being able to do that by not setting it at all.
* powernv machine type now defaults to 1GiB of RAM
* powernv now allows an external BMC
* pseries will now send MEM_UNPLUG_ERROR QAPI message in cases where it can detect that a memory unplug has failed
* pseries will now allow cpu unplug requests to be retried, even if the guest hasn't responded to them yet.
* This will re-signal the guest, which might allow an unplug to complete that the guest previously rejected.
=== Renesas RX ===
=== Renesas SH ===
=== RISC-V ===
* Improve the sifive_u DTB generation
* Add QSPI NOR flash to Microchip PFSoC
* Improvements to the Microchip PFSoC to better support the SDK
* A range of fixes to the Hypervisor extension
* Fix some mstatus mask defines
* Ibex PLIC and UART improvements
* OpenTitan memory layout update (Breaking change)
* Initial steps towards support for 32-bit CPUs on 64-bit builds
* Automate GDB XML generation (should fix GDB E14 errors)
* SiFive OTP: handle OTP access failures
* Correctly generate a PMP failure when no PMP entry is configured
* Fixes to PMP region checking
* Fix 32-bit Linux boot problems with DTB placement
* OpenSBI upgraded to v0.9
* Support the QMP dump-guest-memory command
* Add support for the SiFive SPI controller (sifive_u)
* Initial RISC-V system documentation
* Support for high PCIe memory in the virt machine
* Fixes to the vector extensions CSR accesses
* ramfb support in the virt machine
=== s390 ===
* Linux kernels built with clang-11 and clang-12 now work correctly under tcg
=== SPARC ===
=== TileGX ===
* TileGX has been removed without replacement. TileGX was only implemented in linux-user mode, but support for this CPU was removed from the upstream Linux kernel in 2018, and it has also been dropped
from glibc, so there is no new Linux development taking place with this architecture, rendering the linux-user mode emulation rather useless. For running older binaries, users can simply use older
versions of QEMU.
=== Tricore ===
* Added Triboard with tc27x SoC
=== x86 ===
* TCG can emulate the PKS feature (protection keys for supervisor pages).
* Intel PT can now be exposed to KVM guests when <code>CPUID.(EAX=14,ECX=0).ECX[LIP]</code> (bit 31) is 1. Previous versions only supported Intel PT when LIP=0
* New <code>sev-inject-launch-secret</code> QMP command
* The WHPX accelerator supports accelerated APIC ("-accel whpx,kernel-irqchip=on")
* The microvm machine type got a second (optional) ioapic for the virtio-mmio irq lines, which in turn allows 24 (instead of 8) virtio-mmio devices.
* Support for running SEV-ES encrypted guests.
=== Xtensa ===
=== Device emulation and assignment ===
==== ACPI ====
* new ''-machine'' options ''oem-id'' and ''oem-table-id'' to allow setting custom values for ''OEM ID'' and ''OEM table ID'' ACPI table fields
* In QEMU 5.1, the PCI root UID was changed from 1 to 0 for all x86 machine types; this caused issues in Windows guests, with virtio devices being re-enumerated as new devices. QEMU 6.0 fixes this by
reverting the UID to 1 for 5.1 and older machine types. See commit 0a343a5add75 for details. For 5.2 and later machine types it might be necessary to reconfigure/reinstall the Windows VM if the disk
image in use was created on a 5.1 or older machine type.
* Support for a user-provided PCI NIC index on the ''pc'' machine type, via the new ''acpi-index'' PCI device option. For Linux guests, it lets the user use the ''onboard'' naming scheme ''enoX'',
where X is set with the ''acpi-index'' option. This makes NIC naming independent of which PCI slot the NIC is plugged into. Works with cold- and hot-plugged NICs, as long as the PCI bus used is
managed by ACPI PCI hotplug (which is enabled by default for the PCI root bus and bridges present at boot time on the latest ''pc'' machine type). An illustrative command line follows.
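As an illustration (the netdev, device and values below are hypothetical; only ''oem-id'', ''oem-table-id'' and ''acpi-index'' come from this changelog):
  qemu-system-x86_64 -machine pc,oem-id=MYOEM,oem-table-id=MYTABLE \
        -netdev user,id=net0 \
        -device virtio-net-pci,netdev=net0,acpi-index=3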
==== Audio ====
==== Block devices ====
* virtio-blk reports <tt>--device virtio-blk-pci,discard_granularity=</tt> in the virtio-blk <tt>discard_sector_alignment</tt> configuration space field so that guests with new machine types can take
advantage of this information. Previously virtio-blk devices reported <tt>--device virtio-blk-pci,logical_block_size=</tt> instead.
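A hypothetical invocation exposing a 4 KiB discard granularity might look like this (image file and IDs are illustrative):
  qemu-system-x86_64 -drive if=none,id=disk0,file=disk.qcow2 \
        -device virtio-blk-pci,drive=disk0,discard_granularity=4096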
==== Graphics ====
==== Input devices ====
==== IPMI ====
==== Multi-process QEMU ====
* The experimental <code>-machine x-remote</code> and <code>-device x-pci-proxy-dev</code> options have been added to support out-of-process device emulation. Currently only the
<code>lsi53c895</code> SCSI device can be emulated in a separate process. Please see [https://qemu.readthedocs.io/en/latest/system/multi-process.html the documentation] and
[[Features/MultiProcessQEMU]] for details on this experimental feature, which is still subject to change.
==== Network devices ====
==== NVDIMM ====
* nvdimm devices will check that <code>-device nvdimm,unarmed=on</code> option is used when using <code>-object memory-backend-file,readonly=on</code>
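A read-only NVDIMM could therefore be configured along these lines (paths, sizes and IDs are illustrative, not from this changelog):
  qemu-system-x86_64 -machine pc,nvdimm=on -m 4G,slots=2,maxmem=8G \
        -object memory-backend-file,id=mem1,mem-path=/images/nvdimm.img,size=1G,readonly=on \
        -device nvdimm,id=nvdimm1,memdev=mem1,unarmed=on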
==== NVMe ====
===== Emulated NVMe Controller =====
* ''Highlights''
** The implemented spec version has been bumped to v1.4
** Experimental support for Zoned Namespaces (TP 4053) has been added
** Experimental support for NVM Subsystems, multipath I/O and namespace sharing
** Experimental support for Metadata and End-to-End Data Protection
* ''New commands''
** Dataset Management
** Compare
** Simple Copy (TP 4065)
** Format NVM
** Verify
* ''Other new features''
** Support for reporting the Deallocated or Unwritten Logical Block Error (DULBE)
** Namespace UUID reported as a Namespace Descriptor
** Support for Namespace Types (TP 4056)
** Support for triggering a SMART Critical Warning through QMP
** Controller Memory Buffer support has been enhanced for NVMe v1.4 (to revert to v1.3 behavior, use the new <code>legacy-cmb</code> controller parameter)
** Persistent Memory Region RDS/WDS support
* ''New log pages''
** Commands Supported and Effects
==== PCI/PCIe ====
* The 'pvpanic-pci' device is a PCI-device version of the 'pvpanic' ISA device, which can be used on systems with only PCI and no ISA bus as a mechanism for the guest to inform QEMU that it has
panicked.
==== SCSI ====
* Rework of the ESP SCSI emulation to allow mixed FIFO/(P)DMA commands along with various other fixes
==== SD card ====
==== SMBIOS ====
==== TPM ====
==== USB ====
* Support for writing USB traffic to packet capture files for inspection with Wireshark has been added. Use the new pcap=<file> property added to all USB devices to enable this.
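For example, traffic of an emulated USB device might be captured with something like (device and file name are illustrative):
  qemu-system-x86_64 -usb -device usb-tablet,pcap=usb-tablet.pcap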
==== VFIO ====
==== virtio ====
==== Xen ====
* A new [https://qemu.readthedocs.io/en/latest/system/guest-loader.html guest loader] which allows testing of Xen-like hypervisors booting kernels without messing around with firmware/bootloaders
==== fw_cfg ====
==== 9pfs ====
==== virtiofs ====
* Security fix for CVE-2020-35517 - prevent opening of special files
* Security fix for CVE-2021-20263 - when used with xattrmap, drop remapped security.capability
* Performance improvements with new guest kernel feature FUSE_KILLPRIV_V2
==== Semihosting ====
* Added support for RISC-V (ARM-style semihosting)
=== Character devices ===
=== Crypto subsystem ===
==== experimental qmp interface ====
=== GUI ===
* vnc: support for cursors with alpha channel has been added.
* vnc: support for extended desktop resize has been added. With virtio-vga the guest display resolution follows the client window size.
=== TCG Plugins ===
* New API for querying details about HW access
* Bug fix to avoid double counting some instructions when using -icount
=== Host support ===
=== Memory backends ===
* hostmem-file: added a readonly=on|off option
=== Block device backends and tools ===
* ''qemu-img'' gained more accurate parsing for size values. Previously, only 53 significant bits were supported, and large sizes could end up with inadvertent rounding; now the parser supports a
full 64 bits of precision.
* The ''object-add'' QMP command is now available in qemu-storage-daemon.
* qemu-storage-daemon supports a ''--pidfile'' option now
* The ''parallels'' image format driver has gained support for dirty bitmaps in read-only mode
=== Tracing ===
=== Miscellaneous ===
* The command line option ''-object'' (or ''--object'') accepts JSON input now in all binaries (system emulators and tools). In tools, it also supports non-scalar options using the dotted key syntax
known from options like ''--blockdev''. An example appears after this list.
* The QMP command ''object-add'' is now covered by the QAPI schema and clients can use schema introspection to detect object types and options supported by the given QEMU binary.
* A new command line option ''-action'', with suboptions ''panic'', ''shutdown'', ''reboot'' and ''watchdog''. ''-action'' subsumes the pre-existing options ''-no-shutdown'' (''-action
panic=pause,shutdown=pause''), ''-no-reboot'' (''-action reboot=shutdown'') and ''-watchdog-action''; plus, it allows the user to choose whether guest panic should pause the guest (''-action
panic=pause''), shut it down (''-action panic=poweroff'', the default) or be ignored (''-action panic=none'').
* A new generic machine option ''confidential-guest-support'' was added to (partially) unify configuration for AMD SEV memory encrypt, POWER PEF and s390 Protected Virtualization, plus future methods
of protecting a guest from eavesdropping by a compromised hypervisor.
* A new [https://qemu.readthedocs.io/en/latest/system/guest-loader.html guest loader] which allows testing of Xen-like hypervisors booting kernels without messing around with firmware/bootloaders (see also the Xen section above).
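As a sketch combining the JSON ''-object'' form and the new ''-action'' option described above (values are illustrative):
  qemu-system-x86_64 \
        -object '{"qom-type":"memory-backend-ram","id":"mem0","size":1073741824}' \
        -action panic=pause,reboot=shutdown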
== User-mode emulation ==
=== binfmt_misc ===
Added support for the 'P' flag (preserve-argv[0]).
With kernel v5.12, QEMU can detect whether it was started with the preserve-argv[0] flag and adjust its argument list accordingly.
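A binfmt_misc registration uses the usual ':name:type:offset:magic:mask:interpreter:flags' format; with the magic and mask omitted for brevity and an illustrative interpreter path, enabling 'P' (here together with the common 'F' fix-binary flag) might look roughly like:
  echo ':qemu-riscv64:M::<magic>:<mask>:/usr/bin/qemu-riscv64:PF' > /proc/sys/fs/binfmt_misc/register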
=== Hexagon ===
Added support for the Qualcomm Hexagon processor, in linux-user mode only.
For more information, see [https://www.youtube.com/watch?v=3EpnTYBOXCI our presentation from the 2019 KVM Forum]
or the [https://github.com/qemu/qemu/blob/master/target/hexagon/README README] file
== TCG ==
* Added support for Apple Silicon hosts (macOS)
To generate a diff of this commit:
cvs rdiff -u -r1.278 -r1.279 pkgsrc/emulators/qemu/Makefile
cvs rdiff -u -r1.74 -r1.75 pkgsrc/emulators/qemu/PLIST
cvs rdiff -u -r1.177 -r1.178 pkgsrc/emulators/qemu/distinfo
cvs rdiff -u -r0 -r1.1 pkgsrc/emulators/qemu/patches/patch-accel_Kconfig \
pkgsrc/emulators/qemu/patches/patch-nvmm-accel-ops.c \
pkgsrc/emulators/qemu/patches/patch-nvmm-accel-ops.h \
pkgsrc/emulators/qemu/patches/patch-nvmm-all.c \
pkgsrc/emulators/qemu/patches/patch-target_i386_nvmm_meson.build \
pkgsrc/emulators/qemu/patches/patch-target_i386_nvmm_nvmm-accel-ops.c \
pkgsrc/emulators/qemu/patches/patch-target_i386_nvmm_nvmm-accel-ops.h \
pkgsrc/emulators/qemu/patches/patch-target_i386_nvmm_nvmm-all.c
cvs rdiff -u -r1.3 -r0 \
pkgsrc/emulators/qemu/patches/patch-accel_stubs_nvmm-stub.c \
pkgsrc/emulators/qemu/patches/patch-target_i386_helper.c
cvs rdiff -u -r1.31 -r1.32 pkgsrc/emulators/qemu/patches/patch-configure
cvs rdiff -u -r1.1 -r0 \
pkgsrc/emulators/qemu/patches/patch-contrib_ivshmem-client_ivshmem-client.c \
pkgsrc/emulators/qemu/patches/patch-contrib_ivshmem-server_ivshmem-server.c \
pkgsrc/emulators/qemu/patches/patch-include_sysemu_hw_accel.h \
pkgsrc/emulators/qemu/patches/patch-target_i386_kvm-stub.c \
pkgsrc/emulators/qemu/patches/patch-target_i386_nvmm_cpus.c \
pkgsrc/emulators/qemu/patches/patch-target_i386_nvmm_cpus.h
cvs rdiff -u -r1.1 -r1.2 \
pkgsrc/emulators/qemu/patches/patch-hw_mips_meson.build \
pkgsrc/emulators/qemu/patches/patch-meson__options.txt \
pkgsrc/emulators/qemu/patches/patch-target_i386_meson.build
cvs rdiff -u -r0 -r1.4 \
pkgsrc/emulators/qemu/patches/patch-include_sysemu_hw__accel.h
cvs rdiff -u -r1.3 -r1.4 \
pkgsrc/emulators/qemu/patches/patch-include_sysemu_nvmm.h
cvs rdiff -u -r1.5 -r1.6 pkgsrc/emulators/qemu/patches/patch-meson.build
cvs rdiff -u -r1.4 -r1.5 pkgsrc/emulators/qemu/patches/patch-qemu-options.hx
cvs rdiff -u -r1.2 -r0 \
pkgsrc/emulators/qemu/patches/patch-target_i386_nvmm_all.c
Please note that diffs are not public domain; they are subject to the
copyright notices on the relevant files.
Modified files:
Index: pkgsrc/emulators/qemu/Makefile
diff -u pkgsrc/emulators/qemu/Makefile:1.278 pkgsrc/emulators/qemu/Makefile:1.279
--- pkgsrc/emulators/qemu/Makefile:1.278 Sun May 23 13:53:10 2021
+++ pkgsrc/emulators/qemu/Makefile Mon May 24 14:22:08 2021
@@ -1,7 +1,6 @@
-# $NetBSD: Makefile,v 1.278 2021/05/23 13:53:10 thorpej Exp $
+# $NetBSD: Makefile,v 1.279 2021/05/24 14:22:08 ryoon Exp $
-DISTNAME= qemu-5.2.0
-PKGREVISION= 8
+DISTNAME= qemu-6.0.0
CATEGORIES= emulators
MASTER_SITES= https://download.qemu.org/
EXTRACT_SUFX= .tar.xz
@@ -182,6 +181,7 @@ post-install:
${FIND} share/doc/qemu -path '*/_static/*' -type f -print > ${WRKDIR}/PLIST.STATIC
.include "../../archivers/lzo/buildlink3.mk"
+.include "../../archivers/zstd/buildlink3.mk"
.include "../../devel/glib2/buildlink3.mk"
.include "../../devel/jemalloc/buildlink3.mk"
.include "../../devel/snappy/buildlink3.mk"
Index: pkgsrc/emulators/qemu/PLIST
diff -u pkgsrc/emulators/qemu/PLIST:1.74 pkgsrc/emulators/qemu/PLIST:1.75
--- pkgsrc/emulators/qemu/PLIST:1.74 Thu Apr 8 13:14:51 2021
+++ pkgsrc/emulators/qemu/PLIST Mon May 24 14:22:08 2021
@@ -1,4 +1,4 @@
-@comment $NetBSD: PLIST,v 1.74 2021/04/08 13:14:51 nia Exp $
+@comment $NetBSD: PLIST,v 1.75 2021/05/24 14:22:08 ryoon Exp $
bin/elf2dmp
${PLIST.aarch64}bin/qemu-aarch64
${PLIST.aarch64_be}bin/qemu-aarch64_be
@@ -75,46 +75,75 @@ ${PLIST.xtensaeb}bin/qemu-xtensaeb
${PLIST.bridge-helper}libexec/qemu-bridge-helper
${PLIST.virtfs-proxy-helper}libexec/virtfs-proxy-helper
man/man1/qemu-img.1
+man/man1/qemu-storage-daemon.1
man/man1/qemu.1
${PLIST.virtfs-proxy-helper}man/man1/virtfs-proxy-helper.1
man/man7/qemu-block-drivers.7
man/man7/qemu-cpu-models.7
man/man7/qemu-ga-ref.7
man/man7/qemu-qmp-ref.7
+man/man7/qemu-storage-daemon-qmp-ref.7
man/man8/qemu-ga.8
man/man8/qemu-nbd.8
man/man8/qemu-pr-helper.8
share/applications/qemu.desktop
+share/doc/qemu/.buildinfo
share/doc/qemu/Makefile.multinode-NetBSD
+share/doc/qemu/devel/atomics.html
+share/doc/qemu/devel/bitops.html
+share/doc/qemu/devel/block-coroutine-wrapper.html
+share/doc/qemu/devel/build-system.html
+share/doc/qemu/devel/clocks.html
+share/doc/qemu/devel/code-of-conduct.html
+share/doc/qemu/devel/conflict-resolution.html
+share/doc/qemu/devel/control-flow-integrity.html
+share/doc/qemu/devel/decodetree.html
+share/doc/qemu/devel/fuzzing.html
+share/doc/qemu/devel/index.html
+share/doc/qemu/devel/kconfig.html
+share/doc/qemu/devel/loads-stores.html
+share/doc/qemu/devel/memory.html
+share/doc/qemu/devel/migration.html
+share/doc/qemu/devel/multi-process.html
+share/doc/qemu/devel/multi-thread-tcg.html
+share/doc/qemu/devel/qgraph.html
+share/doc/qemu/devel/qom.html
+share/doc/qemu/devel/qtest.html
+share/doc/qemu/devel/reset.html
+share/doc/qemu/devel/s390-dasd-ipl.html
+share/doc/qemu/devel/secure-coding-practices.html
+share/doc/qemu/devel/stable-process.html
+share/doc/qemu/devel/style.html
+share/doc/qemu/devel/tcg-icount.html
+share/doc/qemu/devel/tcg-plugins.html
+share/doc/qemu/devel/tcg.html
+share/doc/qemu/devel/testing.html
+share/doc/qemu/devel/tracing.html
+share/doc/qemu/genindex.html
share/doc/qemu/index.html
share/doc/qemu/interop/bitmaps.html
share/doc/qemu/interop/dbus-vmstate.html
share/doc/qemu/interop/dbus.html
-share/doc/qemu/interop/genindex.html
share/doc/qemu/interop/index.html
share/doc/qemu/interop/live-block-operations.html
-share/doc/qemu/interop/objects.inv
share/doc/qemu/interop/pr-helper.html
share/doc/qemu/interop/qemu-ga-ref.html
share/doc/qemu/interop/qemu-ga.html
share/doc/qemu/interop/qemu-qmp-ref.html
-share/doc/qemu/interop/search.html
-share/doc/qemu/interop/searchindex.js
+share/doc/qemu/interop/qemu-storage-daemon-qmp-ref.html
share/doc/qemu/interop/vhost-user-gpu.html
share/doc/qemu/interop/vhost-user.html
share/doc/qemu/interop/vhost-vdpa.html
+share/doc/qemu/objects.inv
+share/doc/qemu/search.html
+share/doc/qemu/searchindex.js
share/doc/qemu/specs/acpi_hest_ghes.html
share/doc/qemu/specs/acpi_hw_reduced_hotplug.html
-share/doc/qemu/specs/genindex.html
share/doc/qemu/specs/index.html
-share/doc/qemu/specs/objects.inv
share/doc/qemu/specs/ppc-spapr-numa.html
share/doc/qemu/specs/ppc-spapr-xive.html
share/doc/qemu/specs/ppc-xive.html
-share/doc/qemu/specs/search.html
-share/doc/qemu/specs/searchindex.js
share/doc/qemu/specs/tpm.html
-share/doc/qemu/system/.buildinfo
share/doc/qemu/system/arm/aspeed.html
share/doc/qemu/system/arm/collie.html
share/doc/qemu/system/arm/cpu-features.html
@@ -130,6 +159,7 @@ share/doc/qemu/system/arm/orangepi.html
share/doc/qemu/system/arm/palm.html
share/doc/qemu/system/arm/raspi.html
share/doc/qemu/system/arm/realview.html
+share/doc/qemu/system/arm/sabrelite.html
share/doc/qemu/system/arm/sbsa.html
share/doc/qemu/system/arm/stellaris.html
share/doc/qemu/system/arm/sx1.html
@@ -142,7 +172,8 @@ share/doc/qemu/system/build-platforms.ht
share/doc/qemu/system/cpu-hotplug.html
share/doc/qemu/system/deprecated.html
share/doc/qemu/system/gdb.html
-share/doc/qemu/system/genindex.html
+share/doc/qemu/system/generic-loader.html
+share/doc/qemu/system/guest-loader.html
share/doc/qemu/system/i386/microvm.html
share/doc/qemu/system/i386/pc.html
share/doc/qemu/system/images.html
@@ -154,22 +185,29 @@ share/doc/qemu/system/license.html
share/doc/qemu/system/linuxboot.html
share/doc/qemu/system/managed-startup.html
share/doc/qemu/system/monitor.html
+share/doc/qemu/system/multi-process.html
share/doc/qemu/system/mux-chardev.html
share/doc/qemu/system/net.html
-share/doc/qemu/system/objects.inv
+share/doc/qemu/system/nvme.html
+share/doc/qemu/system/ppc/embedded.html
+share/doc/qemu/system/ppc/powermac.html
+share/doc/qemu/system/ppc/powernv.html
+share/doc/qemu/system/ppc/prep.html
+share/doc/qemu/system/ppc/pseries.html
share/doc/qemu/system/pr-manager.html
share/doc/qemu/system/qemu-block-drivers.html
share/doc/qemu/system/qemu-cpu-models.html
share/doc/qemu/system/qemu-manpage.html
share/doc/qemu/system/quickstart.html
+share/doc/qemu/system/removed-features.html
+share/doc/qemu/system/riscv/microchip-icicle-kit.html
+share/doc/qemu/system/riscv/sifive_u.html
share/doc/qemu/system/s390x/3270.html
share/doc/qemu/system/s390x/bootdevices.html
share/doc/qemu/system/s390x/css.html
share/doc/qemu/system/s390x/protvirt.html
share/doc/qemu/system/s390x/vfio-ap.html
share/doc/qemu/system/s390x/vfio-ccw.html
-share/doc/qemu/system/search.html
-share/doc/qemu/system/searchindex.js
share/doc/qemu/system/security.html
share/doc/qemu/system/target-arm.html
share/doc/qemu/system/target-avr.html
@@ -177,6 +215,7 @@ share/doc/qemu/system/target-i386.html
share/doc/qemu/system/target-m68k.html
share/doc/qemu/system/target-mips.html
share/doc/qemu/system/target-ppc.html
+share/doc/qemu/system/target-riscv.html
share/doc/qemu/system/target-rx.html
share/doc/qemu/system/target-s390x.html
share/doc/qemu/system/target-sparc.html
@@ -188,25 +227,16 @@ share/doc/qemu/system/usb.html
share/doc/qemu/system/virtio-net-failover.html
share/doc/qemu/system/virtio-pmem.html
share/doc/qemu/system/vnc-security.html
-share/doc/qemu/tools/.buildinfo
-share/doc/qemu/tools/genindex.html
share/doc/qemu/tools/index.html
-share/doc/qemu/tools/objects.inv
share/doc/qemu/tools/qemu-img.html
share/doc/qemu/tools/qemu-nbd.html
share/doc/qemu/tools/qemu-pr-helper.html
+share/doc/qemu/tools/qemu-storage-daemon.html
share/doc/qemu/tools/qemu-trace-stap.html
-share/doc/qemu/tools/search.html
-share/doc/qemu/tools/searchindex.js
share/doc/qemu/tools/virtfs-proxy-helper.html
share/doc/qemu/tools/virtiofsd.html
-share/doc/qemu/user/.buildinfo
-share/doc/qemu/user/genindex.html
share/doc/qemu/user/index.html
share/doc/qemu/user/main.html
-share/doc/qemu/user/objects.inv
-share/doc/qemu/user/search.html
-share/doc/qemu/user/searchindex.js
share/icons/hicolor/128x128/apps/qemu.png
share/icons/hicolor/16x16/apps/qemu.png
share/icons/hicolor/24x24/apps/qemu.png
@@ -332,3 +362,4 @@ share/qemu/vgabios-stdvga.bin
share/qemu/vgabios-virtio.bin
share/qemu/vgabios-vmware.bin
share/qemu/vgabios.bin
+@pkgdir var/run
Index: pkgsrc/emulators/qemu/distinfo
diff -u pkgsrc/emulators/qemu/distinfo:1.177 pkgsrc/emulators/qemu/distinfo:1.178
--- pkgsrc/emulators/qemu/distinfo:1.177 Sun May 23 13:53:10 2021
+++ pkgsrc/emulators/qemu/distinfo Mon May 24 14:22:08 2021
@@ -1,18 +1,16 @@
-$NetBSD: distinfo,v 1.177 2021/05/23 13:53:10 thorpej Exp $
+$NetBSD: distinfo,v 1.178 2021/05/24 14:22:08 ryoon Exp $
SHA1 (palcode-clipper-qemu-5.2.0nb8) = ddbf1dffb7c2b2157e0bbe9fb7db7e57105130b1
RMD160 (palcode-clipper-qemu-5.2.0nb8) = 3f9fe19a40f7ca72ecfe047d1449e55b63cba3ee
SHA512 (palcode-clipper-qemu-5.2.0nb8) = 33695d6001d86a19793a92d5e31775607c4dfc9ab9eea019ea6c4d543a2e11e8c07f83cca4934811a13ef829b528737ea37d9d2aaf66cba6f2746d44d2aa0b43
Size (palcode-clipper-qemu-5.2.0nb8) = 159808 bytes
-SHA1 (qemu-5.2.0.tar.xz) = 146578267387e301423502d19024f8ffe35ab332
-RMD160 (qemu-5.2.0.tar.xz) = 2c33e773f012e333f99237e3d4ff1653ea0bc88f
-SHA512 (qemu-5.2.0.tar.xz) = bddd633ce111471ebc651e03080251515178808556b49a308a724909e55dac0be0cc0c79c536ac12d239678ae94c60100dc124be9b9d9538340c03a2f27177f3
-Size (qemu-5.2.0.tar.xz) = 106902800 bytes
-SHA1 (patch-accel_stubs_nvmm-stub.c) = d66d47eabb8bb6728e777da7589b43d491adbcc8
+SHA1 (qemu-6.0.0.tar.xz) = 131854b10d8c1614ae137c647aa31b756782ba2e
+RMD160 (qemu-6.0.0.tar.xz) = 0785bb4c32f1e9d23dcdfad562f18d232677a0c6
+SHA512 (qemu-6.0.0.tar.xz) = ee3ff00aebec4d8891d2ff6dabe4e667e510b2a4fe3f6190aa34673a91ea32dcd2db2e9bf94c2f1bf05aa79788f17cfbbedc6027c0988ea08a92587b79ee05e4
+Size (qemu-6.0.0.tar.xz) = 107333232 bytes
+SHA1 (patch-accel_Kconfig) = d343285a8b548d2d6387b92576aed801265d2b24
SHA1 (patch-backends_tpm_tpm__ioctl.h) = fbd6c877ad605f7120290efbb0ac653c69f351de
-SHA1 (patch-configure) = 8b392c5633c70d65f2f27af3b617a53af9772899
-SHA1 (patch-contrib_ivshmem-client_ivshmem-client.c) = 40c8751607cbf66a37e4c4e08f2664b864e2e984
-SHA1 (patch-contrib_ivshmem-server_ivshmem-server.c) = d8f53432b5752f4263dc4ef96108a976a05147a3
+SHA1 (patch-configure) = d94427a90bbb8e4d1347503e5583b4966b039e37
SHA1 (patch-hw-mips-Kconfig) = c7199ad26ac45116ab4d38252db4234ae93bdf9a
SHA1 (patch-hw-mips-mipssim.c) = f701897f2c2bee4a8c3fa5222903789f991a663a
SHA1 (patch-hw_alpha_alpha_sys.h) = 5908698208937ff9eb0bf1c504e1144af3d1bcc4
@@ -20,19 +18,22 @@ SHA1 (patch-hw_alpha_dp264.c) = 85630478
SHA1 (patch-hw_alpha_typhoon.c) = 1bed5cd6f355c4163585c5331356ebf38c5c3a16
SHA1 (patch-hw_core_uboot__image.h) = 17eef02349343c5fcfb7a4069cb6f8fd11efcb59
SHA1 (patch-hw_display_omap__dss.c) = 6b13242f28e32346bc70548c216c578d98fd3420
-SHA1 (patch-hw_mips_meson.build) = 4d1ed1ae2dbfb3edfe5fa5271c4561531b08efee
+SHA1 (patch-hw_mips_meson.build) = ff4bec33d9d2f86a425e02928aa3b6963c22da68
SHA1 (patch-hw_net_etraxfs__eth.c) = e5dd1661d60dbcd27b332403e0843500ba9544bc
SHA1 (patch-hw_net_xilinx__axienet.c) = ebcd2676d64ce6f31e4a8c976d4fdf530ad5e8b7
SHA1 (patch-hw_rtc_mc146818rtc.c) = cc7a3b28010966b65b7a16db756226ac2669f310
SHA1 (patch-hw_scsi_scsi-disk.c) = fdbf2f962a6dcb1a115a7f8a5b8790ff9295fb33
SHA1 (patch-hw_usb_dev-mtp.c) = 94ddf53a41cc75810cfece1b8aef1831fab4ce43
-SHA1 (patch-include_sysemu_hw_accel.h) = d083cd51434e28eb0d647b5107d34018b0ef63dc
+SHA1 (patch-include_sysemu_hw__accel.h) = a3cd022368a074e30dd3958932a006fa0fe011a6
SHA1 (patch-include_sysemu_kvm.h) = 9847abe3be70bd708a521310f5d5515e45a1a5a0
-SHA1 (patch-include_sysemu_nvmm.h) = 1fe49c4f11910d6faf683ae3233f783a0b03ce5a
-SHA1 (patch-meson.build) = 235f4bb3f8ee244a8ee9570b2270300189800983
-SHA1 (patch-meson__options.txt) = 286d097f596baa5af244a990d2874f1a7ee65198
+SHA1 (patch-include_sysemu_nvmm.h) = 7e49abdc7dc6a03f293780c63ac6c242d3914d15
+SHA1 (patch-meson.build) = fe1ef65033aa387a8b029d3db206a04e341644d5
+SHA1 (patch-meson__options.txt) = 050adf1d5c07dc211fdafde7a21e2afe52db9169
SHA1 (patch-net_tap-solaris.c) = cc953c9a624dd55ace4e130d0b31bbfb956c17d5
-SHA1 (patch-qemu-options.hx) = e2f264117f703aa4ccf56219f370c3b1303e8b07
+SHA1 (patch-nvmm-accel-ops.c) = 23ef13420a61d8bfa78f36ed7eae2e1523464617
+SHA1 (patch-nvmm-accel-ops.h) = 101b4f3f2a5775db4c93ffcf10b150e8545a3655
+SHA1 (patch-nvmm-all.c) = 93d33e285b616a20ad2af550bef31e88c55f6a22
+SHA1 (patch-qemu-options.hx) = 2e68ce28c9a678a666c3f23a0c1369d3568aa1eb
SHA1 (patch-roms_qemu-palcode_hwrpb.h) = ae7b4c0680367af6f740d62a54dc86352128d76f
SHA1 (patch-roms_qemu-palcode_init.c) = 7a0ebcd86f4106318791e7d90273fb55a424f1b8
SHA1 (patch-roms_qemu-palcode_memcpy.c) = 7761774ae9092d0f494deaf302d663ba479a09cf
@@ -46,10 +47,9 @@ SHA1 (patch-roms_qemu-palcode_sys-clippe
SHA1 (patch-roms_qemu-palcode_vgaio.c) = c8d7adc053cd6655f005527d16647611040c09d2
SHA1 (patch-roms_u-boot-sam460ex_Makefile) = 3a1bbf19b1422c10ebdd819eb0b711fafc78e2f2
SHA1 (patch-roms_u-boot_tools_imx8m__image.sh) = e4c452062f40569e33aa93eec4a65bd3af2e74fc
-SHA1 (patch-target_i386_helper.c) = 3314e65df11492438af2ec2c53ed3082a0b62b09
-SHA1 (patch-target_i386_kvm-stub.c) = 4cd2b7a8d8d8a317829f982b5acff7fdf2479d9f
-SHA1 (patch-target_i386_meson.build) = d0e0d7d4dd96ea43fc386e7166bbabbd71b0f4fc
-SHA1 (patch-target_i386_nvmm_all.c) = 9a6d85eb650b260dc33d63caee4bcd0e1f4cb49c
-SHA1 (patch-target_i386_nvmm_cpus.c) = 7f028bf2637fe31d8524f710a9e508c8ce65c822
-SHA1 (patch-target_i386_nvmm_cpus.h) = 0a25e49929cb772fc46a4ace91127ccf3605521d
+SHA1 (patch-target_i386_meson.build) = 0b6430825e1f5715f6deea556043b7e5063cf10a
+SHA1 (patch-target_i386_nvmm_meson.build) = c773fbed28a87f53263ab5299a63ca77423d164f
+SHA1 (patch-target_i386_nvmm_nvmm-accel-ops.c) = fdc29ccd0fcd47b72e7802655fe92b08f7d22bb9
+SHA1 (patch-target_i386_nvmm_nvmm-accel-ops.h) = 74d6442e1ac1cdf187996f3dd82bb3efddc002ec
+SHA1 (patch-target_i386_nvmm_nvmm-all.c) = cd75f6a584920093407ec254b9276b056f83132e
SHA1 (patch-target_sparc_translate.c) = 7ec2add2fd808facb48b9a66ccc345599251bf76
Index: pkgsrc/emulators/qemu/patches/patch-configure
diff -u pkgsrc/emulators/qemu/patches/patch-configure:1.31 pkgsrc/emulators/qemu/patches/patch-configure:1.32
--- pkgsrc/emulators/qemu/patches/patch-configure:1.31 Sat Mar 6 11:19:34 2021
+++ pkgsrc/emulators/qemu/patches/patch-configure Mon May 24 14:22:08 2021
@@ -1,19 +1,16 @@
-$NetBSD: patch-configure,v 1.31 2021/03/06 11:19:34 reinoud Exp $
+$NetBSD: patch-configure,v 1.32 2021/05/24 14:22:08 ryoon Exp $
-Add NVMM support.
-Fix jemalloc detection.
-
---- configure.orig 2020-12-08 16:59:44.000000000 +0000
+--- configure.orig 2021-04-29 17:18:59.000000000 +0000
+++ configure
-@@ -334,6 +334,7 @@ vhost_user_fs=""
- kvm="auto"
+@@ -352,6 +352,7 @@ kvm="auto"
hax="auto"
hvf="auto"
-+nvmm="auto"
whpx="auto"
- rdma=""
- pvrdma=""
-@@ -1102,6 +1103,10 @@ for opt do
++nvmm="auto"
+ rdma="$default_feature"
+ pvrdma="$default_feature"
+ gprof="no"
+@@ -1107,6 +1108,10 @@ for opt do
;;
--enable-hvf) hvf="enabled"
;;
@@ -24,7 +21,7 @@ Fix jemalloc detection.
--disable-whpx) whpx="disabled"
;;
--enable-whpx) whpx="enabled"
-@@ -1783,6 +1788,7 @@ disabled with --disable-FEATURE, default
+@@ -1848,6 +1853,7 @@ disabled with --disable-FEATURE, default
kvm KVM acceleration support
hax HAX acceleration support
hvf Hypervisor.framework acceleration support
@@ -32,12 +29,12 @@ Fix jemalloc detection.
whpx Windows Hypervisor Platform acceleration support
rdma Enable RDMA-based migration
pvrdma Enable PVRDMA support
-@@ -7005,7 +7011,7 @@ NINJA=$ninja $meson setup \
- ${staticpic:+-Db_staticpic=$staticpic} \
+@@ -6410,7 +6416,7 @@ NINJA=$ninja $meson setup \
-Db_coverage=$(if test "$gcov" = yes; then echo true; else echo false; fi) \
+ -Db_lto=$lto -Dcfi=$cfi -Dcfi_debug=$cfi_debug \
-Dmalloc=$malloc -Dmalloc_trim=$malloc_trim -Dsparse=$sparse \
- -Dkvm=$kvm -Dhax=$hax -Dwhpx=$whpx -Dhvf=$hvf \
+ -Dkvm=$kvm -Dhax=$hax -Dwhpx=$whpx -Dhvf=$hvf -Dnvmm=$nvmm \
-Dxen=$xen -Dxen_pci_passthrough=$xen_pci_passthrough -Dtcg=$tcg \
- -Dcocoa=$cocoa -Dmpath=$mpath -Dsdl=$sdl -Dsdl_image=$sdl_image \
+ -Dcocoa=$cocoa -Dgtk=$gtk -Dmpath=$mpath -Dsdl=$sdl -Dsdl_image=$sdl_image \
-Dvnc=$vnc -Dvnc_sasl=$vnc_sasl -Dvnc_jpeg=$vnc_jpeg -Dvnc_png=$vnc_png \
Index: pkgsrc/emulators/qemu/patches/patch-hw_mips_meson.build
diff -u pkgsrc/emulators/qemu/patches/patch-hw_mips_meson.build:1.1 pkgsrc/emulators/qemu/patches/patch-hw_mips_meson.build:1.2
--- pkgsrc/emulators/qemu/patches/patch-hw_mips_meson.build:1.1 Sat Feb 20 22:59:29 2021
+++ pkgsrc/emulators/qemu/patches/patch-hw_mips_meson.build Mon May 24 14:22:08 2021
@@ -1,9 +1,9 @@
-$NetBSD: patch-hw_mips_meson.build,v 1.1 2021/02/20 22:59:29 ryoon Exp $
+$NetBSD: patch-hw_mips_meson.build,v 1.2 2021/05/24 14:22:08 ryoon Exp $
---- hw/mips/meson.build.orig 2020-12-08 16:59:44.000000000 +0000
+--- hw/mips/meson.build.orig 2021-04-29 17:18:58.000000000 +0000
+++ hw/mips/meson.build
-@@ -3,7 +3,7 @@ mips_ss.add(files('addr.c', 'mips_int.c'
- mips_ss.add(when: 'CONFIG_FULOONG', if_true: files('fuloong2e.c'))
+@@ -5,7 +5,7 @@ mips_ss.add(when: 'CONFIG_FULOONG', if_t
+ mips_ss.add(when: 'CONFIG_LOONGSON3V', if_true: files('loongson3_bootp.c', 'loongson3_virt.c'))
mips_ss.add(when: 'CONFIG_JAZZ', if_true: files('jazz.c'))
mips_ss.add(when: 'CONFIG_MALTA', if_true: files('gt64xxx_pci.c', 'malta.c'))
-mips_ss.add(when: 'CONFIG_MIPSSIM', if_true: files('mipssim.c'))
Index: pkgsrc/emulators/qemu/patches/patch-meson__options.txt
diff -u pkgsrc/emulators/qemu/patches/patch-meson__options.txt:1.1 pkgsrc/emulators/qemu/patches/patch-meson__options.txt:1.2
--- pkgsrc/emulators/qemu/patches/patch-meson__options.txt:1.1 Sat Mar 6 11:19:34 2021
+++ pkgsrc/emulators/qemu/patches/patch-meson__options.txt Mon May 24 14:22:08 2021
@@ -1,8 +1,8 @@
-$NetBSD: patch-meson__options.txt,v 1.1 2021/03/06 11:19:34 reinoud Exp $
+$NetBSD: patch-meson__options.txt,v 1.2 2021/05/24 14:22:08 ryoon Exp $
---- meson_options.txt.orig 2020-12-08 16:59:44.000000000 +0000
+--- meson_options.txt.orig 2021-04-29 17:18:58.000000000 +0000
+++ meson_options.txt
-@@ -29,6 +29,8 @@ option('whpx', type: 'feature', value: '
+@@ -33,6 +33,8 @@ option('whpx', type: 'feature', value: '
description: 'WHPX acceleration support')
option('hvf', type: 'feature', value: 'auto',
description: 'HVF acceleration support')
Index: pkgsrc/emulators/qemu/patches/patch-target_i386_meson.build
diff -u pkgsrc/emulators/qemu/patches/patch-target_i386_meson.build:1.1 pkgsrc/emulators/qemu/patches/patch-target_i386_meson.build:1.2
--- pkgsrc/emulators/qemu/patches/patch-target_i386_meson.build:1.1 Sat Mar 6 11:19:34 2021
+++ pkgsrc/emulators/qemu/patches/patch-target_i386_meson.build Mon May 24 14:22:08 2021
@@ -1,15 +1,12 @@
-$NetBSD: patch-target_i386_meson.build,v 1.1 2021/03/06 11:19:34 reinoud Exp $
+$NetBSD: patch-target_i386_meson.build,v 1.2 2021/05/24 14:22:08 ryoon Exp $
---- target/i386/meson.build.orig 2020-12-08 16:59:44.000000000 +0000
+--- target/i386/meson.build.orig 2021-04-29 17:18:58.000000000 +0000
+++ target/i386/meson.build
-@@ -34,6 +34,10 @@ i386_softmmu_ss.add(when: 'CONFIG_WHPX',
- 'whpx-all.c',
- 'whpx-cpus.c',
- ))
-+i386_softmmu_ss.add(when: 'CONFIG_NVMM', if_true: files(
-+ 'nvmm-all.c',
-+ 'nvmm-cpus.c',
-+))
- i386_softmmu_ss.add(when: 'CONFIG_HAX', if_true: files(
- 'hax-all.c',
- 'hax-mem.c',
+@@ -19,6 +19,7 @@ i386_softmmu_ss.add(files(
+ subdir('kvm')
+ subdir('hax')
+ subdir('whpx')
++subdir('nvmm')
+ subdir('hvf')
+ subdir('tcg')
+
Index: pkgsrc/emulators/qemu/patches/patch-include_sysemu_nvmm.h
diff -u pkgsrc/emulators/qemu/patches/patch-include_sysemu_nvmm.h:1.3 pkgsrc/emulators/qemu/patches/patch-include_sysemu_nvmm.h:1.4
--- pkgsrc/emulators/qemu/patches/patch-include_sysemu_nvmm.h:1.3 Sat Mar 6 11:19:34 2021
+++ pkgsrc/emulators/qemu/patches/patch-include_sysemu_nvmm.h Mon May 24 14:22:08 2021
@@ -1,6 +1,6 @@
-$NetBSD: patch-include_sysemu_nvmm.h,v 1.3 2021/03/06 11:19:34 reinoud Exp $
+$NetBSD: patch-include_sysemu_nvmm.h,v 1.4 2021/05/24 14:22:08 ryoon Exp $
---- include/sysemu/nvmm.h.orig 2021-03-05 22:29:22.991663471 +0000
+--- include/sysemu/nvmm.h.orig 2021-05-06 04:47:40.186492405 +0000
+++ include/sysemu/nvmm.h
@@ -0,0 +1,26 @@
+/*
Index: pkgsrc/emulators/qemu/patches/patch-meson.build
diff -u pkgsrc/emulators/qemu/patches/patch-meson.build:1.5 pkgsrc/emulators/qemu/patches/patch-meson.build:1.6
--- pkgsrc/emulators/qemu/patches/patch-meson.build:1.5 Fri Mar 19 13:25:36 2021
+++ pkgsrc/emulators/qemu/patches/patch-meson.build Mon May 24 14:22:08 2021
@@ -1,13 +1,13 @@
-$NetBSD: patch-meson.build,v 1.5 2021/03/19 13:25:36 reinoud Exp $
+$NetBSD: patch-meson.build,v 1.6 2021/05/24 14:22:08 ryoon Exp $
* Add NetBSD support.
* Detect iconv in libc properly for pkgsrc (pkgsrc removes -liconv)
to fix qemu-system-aarch64 link.
* Detect curses (non-ncurses{,w} too)
---- meson.build.orig 2020-12-08 16:59:44.000000000 +0000
+--- meson.build.orig 2021-04-29 17:18:58.000000000 +0000
+++ meson.build
-@@ -84,6 +84,7 @@ if cpu in ['x86', 'x86_64']
+@@ -87,6 +87,7 @@ if cpu in ['x86', 'x86_64']
accelerator_targets += {
'CONFIG_HAX': ['i386-softmmu', 'x86_64-softmmu'],
'CONFIG_HVF': ['x86_64-softmmu'],
@@ -15,40 +15,30 @@ $NetBSD: patch-meson.build,v 1.5 2021/03
'CONFIG_WHPX': ['i386-softmmu', 'x86_64-softmmu'],
}
endif
-@@ -169,6 +170,7 @@ version_res = []
+@@ -170,6 +171,7 @@ version_res = []
coref = []
iokit = []
emulator_link_args = []
-+nvmm = []
- cocoa = not_found
++nvmm =not_found
hvf = not_found
if targetos == 'windows'
-@@ -196,6 +198,12 @@ elif targetos == 'openbsd'
- # Disable OpenBSD W^X if available
- emulator_link_args = cc.get_supported_link_arguments('-Wl,-z,wxneeded')
- endif
-+elif targetos == 'netbsd'
-+ if not get_option('nvmm').disabled()
-+ if cc.has_header('nvmm.h')
-+ nvmm = cc.find_library('nvmm')
-+ endif
-+ endif
- endif
-
- accelerators = []
-@@ -228,6 +236,11 @@ if not get_option('hax').disabled()
+ socket = cc.find_library('ws2_32')
+@@ -227,6 +229,14 @@ if not get_option('hax').disabled()
accelerators += 'CONFIG_HAX'
endif
endif
-+if not get_option('nvmm').disabled()
++if targetos == 'netbsd'
+ if cc.has_header('nvmm.h', required: get_option('nvmm'))
++ nvmm = cc.find_library('nvmm', required: get_option('nvmm'))
++ endif
++ if nvmm.found()
+ accelerators += 'CONFIG_NVMM'
+ endif
+endif
+
+ tcg_arch = config_host['ARCH']
if not get_option('tcg').disabled()
- if cpu not in supported_cpus
- if 'CONFIG_TCG_INTERPRETER' in config_host
-@@ -246,6 +259,9 @@ endif
+@@ -271,6 +281,9 @@ endif
if 'CONFIG_HVF' not in accelerators and get_option('hvf').enabled()
error('HVF not available on this platform')
endif
@@ -58,7 +48,7 @@ $NetBSD: patch-meson.build,v 1.5 2021/03
if 'CONFIG_WHPX' not in accelerators and get_option('whpx').enabled()
error('WHPX not available on this platform')
endif
-@@ -517,7 +533,7 @@ if have_system and not get_option('curse
+@@ -607,7 +620,7 @@ if have_system and not get_option('curse
has_curses_h = cc.has_header('curses.h', args: curses_compile_args)
endif
if has_curses_h
@@ -67,7 +57,7 @@ $NetBSD: patch-meson.build,v 1.5 2021/03
foreach curses_libname : curses_libname_list
libcurses = cc.find_library(curses_libname,
required: false,
-@@ -535,7 +551,7 @@ if have_system and not get_option('curse
+@@ -625,7 +638,7 @@ if have_system and not get_option('curse
endif
endif
if not get_option('iconv').disabled()
@@ -76,20 +66,11 @@ $NetBSD: patch-meson.build,v 1.5 2021/03
# Programs will be linked with glib and this will bring in libiconv on FreeBSD.
# We need to use libiconv if available because mixing libiconv's headers with
# the system libc does not work.
-@@ -1815,7 +1831,7 @@ foreach target : target_dirs
- 'name': 'qemu-system-' + target_name,
- 'gui': false,
- 'sources': files('softmmu/main.c'),
-- 'dependencies': []
-+ 'dependencies': [nvmm]
- }]
- if targetos == 'windows' and (sdl.found() or gtk.found())
- execs += [{
-@@ -2106,6 +2122,7 @@ summary_info += {'Install blobs': ge
- summary_info += {'KVM support': config_all.has_key('CONFIG_KVM')}
- summary_info += {'HAX support': config_all.has_key('CONFIG_HAX')}
- summary_info += {'HVF support': config_all.has_key('CONFIG_HVF')}
-+summary_info += {'NVMM support': config_all.has_key('CONFIG_NVMM')}
- summary_info += {'WHPX support': config_all.has_key('CONFIG_WHPX')}
- summary_info += {'TCG support': config_all.has_key('CONFIG_TCG')}
- if config_all.has_key('CONFIG_TCG')
+@@ -2576,6 +2589,7 @@ if have_system
+ summary_info += {'HAX support': config_all.has_key('CONFIG_HAX')}
+ summary_info += {'HVF support': config_all.has_key('CONFIG_HVF')}
+ summary_info += {'WHPX support': config_all.has_key('CONFIG_WHPX')}
++ summary_info += {'NVMM support': config_all.has_key('CONFIG_NVMM')}
+ summary_info += {'Xen support': config_host.has_key('CONFIG_XEN_BACKEND')}
+ if config_host.has_key('CONFIG_XEN_BACKEND')
+ summary_info += {'xen ctrl version': config_host['CONFIG_XEN_CTRL_INTERFACE_VERSION']}
Index: pkgsrc/emulators/qemu/patches/patch-qemu-options.hx
diff -u pkgsrc/emulators/qemu/patches/patch-qemu-options.hx:1.4 pkgsrc/emulators/qemu/patches/patch-qemu-options.hx:1.5
--- pkgsrc/emulators/qemu/patches/patch-qemu-options.hx:1.4 Sat Mar 6 11:19:34 2021
+++ pkgsrc/emulators/qemu/patches/patch-qemu-options.hx Mon May 24 14:22:08 2021
@@ -1,8 +1,6 @@
-$NetBSD: patch-qemu-options.hx,v 1.4 2021/03/06 11:19:34 reinoud Exp $
+$NetBSD: patch-qemu-options.hx,v 1.5 2021/05/24 14:22:08 ryoon Exp $
-Add NVMM support.
-
---- qemu-options.hx.orig 2020-04-28 16:49:25.000000000 +0000
+--- qemu-options.hx.orig 2021-04-29 17:18:59.000000000 +0000
+++ qemu-options.hx
@@ -26,7 +26,7 @@ DEF("machine", HAS_ARG, QEMU_OPTION_mach
"-machine [type=]name[,prop[=value][,...]]\n"
@@ -22,7 +20,7 @@ Add NVMM support.
By default, tcg is used. If there is more than one accelerator
specified, the next one is used if the previous one fails to
initialize.
-@@ -119,7 +119,7 @@ ERST
+@@ -135,7 +135,7 @@ ERST
DEF("accel", HAS_ARG, QEMU_OPTION_accel,
"-accel [accel=]accelerator[,prop[=value][,...]]\n"
@@ -31,7 +29,7 @@ Add NVMM support.
" igd-passthru=on|off (enable Xen integrated Intel graphics passthrough, default=off)\n"
" kernel-irqchip=on|off|split controls accelerated irqchip support (default=on)\n"
" kvm-shadow-mem=size of KVM shadow MMU in bytes\n"
-@@ -128,7 +128,7 @@ DEF("accel", HAS_ARG, QEMU_OPTION_accel,
+@@ -145,7 +145,7 @@ DEF("accel", HAS_ARG, QEMU_OPTION_accel,
SRST
``-accel name[,prop=value[,...]]``
This is used to enable an accelerator. Depending on the target
Added files:
Index: pkgsrc/emulators/qemu/patches/patch-accel_Kconfig
diff -u /dev/null pkgsrc/emulators/qemu/patches/patch-accel_Kconfig:1.1
--- /dev/null Mon May 24 14:22:08 2021
+++ pkgsrc/emulators/qemu/patches/patch-accel_Kconfig Mon May 24 14:22:08 2021
@@ -0,0 +1,14 @@
+$NetBSD: patch-accel_Kconfig,v 1.1 2021/05/24 14:22:08 ryoon Exp $
+
+--- accel/Kconfig.orig 2021-04-29 17:18:58.000000000 +0000
++++ accel/Kconfig
+@@ -1,6 +1,9 @@
+ config WHPX
+ bool
+
++config NVMM
++ bool
++
+ config HAX
+ bool
+
Index: pkgsrc/emulators/qemu/patches/patch-nvmm-accel-ops.c
diff -u /dev/null pkgsrc/emulators/qemu/patches/patch-nvmm-accel-ops.c:1.1
--- /dev/null Mon May 24 14:22:08 2021
+++ pkgsrc/emulators/qemu/patches/patch-nvmm-accel-ops.c Mon May 24 14:22:08 2021
@@ -0,0 +1,116 @@
+$NetBSD: patch-nvmm-accel-ops.c,v 1.1 2021/05/24 14:22:08 ryoon Exp $
+
+--- nvmm-accel-ops.c.orig 2021-05-06 04:47:35.604520043 +0000
++++ nvmm-accel-ops.c
+@@ -0,0 +1,111 @@
++/*
++ * Copyright (c) 2018-2019 Maxime Villard, All rights reserved.
++ *
++ * NetBSD Virtual Machine Monitor (NVMM) accelerator for QEMU.
++ *
++ * This work is licensed under the terms of the GNU GPL, version 2 or later.
++ * See the COPYING file in the top-level directory.
++ */
++
++#include "qemu/osdep.h"
++#include "sysemu/kvm_int.h"
++#include "qemu/main-loop.h"
++#include "sysemu/cpus.h"
++#include "qemu/guest-random.h"
++
++#include "sysemu/nvmm.h"
++#include "nvmm-accel-ops.h"
++
++static void *qemu_nvmm_cpu_thread_fn(void *arg)
++{
++ CPUState *cpu = arg;
++ int r;
++
++ assert(nvmm_enabled());
++
++ rcu_register_thread();
++
++ qemu_mutex_lock_iothread();
++ qemu_thread_get_self(cpu->thread);
++ cpu->thread_id = qemu_get_thread_id();
++ current_cpu = cpu;
++
++ r = nvmm_init_vcpu(cpu);
++ if (r < 0) {
++ fprintf(stderr, "nvmm_init_vcpu failed: %s\n", strerror(-r));
++ exit(1);
++ }
++
++ /* signal CPU creation */
++ cpu_thread_signal_created(cpu);
++ qemu_guest_random_seed_thread_part2(cpu->random_seed);
++
++ do {
++ if (cpu_can_run(cpu)) {
++ r = nvmm_vcpu_exec(cpu);
++ if (r == EXCP_DEBUG) {
++ cpu_handle_guest_debug(cpu);
++ }
++ }
++ while (cpu_thread_is_idle(cpu)) {
++ qemu_cond_wait_iothread(cpu->halt_cond);
++ }
++ qemu_wait_io_event_common(cpu);
++ } while (!cpu->unplug || cpu_can_run(cpu));
++
++ nvmm_destroy_vcpu(cpu);
++ cpu_thread_signal_destroyed(cpu);
++ qemu_mutex_unlock_iothread();
++ rcu_unregister_thread();
++ return NULL;
++}
++
++static void nvmm_start_vcpu_thread(CPUState *cpu)
++{
++ char thread_name[VCPU_THREAD_NAME_SIZE];
++
++ cpu->thread = g_malloc0(sizeof(QemuThread));
++ cpu->halt_cond = g_malloc0(sizeof(QemuCond));
++ qemu_cond_init(cpu->halt_cond);
++ snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "CPU %d/NVMM",
++ cpu->cpu_index);
++ qemu_thread_create(cpu->thread, thread_name, qemu_nvmm_cpu_thread_fn,
++ cpu, QEMU_THREAD_JOINABLE);
++}
++
++/*
++ * Abort the call to run the virtual processor by another thread, and to
++ * return the control to that thread.
++ */
++static void nvmm_kick_vcpu_thread(CPUState *cpu)
++{
++ cpu->exit_request = 1;
++ cpus_kick_thread(cpu);
++}
++
++static void nvmm_accel_ops_class_init(ObjectClass *oc, void *data)
++{
++ AccelOpsClass *ops = ACCEL_OPS_CLASS(oc);
++
++ ops->create_vcpu_thread = nvmm_start_vcpu_thread;
++ ops->kick_vcpu_thread = nvmm_kick_vcpu_thread;
++
++ ops->synchronize_post_reset = nvmm_cpu_synchronize_post_reset;
++ ops->synchronize_post_init = nvmm_cpu_synchronize_post_init;
++ ops->synchronize_state = nvmm_cpu_synchronize_state;
++ ops->synchronize_pre_loadvm = nvmm_cpu_synchronize_pre_loadvm;
++}
++
++static const TypeInfo nvmm_accel_ops_type = {
++ .name = ACCEL_OPS_NAME("nvmm"),
++
++ .parent = TYPE_ACCEL_OPS,
++ .class_init = nvmm_accel_ops_class_init,
++ .abstract = true,
++};
++
++static void nvmm_accel_ops_register_types(void)
++{
++ type_register_static(&nvmm_accel_ops_type);
++}
++type_init(nvmm_accel_ops_register_types);
Index: pkgsrc/emulators/qemu/patches/patch-nvmm-accel-ops.h
diff -u /dev/null pkgsrc/emulators/qemu/patches/patch-nvmm-accel-ops.h:1.1
--- /dev/null Mon May 24 14:22:08 2021
+++ pkgsrc/emulators/qemu/patches/patch-nvmm-accel-ops.h Mon May 24 14:22:08 2021
@@ -0,0 +1,29 @@
+$NetBSD: patch-nvmm-accel-ops.h,v 1.1 2021/05/24 14:22:08 ryoon Exp $
+
+--- nvmm-accel-ops.h.orig 2021-05-06 04:47:35.605973012 +0000
++++ nvmm-accel-ops.h
+@@ -0,0 +1,24 @@
++/*
++ * Copyright (c) 2018-2019 Maxime Villard, All rights reserved.
++ *
++ * NetBSD Virtual Machine Monitor (NVMM) accelerator for QEMU.
++ *
++ * This work is licensed under the terms of the GNU GPL, version 2 or later.
++ * See the COPYING file in the top-level directory.
++ */
++
++#ifndef NVMM_CPUS_H
++#define NVMM_CPUS_H
++
++#include "sysemu/cpus.h"
++
++int nvmm_init_vcpu(CPUState *cpu);
++int nvmm_vcpu_exec(CPUState *cpu);
++void nvmm_destroy_vcpu(CPUState *cpu);
++
++void nvmm_cpu_synchronize_state(CPUState *cpu);
++void nvmm_cpu_synchronize_post_reset(CPUState *cpu);
++void nvmm_cpu_synchronize_post_init(CPUState *cpu);
++void nvmm_cpu_synchronize_pre_loadvm(CPUState *cpu);
++
++#endif /* NVMM_CPUS_H */
Index: pkgsrc/emulators/qemu/patches/patch-nvmm-all.c
diff -u /dev/null pkgsrc/emulators/qemu/patches/patch-nvmm-all.c:1.1
--- /dev/null Mon May 24 14:22:08 2021
+++ pkgsrc/emulators/qemu/patches/patch-nvmm-all.c Mon May 24 14:22:08 2021
@@ -0,0 +1,1231 @@
+$NetBSD: patch-nvmm-all.c,v 1.1 2021/05/24 14:22:08 ryoon Exp $
+
+--- nvmm-all.c.orig 2021-05-06 04:47:35.606086411 +0000
++++ nvmm-all.c
+@@ -0,0 +1,1226 @@
++/*
++ * Copyright (c) 2018-2019 Maxime Villard, All rights reserved.
++ *
++ * NetBSD Virtual Machine Monitor (NVMM) accelerator for QEMU.
++ *
++ * This work is licensed under the terms of the GNU GPL, version 2 or later.
++ * See the COPYING file in the top-level directory.
++ */
++
++#include "qemu/osdep.h"
++#include "cpu.h"
++#include "exec/address-spaces.h"
++#include "exec/ioport.h"
++#include "qemu-common.h"
++#include "qemu/accel.h"
++#include "sysemu/nvmm.h"
++#include "sysemu/cpus.h"
++#include "sysemu/runstate.h"
++#include "qemu/main-loop.h"
++#include "qemu/error-report.h"
++#include "qapi/error.h"
++#include "qemu/queue.h"
++#include "migration/blocker.h"
++#include "strings.h"
++
++#include "nvmm-accel-ops.h"
++
++#include <nvmm.h>
++
++struct qemu_vcpu {
++ struct nvmm_vcpu vcpu;
++ uint8_t tpr;
++ bool stop;
++
++ /* Window-exiting for INTs/NMIs. */
++ bool int_window_exit;
++ bool nmi_window_exit;
++
++ /* The guest is in an interrupt shadow (POP SS, etc). */
++ bool int_shadow;
++};
++
++struct qemu_machine {
++ struct nvmm_capability cap;
++ struct nvmm_machine mach;
++};
++
++/* -------------------------------------------------------------------------- */
++
++static bool nvmm_allowed;
++static struct qemu_machine qemu_mach;
++
++static struct qemu_vcpu *
++get_qemu_vcpu(CPUState *cpu)
++{
++ return (struct qemu_vcpu *)cpu->hax_vcpu;
++}
++
++static struct nvmm_machine *
++get_nvmm_mach(void)
++{
++ return &qemu_mach.mach;
++}
++
++/* -------------------------------------------------------------------------- */
++
++static void
++nvmm_set_segment(struct nvmm_x64_state_seg *nseg, const SegmentCache *qseg)
++{
++ uint32_t attrib = qseg->flags;
++
++ nseg->selector = qseg->selector;
++ nseg->limit = qseg->limit;
++ nseg->base = qseg->base;
++ nseg->attrib.type = __SHIFTOUT(attrib, DESC_TYPE_MASK);
++ nseg->attrib.s = __SHIFTOUT(attrib, DESC_S_MASK);
++ nseg->attrib.dpl = __SHIFTOUT(attrib, DESC_DPL_MASK);
++ nseg->attrib.p = __SHIFTOUT(attrib, DESC_P_MASK);
++ nseg->attrib.avl = __SHIFTOUT(attrib, DESC_AVL_MASK);
++ nseg->attrib.l = __SHIFTOUT(attrib, DESC_L_MASK);
++ nseg->attrib.def = __SHIFTOUT(attrib, DESC_B_MASK);
++ nseg->attrib.g = __SHIFTOUT(attrib, DESC_G_MASK);
++}
++
++static void
++nvmm_set_registers(CPUState *cpu)
++{
++ struct CPUX86State *env = (CPUArchState *)cpu->env_ptr;
++ struct nvmm_machine *mach = get_nvmm_mach();
++ struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
++ struct nvmm_vcpu *vcpu = &qcpu->vcpu;
++ struct nvmm_x64_state *state = vcpu->state;
++ uint64_t bitmap;
++ size_t i;
++ int ret;
++
++ assert(cpu_is_stopped(cpu) || qemu_cpu_is_self(cpu));
++
++ /* GPRs. */
++ state->gprs[NVMM_X64_GPR_RAX] = env->regs[R_EAX];
++ state->gprs[NVMM_X64_GPR_RCX] = env->regs[R_ECX];
++ state->gprs[NVMM_X64_GPR_RDX] = env->regs[R_EDX];
++ state->gprs[NVMM_X64_GPR_RBX] = env->regs[R_EBX];
++ state->gprs[NVMM_X64_GPR_RSP] = env->regs[R_ESP];
++ state->gprs[NVMM_X64_GPR_RBP] = env->regs[R_EBP];
++ state->gprs[NVMM_X64_GPR_RSI] = env->regs[R_ESI];
++ state->gprs[NVMM_X64_GPR_RDI] = env->regs[R_EDI];
++#ifdef TARGET_X86_64
++ state->gprs[NVMM_X64_GPR_R8] = env->regs[R_R8];
++ state->gprs[NVMM_X64_GPR_R9] = env->regs[R_R9];
++ state->gprs[NVMM_X64_GPR_R10] = env->regs[R_R10];
++ state->gprs[NVMM_X64_GPR_R11] = env->regs[R_R11];
++ state->gprs[NVMM_X64_GPR_R12] = env->regs[R_R12];
++ state->gprs[NVMM_X64_GPR_R13] = env->regs[R_R13];
++ state->gprs[NVMM_X64_GPR_R14] = env->regs[R_R14];
++ state->gprs[NVMM_X64_GPR_R15] = env->regs[R_R15];
++#endif
++
++ /* RIP and RFLAGS. */
++ state->gprs[NVMM_X64_GPR_RIP] = env->eip;
++ state->gprs[NVMM_X64_GPR_RFLAGS] = env->eflags;
++
++ /* Segments. */
++ nvmm_set_segment(&state->segs[NVMM_X64_SEG_CS], &env->segs[R_CS]);
++ nvmm_set_segment(&state->segs[NVMM_X64_SEG_DS], &env->segs[R_DS]);
++ nvmm_set_segment(&state->segs[NVMM_X64_SEG_ES], &env->segs[R_ES]);
++ nvmm_set_segment(&state->segs[NVMM_X64_SEG_FS], &env->segs[R_FS]);
++ nvmm_set_segment(&state->segs[NVMM_X64_SEG_GS], &env->segs[R_GS]);
++ nvmm_set_segment(&state->segs[NVMM_X64_SEG_SS], &env->segs[R_SS]);
++
++ /* Special segments. */
++ nvmm_set_segment(&state->segs[NVMM_X64_SEG_GDT], &env->gdt);
++ nvmm_set_segment(&state->segs[NVMM_X64_SEG_LDT], &env->ldt);
++ nvmm_set_segment(&state->segs[NVMM_X64_SEG_TR], &env->tr);
++ nvmm_set_segment(&state->segs[NVMM_X64_SEG_IDT], &env->idt);
++
++ /* Control registers. */
++ state->crs[NVMM_X64_CR_CR0] = env->cr[0];
++ state->crs[NVMM_X64_CR_CR2] = env->cr[2];
++ state->crs[NVMM_X64_CR_CR3] = env->cr[3];
++ state->crs[NVMM_X64_CR_CR4] = env->cr[4];
++ state->crs[NVMM_X64_CR_CR8] = qcpu->tpr;
++ state->crs[NVMM_X64_CR_XCR0] = env->xcr0;
++
++ /* Debug registers. */
++ state->drs[NVMM_X64_DR_DR0] = env->dr[0];
++ state->drs[NVMM_X64_DR_DR1] = env->dr[1];
++ state->drs[NVMM_X64_DR_DR2] = env->dr[2];
++ state->drs[NVMM_X64_DR_DR3] = env->dr[3];
++ state->drs[NVMM_X64_DR_DR6] = env->dr[6];
++ state->drs[NVMM_X64_DR_DR7] = env->dr[7];
++
++ /* FPU. */
++ state->fpu.fx_cw = env->fpuc;
++ state->fpu.fx_sw = (env->fpus & ~0x3800) | ((env->fpstt & 0x7) << 11);
++ state->fpu.fx_tw = 0;
++ for (i = 0; i < 8; i++) {
++ state->fpu.fx_tw |= (!env->fptags[i]) << i;
++ }
++ state->fpu.fx_opcode = env->fpop;
++ state->fpu.fx_ip.fa_64 = env->fpip;
++ state->fpu.fx_dp.fa_64 = env->fpdp;
++ state->fpu.fx_mxcsr = env->mxcsr;
++ state->fpu.fx_mxcsr_mask = 0x0000FFFF;
++ assert(sizeof(state->fpu.fx_87_ac) == sizeof(env->fpregs));
++ memcpy(state->fpu.fx_87_ac, env->fpregs, sizeof(env->fpregs));
++ for (i = 0; i < CPU_NB_REGS; i++) {
++ memcpy(&state->fpu.fx_xmm[i].xmm_bytes[0],
++ &env->xmm_regs[i].ZMM_Q(0), 8);
++ memcpy(&state->fpu.fx_xmm[i].xmm_bytes[8],
++ &env->xmm_regs[i].ZMM_Q(1), 8);
++ }
++
++ /* MSRs. */
++ state->msrs[NVMM_X64_MSR_EFER] = env->efer;
++ state->msrs[NVMM_X64_MSR_STAR] = env->star;
++#ifdef TARGET_X86_64
++ state->msrs[NVMM_X64_MSR_LSTAR] = env->lstar;
++ state->msrs[NVMM_X64_MSR_CSTAR] = env->cstar;
++ state->msrs[NVMM_X64_MSR_SFMASK] = env->fmask;
++ state->msrs[NVMM_X64_MSR_KERNELGSBASE] = env->kernelgsbase;
++#endif
++ state->msrs[NVMM_X64_MSR_SYSENTER_CS] = env->sysenter_cs;
++ state->msrs[NVMM_X64_MSR_SYSENTER_ESP] = env->sysenter_esp;
++ state->msrs[NVMM_X64_MSR_SYSENTER_EIP] = env->sysenter_eip;
++ state->msrs[NVMM_X64_MSR_PAT] = env->pat;
++ state->msrs[NVMM_X64_MSR_TSC] = env->tsc;
++
++ bitmap =
++ NVMM_X64_STATE_SEGS |
++ NVMM_X64_STATE_GPRS |
++ NVMM_X64_STATE_CRS |
++ NVMM_X64_STATE_DRS |
++ NVMM_X64_STATE_MSRS |
++ NVMM_X64_STATE_FPU;
++
++ ret = nvmm_vcpu_setstate(mach, vcpu, bitmap);
++ if (ret == -1) {
++ error_report("NVMM: Failed to set virtual processor context,"
++ " error=%d", errno);
++ }
++}
++
++static void
++nvmm_get_segment(SegmentCache *qseg, const struct nvmm_x64_state_seg *nseg)
++{
++ qseg->selector = nseg->selector;
++ qseg->limit = nseg->limit;
++ qseg->base = nseg->base;
++
++ qseg->flags =
++ __SHIFTIN((uint32_t)nseg->attrib.type, DESC_TYPE_MASK) |
++ __SHIFTIN((uint32_t)nseg->attrib.s, DESC_S_MASK) |
++ __SHIFTIN((uint32_t)nseg->attrib.dpl, DESC_DPL_MASK) |
++ __SHIFTIN((uint32_t)nseg->attrib.p, DESC_P_MASK) |
++ __SHIFTIN((uint32_t)nseg->attrib.avl, DESC_AVL_MASK) |
++ __SHIFTIN((uint32_t)nseg->attrib.l, DESC_L_MASK) |
++ __SHIFTIN((uint32_t)nseg->attrib.def, DESC_B_MASK) |
++ __SHIFTIN((uint32_t)nseg->attrib.g, DESC_G_MASK);
++}
++
++static void
++nvmm_get_registers(CPUState *cpu)
++{
++ struct CPUX86State *env = (CPUArchState *)cpu->env_ptr;
++ struct nvmm_machine *mach = get_nvmm_mach();
++ struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
++ struct nvmm_vcpu *vcpu = &qcpu->vcpu;
++ X86CPU *x86_cpu = X86_CPU(cpu);
++ struct nvmm_x64_state *state = vcpu->state;
++ uint64_t bitmap, tpr;
++ size_t i;
++ int ret;
++
++ assert(cpu_is_stopped(cpu) || qemu_cpu_is_self(cpu));
++
++ bitmap =
++ NVMM_X64_STATE_SEGS |
++ NVMM_X64_STATE_GPRS |
++ NVMM_X64_STATE_CRS |
++ NVMM_X64_STATE_DRS |
++ NVMM_X64_STATE_MSRS |
++ NVMM_X64_STATE_FPU;
++
++ ret = nvmm_vcpu_getstate(mach, vcpu, bitmap);
++ if (ret == -1) {
++ error_report("NVMM: Failed to get virtual processor context,"
++ " error=%d", errno);
++ }
++
++ /* GPRs. */
++ env->regs[R_EAX] = state->gprs[NVMM_X64_GPR_RAX];
++ env->regs[R_ECX] = state->gprs[NVMM_X64_GPR_RCX];
++ env->regs[R_EDX] = state->gprs[NVMM_X64_GPR_RDX];
++ env->regs[R_EBX] = state->gprs[NVMM_X64_GPR_RBX];
++ env->regs[R_ESP] = state->gprs[NVMM_X64_GPR_RSP];
++ env->regs[R_EBP] = state->gprs[NVMM_X64_GPR_RBP];
++ env->regs[R_ESI] = state->gprs[NVMM_X64_GPR_RSI];
++ env->regs[R_EDI] = state->gprs[NVMM_X64_GPR_RDI];
++#ifdef TARGET_X86_64
++ env->regs[R_R8] = state->gprs[NVMM_X64_GPR_R8];
++ env->regs[R_R9] = state->gprs[NVMM_X64_GPR_R9];
++ env->regs[R_R10] = state->gprs[NVMM_X64_GPR_R10];
++ env->regs[R_R11] = state->gprs[NVMM_X64_GPR_R11];
++ env->regs[R_R12] = state->gprs[NVMM_X64_GPR_R12];
++ env->regs[R_R13] = state->gprs[NVMM_X64_GPR_R13];
++ env->regs[R_R14] = state->gprs[NVMM_X64_GPR_R14];
++ env->regs[R_R15] = state->gprs[NVMM_X64_GPR_R15];
++#endif
++
++ /* RIP and RFLAGS. */
++ env->eip = state->gprs[NVMM_X64_GPR_RIP];
++ env->eflags = state->gprs[NVMM_X64_GPR_RFLAGS];
++
++ /* Segments. */
++ nvmm_get_segment(&env->segs[R_ES], &state->segs[NVMM_X64_SEG_ES]);
++ nvmm_get_segment(&env->segs[R_CS], &state->segs[NVMM_X64_SEG_CS]);
++ nvmm_get_segment(&env->segs[R_SS], &state->segs[NVMM_X64_SEG_SS]);
++ nvmm_get_segment(&env->segs[R_DS], &state->segs[NVMM_X64_SEG_DS]);
++ nvmm_get_segment(&env->segs[R_FS], &state->segs[NVMM_X64_SEG_FS]);
++ nvmm_get_segment(&env->segs[R_GS], &state->segs[NVMM_X64_SEG_GS]);
++
++ /* Special segments. */
++ nvmm_get_segment(&env->gdt, &state->segs[NVMM_X64_SEG_GDT]);
++ nvmm_get_segment(&env->ldt, &state->segs[NVMM_X64_SEG_LDT]);
++ nvmm_get_segment(&env->tr, &state->segs[NVMM_X64_SEG_TR]);
++ nvmm_get_segment(&env->idt, &state->segs[NVMM_X64_SEG_IDT]);
++
++ /* Control registers. */
++ env->cr[0] = state->crs[NVMM_X64_CR_CR0];
++ env->cr[2] = state->crs[NVMM_X64_CR_CR2];
++ env->cr[3] = state->crs[NVMM_X64_CR_CR3];
++ env->cr[4] = state->crs[NVMM_X64_CR_CR4];
++ tpr = state->crs[NVMM_X64_CR_CR8];
++ if (tpr != qcpu->tpr) {
++ qcpu->tpr = tpr;
++ cpu_set_apic_tpr(x86_cpu->apic_state, tpr);
++ }
++ env->xcr0 = state->crs[NVMM_X64_CR_XCR0];
++
++ /* Debug registers. */
++ env->dr[0] = state->drs[NVMM_X64_DR_DR0];
++ env->dr[1] = state->drs[NVMM_X64_DR_DR1];
++ env->dr[2] = state->drs[NVMM_X64_DR_DR2];
++ env->dr[3] = state->drs[NVMM_X64_DR_DR3];
++ env->dr[6] = state->drs[NVMM_X64_DR_DR6];
++ env->dr[7] = state->drs[NVMM_X64_DR_DR7];
++
++ /* FPU. */
++ env->fpuc = state->fpu.fx_cw;
++ env->fpstt = (state->fpu.fx_sw >> 11) & 0x7;
++ env->fpus = state->fpu.fx_sw & ~0x3800;
++ for (i = 0; i < 8; i++) {
++ env->fptags[i] = !((state->fpu.fx_tw >> i) & 1);
++ }
++ env->fpop = state->fpu.fx_opcode;
++ env->fpip = state->fpu.fx_ip.fa_64;
++ env->fpdp = state->fpu.fx_dp.fa_64;
++ env->mxcsr = state->fpu.fx_mxcsr;
++ assert(sizeof(state->fpu.fx_87_ac) == sizeof(env->fpregs));
++ memcpy(env->fpregs, state->fpu.fx_87_ac, sizeof(env->fpregs));
++ for (i = 0; i < CPU_NB_REGS; i++) {
++ memcpy(&env->xmm_regs[i].ZMM_Q(0),
++ &state->fpu.fx_xmm[i].xmm_bytes[0], 8);
++ memcpy(&env->xmm_regs[i].ZMM_Q(1),
++ &state->fpu.fx_xmm[i].xmm_bytes[8], 8);
++ }
++
++ /* MSRs. */
++ env->efer = state->msrs[NVMM_X64_MSR_EFER];
++ env->star = state->msrs[NVMM_X64_MSR_STAR];
++#ifdef TARGET_X86_64
++ env->lstar = state->msrs[NVMM_X64_MSR_LSTAR];
++ env->cstar = state->msrs[NVMM_X64_MSR_CSTAR];
++ env->fmask = state->msrs[NVMM_X64_MSR_SFMASK];
++ env->kernelgsbase = state->msrs[NVMM_X64_MSR_KERNELGSBASE];
++#endif
++ env->sysenter_cs = state->msrs[NVMM_X64_MSR_SYSENTER_CS];
++ env->sysenter_esp = state->msrs[NVMM_X64_MSR_SYSENTER_ESP];
++ env->sysenter_eip = state->msrs[NVMM_X64_MSR_SYSENTER_EIP];
++ env->pat = state->msrs[NVMM_X64_MSR_PAT];
++ env->tsc = state->msrs[NVMM_X64_MSR_TSC];
++
++ x86_update_hflags(env);
++}
++
++static bool
++nvmm_can_take_int(CPUState *cpu)
++{
++ struct CPUX86State *env = (CPUArchState *)cpu->env_ptr;
++ struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
++ struct nvmm_vcpu *vcpu = &qcpu->vcpu;
++ struct nvmm_machine *mach = get_nvmm_mach();
++
++ if (qcpu->int_window_exit) {
++ return false;
++ }
++
++ if (qcpu->int_shadow || !(env->eflags & IF_MASK)) {
++ struct nvmm_x64_state *state = vcpu->state;
++
++ /* Exit on interrupt window. */
++ nvmm_vcpu_getstate(mach, vcpu, NVMM_X64_STATE_INTR);
++ state->intr.int_window_exiting = 1;
++ nvmm_vcpu_setstate(mach, vcpu, NVMM_X64_STATE_INTR);
++
++ return false;
++ }
++
++ return true;
++}
++
++static bool
++nvmm_can_take_nmi(CPUState *cpu)
++{
++ struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
++
++ /*
++ * Contrary to INTs, NMIs always schedule an exit when they are
++ * completed. Therefore, if window-exiting is enabled, it means
++ * NMIs are blocked.
++ */
++ if (qcpu->nmi_window_exit) {
++ return false;
++ }
++
++ return true;
++}
++
++/*
++ * Called before the VCPU is run. We inject events generated by the I/O
++ * thread, and synchronize the guest TPR.
++ */
++static void
++nvmm_vcpu_pre_run(CPUState *cpu)
++{
++ struct CPUX86State *env = (CPUArchState *)cpu->env_ptr;
++ struct nvmm_machine *mach = get_nvmm_mach();
++ struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
++ struct nvmm_vcpu *vcpu = &qcpu->vcpu;
++ X86CPU *x86_cpu = X86_CPU(cpu);
++ struct nvmm_x64_state *state = vcpu->state;
++ struct nvmm_vcpu_event *event = vcpu->event;
++ bool has_event = false;
++ bool sync_tpr = false;
++ uint8_t tpr;
++ int ret;
++
++ qemu_mutex_lock_iothread();
++
++ tpr = cpu_get_apic_tpr(x86_cpu->apic_state);
++ if (tpr != qcpu->tpr) {
++ qcpu->tpr = tpr;
++ sync_tpr = true;
++ }
++
++ /*
++ * Force the VCPU out of its inner loop to process any INIT requests
++ * or commit pending TPR access.
++ */
++ if (cpu->interrupt_request & (CPU_INTERRUPT_INIT | CPU_INTERRUPT_TPR)) {
++ cpu->exit_request = 1;
++ }
++
++ if (!has_event && (cpu->interrupt_request & CPU_INTERRUPT_NMI)) {
++ if (nvmm_can_take_nmi(cpu)) {
++ cpu->interrupt_request &= ~CPU_INTERRUPT_NMI;
++ event->type = NVMM_VCPU_EVENT_INTR;
++ event->vector = 2;
++ has_event = true;
++ }
++ }
++
++ if (!has_event && (cpu->interrupt_request & CPU_INTERRUPT_HARD)) {
++ if (nvmm_can_take_int(cpu)) {
++ cpu->interrupt_request &= ~CPU_INTERRUPT_HARD;
++ event->type = NVMM_VCPU_EVENT_INTR;
++ event->vector = cpu_get_pic_interrupt(env);
++ has_event = true;
++ }
++ }
++
++ /* Don't want SMIs. */
++ if (cpu->interrupt_request & CPU_INTERRUPT_SMI) {
++ cpu->interrupt_request &= ~CPU_INTERRUPT_SMI;
++ }
++
++ if (sync_tpr) {
++ ret = nvmm_vcpu_getstate(mach, vcpu, NVMM_X64_STATE_CRS);
++ if (ret == -1) {
++ error_report("NVMM: Failed to get CPU state,"
++ " error=%d", errno);
++ }
++
++ state->crs[NVMM_X64_CR_CR8] = qcpu->tpr;
++
++ ret = nvmm_vcpu_setstate(mach, vcpu, NVMM_X64_STATE_CRS);
++ if (ret == -1) {
++ error_report("NVMM: Failed to set CPU state,"
++ " error=%d", errno);
++ }
++ }
++
++ if (has_event) {
++ ret = nvmm_vcpu_inject(mach, vcpu);
++ if (ret == -1) {
++ error_report("NVMM: Failed to inject event,"
++ " error=%d", errno);
++ }
++ }
++
++ qemu_mutex_unlock_iothread();
++}
++
++/*
++ * Called after the VCPU ran. We synchronize the host view of the TPR and
++ * RFLAGS.
++ */
++static void
++nvmm_vcpu_post_run(CPUState *cpu, struct nvmm_vcpu_exit *exit)
++{
++ struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
++ struct CPUX86State *env = (CPUArchState *)cpu->env_ptr;
++ X86CPU *x86_cpu = X86_CPU(cpu);
++ uint64_t tpr;
++
++ env->eflags = exit->exitstate.rflags;
++ qcpu->int_shadow = exit->exitstate.int_shadow;
++ qcpu->int_window_exit = exit->exitstate.int_window_exiting;
++ qcpu->nmi_window_exit = exit->exitstate.nmi_window_exiting;
++
++ tpr = exit->exitstate.cr8;
++ if (qcpu->tpr != tpr) {
++ qcpu->tpr = tpr;
++ qemu_mutex_lock_iothread();
++ cpu_set_apic_tpr(x86_cpu->apic_state, qcpu->tpr);
++ qemu_mutex_unlock_iothread();
++ }
++}
++
++/* -------------------------------------------------------------------------- */
++
++static void
++nvmm_io_callback(struct nvmm_io *io)
++{
++ MemTxAttrs attrs = { 0 };
++ int ret;
++
++ ret = address_space_rw(&address_space_io, io->port, attrs, io->data,
++ io->size, !io->in);
++ if (ret != MEMTX_OK) {
++ error_report("NVMM: I/O Transaction Failed "
++ "[%s, port=%u, size=%zu]", (io->in ? "in" : "out"),
++ io->port, io->size);
++ }
++
++ /* Needed, otherwise infinite loop. */
++ current_cpu->vcpu_dirty = false;
++}
++
++static void
++nvmm_mem_callback(struct nvmm_mem *mem)
++{
++ cpu_physical_memory_rw(mem->gpa, mem->data, mem->size, mem->write);
++
++ /* Needed, otherwise infinite loop. */
++ current_cpu->vcpu_dirty = false;
++}
++
++static struct nvmm_assist_callbacks nvmm_callbacks = {
++ .io = nvmm_io_callback,
++ .mem = nvmm_mem_callback
++};
++
++/* -------------------------------------------------------------------------- */
++
++static int
++nvmm_handle_mem(struct nvmm_machine *mach, struct nvmm_vcpu *vcpu)
++{
++ int ret;
++
++ ret = nvmm_assist_mem(mach, vcpu);
++ if (ret == -1) {
++ error_report("NVMM: Mem Assist Failed [gpa=%p]",
++ (void *)vcpu->exit->u.mem.gpa);
++ }
++
++ return ret;
++}
++
++static int
++nvmm_handle_io(struct nvmm_machine *mach, struct nvmm_vcpu *vcpu)
++{
++ int ret;
++
++ ret = nvmm_assist_io(mach, vcpu);
++ if (ret == -1) {
++ error_report("NVMM: I/O Assist Failed [port=%d]",
++ (int)vcpu->exit->u.io.port);
++ }
++
++ return ret;
++}
++
++static int
++nvmm_handle_rdmsr(struct nvmm_machine *mach, CPUState *cpu,
++ struct nvmm_vcpu_exit *exit)
++{
++ struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
++ struct nvmm_vcpu *vcpu = &qcpu->vcpu;
++ X86CPU *x86_cpu = X86_CPU(cpu);
++ struct nvmm_x64_state *state = vcpu->state;
++ uint64_t val;
++ int ret;
++
++ switch (exit->u.rdmsr.msr) {
++ case MSR_IA32_APICBASE:
++ val = cpu_get_apic_base(x86_cpu->apic_state);
++ break;
++ case MSR_MTRRcap:
++ case MSR_MTRRdefType:
++ case MSR_MCG_CAP:
++ case MSR_MCG_STATUS:
++ val = 0;
++ break;
++ default: /* More MSRs to add? */
++ val = 0;
++ error_report("NVMM: Unexpected RDMSR 0x%x, ignored",
++ exit->u.rdmsr.msr);
++ break;
++ }
++
++ ret = nvmm_vcpu_getstate(mach, vcpu, NVMM_X64_STATE_GPRS);
++ if (ret == -1) {
++ return -1;
++ }
++
++ state->gprs[NVMM_X64_GPR_RAX] = (val & 0xFFFFFFFF);
++ state->gprs[NVMM_X64_GPR_RDX] = (val >> 32);
++ state->gprs[NVMM_X64_GPR_RIP] = exit->u.rdmsr.npc;
++
++ ret = nvmm_vcpu_setstate(mach, vcpu, NVMM_X64_STATE_GPRS);
++ if (ret == -1) {
++ return -1;
++ }
++
++ return 0;
++}
++
++static int
++nvmm_handle_wrmsr(struct nvmm_machine *mach, CPUState *cpu,
++ struct nvmm_vcpu_exit *exit)
++{
++ struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
++ struct nvmm_vcpu *vcpu = &qcpu->vcpu;
++ X86CPU *x86_cpu = X86_CPU(cpu);
++ struct nvmm_x64_state *state = vcpu->state;
++ uint64_t val;
++ int ret;
++
++ val = exit->u.wrmsr.val;
++
++ switch (exit->u.wrmsr.msr) {
++ case MSR_IA32_APICBASE:
++ cpu_set_apic_base(x86_cpu->apic_state, val);
++ break;
++ case MSR_MTRRdefType:
++ case MSR_MCG_STATUS:
++ break;
++ default: /* More MSRs to add? */
++ error_report("NVMM: Unexpected WRMSR 0x%x [val=0x%lx], ignored",
++ exit->u.wrmsr.msr, val);
++ break;
++ }
++
++ ret = nvmm_vcpu_getstate(mach, vcpu, NVMM_X64_STATE_GPRS);
++ if (ret == -1) {
++ return -1;
++ }
++
++ state->gprs[NVMM_X64_GPR_RIP] = exit->u.wrmsr.npc;
++
++ ret = nvmm_vcpu_setstate(mach, vcpu, NVMM_X64_STATE_GPRS);
++ if (ret == -1) {
++ return -1;
++ }
++
++ return 0;
++}
++
++static int
++nvmm_handle_halted(struct nvmm_machine *mach, CPUState *cpu,
++ struct nvmm_vcpu_exit *exit)
++{
++ struct CPUX86State *env = (CPUArchState *)cpu->env_ptr;
++ int ret = 0;
++
++ qemu_mutex_lock_iothread();
++
++ if (!((cpu->interrupt_request & CPU_INTERRUPT_HARD) &&
++ (env->eflags & IF_MASK)) &&
++ !(cpu->interrupt_request & CPU_INTERRUPT_NMI)) {
++ cpu->exception_index = EXCP_HLT;
++ cpu->halted = true;
++ ret = 1;
++ }
++
++ qemu_mutex_unlock_iothread();
++
++ return ret;
++}
++
++static int
++nvmm_inject_ud(struct nvmm_machine *mach, struct nvmm_vcpu *vcpu)
++{
++ struct nvmm_vcpu_event *event = vcpu->event;
++
++ event->type = NVMM_VCPU_EVENT_EXCP;
++ event->vector = 6;
++ event->u.excp.error = 0;
++
++ return nvmm_vcpu_inject(mach, vcpu);
++}
++
++static int
++nvmm_vcpu_loop(CPUState *cpu)
++{
++ struct CPUX86State *env = (CPUArchState *)cpu->env_ptr;
++ struct nvmm_machine *mach = get_nvmm_mach();
++ struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
++ struct nvmm_vcpu *vcpu = &qcpu->vcpu;
++ X86CPU *x86_cpu = X86_CPU(cpu);
++ struct nvmm_vcpu_exit *exit = vcpu->exit;
++ int ret;
++
++ /*
++ * Some asynchronous events must be handled outside of the inner
++ * VCPU loop. They are handled here.
++ */
++ if (cpu->interrupt_request & CPU_INTERRUPT_INIT) {
++ nvmm_cpu_synchronize_state(cpu);
++ do_cpu_init(x86_cpu);
++ /* set int/nmi windows back to the reset state */
++ }
++ if (cpu->interrupt_request & CPU_INTERRUPT_POLL) {
++ cpu->interrupt_request &= ~CPU_INTERRUPT_POLL;
++ apic_poll_irq(x86_cpu->apic_state);
++ }
++ if (((cpu->interrupt_request & CPU_INTERRUPT_HARD) &&
++ (env->eflags & IF_MASK)) ||
++ (cpu->interrupt_request & CPU_INTERRUPT_NMI)) {
++ cpu->halted = false;
++ }
++ if (cpu->interrupt_request & CPU_INTERRUPT_SIPI) {
++ nvmm_cpu_synchronize_state(cpu);
++ do_cpu_sipi(x86_cpu);
++ }
++ if (cpu->interrupt_request & CPU_INTERRUPT_TPR) {
++ cpu->interrupt_request &= ~CPU_INTERRUPT_TPR;
++ nvmm_cpu_synchronize_state(cpu);
++ apic_handle_tpr_access_report(x86_cpu->apic_state, env->eip,
++ env->tpr_access_type);
++ }
++
++ if (cpu->halted) {
++ cpu->exception_index = EXCP_HLT;
++ qatomic_set(&cpu->exit_request, false);
++ return 0;
++ }
++
++ qemu_mutex_unlock_iothread();
++ cpu_exec_start(cpu);
++
++ /*
++ * Inner VCPU loop.
++ */
++ do {
++ if (cpu->vcpu_dirty) {
++ nvmm_set_registers(cpu);
++ cpu->vcpu_dirty = false;
++ }
++
++ if (qcpu->stop) {
++ cpu->exception_index = EXCP_INTERRUPT;
++ qcpu->stop = false;
++ ret = 1;
++ break;
++ }
++
++ nvmm_vcpu_pre_run(cpu);
++
++ if (qatomic_read(&cpu->exit_request)) {
++ nvmm_vcpu_stop(vcpu);
++ }
++
++ /* Read exit_request before the kernel reads the immediate exit flag */
++ smp_rmb();
++ ret = nvmm_vcpu_run(mach, vcpu);
++ if (ret == -1) {
++ error_report("NVMM: Failed to exec a virtual processor,"
++ " error=%d", errno);
++ break;
++ }
++
++ nvmm_vcpu_post_run(cpu, exit);
++
++ switch (exit->reason) {
++ case NVMM_VCPU_EXIT_NONE:
++ break;
++ case NVMM_VCPU_EXIT_STOPPED:
++ /*
++ * The kernel cleared the immediate exit flag; cpu->exit_request
++ * must be cleared after
++ */
++ smp_wmb();
++ qcpu->stop = true;
++ break;
++ case NVMM_VCPU_EXIT_MEMORY:
++ ret = nvmm_handle_mem(mach, vcpu);
++ break;
++ case NVMM_VCPU_EXIT_IO:
++ ret = nvmm_handle_io(mach, vcpu);
++ break;
++ case NVMM_VCPU_EXIT_INT_READY:
++ case NVMM_VCPU_EXIT_NMI_READY:
++ case NVMM_VCPU_EXIT_TPR_CHANGED:
++ break;
++ case NVMM_VCPU_EXIT_HALTED:
++ ret = nvmm_handle_halted(mach, cpu, exit);
++ break;
++ case NVMM_VCPU_EXIT_SHUTDOWN:
++ qemu_system_reset_request(SHUTDOWN_CAUSE_GUEST_RESET);
++ cpu->exception_index = EXCP_INTERRUPT;
++ ret = 1;
++ break;
++ case NVMM_VCPU_EXIT_RDMSR:
++ ret = nvmm_handle_rdmsr(mach, cpu, exit);
++ break;
++ case NVMM_VCPU_EXIT_WRMSR:
++ ret = nvmm_handle_wrmsr(mach, cpu, exit);
++ break;
++ case NVMM_VCPU_EXIT_MONITOR:
++ case NVMM_VCPU_EXIT_MWAIT:
++ ret = nvmm_inject_ud(mach, vcpu);
++ break;
++ default:
++ error_report("NVMM: Unexpected VM exit code 0x%lx [hw=0x%lx]",
++ exit->reason, exit->u.inv.hwcode);
++ nvmm_get_registers(cpu);
++ qemu_mutex_lock_iothread();
++ qemu_system_guest_panicked(cpu_get_crash_info(cpu));
++ qemu_mutex_unlock_iothread();
++ ret = -1;
++ break;
++ }
++ } while (ret == 0);
++
++ cpu_exec_end(cpu);
++ qemu_mutex_lock_iothread();
++
++ qatomic_set(&cpu->exit_request, false);
++
++ return ret < 0;
++}
++
++/* -------------------------------------------------------------------------- */
++
++static void
++do_nvmm_cpu_synchronize_state(CPUState *cpu, run_on_cpu_data arg)
++{
++ nvmm_get_registers(cpu);
++ cpu->vcpu_dirty = true;
++}
++
++static void
++do_nvmm_cpu_synchronize_post_reset(CPUState *cpu, run_on_cpu_data arg)
++{
++ nvmm_set_registers(cpu);
++ cpu->vcpu_dirty = false;
++}
++
++static void
++do_nvmm_cpu_synchronize_post_init(CPUState *cpu, run_on_cpu_data arg)
++{
++ nvmm_set_registers(cpu);
++ cpu->vcpu_dirty = false;
++}
++
++static void
++do_nvmm_cpu_synchronize_pre_loadvm(CPUState *cpu, run_on_cpu_data arg)
++{
++ cpu->vcpu_dirty = true;
++}
++
++void nvmm_cpu_synchronize_state(CPUState *cpu)
++{
++ if (!cpu->vcpu_dirty) {
++ run_on_cpu(cpu, do_nvmm_cpu_synchronize_state, RUN_ON_CPU_NULL);
++ }
++}
++
++void nvmm_cpu_synchronize_post_reset(CPUState *cpu)
++{
++ run_on_cpu(cpu, do_nvmm_cpu_synchronize_post_reset, RUN_ON_CPU_NULL);
++}
++
++void nvmm_cpu_synchronize_post_init(CPUState *cpu)
++{
++ run_on_cpu(cpu, do_nvmm_cpu_synchronize_post_init, RUN_ON_CPU_NULL);
++}
++
++void nvmm_cpu_synchronize_pre_loadvm(CPUState *cpu)
++{
++ run_on_cpu(cpu, do_nvmm_cpu_synchronize_pre_loadvm, RUN_ON_CPU_NULL);
++}
++
++/* -------------------------------------------------------------------------- */
++
++static Error *nvmm_migration_blocker;
++
++/*
++ * The nvmm_vcpu_stop() mechanism breaks races between entering the VMM
++ * and another thread signaling the vCPU thread to exit.
++ */
++
++static void
++nvmm_ipi_signal(int sigcpu)
++{
++ if (current_cpu) {
++ struct qemu_vcpu *qcpu = get_qemu_vcpu(current_cpu);
++ struct nvmm_vcpu *vcpu = &qcpu->vcpu;
++ nvmm_vcpu_stop(vcpu);
++ }
++}
++
++static void
++nvmm_init_cpu_signals(void)
++{
++ struct sigaction sigact;
++ sigset_t set;
++
++ /* Install the IPI handler. */
++ memset(&sigact, 0, sizeof(sigact));
++ sigact.sa_handler = nvmm_ipi_signal;
++ sigaction(SIG_IPI, &sigact, NULL);
++
++ /* Allow IPIs on the current thread. */
++ sigprocmask(SIG_BLOCK, NULL, &set);
++ sigdelset(&set, SIG_IPI);
++ pthread_sigmask(SIG_SETMASK, &set, NULL);
++}
++
++int
++nvmm_init_vcpu(CPUState *cpu)
++{
++ struct nvmm_machine *mach = get_nvmm_mach();
++ struct nvmm_vcpu_conf_cpuid cpuid;
++ struct nvmm_vcpu_conf_tpr tpr;
++ Error *local_error = NULL;
++ struct qemu_vcpu *qcpu;
++ int ret, err;
++
++ nvmm_init_cpu_signals();
++
++ if (nvmm_migration_blocker == NULL) {
++ error_setg(&nvmm_migration_blocker,
++ "NVMM: Migration not supported");
++
++ (void)migrate_add_blocker(nvmm_migration_blocker, &local_error);
++ if (local_error) {
++ error_report_err(local_error);
++ migrate_del_blocker(nvmm_migration_blocker);
++ error_free(nvmm_migration_blocker);
++ return -EINVAL;
++ }
++ }
++
++ qcpu = g_malloc0(sizeof(*qcpu));
++ if (qcpu == NULL) {
++ error_report("NVMM: Failed to allocate VCPU context.");
++ return -ENOMEM;
++ }
++
++ ret = nvmm_vcpu_create(mach, cpu->cpu_index, &qcpu->vcpu);
++ if (ret == -1) {
++ err = errno;
++ error_report("NVMM: Failed to create a virtual processor,"
++ " error=%d", err);
++ g_free(qcpu);
++ return -err;
++ }
++
++ memset(&cpuid, 0, sizeof(cpuid));
++ cpuid.mask = 1;
++ cpuid.leaf = 0x00000001;
++ cpuid.u.mask.set.edx = CPUID_MCE | CPUID_MCA | CPUID_MTRR;
++ ret = nvmm_vcpu_configure(mach, &qcpu->vcpu, NVMM_VCPU_CONF_CPUID,
++ &cpuid);
++ if (ret == -1) {
++ err = errno;
++ error_report("NVMM: Failed to configure a virtual processor,"
++ " error=%d", err);
++ g_free(qcpu);
++ return -err;
++ }
++
++ ret = nvmm_vcpu_configure(mach, &qcpu->vcpu, NVMM_VCPU_CONF_CALLBACKS,
++ &nvmm_callbacks);
++ if (ret == -1) {
++ err = errno;
++ error_report("NVMM: Failed to configure a virtual processor,"
++ " error=%d", err);
++ g_free(qcpu);
++ return -err;
++ }
++
++ if (qemu_mach.cap.arch.vcpu_conf_support & NVMM_CAP_ARCH_VCPU_CONF_TPR) {
++ memset(&tpr, 0, sizeof(tpr));
++ tpr.exit_changed = 1;
++ ret = nvmm_vcpu_configure(mach, &qcpu->vcpu, NVMM_VCPU_CONF_TPR, &tpr);
++ if (ret == -1) {
++ err = errno;
++ error_report("NVMM: Failed to configure a virtual processor,"
++ " error=%d", err);
++ g_free(qcpu);
++ return -err;
++ }
++ }
++
++ cpu->vcpu_dirty = true;
++ cpu->hax_vcpu = (struct hax_vcpu_state *)qcpu;
++
++ return 0;
++}
++
++int
++nvmm_vcpu_exec(CPUState *cpu)
++{
++ int ret, fatal;
++
++ while (1) {
++ if (cpu->exception_index >= EXCP_INTERRUPT) {
++ ret = cpu->exception_index;
++ cpu->exception_index = -1;
++ break;
++ }
++
++ fatal = nvmm_vcpu_loop(cpu);
++
++ if (fatal) {
++ error_report("NVMM: Failed to execute a VCPU.");
++ abort();
++ }
++ }
++
++ return ret;
++}
++
++void
++nvmm_destroy_vcpu(CPUState *cpu)
++{
++ struct nvmm_machine *mach = get_nvmm_mach();
++ struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
++
++ nvmm_vcpu_destroy(mach, &qcpu->vcpu);
++ g_free(cpu->hax_vcpu);
++}
++
++/* -------------------------------------------------------------------------- */
++
++static void
++nvmm_update_mapping(hwaddr start_pa, ram_addr_t size, uintptr_t hva,
++ bool add, bool rom, const char *name)
++{
++ struct nvmm_machine *mach = get_nvmm_mach();
++ int ret, prot;
++
++ if (add) {
++ prot = PROT_READ | PROT_EXEC;
++ if (!rom) {
++ prot |= PROT_WRITE;
++ }
++ ret = nvmm_gpa_map(mach, hva, start_pa, size, prot);
++ } else {
++ ret = nvmm_gpa_unmap(mach, hva, start_pa, size);
++ }
++
++ if (ret == -1) {
++ error_report("NVMM: Failed to %s GPA range '%s' PA:%p, "
++ "Size:%p bytes, HostVA:%p, error=%d",
++ (add ? "map" : "unmap"), name, (void *)(uintptr_t)start_pa,
++ (void *)size, (void *)hva, errno);
++ }
++}
++
++static void
++nvmm_process_section(MemoryRegionSection *section, int add)
++{
++ MemoryRegion *mr = section->mr;
++ hwaddr start_pa = section->offset_within_address_space;
++ ram_addr_t size = int128_get64(section->size);
++ unsigned int delta;
++ uintptr_t hva;
++
++ if (!memory_region_is_ram(mr)) {
++ return;
++ }
++
++ /* Adjust start_pa and size so that they are page-aligned. */
++ delta = qemu_real_host_page_size - (start_pa & ~qemu_real_host_page_mask);
++ delta &= ~qemu_real_host_page_mask;
++ if (delta > size) {
++ return;
++ }
++ start_pa += delta;
++ size -= delta;
++ size &= qemu_real_host_page_mask;
++ if (!size || (start_pa & ~qemu_real_host_page_mask)) {
++ return;
++ }
++
++ hva = (uintptr_t)memory_region_get_ram_ptr(mr) +
++ section->offset_within_region + delta;
++
++ nvmm_update_mapping(start_pa, size, hva, add,
++ memory_region_is_rom(mr), mr->name);
++}
++
++static void
++nvmm_region_add(MemoryListener *listener, MemoryRegionSection *section)
++{
++ memory_region_ref(section->mr);
++ nvmm_process_section(section, 1);
++}
++
++static void
++nvmm_region_del(MemoryListener *listener, MemoryRegionSection *section)
++{
++ nvmm_process_section(section, 0);
++ memory_region_unref(section->mr);
++}
++
++static void
++nvmm_transaction_begin(MemoryListener *listener)
++{
++ /* nothing */
++}
++
++static void
++nvmm_transaction_commit(MemoryListener *listener)
++{
++ /* nothing */
++}
++
++static void
++nvmm_log_sync(MemoryListener *listener, MemoryRegionSection *section)
++{
++ MemoryRegion *mr = section->mr;
++
++ if (!memory_region_is_ram(mr)) {
++ return;
++ }
++
++ memory_region_set_dirty(mr, 0, int128_get64(section->size));
++}
++
++static MemoryListener nvmm_memory_listener = {
++ .begin = nvmm_transaction_begin,
++ .commit = nvmm_transaction_commit,
++ .region_add = nvmm_region_add,
++ .region_del = nvmm_region_del,
++ .log_sync = nvmm_log_sync,
++ .priority = 10,
++};
++
++static void
++nvmm_ram_block_added(RAMBlockNotifier *n, void *host, size_t size)
++{
++ struct nvmm_machine *mach = get_nvmm_mach();
++ uintptr_t hva = (uintptr_t)host;
++ int ret;
++
++ ret = nvmm_hva_map(mach, hva, size);
++
++ if (ret == -1) {
++ error_report("NVMM: Failed to map HVA, HostVA:%p "
++ "Size:%p bytes, error=%d",
++ (void *)hva, (void *)size, errno);
++ }
++}
++
++static struct RAMBlockNotifier nvmm_ram_notifier = {
++ .ram_block_added = nvmm_ram_block_added
++};
++
++/* -------------------------------------------------------------------------- */
++
++static int
++nvmm_accel_init(MachineState *ms)
++{
++ int ret, err;
++
++ ret = nvmm_init();
++ if (ret == -1) {
++ err = errno;
++ error_report("NVMM: Initialization failed, error=%d", errno);
++ return -err;
++ }
++
++ ret = nvmm_capability(&qemu_mach.cap);
++ if (ret == -1) {
++ err = errno;
++ error_report("NVMM: Unable to fetch capability, error=%d", errno);
++ return -err;
++ }
++ if (qemu_mach.cap.version < NVMM_KERN_VERSION) {
++ error_report("NVMM: Unsupported version %u", qemu_mach.cap.version);
++ return -EPROGMISMATCH;
++ }
++ if (qemu_mach.cap.state_size != sizeof(struct nvmm_x64_state)) {
++ error_report("NVMM: Wrong state size %u", qemu_mach.cap.state_size);
++ return -EPROGMISMATCH;
++ }
++
++ ret = nvmm_machine_create(&qemu_mach.mach);
++ if (ret == -1) {
++ err = errno;
++ error_report("NVMM: Machine creation failed, error=%d", errno);
++ return -err;
++ }
++
++ memory_listener_register(&nvmm_memory_listener, &address_space_memory);
++ ram_block_notifier_add(&nvmm_ram_notifier);
++
++ printf("NetBSD Virtual Machine Monitor accelerator is operational\n");
++ return 0;
++}
++
++int
++nvmm_enabled(void)
++{
++ return nvmm_allowed;
++}
++
++static void
++nvmm_accel_class_init(ObjectClass *oc, void *data)
++{
++ AccelClass *ac = ACCEL_CLASS(oc);
++ ac->name = "NVMM";
++ ac->init_machine = nvmm_accel_init;
++ ac->allowed = &nvmm_allowed;
++}
++
++static const TypeInfo nvmm_accel_type = {
++ .name = ACCEL_CLASS_NAME("nvmm"),
++ .parent = TYPE_ACCEL,
++ .class_init = nvmm_accel_class_init,
++};
++
++static void
++nvmm_type_init(void)
++{
++ type_register_static(&nvmm_accel_type);
++}
++
++type_init(nvmm_type_init);
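The nvmm_set_segment()/nvmm_get_segment() conversions in the patch above rely on NetBSD's __SHIFTIN()/__SHIFTOUT() bit-field helpers, which may be unfamiliar outside NetBSD. A minimal sketch of the idea, with hypothetical stand-in names rather than the real <sys/param.h> macros:

    /*
     * Hypothetical stand-ins for NetBSD's __SHIFTIN()/__SHIFTOUT():
     * LOWEST_SET_BIT() isolates the lowest set bit of a mask, so
     * multiplying or dividing by it shifts a value into or out of the
     * bit field selected by that mask.
     */
    #define LOWEST_SET_BIT(mask) ((((mask) - 1) & (mask)) ^ (mask))
    #define SHIFTIN(val, mask)   ((val) * LOWEST_SET_BIT(mask))          /* pack   */
    #define SHIFTOUT(x, mask)    (((x) & (mask)) / LOWEST_SET_BIT(mask)) /* unpack */

    /*
     * Example: QEMU's DESC_DPL_MASK covers two bits at bit 13, so
     * SHIFTOUT(flags, DESC_DPL_MASK) recovers the DPL value and
     * SHIFTIN(dpl, DESC_DPL_MASK) places it back at that position.
     */
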
Index: pkgsrc/emulators/qemu/patches/patch-target_i386_nvmm_meson.build
diff -u /dev/null pkgsrc/emulators/qemu/patches/patch-target_i386_nvmm_meson.build:1.1
--- /dev/null Mon May 24 14:22:08 2021
+++ pkgsrc/emulators/qemu/patches/patch-target_i386_nvmm_meson.build Mon May 24 14:22:08 2021
@@ -0,0 +1,13 @@
+$NetBSD: patch-target_i386_nvmm_meson.build,v 1.1 2021/05/24 14:22:08 ryoon Exp $
+
+--- target/i386/nvmm/meson.build.orig 2021-05-06 05:09:24.910385600 +0000
++++ target/i386/nvmm/meson.build
+@@ -0,0 +1,8 @@
++i386_softmmu_ss.add(when: 'CONFIG_NVMM', if_true:
++ files(
++ 'nvmm-all.c',
++ 'nvmm-accel-ops.c',
++ )
++)
++
++i386_softmmu_ss.add(when: 'CONFIG_NVMM', if_true: nvmm)
Index: pkgsrc/emulators/qemu/patches/patch-target_i386_nvmm_nvmm-accel-ops.c
diff -u /dev/null pkgsrc/emulators/qemu/patches/patch-target_i386_nvmm_nvmm-accel-ops.c:1.1
--- /dev/null Mon May 24 14:22:08 2021
+++ pkgsrc/emulators/qemu/patches/patch-target_i386_nvmm_nvmm-accel-ops.c Mon May 24 14:22:08 2021
@@ -0,0 +1,116 @@
+$NetBSD: patch-target_i386_nvmm_nvmm-accel-ops.c,v 1.1 2021/05/24 14:22:08 ryoon Exp $
+
+--- target/i386/nvmm/nvmm-accel-ops.c.orig 2021-05-06 05:09:24.910489458 +0000
++++ target/i386/nvmm/nvmm-accel-ops.c
+@@ -0,0 +1,111 @@
++/*
++ * Copyright (c) 2018-2019 Maxime Villard, All rights reserved.
++ *
++ * NetBSD Virtual Machine Monitor (NVMM) accelerator for QEMU.
++ *
++ * This work is licensed under the terms of the GNU GPL, version 2 or later.
++ * See the COPYING file in the top-level directory.
++ */
++
++#include "qemu/osdep.h"
++#include "sysemu/kvm_int.h"
++#include "qemu/main-loop.h"
++#include "sysemu/cpus.h"
++#include "qemu/guest-random.h"
++
++#include "sysemu/nvmm.h"
++#include "nvmm-accel-ops.h"
++
++static void *qemu_nvmm_cpu_thread_fn(void *arg)
++{
++ CPUState *cpu = arg;
++ int r;
++
++ assert(nvmm_enabled());
++
++ rcu_register_thread();
++
++ qemu_mutex_lock_iothread();
++ qemu_thread_get_self(cpu->thread);
++ cpu->thread_id = qemu_get_thread_id();
++ current_cpu = cpu;
++
++ r = nvmm_init_vcpu(cpu);
++ if (r < 0) {
++ fprintf(stderr, "nvmm_init_vcpu failed: %s\n", strerror(-r));
++ exit(1);
++ }
++
++ /* signal CPU creation */
++ cpu_thread_signal_created(cpu);
++ qemu_guest_random_seed_thread_part2(cpu->random_seed);
++
++ do {
++ if (cpu_can_run(cpu)) {
++ r = nvmm_vcpu_exec(cpu);
++ if (r == EXCP_DEBUG) {
++ cpu_handle_guest_debug(cpu);
++ }
++ }
++ while (cpu_thread_is_idle(cpu)) {
++ qemu_cond_wait_iothread(cpu->halt_cond);
++ }
++ qemu_wait_io_event_common(cpu);
++ } while (!cpu->unplug || cpu_can_run(cpu));
++
++ nvmm_destroy_vcpu(cpu);
++ cpu_thread_signal_destroyed(cpu);
++ qemu_mutex_unlock_iothread();
++ rcu_unregister_thread();
++ return NULL;
++}
++
++static void nvmm_start_vcpu_thread(CPUState *cpu)
++{
++ char thread_name[VCPU_THREAD_NAME_SIZE];
++
++ cpu->thread = g_malloc0(sizeof(QemuThread));
++ cpu->halt_cond = g_malloc0(sizeof(QemuCond));
++ qemu_cond_init(cpu->halt_cond);
++ snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "CPU %d/NVMM",
++ cpu->cpu_index);
++ qemu_thread_create(cpu->thread, thread_name, qemu_nvmm_cpu_thread_fn,
++ cpu, QEMU_THREAD_JOINABLE);
++}
++
++/*
++ * Abort the call to run the virtual processor by another thread, and to
++ * return the control to that thread.
++ */
++static void nvmm_kick_vcpu_thread(CPUState *cpu)
++{
++ cpu->exit_request = 1;
++ cpus_kick_thread(cpu);
++}
++
++static void nvmm_accel_ops_class_init(ObjectClass *oc, void *data)
++{
++ AccelOpsClass *ops = ACCEL_OPS_CLASS(oc);
++
++ ops->create_vcpu_thread = nvmm_start_vcpu_thread;
++ ops->kick_vcpu_thread = nvmm_kick_vcpu_thread;
++
++ ops->synchronize_post_reset = nvmm_cpu_synchronize_post_reset;
++ ops->synchronize_post_init = nvmm_cpu_synchronize_post_init;
++ ops->synchronize_state = nvmm_cpu_synchronize_state;
++ ops->synchronize_pre_loadvm = nvmm_cpu_synchronize_pre_loadvm;
++}
++
++static const TypeInfo nvmm_accel_ops_type = {
++ .name = ACCEL_OPS_NAME("nvmm"),
++
++ .parent = TYPE_ACCEL_OPS,
++ .class_init = nvmm_accel_ops_class_init,
++ .abstract = true,
++};
++
++static void nvmm_accel_ops_register_types(void)
++{
++ type_register_static(&nvmm_accel_ops_type);
++}
++type_init(nvmm_accel_ops_register_types);
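nvmm-accel-ops.c registers an AccelOpsClass whose hooks point at the NVMM routines above; generic QEMU code then dispatches vCPU operations through whichever ops class the user selected (e.g. with -accel nvmm). A rough, self-contained sketch of that dispatch pattern, using stand-in types rather than QEMU's actual code:

    /* Sketch only (hypothetical names): an ops table in the style of
     * AccelOpsClass lets accelerator-independent code call into the
     * selected accelerator. */
    #include <stddef.h>

    struct cpu;                             /* stand-in for QEMU's CPUState */

    struct accel_ops {                      /* stand-in for AccelOpsClass   */
        void (*create_vcpu_thread)(struct cpu *);
        void (*synchronize_state)(struct cpu *);
    };

    static const struct accel_ops *ops;     /* set once the accelerator is chosen */

    static void cpu_synchronize_state_sketch(struct cpu *cpu)
    {
        if (ops && ops->synchronize_state) {
            ops->synchronize_state(cpu);    /* -> nvmm_cpu_synchronize_state() */
        }
    }
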
Index: pkgsrc/emulators/qemu/patches/patch-target_i386_nvmm_nvmm-accel-ops.h
diff -u /dev/null pkgsrc/emulators/qemu/patches/patch-target_i386_nvmm_nvmm-accel-ops.h:1.1
--- /dev/null Mon May 24 14:22:08 2021
+++ pkgsrc/emulators/qemu/patches/patch-target_i386_nvmm_nvmm-accel-ops.h Mon May 24 14:22:08 2021
@@ -0,0 +1,29 @@
+$NetBSD: patch-target_i386_nvmm_nvmm-accel-ops.h,v 1.1 2021/05/24 14:22:08 ryoon Exp $
+
+--- target/i386/nvmm/nvmm-accel-ops.h.orig 2021-05-06 05:09:24.910599351 +0000
++++ target/i386/nvmm/nvmm-accel-ops.h
+@@ -0,0 +1,24 @@
++/*
++ * Copyright (c) 2018-2019 Maxime Villard, All rights reserved.
++ *
++ * NetBSD Virtual Machine Monitor (NVMM) accelerator for QEMU.
++ *
++ * This work is licensed under the terms of the GNU GPL, version 2 or later.
++ * See the COPYING file in the top-level directory.
++ */
++
++#ifndef NVMM_CPUS_H
++#define NVMM_CPUS_H
++
++#include "sysemu/cpus.h"
++
++int nvmm_init_vcpu(CPUState *cpu);
++int nvmm_vcpu_exec(CPUState *cpu);
++void nvmm_destroy_vcpu(CPUState *cpu);
++
++void nvmm_cpu_synchronize_state(CPUState *cpu);
++void nvmm_cpu_synchronize_post_reset(CPUState *cpu);
++void nvmm_cpu_synchronize_post_init(CPUState *cpu);
++void nvmm_cpu_synchronize_pre_loadvm(CPUState *cpu);
++
++#endif /* NVMM_CPUS_H */
Index: pkgsrc/emulators/qemu/patches/patch-target_i386_nvmm_nvmm-all.c
diff -u /dev/null pkgsrc/emulators/qemu/patches/patch-target_i386_nvmm_nvmm-all.c:1.1
--- /dev/null Mon May 24 14:22:08 2021
+++ pkgsrc/emulators/qemu/patches/patch-target_i386_nvmm_nvmm-all.c Mon May 24 14:22:08 2021
@@ -0,0 +1,1231 @@
+$NetBSD: patch-target_i386_nvmm_nvmm-all.c,v 1.1 2021/05/24 14:22:08 ryoon Exp $
+
+--- target/i386/nvmm/nvmm-all.c.orig 2021-05-06 05:09:24.911125954 +0000
++++ target/i386/nvmm/nvmm-all.c
+@@ -0,0 +1,1226 @@
++/*
++ * Copyright (c) 2018-2019 Maxime Villard, All rights reserved.
++ *
++ * NetBSD Virtual Machine Monitor (NVMM) accelerator for QEMU.
++ *
++ * This work is licensed under the terms of the GNU GPL, version 2 or later.
++ * See the COPYING file in the top-level directory.
++ */
++
++#include "qemu/osdep.h"
++#include "cpu.h"
++#include "exec/address-spaces.h"
++#include "exec/ioport.h"
++#include "qemu-common.h"
++#include "qemu/accel.h"
++#include "sysemu/nvmm.h"
++#include "sysemu/cpus.h"
++#include "sysemu/runstate.h"
++#include "qemu/main-loop.h"
++#include "qemu/error-report.h"
++#include "qapi/error.h"
++#include "qemu/queue.h"
++#include "migration/blocker.h"
++#include "strings.h"
++
++#include "nvmm-accel-ops.h"
++
++#include <nvmm.h>
++
++struct qemu_vcpu {
++ struct nvmm_vcpu vcpu;
++ uint8_t tpr;
++ bool stop;
++
++ /* Window-exiting for INTs/NMIs. */
++ bool int_window_exit;
++ bool nmi_window_exit;
++
++ /* The guest is in an interrupt shadow (POP SS, etc). */
++ bool int_shadow;
++};
++
++struct qemu_machine {
++ struct nvmm_capability cap;
++ struct nvmm_machine mach;
++};
++
++/* -------------------------------------------------------------------------- */
++
++static bool nvmm_allowed;
++static struct qemu_machine qemu_mach;
++
++static struct qemu_vcpu *
++get_qemu_vcpu(CPUState *cpu)
++{
++ return (struct qemu_vcpu *)cpu->hax_vcpu;
++}
++
++static struct nvmm_machine *
++get_nvmm_mach(void)
++{
++ return &qemu_mach.mach;
++}
++
++/* -------------------------------------------------------------------------- */
++
++static void
++nvmm_set_segment(struct nvmm_x64_state_seg *nseg, const SegmentCache *qseg)
++{
++ uint32_t attrib = qseg->flags;
++
++ nseg->selector = qseg->selector;
++ nseg->limit = qseg->limit;
++ nseg->base = qseg->base;
++ nseg->attrib.type = __SHIFTOUT(attrib, DESC_TYPE_MASK);
++ nseg->attrib.s = __SHIFTOUT(attrib, DESC_S_MASK);
++ nseg->attrib.dpl = __SHIFTOUT(attrib, DESC_DPL_MASK);
++ nseg->attrib.p = __SHIFTOUT(attrib, DESC_P_MASK);
++ nseg->attrib.avl = __SHIFTOUT(attrib, DESC_AVL_MASK);
++ nseg->attrib.l = __SHIFTOUT(attrib, DESC_L_MASK);
++ nseg->attrib.def = __SHIFTOUT(attrib, DESC_B_MASK);
++ nseg->attrib.g = __SHIFTOUT(attrib, DESC_G_MASK);
++}
++
++static void
++nvmm_set_registers(CPUState *cpu)
++{
++ struct CPUX86State *env = (CPUArchState *)cpu->env_ptr;
++ struct nvmm_machine *mach = get_nvmm_mach();
++ struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
++ struct nvmm_vcpu *vcpu = &qcpu->vcpu;
++ struct nvmm_x64_state *state = vcpu->state;
++ uint64_t bitmap;
++ size_t i;
++ int ret;
++
++ assert(cpu_is_stopped(cpu) || qemu_cpu_is_self(cpu));
++
++ /* GPRs. */
++ state->gprs[NVMM_X64_GPR_RAX] = env->regs[R_EAX];
++ state->gprs[NVMM_X64_GPR_RCX] = env->regs[R_ECX];
++ state->gprs[NVMM_X64_GPR_RDX] = env->regs[R_EDX];
++ state->gprs[NVMM_X64_GPR_RBX] = env->regs[R_EBX];
++ state->gprs[NVMM_X64_GPR_RSP] = env->regs[R_ESP];
++ state->gprs[NVMM_X64_GPR_RBP] = env->regs[R_EBP];
++ state->gprs[NVMM_X64_GPR_RSI] = env->regs[R_ESI];
++ state->gprs[NVMM_X64_GPR_RDI] = env->regs[R_EDI];
++#ifdef TARGET_X86_64
++ state->gprs[NVMM_X64_GPR_R8] = env->regs[R_R8];
++ state->gprs[NVMM_X64_GPR_R9] = env->regs[R_R9];
++ state->gprs[NVMM_X64_GPR_R10] = env->regs[R_R10];
++ state->gprs[NVMM_X64_GPR_R11] = env->regs[R_R11];
++ state->gprs[NVMM_X64_GPR_R12] = env->regs[R_R12];
++ state->gprs[NVMM_X64_GPR_R13] = env->regs[R_R13];
++ state->gprs[NVMM_X64_GPR_R14] = env->regs[R_R14];
++ state->gprs[NVMM_X64_GPR_R15] = env->regs[R_R15];
++#endif
++
++ /* RIP and RFLAGS. */
++ state->gprs[NVMM_X64_GPR_RIP] = env->eip;
++ state->gprs[NVMM_X64_GPR_RFLAGS] = env->eflags;
++
++ /* Segments. */
++ nvmm_set_segment(&state->segs[NVMM_X64_SEG_CS], &env->segs[R_CS]);
++ nvmm_set_segment(&state->segs[NVMM_X64_SEG_DS], &env->segs[R_DS]);
++ nvmm_set_segment(&state->segs[NVMM_X64_SEG_ES], &env->segs[R_ES]);
++ nvmm_set_segment(&state->segs[NVMM_X64_SEG_FS], &env->segs[R_FS]);
++ nvmm_set_segment(&state->segs[NVMM_X64_SEG_GS], &env->segs[R_GS]);
++ nvmm_set_segment(&state->segs[NVMM_X64_SEG_SS], &env->segs[R_SS]);
++
++ /* Special segments. */
++ nvmm_set_segment(&state->segs[NVMM_X64_SEG_GDT], &env->gdt);
++ nvmm_set_segment(&state->segs[NVMM_X64_SEG_LDT], &env->ldt);
++ nvmm_set_segment(&state->segs[NVMM_X64_SEG_TR], &env->tr);
++ nvmm_set_segment(&state->segs[NVMM_X64_SEG_IDT], &env->idt);
++
++ /* Control registers. */
++ state->crs[NVMM_X64_CR_CR0] = env->cr[0];
++ state->crs[NVMM_X64_CR_CR2] = env->cr[2];
++ state->crs[NVMM_X64_CR_CR3] = env->cr[3];
++ state->crs[NVMM_X64_CR_CR4] = env->cr[4];
++ state->crs[NVMM_X64_CR_CR8] = qcpu->tpr;
++ state->crs[NVMM_X64_CR_XCR0] = env->xcr0;
++
++ /* Debug registers. */
++ state->drs[NVMM_X64_DR_DR0] = env->dr[0];
++ state->drs[NVMM_X64_DR_DR1] = env->dr[1];
++ state->drs[NVMM_X64_DR_DR2] = env->dr[2];
++ state->drs[NVMM_X64_DR_DR3] = env->dr[3];
++ state->drs[NVMM_X64_DR_DR6] = env->dr[6];
++ state->drs[NVMM_X64_DR_DR7] = env->dr[7];
++
++ /* FPU. */
++ state->fpu.fx_cw = env->fpuc;
++ state->fpu.fx_sw = (env->fpus & ~0x3800) | ((env->fpstt & 0x7) << 11);
++ state->fpu.fx_tw = 0;
++ for (i = 0; i < 8; i++) {
++ state->fpu.fx_tw |= (!env->fptags[i]) << i;
++ }
++ state->fpu.fx_opcode = env->fpop;
++ state->fpu.fx_ip.fa_64 = env->fpip;
++ state->fpu.fx_dp.fa_64 = env->fpdp;
++ state->fpu.fx_mxcsr = env->mxcsr;
++ state->fpu.fx_mxcsr_mask = 0x0000FFFF;
++ assert(sizeof(state->fpu.fx_87_ac) == sizeof(env->fpregs));
++ memcpy(state->fpu.fx_87_ac, env->fpregs, sizeof(env->fpregs));
++ for (i = 0; i < CPU_NB_REGS; i++) {
++ memcpy(&state->fpu.fx_xmm[i].xmm_bytes[0],
++ &env->xmm_regs[i].ZMM_Q(0), 8);
++ memcpy(&state->fpu.fx_xmm[i].xmm_bytes[8],
++ &env->xmm_regs[i].ZMM_Q(1), 8);
++ }
++
++ /* MSRs. */
++ state->msrs[NVMM_X64_MSR_EFER] = env->efer;
++ state->msrs[NVMM_X64_MSR_STAR] = env->star;
++#ifdef TARGET_X86_64
++ state->msrs[NVMM_X64_MSR_LSTAR] = env->lstar;
++ state->msrs[NVMM_X64_MSR_CSTAR] = env->cstar;
++ state->msrs[NVMM_X64_MSR_SFMASK] = env->fmask;
++ state->msrs[NVMM_X64_MSR_KERNELGSBASE] = env->kernelgsbase;
++#endif
++ state->msrs[NVMM_X64_MSR_SYSENTER_CS] = env->sysenter_cs;
++ state->msrs[NVMM_X64_MSR_SYSENTER_ESP] = env->sysenter_esp;
++ state->msrs[NVMM_X64_MSR_SYSENTER_EIP] = env->sysenter_eip;
++ state->msrs[NVMM_X64_MSR_PAT] = env->pat;
++ state->msrs[NVMM_X64_MSR_TSC] = env->tsc;
++
++ bitmap =
++ NVMM_X64_STATE_SEGS |
++ NVMM_X64_STATE_GPRS |
++ NVMM_X64_STATE_CRS |
++ NVMM_X64_STATE_DRS |
++ NVMM_X64_STATE_MSRS |
++ NVMM_X64_STATE_FPU;
++
++ ret = nvmm_vcpu_setstate(mach, vcpu, bitmap);
++ if (ret == -1) {
++ error_report("NVMM: Failed to set virtual processor context,"
++ " error=%d", errno);
++ }
++}
++
++static void
++nvmm_get_segment(SegmentCache *qseg, const struct nvmm_x64_state_seg *nseg)
++{
++ qseg->selector = nseg->selector;
++ qseg->limit = nseg->limit;
++ qseg->base = nseg->base;
++
++ qseg->flags =
++ __SHIFTIN((uint32_t)nseg->attrib.type, DESC_TYPE_MASK) |
++ __SHIFTIN((uint32_t)nseg->attrib.s, DESC_S_MASK) |
++ __SHIFTIN((uint32_t)nseg->attrib.dpl, DESC_DPL_MASK) |
++ __SHIFTIN((uint32_t)nseg->attrib.p, DESC_P_MASK) |
++ __SHIFTIN((uint32_t)nseg->attrib.avl, DESC_AVL_MASK) |
++ __SHIFTIN((uint32_t)nseg->attrib.l, DESC_L_MASK) |
++ __SHIFTIN((uint32_t)nseg->attrib.def, DESC_B_MASK) |
++ __SHIFTIN((uint32_t)nseg->attrib.g, DESC_G_MASK);
++}
++
++static void
++nvmm_get_registers(CPUState *cpu)
++{
++ struct CPUX86State *env = (CPUArchState *)cpu->env_ptr;
++ struct nvmm_machine *mach = get_nvmm_mach();
++ struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
++ struct nvmm_vcpu *vcpu = &qcpu->vcpu;
++ X86CPU *x86_cpu = X86_CPU(cpu);
++ struct nvmm_x64_state *state = vcpu->state;
++ uint64_t bitmap, tpr;
++ size_t i;
++ int ret;
++
++ assert(cpu_is_stopped(cpu) || qemu_cpu_is_self(cpu));
++
++ bitmap =
++ NVMM_X64_STATE_SEGS |
++ NVMM_X64_STATE_GPRS |
++ NVMM_X64_STATE_CRS |
++ NVMM_X64_STATE_DRS |
++ NVMM_X64_STATE_MSRS |
++ NVMM_X64_STATE_FPU;
++
++ ret = nvmm_vcpu_getstate(mach, vcpu, bitmap);
++ if (ret == -1) {
++ error_report("NVMM: Failed to get virtual processor context,"
++ " error=%d", errno);
++ }
++
++ /* GPRs. */
++ env->regs[R_EAX] = state->gprs[NVMM_X64_GPR_RAX];
++ env->regs[R_ECX] = state->gprs[NVMM_X64_GPR_RCX];
++ env->regs[R_EDX] = state->gprs[NVMM_X64_GPR_RDX];
++ env->regs[R_EBX] = state->gprs[NVMM_X64_GPR_RBX];
++ env->regs[R_ESP] = state->gprs[NVMM_X64_GPR_RSP];
++ env->regs[R_EBP] = state->gprs[NVMM_X64_GPR_RBP];
++ env->regs[R_ESI] = state->gprs[NVMM_X64_GPR_RSI];
++ env->regs[R_EDI] = state->gprs[NVMM_X64_GPR_RDI];
++#ifdef TARGET_X86_64
++ env->regs[R_R8] = state->gprs[NVMM_X64_GPR_R8];
++ env->regs[R_R9] = state->gprs[NVMM_X64_GPR_R9];
++ env->regs[R_R10] = state->gprs[NVMM_X64_GPR_R10];
++ env->regs[R_R11] = state->gprs[NVMM_X64_GPR_R11];
++ env->regs[R_R12] = state->gprs[NVMM_X64_GPR_R12];
++ env->regs[R_R13] = state->gprs[NVMM_X64_GPR_R13];
++ env->regs[R_R14] = state->gprs[NVMM_X64_GPR_R14];
++ env->regs[R_R15] = state->gprs[NVMM_X64_GPR_R15];
++#endif
++
++ /* RIP and RFLAGS. */
++ env->eip = state->gprs[NVMM_X64_GPR_RIP];
++ env->eflags = state->gprs[NVMM_X64_GPR_RFLAGS];
++
++ /* Segments. */
++ nvmm_get_segment(&env->segs[R_ES], &state->segs[NVMM_X64_SEG_ES]);
++ nvmm_get_segment(&env->segs[R_CS], &state->segs[NVMM_X64_SEG_CS]);
++ nvmm_get_segment(&env->segs[R_SS], &state->segs[NVMM_X64_SEG_SS]);
++ nvmm_get_segment(&env->segs[R_DS], &state->segs[NVMM_X64_SEG_DS]);
++ nvmm_get_segment(&env->segs[R_FS], &state->segs[NVMM_X64_SEG_FS]);
++ nvmm_get_segment(&env->segs[R_GS], &state->segs[NVMM_X64_SEG_GS]);
++
++ /* Special segments. */
++ nvmm_get_segment(&env->gdt, &state->segs[NVMM_X64_SEG_GDT]);
++ nvmm_get_segment(&env->ldt, &state->segs[NVMM_X64_SEG_LDT]);
++ nvmm_get_segment(&env->tr, &state->segs[NVMM_X64_SEG_TR]);
++ nvmm_get_segment(&env->idt, &state->segs[NVMM_X64_SEG_IDT]);
++
++ /* Control registers. */
++ env->cr[0] = state->crs[NVMM_X64_CR_CR0];
++ env->cr[2] = state->crs[NVMM_X64_CR_CR2];
++ env->cr[3] = state->crs[NVMM_X64_CR_CR3];
++ env->cr[4] = state->crs[NVMM_X64_CR_CR4];
++ tpr = state->crs[NVMM_X64_CR_CR8];
++ if (tpr != qcpu->tpr) {
++ qcpu->tpr = tpr;
++ cpu_set_apic_tpr(x86_cpu->apic_state, tpr);
++ }
++ env->xcr0 = state->crs[NVMM_X64_CR_XCR0];
++
++ /* Debug registers. */
++ env->dr[0] = state->drs[NVMM_X64_DR_DR0];
++ env->dr[1] = state->drs[NVMM_X64_DR_DR1];
++ env->dr[2] = state->drs[NVMM_X64_DR_DR2];
++ env->dr[3] = state->drs[NVMM_X64_DR_DR3];
++ env->dr[6] = state->drs[NVMM_X64_DR_DR6];
++ env->dr[7] = state->drs[NVMM_X64_DR_DR7];
++
++ /* FPU. */
++ env->fpuc = state->fpu.fx_cw;
++ env->fpstt = (state->fpu.fx_sw >> 11) & 0x7;
++ env->fpus = state->fpu.fx_sw & ~0x3800;
++ for (i = 0; i < 8; i++) {
++ env->fptags[i] = !((state->fpu.fx_tw >> i) & 1);
++ }
++ env->fpop = state->fpu.fx_opcode;
++ env->fpip = state->fpu.fx_ip.fa_64;
++ env->fpdp = state->fpu.fx_dp.fa_64;
++ env->mxcsr = state->fpu.fx_mxcsr;
++ assert(sizeof(state->fpu.fx_87_ac) == sizeof(env->fpregs));
++ memcpy(env->fpregs, state->fpu.fx_87_ac, sizeof(env->fpregs));
++ for (i = 0; i < CPU_NB_REGS; i++) {
++ memcpy(&env->xmm_regs[i].ZMM_Q(0),
++ &state->fpu.fx_xmm[i].xmm_bytes[0], 8);
++ memcpy(&env->xmm_regs[i].ZMM_Q(1),
++ &state->fpu.fx_xmm[i].xmm_bytes[8], 8);
++ }
++
++ /* MSRs. */
++ env->efer = state->msrs[NVMM_X64_MSR_EFER];
++ env->star = state->msrs[NVMM_X64_MSR_STAR];
++#ifdef TARGET_X86_64
++ env->lstar = state->msrs[NVMM_X64_MSR_LSTAR];
++ env->cstar = state->msrs[NVMM_X64_MSR_CSTAR];
++ env->fmask = state->msrs[NVMM_X64_MSR_SFMASK];
++ env->kernelgsbase = state->msrs[NVMM_X64_MSR_KERNELGSBASE];
++#endif
++ env->sysenter_cs = state->msrs[NVMM_X64_MSR_SYSENTER_CS];
++ env->sysenter_esp = state->msrs[NVMM_X64_MSR_SYSENTER_ESP];
++ env->sysenter_eip = state->msrs[NVMM_X64_MSR_SYSENTER_EIP];
++ env->pat = state->msrs[NVMM_X64_MSR_PAT];
++ env->tsc = state->msrs[NVMM_X64_MSR_TSC];
++
++ x86_update_hflags(env);
++}
++
++static bool
++nvmm_can_take_int(CPUState *cpu)
++{
++ struct CPUX86State *env = (CPUArchState *)cpu->env_ptr;
++ struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
++ struct nvmm_vcpu *vcpu = &qcpu->vcpu;
++ struct nvmm_machine *mach = get_nvmm_mach();
++
++ if (qcpu->int_window_exit) {
++ return false;
++ }
++
++ if (qcpu->int_shadow || !(env->eflags & IF_MASK)) {
++ struct nvmm_x64_state *state = vcpu->state;
++
++ /* Exit on interrupt window. */
++ nvmm_vcpu_getstate(mach, vcpu, NVMM_X64_STATE_INTR);
++ state->intr.int_window_exiting = 1;
++ nvmm_vcpu_setstate(mach, vcpu, NVMM_X64_STATE_INTR);
++
++ return false;
++ }
++
++ return true;
++}
++
++static bool
++nvmm_can_take_nmi(CPUState *cpu)
++{
++ struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
++
++ /*
++ * Contrary to INTs, NMIs always schedule an exit when they are
++ * completed. Therefore, if window-exiting is enabled, it means
++ * NMIs are blocked.
++ */
++ if (qcpu->nmi_window_exit) {
++ return false;
++ }
++
++ return true;
++}
++
++/*
++ * Called before the VCPU is run. We inject events generated by the I/O
++ * thread, and synchronize the guest TPR.
++ */
++static void
++nvmm_vcpu_pre_run(CPUState *cpu)
++{
++ struct CPUX86State *env = (CPUArchState *)cpu->env_ptr;
++ struct nvmm_machine *mach = get_nvmm_mach();
++ struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
++ struct nvmm_vcpu *vcpu = &qcpu->vcpu;
++ X86CPU *x86_cpu = X86_CPU(cpu);
++ struct nvmm_x64_state *state = vcpu->state;
++ struct nvmm_vcpu_event *event = vcpu->event;
++ bool has_event = false;
++ bool sync_tpr = false;
++ uint8_t tpr;
++ int ret;
++
++ qemu_mutex_lock_iothread();
++
++ tpr = cpu_get_apic_tpr(x86_cpu->apic_state);
++ if (tpr != qcpu->tpr) {
++ qcpu->tpr = tpr;
++ sync_tpr = true;
++ }
++
++ /*
++ * Force the VCPU out of its inner loop to process any INIT requests
++ * or commit pending TPR access.
++ */
++ if (cpu->interrupt_request & (CPU_INTERRUPT_INIT | CPU_INTERRUPT_TPR)) {
++ cpu->exit_request = 1;
++ }
++
++ if (!has_event && (cpu->interrupt_request & CPU_INTERRUPT_NMI)) {
++ if (nvmm_can_take_nmi(cpu)) {
++ cpu->interrupt_request &= ~CPU_INTERRUPT_NMI;
++ event->type = NVMM_VCPU_EVENT_INTR;
++ event->vector = 2;
++ has_event = true;
++ }
++ }
++
++ if (!has_event && (cpu->interrupt_request & CPU_INTERRUPT_HARD)) {
++ if (nvmm_can_take_int(cpu)) {
++ cpu->interrupt_request &= ~CPU_INTERRUPT_HARD;
++ event->type = NVMM_VCPU_EVENT_INTR;
++ event->vector = cpu_get_pic_interrupt(env);
++ has_event = true;
++ }
++ }
++
++ /* Don't want SMIs. */
++ if (cpu->interrupt_request & CPU_INTERRUPT_SMI) {
++ cpu->interrupt_request &= ~CPU_INTERRUPT_SMI;
++ }
++
++ if (sync_tpr) {
++ ret = nvmm_vcpu_getstate(mach, vcpu, NVMM_X64_STATE_CRS);
++ if (ret == -1) {
++ error_report("NVMM: Failed to get CPU state,"
++ " error=%d", errno);
++ }
++
++ state->crs[NVMM_X64_CR_CR8] = qcpu->tpr;
++
++ ret = nvmm_vcpu_setstate(mach, vcpu, NVMM_X64_STATE_CRS);
++ if (ret == -1) {
++ error_report("NVMM: Failed to set CPU state,"
++ " error=%d", errno);
++ }
++ }
++
++ if (has_event) {
++ ret = nvmm_vcpu_inject(mach, vcpu);
++ if (ret == -1) {
++ error_report("NVMM: Failed to inject event,"
++ " error=%d", errno);
++ }
++ }
++
++ qemu_mutex_unlock_iothread();
++}
++
++/*
++ * Called after the VCPU ran. We synchronize the host view of the TPR and
++ * RFLAGS.
++ */
++static void
++nvmm_vcpu_post_run(CPUState *cpu, struct nvmm_vcpu_exit *exit)
++{
++ struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
++ struct CPUX86State *env = (CPUArchState *)cpu->env_ptr;
++ X86CPU *x86_cpu = X86_CPU(cpu);
++ uint64_t tpr;
++
++ env->eflags = exit->exitstate.rflags;
++ qcpu->int_shadow = exit->exitstate.int_shadow;
++ qcpu->int_window_exit = exit->exitstate.int_window_exiting;
++ qcpu->nmi_window_exit = exit->exitstate.nmi_window_exiting;
++
++ tpr = exit->exitstate.cr8;
++ if (qcpu->tpr != tpr) {
++ qcpu->tpr = tpr;
++ qemu_mutex_lock_iothread();
++ cpu_set_apic_tpr(x86_cpu->apic_state, qcpu->tpr);
++ qemu_mutex_unlock_iothread();
++ }
++}
++
++/* -------------------------------------------------------------------------- */
++
++static void
++nvmm_io_callback(struct nvmm_io *io)
++{
++ MemTxAttrs attrs = { 0 };
++ int ret;
++
++ ret = address_space_rw(&address_space_io, io->port, attrs, io->data,
++ io->size, !io->in);
++ if (ret != MEMTX_OK) {
++ error_report("NVMM: I/O Transaction Failed "
++ "[%s, port=%u, size=%zu]", (io->in ? "in" : "out"),
++ io->port, io->size);
++ }
++
++ /* Needed, otherwise infinite loop. */
++ current_cpu->vcpu_dirty = false;
++}
++
++static void
++nvmm_mem_callback(struct nvmm_mem *mem)
++{
++ cpu_physical_memory_rw(mem->gpa, mem->data, mem->size, mem->write);
++
++ /* Needed, otherwise infinite loop. */
++ current_cpu->vcpu_dirty = false;
++}
++
++static struct nvmm_assist_callbacks nvmm_callbacks = {
++ .io = nvmm_io_callback,
++ .mem = nvmm_mem_callback
++};
++
++/* -------------------------------------------------------------------------- */
++
++static int
++nvmm_handle_mem(struct nvmm_machine *mach, struct nvmm_vcpu *vcpu)
++{
++ int ret;
++
++ ret = nvmm_assist_mem(mach, vcpu);
++ if (ret == -1) {
++ error_report("NVMM: Mem Assist Failed [gpa=%p]",
++ (void *)vcpu->exit->u.mem.gpa);
++ }
++
++ return ret;
++}
++
++static int
++nvmm_handle_io(struct nvmm_machine *mach, struct nvmm_vcpu *vcpu)
++{
++ int ret;
++
++ ret = nvmm_assist_io(mach, vcpu);
++ if (ret == -1) {
++ error_report("NVMM: I/O Assist Failed [port=%d]",
++ (int)vcpu->exit->u.io.port);
++ }
++
++ return ret;
++}
++
++static int
++nvmm_handle_rdmsr(struct nvmm_machine *mach, CPUState *cpu,
++ struct nvmm_vcpu_exit *exit)
++{
++ struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
++ struct nvmm_vcpu *vcpu = &qcpu->vcpu;
++ X86CPU *x86_cpu = X86_CPU(cpu);
++ struct nvmm_x64_state *state = vcpu->state;
++ uint64_t val;
++ int ret;
++
++ switch (exit->u.rdmsr.msr) {
++ case MSR_IA32_APICBASE:
++ val = cpu_get_apic_base(x86_cpu->apic_state);
++ break;
++ case MSR_MTRRcap:
++ case MSR_MTRRdefType:
++ case MSR_MCG_CAP:
++ case MSR_MCG_STATUS:
++ val = 0;
++ break;
++ default: /* More MSRs to add? */
++ val = 0;
++ error_report("NVMM: Unexpected RDMSR 0x%x, ignored",
++ exit->u.rdmsr.msr);
++ break;
++ }
++
++ ret = nvmm_vcpu_getstate(mach, vcpu, NVMM_X64_STATE_GPRS);
++ if (ret == -1) {
++ return -1;
++ }
++
++ state->gprs[NVMM_X64_GPR_RAX] = (val & 0xFFFFFFFF);
++ state->gprs[NVMM_X64_GPR_RDX] = (val >> 32);
++ state->gprs[NVMM_X64_GPR_RIP] = exit->u.rdmsr.npc;
++
++ ret = nvmm_vcpu_setstate(mach, vcpu, NVMM_X64_STATE_GPRS);
++ if (ret == -1) {
++ return -1;
++ }
++
++ return 0;
++}
++
++static int
++nvmm_handle_wrmsr(struct nvmm_machine *mach, CPUState *cpu,
++ struct nvmm_vcpu_exit *exit)
++{
++ struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
++ struct nvmm_vcpu *vcpu = &qcpu->vcpu;
++ X86CPU *x86_cpu = X86_CPU(cpu);
++ struct nvmm_x64_state *state = vcpu->state;
++ uint64_t val;
++ int ret;
++
++ val = exit->u.wrmsr.val;
++
++ switch (exit->u.wrmsr.msr) {
++ case MSR_IA32_APICBASE:
++ cpu_set_apic_base(x86_cpu->apic_state, val);
++ break;
++ case MSR_MTRRdefType:
++ case MSR_MCG_STATUS:
++ break;
++ default: /* More MSRs to add? */
++ error_report("NVMM: Unexpected WRMSR 0x%x [val=0x%lx], ignored",
++ exit->u.wrmsr.msr, val);
++ break;
++ }
++
++ ret = nvmm_vcpu_getstate(mach, vcpu, NVMM_X64_STATE_GPRS);
++ if (ret == -1) {
++ return -1;
++ }
++
++ state->gprs[NVMM_X64_GPR_RIP] = exit->u.wrmsr.npc;
++
++ ret = nvmm_vcpu_setstate(mach, vcpu, NVMM_X64_STATE_GPRS);
++ if (ret == -1) {
++ return -1;
++ }
++
++ return 0;
++}
++
++static int
++nvmm_handle_halted(struct nvmm_machine *mach, CPUState *cpu,
++ struct nvmm_vcpu_exit *exit)
++{
++ struct CPUX86State *env = (CPUArchState *)cpu->env_ptr;
++ int ret = 0;
++
++ qemu_mutex_lock_iothread();
++
++ if (!((cpu->interrupt_request & CPU_INTERRUPT_HARD) &&
++ (env->eflags & IF_MASK)) &&
++ !(cpu->interrupt_request & CPU_INTERRUPT_NMI)) {
++ cpu->exception_index = EXCP_HLT;
++ cpu->halted = true;
++ ret = 1;
++ }
++
++ qemu_mutex_unlock_iothread();
++
++ return ret;
++}
++
++static int
++nvmm_inject_ud(struct nvmm_machine *mach, struct nvmm_vcpu *vcpu)
++{
++ struct nvmm_vcpu_event *event = vcpu->event;
++
++ event->type = NVMM_VCPU_EVENT_EXCP;
++ event->vector = 6;
++ event->u.excp.error = 0;
++
++ return nvmm_vcpu_inject(mach, vcpu);
++}
++
++static int
++nvmm_vcpu_loop(CPUState *cpu)
++{
++ struct CPUX86State *env = (CPUArchState *)cpu->env_ptr;
++ struct nvmm_machine *mach = get_nvmm_mach();
++ struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
++ struct nvmm_vcpu *vcpu = &qcpu->vcpu;
++ X86CPU *x86_cpu = X86_CPU(cpu);
++ struct nvmm_vcpu_exit *exit = vcpu->exit;
++ int ret;
++
++ /*
++ * Some asynchronous events must be handled outside of the inner
++ * VCPU loop. They are handled here.
++ */
++ if (cpu->interrupt_request & CPU_INTERRUPT_INIT) {
++ nvmm_cpu_synchronize_state(cpu);
++ do_cpu_init(x86_cpu);
++ /* set int/nmi windows back to the reset state */
++ }
++ if (cpu->interrupt_request & CPU_INTERRUPT_POLL) {
++ cpu->interrupt_request &= ~CPU_INTERRUPT_POLL;
++ apic_poll_irq(x86_cpu->apic_state);
++ }
++ if (((cpu->interrupt_request & CPU_INTERRUPT_HARD) &&
++ (env->eflags & IF_MASK)) ||
++ (cpu->interrupt_request & CPU_INTERRUPT_NMI)) {
++ cpu->halted = false;
++ }
++ if (cpu->interrupt_request & CPU_INTERRUPT_SIPI) {
++ nvmm_cpu_synchronize_state(cpu);
++ do_cpu_sipi(x86_cpu);
++ }
++ if (cpu->interrupt_request & CPU_INTERRUPT_TPR) {
++ cpu->interrupt_request &= ~CPU_INTERRUPT_TPR;
++ nvmm_cpu_synchronize_state(cpu);
++ apic_handle_tpr_access_report(x86_cpu->apic_state, env->eip,
++ env->tpr_access_type);
++ }
++
++ if (cpu->halted) {
++ cpu->exception_index = EXCP_HLT;
++ qatomic_set(&cpu->exit_request, false);
++ return 0;
++ }
++
++ qemu_mutex_unlock_iothread();
++ cpu_exec_start(cpu);
++
++ /*
++ * Inner VCPU loop.
++ */
++ do {
++ if (cpu->vcpu_dirty) {
++ nvmm_set_registers(cpu);
++ cpu->vcpu_dirty = false;
++ }
++
++ if (qcpu->stop) {
++ cpu->exception_index = EXCP_INTERRUPT;
++ qcpu->stop = false;
++ ret = 1;
++ break;
++ }
++
++ nvmm_vcpu_pre_run(cpu);
++
++ if (qatomic_read(&cpu->exit_request)) {
++ nvmm_vcpu_stop(vcpu);
++ }
++
++ /* Read exit_request before the kernel reads the immediate exit flag */
++ smp_rmb();
++ ret = nvmm_vcpu_run(mach, vcpu);
++ if (ret == -1) {
++ error_report("NVMM: Failed to exec a virtual processor,"
++ " error=%d", errno);
++ break;
++ }
++
++ nvmm_vcpu_post_run(cpu, exit);
++
++ switch (exit->reason) {
++ case NVMM_VCPU_EXIT_NONE:
++ break;
++ case NVMM_VCPU_EXIT_STOPPED:
++ /*
++ * The kernel cleared the immediate exit flag; cpu->exit_request
++ * must be cleared after
++ */
++ smp_wmb();
++ qcpu->stop = true;
++ break;
++ case NVMM_VCPU_EXIT_MEMORY:
++ ret = nvmm_handle_mem(mach, vcpu);
++ break;
++ case NVMM_VCPU_EXIT_IO:
++ ret = nvmm_handle_io(mach, vcpu);
++ break;
++ case NVMM_VCPU_EXIT_INT_READY:
++ case NVMM_VCPU_EXIT_NMI_READY:
++ case NVMM_VCPU_EXIT_TPR_CHANGED:
++ break;
++ case NVMM_VCPU_EXIT_HALTED:
++ ret = nvmm_handle_halted(mach, cpu, exit);
++ break;
++ case NVMM_VCPU_EXIT_SHUTDOWN:
++ qemu_system_reset_request(SHUTDOWN_CAUSE_GUEST_RESET);
++ cpu->exception_index = EXCP_INTERRUPT;
++ ret = 1;
++ break;
++ case NVMM_VCPU_EXIT_RDMSR:
++ ret = nvmm_handle_rdmsr(mach, cpu, exit);
++ break;
++ case NVMM_VCPU_EXIT_WRMSR:
++ ret = nvmm_handle_wrmsr(mach, cpu, exit);
++ break;
++ case NVMM_VCPU_EXIT_MONITOR:
++ case NVMM_VCPU_EXIT_MWAIT:
++ ret = nvmm_inject_ud(mach, vcpu);
++ break;
++ default:
++ error_report("NVMM: Unexpected VM exit code 0x%lx [hw=0x%lx]",
++ exit->reason, exit->u.inv.hwcode);
++ nvmm_get_registers(cpu);
++ qemu_mutex_lock_iothread();
++ qemu_system_guest_panicked(cpu_get_crash_info(cpu));
++ qemu_mutex_unlock_iothread();
++ ret = -1;
++ break;
++ }
++ } while (ret == 0);
++
++ cpu_exec_end(cpu);
++ qemu_mutex_lock_iothread();
++
++ qatomic_set(&cpu->exit_request, false);
++
++ return ret < 0;
++}
++
++/* -------------------------------------------------------------------------- */
++
++static void
++do_nvmm_cpu_synchronize_state(CPUState *cpu, run_on_cpu_data arg)
++{
++ nvmm_get_registers(cpu);
++ cpu->vcpu_dirty = true;
++}
++
++static void
++do_nvmm_cpu_synchronize_post_reset(CPUState *cpu, run_on_cpu_data arg)
++{
++ nvmm_set_registers(cpu);
++ cpu->vcpu_dirty = false;
++}
++
++static void
++do_nvmm_cpu_synchronize_post_init(CPUState *cpu, run_on_cpu_data arg)
++{
++ nvmm_set_registers(cpu);
++ cpu->vcpu_dirty = false;
++}
++
++static void
++do_nvmm_cpu_synchronize_pre_loadvm(CPUState *cpu, run_on_cpu_data arg)
++{
++ cpu->vcpu_dirty = true;
++}
++
++void nvmm_cpu_synchronize_state(CPUState *cpu)
++{
++ if (!cpu->vcpu_dirty) {
++ run_on_cpu(cpu, do_nvmm_cpu_synchronize_state, RUN_ON_CPU_NULL);
++ }
++}
++
++void nvmm_cpu_synchronize_post_reset(CPUState *cpu)
++{
++ run_on_cpu(cpu, do_nvmm_cpu_synchronize_post_reset, RUN_ON_CPU_NULL);
++}
++
++void nvmm_cpu_synchronize_post_init(CPUState *cpu)
++{
++ run_on_cpu(cpu, do_nvmm_cpu_synchronize_post_init, RUN_ON_CPU_NULL);
++}
++
++void nvmm_cpu_synchronize_pre_loadvm(CPUState *cpu)
++{
++ run_on_cpu(cpu, do_nvmm_cpu_synchronize_pre_loadvm, RUN_ON_CPU_NULL);
++}
++
++/* -------------------------------------------------------------------------- */
++
++static Error *nvmm_migration_blocker;
++
++/*
++ * The nvmm_vcpu_stop() mechanism breaks races between entering the VMM
++ * and another thread signaling the vCPU thread to exit.
++ */
++
++static void
++nvmm_ipi_signal(int sigcpu)
++{
++ if (current_cpu) {
++ struct qemu_vcpu *qcpu = get_qemu_vcpu(current_cpu);
++ struct nvmm_vcpu *vcpu = &qcpu->vcpu;
++ nvmm_vcpu_stop(vcpu);
++ }
++}
++
++static void
++nvmm_init_cpu_signals(void)
++{
++ struct sigaction sigact;
++ sigset_t set;
++
++ /* Install the IPI handler. */
++ memset(&sigact, 0, sizeof(sigact));
++ sigact.sa_handler = nvmm_ipi_signal;
++ sigaction(SIG_IPI, &sigact, NULL);
++
++ /* Allow IPIs on the current thread. */
++ sigprocmask(SIG_BLOCK, NULL, &set);
++ sigdelset(&set, SIG_IPI);
++ pthread_sigmask(SIG_SETMASK, &set, NULL);
++}
++
++int
++nvmm_init_vcpu(CPUState *cpu)
++{
++ struct nvmm_machine *mach = get_nvmm_mach();
++ struct nvmm_vcpu_conf_cpuid cpuid;
++ struct nvmm_vcpu_conf_tpr tpr;
++ Error *local_error = NULL;
++ struct qemu_vcpu *qcpu;
++ int ret, err;
++
++ nvmm_init_cpu_signals();
++
++ if (nvmm_migration_blocker == NULL) {
++ error_setg(&nvmm_migration_blocker,
++ "NVMM: Migration not supported");
++
++ (void)migrate_add_blocker(nvmm_migration_blocker, &local_error);
++ if (local_error) {
++ error_report_err(local_error);
++ migrate_del_blocker(nvmm_migration_blocker);
++ error_free(nvmm_migration_blocker);
++ return -EINVAL;
++ }
++ }
++
++ qcpu = g_malloc0(sizeof(*qcpu));
++ if (qcpu == NULL) {
++ error_report("NVMM: Failed to allocate VCPU context.");
++ return -ENOMEM;
++ }
++
++ ret = nvmm_vcpu_create(mach, cpu->cpu_index, &qcpu->vcpu);
++ if (ret == -1) {
++ err = errno;
++ error_report("NVMM: Failed to create a virtual processor,"
++ " error=%d", err);
++ g_free(qcpu);
++ return -err;
++ }
++
++ memset(&cpuid, 0, sizeof(cpuid));
++ cpuid.mask = 1;
++ cpuid.leaf = 0x00000001;
++ cpuid.u.mask.set.edx = CPUID_MCE | CPUID_MCA | CPUID_MTRR;
++ ret = nvmm_vcpu_configure(mach, &qcpu->vcpu, NVMM_VCPU_CONF_CPUID,
++ &cpuid);
++ if (ret == -1) {
++ err = errno;
++ error_report("NVMM: Failed to configure a virtual processor,"
++ " error=%d", err);
++ g_free(qcpu);
++ return -err;
++ }
++
++ ret = nvmm_vcpu_configure(mach, &qcpu->vcpu, NVMM_VCPU_CONF_CALLBACKS,
++ &nvmm_callbacks);
++ if (ret == -1) {
++ err = errno;
++ error_report("NVMM: Failed to configure a virtual processor,"
++ " error=%d", err);
++ g_free(qcpu);
++ return -err;
++ }
++
++ if (qemu_mach.cap.arch.vcpu_conf_support & NVMM_CAP_ARCH_VCPU_CONF_TPR) {
++ memset(&tpr, 0, sizeof(tpr));
++ tpr.exit_changed = 1;
++ ret = nvmm_vcpu_configure(mach, &qcpu->vcpu, NVMM_VCPU_CONF_TPR, &tpr);
++ if (ret == -1) {
++ err = errno;
++ error_report("NVMM: Failed to configure a virtual processor,"
++ " error=%d", err);
++ g_free(qcpu);
++ return -err;
++ }
++ }
++
++ cpu->vcpu_dirty = true;
++ cpu->hax_vcpu = (struct hax_vcpu_state *)qcpu;
++
++ return 0;
++}
++
++int
++nvmm_vcpu_exec(CPUState *cpu)
++{
++ int ret, fatal;
++
++ while (1) {
++ if (cpu->exception_index >= EXCP_INTERRUPT) {
++ ret = cpu->exception_index;
++ cpu->exception_index = -1;
++ break;
++ }
++
++ fatal = nvmm_vcpu_loop(cpu);
++
++ if (fatal) {
++ error_report("NVMM: Failed to execute a VCPU.");
++ abort();
++ }
++ }
++
++ return ret;
++}
++
++void
++nvmm_destroy_vcpu(CPUState *cpu)
++{
++ struct nvmm_machine *mach = get_nvmm_mach();
++ struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
++
++ nvmm_vcpu_destroy(mach, &qcpu->vcpu);
++ g_free(cpu->hax_vcpu);
++}
++
++/* -------------------------------------------------------------------------- */
++
++static void
++nvmm_update_mapping(hwaddr start_pa, ram_addr_t size, uintptr_t hva,
++ bool add, bool rom, const char *name)
++{
++ struct nvmm_machine *mach = get_nvmm_mach();
++ int ret, prot;
++
++ if (add) {
++ prot = PROT_READ | PROT_EXEC;
++ if (!rom) {
++ prot |= PROT_WRITE;
++ }
++ ret = nvmm_gpa_map(mach, hva, start_pa, size, prot);
++ } else {
++ ret = nvmm_gpa_unmap(mach, hva, start_pa, size);
++ }
++
++ if (ret == -1) {
++ error_report("NVMM: Failed to %s GPA range '%s' PA:%p, "
++ "Size:%p bytes, HostVA:%p, error=%d",
++ (add ? "map" : "unmap"), name, (void *)(uintptr_t)start_pa,
++ (void *)size, (void *)hva, errno);
++ }
++}
++
++static void
++nvmm_process_section(MemoryRegionSection *section, int add)
++{
++ MemoryRegion *mr = section->mr;
++ hwaddr start_pa = section->offset_within_address_space;
++ ram_addr_t size = int128_get64(section->size);
++ unsigned int delta;
++ uintptr_t hva;
++
++ if (!memory_region_is_ram(mr)) {
++ return;
++ }
++
++ /* Adjust start_pa and size so that they are page-aligned. */
++ delta = qemu_real_host_page_size - (start_pa & ~qemu_real_host_page_mask);
++ delta &= ~qemu_real_host_page_mask;
++ if (delta > size) {
++ return;
++ }
++ start_pa += delta;
++ size -= delta;
++ size &= qemu_real_host_page_mask;
++ if (!size || (start_pa & ~qemu_real_host_page_mask)) {
++ return;
++ }
++
++ hva = (uintptr_t)memory_region_get_ram_ptr(mr) +
++ section->offset_within_region + delta;
++
++ nvmm_update_mapping(start_pa, size, hva, add,
++ memory_region_is_rom(mr), mr->name);
++}
++
++static void
++nvmm_region_add(MemoryListener *listener, MemoryRegionSection *section)
++{
++ memory_region_ref(section->mr);
++ nvmm_process_section(section, 1);
++}
++
++static void
++nvmm_region_del(MemoryListener *listener, MemoryRegionSection *section)
++{
++ nvmm_process_section(section, 0);
++ memory_region_unref(section->mr);
++}
++
++static void
++nvmm_transaction_begin(MemoryListener *listener)
++{
++ /* nothing */
++}
++
++static void
++nvmm_transaction_commit(MemoryListener *listener)
++{
++ /* nothing */
++}
++
++static void
++nvmm_log_sync(MemoryListener *listener, MemoryRegionSection *section)
++{
++ MemoryRegion *mr = section->mr;
++
++ if (!memory_region_is_ram(mr)) {
++ return;
++ }
++
++ memory_region_set_dirty(mr, 0, int128_get64(section->size));
++}
++
++static MemoryListener nvmm_memory_listener = {
++ .begin = nvmm_transaction_begin,
++ .commit = nvmm_transaction_commit,
++ .region_add = nvmm_region_add,
++ .region_del = nvmm_region_del,
++ .log_sync = nvmm_log_sync,
++ .priority = 10,
++};
++
++static void
++nvmm_ram_block_added(RAMBlockNotifier *n, void *host, size_t size)
++{
++ struct nvmm_machine *mach = get_nvmm_mach();
++ uintptr_t hva = (uintptr_t)host;
++ int ret;
++
++ ret = nvmm_hva_map(mach, hva, size);
++
++ if (ret == -1) {
++ error_report("NVMM: Failed to map HVA, HostVA:%p "
++ "Size:%p bytes, error=%d",
++ (void *)hva, (void *)size, errno);
++ }
++}
++
++static struct RAMBlockNotifier nvmm_ram_notifier = {
++ .ram_block_added = nvmm_ram_block_added
++};
++
++/* -------------------------------------------------------------------------- */
++
++static int
++nvmm_accel_init(MachineState *ms)
++{
++ int ret, err;
++
++ ret = nvmm_init();
++ if (ret == -1) {
++ err = errno;
++ error_report("NVMM: Initialization failed, error=%d", errno);
++ return -err;
++ }
++
++ ret = nvmm_capability(&qemu_mach.cap);
++ if (ret == -1) {
++ err = errno;
++ error_report("NVMM: Unable to fetch capability, error=%d", errno);
++ return -err;
++ }
++ if (qemu_mach.cap.version < NVMM_KERN_VERSION) {
++ error_report("NVMM: Unsupported version %u", qemu_mach.cap.version);
++ return -EPROGMISMATCH;
++ }
++ if (qemu_mach.cap.state_size != sizeof(struct nvmm_x64_state)) {
++ error_report("NVMM: Wrong state size %u", qemu_mach.cap.state_size);
++ return -EPROGMISMATCH;
++ }
++
++ ret = nvmm_machine_create(&qemu_mach.mach);
++ if (ret == -1) {
++ err = errno;
++ error_report("NVMM: Machine creation failed, error=%d", errno);
++ return -err;
++ }
++
++ memory_listener_register(&nvmm_memory_listener, &address_space_memory);
++ ram_block_notifier_add(&nvmm_ram_notifier);
++
++ printf("NetBSD Virtual Machine Monitor accelerator is operational\n");
++ return 0;
++}
++
++int
++nvmm_enabled(void)
++{
++ return nvmm_allowed;
++}
++
++static void
++nvmm_accel_class_init(ObjectClass *oc, void *data)
++{
++ AccelClass *ac = ACCEL_CLASS(oc);
++ ac->name = "NVMM";
++ ac->init_machine = nvmm_accel_init;
++ ac->allowed = &nvmm_allowed;
++}
++
++static const TypeInfo nvmm_accel_type = {
++ .name = ACCEL_CLASS_NAME("nvmm"),
++ .parent = TYPE_ACCEL,
++ .class_init = nvmm_accel_class_init,
++};
++
++static void
++nvmm_type_init(void)
++{
++ type_register_static(&nvmm_accel_type);
++}
++
++type_init(nvmm_type_init);
Index: pkgsrc/emulators/qemu/patches/patch-include_sysemu_hw__accel.h
diff -u /dev/null pkgsrc/emulators/qemu/patches/patch-include_sysemu_hw__accel.h:1.4
--- /dev/null Mon May 24 14:22:08 2021
+++ pkgsrc/emulators/qemu/patches/patch-include_sysemu_hw__accel.h Mon May 24 14:22:08 2021
@@ -0,0 +1,12 @@
+$NetBSD: patch-include_sysemu_hw__accel.h,v 1.4 2021/05/24 14:22:08 ryoon Exp $
+
+--- include/sysemu/hw_accel.h.orig 2021-04-29 17:18:58.000000000 +0000
++++ include/sysemu/hw_accel.h
+@@ -16,6 +16,7 @@
+ #include "sysemu/kvm.h"
+ #include "sysemu/hvf.h"
+ #include "sysemu/whpx.h"
++#include "sysemu/nvmm.h"
+
+ void cpu_synchronize_state(CPUState *cpu);
+ void cpu_synchronize_post_reset(CPUState *cpu);