pkgsrc-Changes-HG archive
[pkgsrc/trunk]: pkgsrc/sysutils Backport upstream patches, fixing today's XSA...
details: https://anonhg.NetBSD.org/pkgsrc/rev/37b3a21fe3e8
branches: trunk
changeset: 355136:37b3a21fe3e8
user: bouyer <bouyer%pkgsrc.org@localhost>
date: Tue Nov 22 20:53:40 2016 +0000
description:
Backport upstream patches, fixing today's XSA 191, 192, 195, 197, 198.
Bump PKGREVISIONs
diffstat:
sysutils/xenkernel41/Makefile | 4 +-
sysutils/xenkernel41/distinfo | 5 +-
sysutils/xenkernel41/patches/patch-XSA-191 | 142 ++++++++++++++++++++++++++++
sysutils/xenkernel41/patches/patch-XSA-192 | 67 +++++++++++++
sysutils/xenkernel41/patches/patch-XSA-195 | 49 +++++++++
sysutils/xenkernel42/Makefile | 4 +-
sysutils/xenkernel42/distinfo | 5 +-
sysutils/xenkernel42/patches/patch-XSA-191 | 142 ++++++++++++++++++++++++++++
sysutils/xenkernel42/patches/patch-XSA-192 | 65 ++++++++++++
sysutils/xenkernel42/patches/patch-XSA-195 | 49 +++++++++
sysutils/xentools41/Makefile | 4 +-
sysutils/xentools41/distinfo | 4 +-
sysutils/xentools41/patches/patch-XSA-197 | 69 +++++++++++++
sysutils/xentools41/patches/patch-XSA-198 | 58 +++++++++++
sysutils/xentools42/Makefile | 4 +-
sysutils/xentools42/distinfo | 5 +-
sysutils/xentools42/patches/patch-XSA-197-1 | 69 +++++++++++++
sysutils/xentools42/patches/patch-XSA-197-2 | 67 +++++++++++++
sysutils/xentools42/patches/patch-XSA-198 | 58 +++++++++++
19 files changed, 858 insertions(+), 12 deletions(-)
diffs (truncated from 1018 to 300 lines):
diff -r a04711f8f2f1 -r 37b3a21fe3e8 sysutils/xenkernel41/Makefile
--- a/sysutils/xenkernel41/Makefile Tue Nov 22 16:02:54 2016 +0000
+++ b/sysutils/xenkernel41/Makefile Tue Nov 22 20:53:40 2016 +0000
@@ -1,9 +1,9 @@
-# $NetBSD: Makefile,v 1.51 2016/09/08 15:41:01 bouyer Exp $
+# $NetBSD: Makefile,v 1.52 2016/11/22 20:53:40 bouyer Exp $
VERSION= 4.1.6.1
DISTNAME= xen-${VERSION}
PKGNAME= xenkernel41-${VERSION}
-PKGREVISION= 20
+PKGREVISION= 21
CATEGORIES= sysutils
MASTER_SITES= http://bits.xensource.com/oss-xen/release/${VERSION}/
diff -r a04711f8f2f1 -r 37b3a21fe3e8 sysutils/xenkernel41/distinfo
--- a/sysutils/xenkernel41/distinfo Tue Nov 22 16:02:54 2016 +0000
+++ b/sysutils/xenkernel41/distinfo Tue Nov 22 20:53:40 2016 +0000
@@ -1,4 +1,4 @@
-$NetBSD: distinfo,v 1.44 2016/09/08 15:41:01 bouyer Exp $
+$NetBSD: distinfo,v 1.45 2016/11/22 20:53:40 bouyer Exp $
SHA1 (xen-4.1.6.1.tar.gz) = e5f15feb0821578817a65ede16110c6eac01abd0
RMD160 (xen-4.1.6.1.tar.gz) = bff11421fc44a26f2cc3156713267abcb36d7a19
@@ -41,6 +41,9 @@
SHA1 (patch-XSA-185) = a2313922aa4dad734b96c80f64fe54eca3c14019
SHA1 (patch-XSA-187-1) = 55ea0c2d9c7d8d9476a5ab97342ff552be4faf56
SHA1 (patch-XSA-187-2) = e21b24771fa9417f593b8f6d1550660bbad36b98
+SHA1 (patch-XSA-191) = 5da559e104543b8d22ea60378d9160d2ad83b8d0
+SHA1 (patch-XSA-192) = b0f2801fe6db91c2a98b82897cdee057062c6c2b
+SHA1 (patch-XSA-195) = a04295b397126e1cc1f129bb3cb9fb872fcbb373
SHA1 (patch-xen_Makefile) = d1c7e4860221f93d90818f45a77748882486f92b
SHA1 (patch-xen_arch_x86_Rules.mk) = 6b9b4bfa28924f7d3f6c793a389f1a7ac9d228e2
SHA1 (patch-xen_arch_x86_cpu_mcheck_vmce.c) = 5afd01780a13654f1d21bf1562f6431c8370be0b
diff -r a04711f8f2f1 -r 37b3a21fe3e8 sysutils/xenkernel41/patches/patch-XSA-191
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sysutils/xenkernel41/patches/patch-XSA-191 Tue Nov 22 20:53:40 2016 +0000
@@ -0,0 +1,142 @@
+$NetBSD: patch-XSA-191,v 1.1 2016/11/22 20:53:40 bouyer Exp $
+
+backported from:
+
+From: Andrew Cooper <andrew.cooper3%citrix.com@localhost>
+Subject: x86/hvm: Fix the handling of non-present segments
+
+In 32bit, the data segments may be NULL to indicate that the segment is
+ineligible for use. In both 32bit and 64bit, the LDT selector may be NULL to
+indicate that the entire LDT is ineligible for use. However, nothing in Xen
+actually checks for this condition when performing other segmentation
+checks. (Note however that limit and writeability checks are correctly
+performed).
+
+Neither Intel nor AMD specify the exact behaviour of loading a NULL segment.
+Experimentally, AMD zeroes all attributes but leaves the base and limit
+unmodified. Intel zeroes the base, sets the limit to 0xfffffff and resets the
+attributes to just .G and .D/B.
+
+The use of the segment information in the VMCB/VMCS is equivalent to a native
+pipeline interacting with the segment cache. The present bit can therefore
+have a subtly different meaning, and it is now cooked to uniformly indicate
+whether the segment is usable or not.
+
+GDTR and IDTR don't have access rights like the other segments, but for
+consistency, they are treated as being present so no special casing is needed
+elsewhere in the segmentation logic.
+
+AMD hardware does not consider the present bit for %cs and %tr, and will
+function as if they were present. They are therefore unconditionally set to
+present when reading information from the VMCB, to maintain the new meaning of
+usability.
+
+Intel hardware has a separate unusable bit in the VMCS segment attributes.
+This bit is inverted and stored in the present field, so the hvm code can work
+with architecturally-common state.
+
+This is XSA-191.
+
+Signed-off-by: Andrew Cooper <andrew.cooper3%citrix.com@localhost>
+Reviewed-by: Jan Beulich <jbeulich%suse.com@localhost>
+
+--- xen/arch/x86/hvm/hvm.c.orig 2016-11-22 15:03:22.000000000 +0100
++++ xen/arch/x86/hvm/hvm.c 2016-11-22 15:19:57.000000000 +0100
+@@ -1626,6 +1626,10 @@
+ * COMPATIBILITY MODE: Apply segment checks and add base.
+ */
+
++ /* Segment not valid for use (cooked meaning of .p)? */
++ if ( !reg->attr.fields.p )
++ return 0;
++
+ switch ( access_type )
+ {
+ case hvm_access_read:
+@@ -1800,6 +1804,10 @@
+ hvm_get_segment_register(
+ v, (sel & 4) ? x86_seg_ldtr : x86_seg_gdtr, &desctab);
+
++ /* Segment not valid for use (cooked meaning of .p)? */
++ if ( !desctab.attr.fields.p )
++ goto fail;
++
+ /* Check against descriptor table limit. */
+ if ( ((sel & 0xfff8) + 7) > desctab.limit )
+ goto fail;
+--- xen/arch/x86/hvm/svm/svm.c.orig 2013-09-10 08:42:18.000000000 +0200
++++ xen/arch/x86/hvm/svm/svm.c 2016-11-22 15:19:57.000000000 +0100
+@@ -459,6 +459,7 @@
+ {
+ case x86_seg_cs:
+ memcpy(reg, &vmcb->cs, sizeof(*reg));
++ reg->attr.fields.p = 1;
+ reg->attr.fields.g = reg->limit > 0xFFFFF;
+ break;
+ case x86_seg_ds:
+@@ -492,13 +493,16 @@
+ case x86_seg_tr:
+ svm_sync_vmcb(v);
+ memcpy(reg, &vmcb->tr, sizeof(*reg));
++ reg->attr.fields.p = 1;
+ reg->attr.fields.type |= 0x2;
+ break;
+ case x86_seg_gdtr:
+ memcpy(reg, &vmcb->gdtr, sizeof(*reg));
++ reg->attr.bytes = 0x80;
+ break;
+ case x86_seg_idtr:
+ memcpy(reg, &vmcb->idtr, sizeof(*reg));
++ reg->attr.bytes = 0x80;
+ break;
+ case x86_seg_ldtr:
+ svm_sync_vmcb(v);
+--- xen/arch/x86/hvm/vmx/vmx.c.orig 2013-09-10 08:42:18.000000000 +0200
++++ xen/arch/x86/hvm/vmx/vmx.c 2016-11-22 15:19:57.000000000 +0100
+@@ -761,10 +761,12 @@
+
+ vmx_vmcs_exit(v);
+
+- reg->attr.bytes = (attr & 0xff) | ((attr >> 4) & 0xf00);
+- /* Unusable flag is folded into Present flag. */
+- if ( attr & (1u<<16) )
+- reg->attr.fields.p = 0;
++ /*
++ * Fold VT-x representation into Xen's representation. The Present bit is
++ * unconditionally set to the inverse of unusable.
++ */
++ reg->attr.bytes =
++ (!(attr & (1u << 16)) << 7) | (attr & 0x7f) | ((attr >> 4) & 0xf00);
+
+ /* Adjust for virtual 8086 mode */
+ if ( v->arch.hvm_vmx.vmx_realmode && seg <= x86_seg_tr
+@@ -844,11 +846,11 @@
+ }
+ }
+
+- attr = ((attr & 0xf00) << 4) | (attr & 0xff);
+-
+- /* Not-present must mean unusable. */
+- if ( !reg->attr.fields.p )
+- attr |= (1u << 16);
++ /*
++ * Unfold Xen representation into VT-x representation. The unusable bit
++ * is unconditionally set to the inverse of present.
++ */
++ attr = (!(attr & (1u << 7)) << 16) | ((attr & 0xf00) << 4) | (attr & 0xff);
+
+ /* VMX has strict consistency requirement for flag G. */
+ attr |= !!(limit >> 20) << 15;
+--- xen/arch/x86/x86_emulate/x86_emulate.c.orig 2016-11-22 15:03:21.000000000 +0100
++++ xen/arch/x86/x86_emulate/x86_emulate.c 2016-11-22 15:19:57.000000000 +0100
+@@ -1020,6 +1020,10 @@
+ &desctab, ctxt)) )
+ return rc;
+
++ /* Segment not valid for use (cooked meaning of .p)? */
++ if ( !desctab.attr.fields.p )
++ goto raise_exn;
++
+ /* Check against descriptor table limit. */
+ if ( ((sel & 0xfff8) + 7) > desctab.limit )
+ goto raise_exn;
diff -r a04711f8f2f1 -r 37b3a21fe3e8 sysutils/xenkernel41/patches/patch-XSA-192
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sysutils/xenkernel41/patches/patch-XSA-192 Tue Nov 22 20:53:40 2016 +0000
@@ -0,0 +1,67 @@
+$NetBSD: patch-XSA-192,v 1.1 2016/11/22 20:53:40 bouyer Exp $
+
+backported from:
+
+From: Jan Beulich <jbeulich%suse.com@localhost>
+Subject: x86/HVM: don't load LDTR with VM86 mode attrs during task switch
+
+Just like TR, LDTR is purely a protected mode facility and hence needs
+to be loaded accordingly. Also move its loading to where it
+architecturally belongs.
+
+This is XSA-192.
+
+Signed-off-by: Jan Beulich <jbeulich%suse.com@localhost>
+Reviewed-by: Andrew Cooper <andrew.cooper3%citrix.com@localhost>
+Tested-by: Andrew Cooper <andrew.cooper3%citrix.com@localhost>
+
+--- xen/arch/x86/hvm/hvm.c.orig 2016-11-22 15:19:57.000000000 +0100
++++ xen/arch/x86/hvm/hvm.c 2016-11-22 15:31:13.000000000 +0100
+@@ -1767,16 +1767,15 @@
+ }
+
+ static int hvm_load_segment_selector(
+- enum x86_segment seg, uint16_t sel)
++ enum x86_segment seg, uint16_t sel, unsigned int eflags)
+ {
+ struct segment_register desctab, cs, segr;
+ struct desc_struct *pdesc, desc;
+ u8 dpl, rpl, cpl;
+ int fault_type = TRAP_invalid_tss;
+- struct cpu_user_regs *regs = guest_cpu_user_regs();
+ struct vcpu *v = current;
+
+- if ( regs->eflags & X86_EFLAGS_VM )
++ if ( eflags & X86_EFLAGS_VM )
+ {
+ segr.sel = sel;
+ segr.base = (uint32_t)sel << 4;
+@@ -2022,6 +2021,8 @@
+ if ( rc != HVMCOPY_okay )
+ goto out;
+
++ if ( hvm_load_segment_selector(x86_seg_ldtr, tss.ldt, 0) )
++ goto out;
+
+ if ( hvm_set_cr3(tss.cr3) )
+ goto out;
+@@ -2044,13 +2045,12 @@
+ }
+
+ exn_raised = 0;
+- if ( hvm_load_segment_selector(x86_seg_ldtr, tss.ldt) ||
+- hvm_load_segment_selector(x86_seg_es, tss.es) ||
+- hvm_load_segment_selector(x86_seg_cs, tss.cs) ||
+- hvm_load_segment_selector(x86_seg_ss, tss.ss) ||
+- hvm_load_segment_selector(x86_seg_ds, tss.ds) ||
+- hvm_load_segment_selector(x86_seg_fs, tss.fs) ||
+- hvm_load_segment_selector(x86_seg_gs, tss.gs) )
++ if ( hvm_load_segment_selector(x86_seg_es, tss.es, tss.eflags) ||
++ hvm_load_segment_selector(x86_seg_cs, tss.cs, tss.eflags) ||
++ hvm_load_segment_selector(x86_seg_ss, tss.ss, tss.eflags) ||
++ hvm_load_segment_selector(x86_seg_ds, tss.ds, tss.eflags) ||
++ hvm_load_segment_selector(x86_seg_fs, tss.fs, tss.eflags) ||
++ hvm_load_segment_selector(x86_seg_gs, tss.gs, tss.eflags) )
+ exn_raised = 1;
+
+ rc = hvm_copy_to_guest_virt(
diff -r a04711f8f2f1 -r 37b3a21fe3e8 sysutils/xenkernel41/patches/patch-XSA-195
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sysutils/xenkernel41/patches/patch-XSA-195 Tue Nov 22 20:53:40 2016 +0000
@@ -0,0 +1,49 @@
+$NetBSD: patch-XSA-195,v 1.1 2016/11/22 20:53:40 bouyer Exp $
+
+backported from:
+
+From: Jan Beulich <jbeulich%suse.com@localhost>
+Subject: x86emul: fix huge bit offset handling
+
+We must never chop off the high 32 bits.
+
+This is XSA-195.
+
+Reported-by: George Dunlap <george.dunlap%citrix.com@localhost>
+Signed-off-by: Jan Beulich <jbeulich%suse.com@localhost>
+Reviewed-by: Andrew Cooper <andrew.cooper3%citrix.com@localhost>
+
+--- xen/arch/x86/x86_emulate/x86_emulate.c.orig 2016-11-22 15:19:57.000000000 +0100
++++ xen/arch/x86/x86_emulate/x86_emulate.c 2016-11-22 16:03:48.000000000 +0100
+@@ -1578,6 +1578,12 @@
+ else
+ {
+ /*
++ * Instructions such as bt can reference an arbitrary offset from
++ * their memory operand, but the instruction doing the actual
++ * emulation needs the appropriate op_bytes read from memory.
++ * Adjust both the source register and memory operand to make an
++ * equivalent instruction.
++ *
+ * EA += BitOffset DIV op_bytes*8
+ * BitOffset = BitOffset MOD op_bytes*8
+ * DIV truncates towards negative infinity.
+@@ -1589,14 +1595,15 @@
+ src.val = (int32_t)src.val;
+ if ( (long)src.val < 0 )
+ {
+- unsigned long byte_offset;
+- byte_offset = op_bytes + (((-src.val-1) >> 3) & ~(op_bytes-1));
++ unsigned long byte_offset =
++ op_bytes + (((-src.val - 1) >> 3) & ~(op_bytes - 1L));
++
+ ea.mem.off -= byte_offset;
+ src.val = (byte_offset << 3) + src.val;
+ }
+ else
+ {
+- ea.mem.off += (src.val >> 3) & ~(op_bytes - 1);