Source-Changes-HG archive
[src/netbsd-6-0]: src/sys/arch/x86 Pull up following revision(s) (requested b...
details: https://anonhg.NetBSD.org/src/rev/bcebde988700
branches: netbsd-6-0
changeset: 775180:bcebde988700
user: snj <snj%NetBSD.org@localhost>
date: Mon Mar 06 08:17:49 2017 +0000
description:
Pull up following revision(s) (requested by bouyer in ticket #1441):
sys/arch/x86/x86/pmap.c: revision 1.241 via patch
sys/arch/x86/include/pmap.h: revision 1.63 via patch
Should be PG_k, doesn't change anything.
--
Remove PG_u from the kernel pages on Xen. Otherwise there is no privilege
separation between the kernel and userland.
On Xen-amd64, the kernel runs in ring3 just like userland, and the
separation is guaranteed by the hypervisor - each syscall/trap is
intercepted by Xen and sent manually to the kernel. Before that, the
hypervisor modifies the page tables so that the kernel becomes accessible.
Later, when returning to userland, the hypervisor removes the kernel pages
and flushes the TLB.
However, TLB flushes are costly, and in order to reduce the number of pages
flushed Xen marks the userland pages as global, while keeping the kernel
ones as local. This way, when returning to userland, only the kernel pages
get flushed - which makes sense since they are the only ones that got
removed from the mapping.
Xen differentiates the userland pages by looking at their PG_u bit in the
PTE; if a page has this bit then Xen tags it as global, otherwise Xen
manually adds the bit but keeps the page as local. The thing is, since we
set PG_u in the kernel pages, Xen believes our kernel pages are in fact
userland pages, so it marks them as global. Therefore, when returning to
userland, the kernel pages indeed get removed from the page tree, but are
not flushed from the TLB, which means that they are still accessible.
With this - and depending on the DTLB size - userland has a small window
where it can read/write to the last kernel pages accessed, which is enough
to completely escalate privileges: the sysent structure systematically gets
read when performing a syscall, and chances are that it will still be
cached in the TLB. Userland can then use this to patch a chosen syscall,
make it point to a userland function, retrieve %gs and compute the address
of its credentials, and finally grant itself root privileges.
diffstat:
sys/arch/x86/include/pmap.h | 10 +---------
sys/arch/x86/x86/pmap.c | 8 ++++----
2 files changed, 5 insertions(+), 13 deletions(-)
diffs (61 lines):
diff -r 0ad288200ef6 -r bcebde988700 sys/arch/x86/include/pmap.h
--- a/sys/arch/x86/include/pmap.h Sun Feb 19 17:45:16 2017 +0000
+++ b/sys/arch/x86/include/pmap.h Mon Mar 06 08:17:49 2017 +0000
@@ -1,4 +1,4 @@
-/* $NetBSD: pmap.h,v 1.49.2.2 2012/05/09 03:22:52 riz Exp $ */
+/* $NetBSD: pmap.h,v 1.49.2.2.4.1 2017/03/06 08:17:49 snj Exp $ */
/*
* Copyright (c) 1997 Charles D. Cranor and Washington University.
@@ -182,15 +182,7 @@
((pmap)->pm_pdirpa[0] + (index) * sizeof(pd_entry_t))
#endif
-/*
- * flag to be used for kernel mappings: PG_u on Xen/amd64,
- * 0 otherwise.
- */
-#if defined(XEN) && defined(__x86_64__)
-#define PG_k PG_u
-#else
#define PG_k 0
-#endif
/*
* MD flags that we use for pmap_enter and pmap_kenter_pa:
diff -r 0ad288200ef6 -r bcebde988700 sys/arch/x86/x86/pmap.c
--- a/sys/arch/x86/x86/pmap.c Sun Feb 19 17:45:16 2017 +0000
+++ b/sys/arch/x86/x86/pmap.c Mon Mar 06 08:17:49 2017 +0000
@@ -1,4 +1,4 @@
-/* $NetBSD: pmap.c,v 1.164.2.4.4.1 2016/07/14 07:09:39 snj Exp $ */
+/* $NetBSD: pmap.c,v 1.164.2.4.4.2 2017/03/06 08:17:49 snj Exp $ */
/*-
* Copyright (c) 2008, 2010 The NetBSD Foundation, Inc.
@@ -171,7 +171,7 @@
*/
#include <sys/cdefs.h>
-__KERNEL_RCSID(0, "$NetBSD: pmap.c,v 1.164.2.4.4.1 2016/07/14 07:09:39 snj Exp $");
+__KERNEL_RCSID(0, "$NetBSD: pmap.c,v 1.164.2.4.4.2 2017/03/06 08:17:49 snj Exp $");
#include "opt_user_ldt.h"
#include "opt_lockdebug.h"
@@ -1467,7 +1467,7 @@
memset((void *) (xen_dummy_user_pgd + KERNBASE), 0, PAGE_SIZE);
/* Mark read-only */
HYPERVISOR_update_va_mapping(xen_dummy_user_pgd + KERNBASE,
- pmap_pa2pte(xen_dummy_user_pgd) | PG_u | PG_V, UVMF_INVLPG);
+ pmap_pa2pte(xen_dummy_user_pgd) | PG_k | PG_V, UVMF_INVLPG);
/* Pin as L4 */
xpq_queue_pin_l4_table(xpmap_ptom_masked(xen_dummy_user_pgd));
#endif /* __x86_64__ */
@@ -2064,7 +2064,7 @@
* this pdir will NEVER be active in kernel mode
* so mark recursive entry invalid
*/
- pdir[PDIR_SLOT_PTE] = pmap_pa2pte(pdirpa) | PG_u;
+ pdir[PDIR_SLOT_PTE] = pmap_pa2pte(pdirpa) | PG_k;
/*
* PDP constructed this way won't be for kernel,
* hence we don't put kernel mappings on Xen.