[src/trunk]: src/sys/uvm Page allocator:
details: https://anonhg.NetBSD.org/src/rev/3d4280931671
branches: trunk
changeset: 1006223:3d4280931671
user: ad <ad@NetBSD.org>
date: Sun Jan 05 22:01:09 2020 +0000
description:
Page allocator:
The method for assigning pages to buckets in the non-NUMA case is poor. It
can defeat memory interleaving in the hardware, and it does not distribute
pages fairly by colour. To fix this and make things more deterministic, take
the physical PFN and colour into account.
Then, when freeing pages in the non-NUMA case, don't change the page's bucket
either. Keeping the bucket number stable will also permit partitioning page
replacement state by CPU package / NUMA node.
diffstat:
sys/uvm/uvm_page.c | 35 +++++++++++++----------------------
1 file changed, 13 insertions(+), 22 deletions(-)
diffs (73 lines):
diff -r e058ade052ab -r 3d4280931671 sys/uvm/uvm_page.c
--- a/sys/uvm/uvm_page.c Sun Jan 05 21:12:34 2020 +0000
+++ b/sys/uvm/uvm_page.c Sun Jan 05 22:01:09 2020 +0000
@@ -1,4 +1,4 @@
-/* $NetBSD: uvm_page.c,v 1.220 2019/12/31 22:42:51 ad Exp $ */
+/* $NetBSD: uvm_page.c,v 1.221 2020/01/05 22:01:09 ad Exp $ */
/*-
* Copyright (c) 2019 The NetBSD Foundation, Inc.
@@ -95,7 +95,7 @@
*/
#include <sys/cdefs.h>
-__KERNEL_RCSID(0, "$NetBSD: uvm_page.c,v 1.220 2019/12/31 22:42:51 ad Exp $");
+__KERNEL_RCSID(0, "$NetBSD: uvm_page.c,v 1.221 2020/01/05 22:01:09 ad Exp $");
#include "opt_ddb.h"
#include "opt_uvm.h"
@@ -806,9 +806,12 @@
* Here we decide on the NEW color &
* bucket for the page. For NUMA
* we'll use the info that the
- * hardware gave us. Otherwise we
- * just do a round-robin among the
- * buckets.
+ * hardware gave us. For non-NUMA
+ * we take the physical page frame
+ * number and cache color into
+ * account, to try to avoid
+ * defeating any memory
+ * interleaving in the hardware.
*/
KASSERT(
uvm_page_get_bucket(pg) == ob);
@@ -816,10 +819,10 @@
uvm_page_get_freelist(pg));
if (uvm.numa_alloc) {
nb = uvm_page_numa_lookup(pg);
- } else if (nb + 1 < newnbuckets) {
- nb = nb + 1;
} else {
- nb = 0;
+ nb = atop(VM_PAGE_TO_PHYS(pg))
+ / uvmexp.ncolors / 8
+ % newnbuckets;
}
uvm_page_set_bucket(pg, nb);
npgb = npgfl.pgfl_buckets[nb];
@@ -1575,22 +1578,10 @@
uvm_pagezerocheck(pg);
#endif /* DEBUG */
+ /* Try to send the page to the per-CPU cache. */
s = splvm();
ucpu = curcpu()->ci_data.cpu_uvm;
-
- /*
- * If we're using the NUMA strategy, we'll only cache this page if
- * it came from the local CPU's NUMA node. Otherwise we're using
- * the L2/L3 cache locality strategy and we'll cache anything.
- */
- if (uvm.numa_alloc) {
- bucket = uvm_page_get_bucket(pg);
- } else {
- bucket = ucpu->pgflbucket;
- uvm_page_set_bucket(pg, bucket);
- }
-
- /* Try to send the page to the per-CPU cache. */
+ bucket = uvm_page_get_bucket(pg);
if (bucket == ucpu->pgflbucket && uvm_pgflcache_free(ucpu, pg)) {
splx(s);
return;
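The free path now just reads the page's (stable) bucket and compares it with
the current CPU's. To see the distribution concretely, here is a toy userland
program (mine, not from the tree) that evaluates the same expression for an
assumed configuration of 4 colours and 2 buckets:

	#include <stdio.h>

	int
	main(void)
	{
		unsigned ncolors = 4, nbuckets = 2;
		unsigned pfn;

		/* Step by 9 only so the colour column varies. */
		for (pfn = 0; pfn < 96; pfn += 9)
			printf("pfn %2u: colour %u, bucket %u\n", pfn,
			    pfn % ncolors, pfn / ncolors / 8 % nbuckets);
		return 0;
	}

Each contiguous run of 8 * ncolors = 32 frames lands in one bucket: frames
0-31 map to bucket 0, 32-63 to bucket 1, 64-95 back to bucket 0, and so on,
regardless of which CPU allocates or frees the page. That stability is what
makes the bucket == ucpu->pgflbucket test above sufficient, and what should
permit the per-package/per-node page replacement partitioning mentioned in
the description.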