Source-Changes-HG archive
[src/trunk]: src/sys/uvm fix amap_extend() to handle amaps where we previousl...
details: https://anonhg.NetBSD.org/src/rev/ac1a97290015
branches: trunk
changeset: 974965:ac1a97290015
user: chs <chs%NetBSD.org@localhost>
date: Tue Aug 18 10:40:20 2020 +0000
description:
fix amap_extend() to handle amaps where we previously failed to allocate
the ppref memory.
diffstat:
sys/uvm/uvm_amap.c | 36 +++++++++++++++++++++++++++++++++---
1 files changed, 33 insertions(+), 3 deletions(-)
diffs (64 lines):
diff -r 1fa218d37e99 -r ac1a97290015 sys/uvm/uvm_amap.c
--- a/sys/uvm/uvm_amap.c Tue Aug 18 10:35:51 2020 +0000
+++ b/sys/uvm/uvm_amap.c Tue Aug 18 10:40:20 2020 +0000
@@ -1,4 +1,4 @@
-/* $NetBSD: uvm_amap.c,v 1.122 2020/07/09 05:57:15 skrll Exp $ */
+/* $NetBSD: uvm_amap.c,v 1.123 2020/08/18 10:40:20 chs Exp $ */
/*
* Copyright (c) 1997 Charles D. Cranor and Washington University.
@@ -35,7 +35,7 @@
*/
#include <sys/cdefs.h>
-__KERNEL_RCSID(0, "$NetBSD: uvm_amap.c,v 1.122 2020/07/09 05:57:15 skrll Exp $");
+__KERNEL_RCSID(0, "$NetBSD: uvm_amap.c,v 1.123 2020/08/18 10:40:20 chs Exp $");
#include "opt_uvmhist.h"
@@ -353,7 +353,7 @@
struct vm_amap *amap = entry->aref.ar_amap;
int slotoff = entry->aref.ar_pageoff;
int slotmapped, slotadd, slotneed, slotadded, slotalloc;
- int slotadj, slotarea;
+ int slotadj, slotarea, slotendoff;
int oldnslots;
#ifdef UVM_AMAP_PPREF
int *newppref, *oldppref;
@@ -388,6 +388,36 @@
}
/*
+ * Because this amap only has 1 ref, we know that there is
+ * only one vm_map_entry pointing to it, and the one entry is
+ * using slots between slotoff and slotoff + slotmapped. If
+ * we have been using ppref then we know that only slots in
+ * the one map entry's range can have anons, since ppref
+ * allowed us to free any anons outside that range as other map
+ * entries which used this amap were removed. But without ppref,
+ * we couldn't know which slots were still needed by other map
+ * entries, so we couldn't free any anons as we removed map
+ * entries, and so any slot from 0 to am_nslot can have an
+ * anon. But now that we know there is only one map entry
+ * left and we know its range, we can free up any anons
+ * outside that range. This is necessary because the rest of
+ * this function assumes that there are no anons in the amap
+ * outside of the one map entry's range.
+ */
+
+ slotendoff = slotoff + slotmapped;
+ if (amap->am_ppref == PPREF_NONE) {
+ amap_wiperange(amap, 0, slotoff);
+ amap_wiperange(amap, slotendoff, amap->am_nslot - slotendoff);
+ }
+ for (i = 0; i < slotoff; i++) {
+ KASSERT(amap->am_anon[i] == NULL);
+ }
+ for (i = slotendoff; i < amap->am_nslot - slotendoff; i++) {
+ KASSERT(amap->am_anon[i] == NULL);
+ }
+
+ /*
* case 1: we already have enough slots in the map and thus
* only need to bump the reference counts on the slots we are
* adding.