Source-Changes-HG archive
[src/trunk]: src/sys/uvm When unwiring a range in uvm_fault_unwire_locked(), ...
details: https://anonhg.NetBSD.org/src/rev/0cb628658f71
branches: trunk
changeset: 473721:0cb628658f71
user: thorpej <thorpej@NetBSD.org>
date: Wed Jun 16 23:02:40 1999 +0000
description:
When unwiring a range in uvm_fault_unwire_locked(), don't call
pmap_change_wiring(...,FALSE) unless the map entry claims the address
is unwired. This fixes the following scenario, as described on
tech-kern@netbsd.org on Wed 6/16/1999 12:25:23:
- User mlock(2)'s a buffer, to guarantee it will never become
non-resident while it is in use.
- User then does physio to that buffer. Physio calls uvm_vslock()
to lock down the pages and ensure that page faults do not happen
while the I/O is in progress (possibly in interrupt context).
- Physio does the I/O.
- Physio calls uvm_vsunlock(). This calls uvm_fault_unwire().
>>> HERE IS WHERE THE PROBLEM OCCURS <<<
uvm_fault_unwire() calls pmap_change_wiring(..., FALSE),
which now gives the pmap free rein to recycle the mapping
information for that page, which is illegal; the mapping is
still wired (due to the mlock(2)), but now access of the
page could cause a non-protection page fault (disallowed).
NOTE: This could eventually lead to a panic when the user
subsequently munlock(2)'s the buffer and the mapping info
has been recycled for use by another mapping!
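For readers who have not walked the physio path, the sequence above can be sketched from userland roughly as follows. This is an illustration of the ordering only, not a reproducer taken from the original tech-kern report: the raw-device name, transfer size, and error handling are assumptions.

/*
 * Illustration only.  The raw-device path and transfer size are
 * made-up examples; this is not guaranteed to trigger the old panic.
 */
#include <sys/types.h>
#include <sys/mman.h>

#include <err.h>
#include <fcntl.h>
#include <unistd.h>

int
main(void)
{
	size_t len = 64 * 1024;			/* example size */
	void *buf;
	int fd;

	/* 1. mlock(2) a buffer so it can never become non-resident. */
	buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
	    MAP_ANON | MAP_PRIVATE, -1, 0);
	if (buf == MAP_FAILED)
		err(1, "mmap");
	if (mlock(buf, len) == -1)
		err(1, "mlock");

	/*
	 * 2. Do physio to that buffer.  A read(2) from a raw (character)
	 * disk device goes through physio(9), which calls uvm_vslock()
	 * before the transfer; the device name is only an example.
	 */
	fd = open("/dev/rsd0d", O_RDONLY);
	if (fd == -1)
		err(1, "open");
	if (read(fd, buf, len) == -1)
		err(1, "read");

	/*
	 * 3./4. physio completes the I/O and calls uvm_vsunlock(), i.e.
	 * uvm_fault_unwire().  Before this fix, that told the pmap the
	 * pages were unwired even though the mlock(2) above still holds
	 * them wired.
	 */

	/* A later munlock(2) is where the recycled mapping could panic. */
	if (munlock(buf, len) == -1)
		err(1, "munlock");

	close(fd);
	munmap(buf, len);
	return 0;
}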
diffstat:
sys/uvm/uvm_fault.c | 46 ++++++++++++++++++++++++++++++++++++++++------
1 files changed, 40 insertions(+), 6 deletions(-)
diffs (87 lines):
diff -r 31846b73a717 -r 0cb628658f71 sys/uvm/uvm_fault.c
--- a/sys/uvm/uvm_fault.c Wed Jun 16 22:11:23 1999 +0000
+++ b/sys/uvm/uvm_fault.c Wed Jun 16 23:02:40 1999 +0000
@@ -1,4 +1,4 @@
-/*	$NetBSD: uvm_fault.c,v 1.36 1999/06/16 22:11:23 thorpej Exp $	*/
+/*	$NetBSD: uvm_fault.c,v 1.37 1999/06/16 23:02:40 thorpej Exp $	*/
 
 /*
  *
@@ -1777,6 +1777,7 @@
 	vm_map_t map;
 	vaddr_t start, end;
 {
+	vm_map_entry_t entry;
 	pmap_t pmap = vm_map_pmap(map);
 	vaddr_t va;
 	paddr_t pa;
@@ -1784,7 +1785,7 @@
 
 #ifdef DIAGNOSTIC
 	if (map->flags & VM_MAP_INTRSAFE)
-		panic("uvm_fault_unwire: intrsafe map");
+		panic("uvm_fault_unwire_locked: intrsafe map");
 #endif
 
 	/*
@@ -1793,17 +1794,51 @@
 	 * the PAs from the pmap. we also lock out the page daemon so that
 	 * we can call uvm_pageunwire.
 	 */
-
+
 	uvm_lock_pageq();
 
+	/*
+	 * find the beginning map entry for the region.
+	 */
+#ifdef DIAGNOSTIC
+	if (start < vm_map_min(map) || end > vm_map_max(map))
+		panic("uvm_fault_unwire_locked: address out of range");
+#endif
+	if (uvm_map_lookup_entry(map, start, &entry) == FALSE)
+		panic("uvm_fault_unwire_locked: address not in map");
+
 	for (va = start; va < end ; va += PAGE_SIZE) {
 		pa = pmap_extract(pmap, va);
 
 		/* XXX: assumes PA 0 cannot be in map */
 		if (pa == (paddr_t) 0) {
-			panic("uvm_fault_unwire: unwiring non-wired memory");
+			panic("uvm_fault_unwire_locked: unwiring "
+			    "non-wired memory");
 		}
-		pmap_change_wiring(pmap, va, FALSE);	/* tell the pmap */
+
+		/*
+		 * make sure the current entry is for the address we're
+		 * dealing with. if not, grab the next entry.
+		 */
+#ifdef DIAGNOSTIC
+		if (va < entry->start)
+			panic("uvm_fault_unwire_locked: hole 1");
+#endif
+		if (va >= entry->end) {
+#ifdef DIAGNOSTIC
+			if (entry->next == &map->header ||
+			    entry->next->start > entry->end)
+				panic("uvm_fault_unwire_locked: hole 2");
+#endif
+			entry = entry->next;
+		}
+
+		/*
+		 * if the entry is no longer wired, tell the pmap.
+		 */
+		if (VM_MAPENT_ISWIRED(entry) == 0)
+			pmap_change_wiring(pmap, va, FALSE);
+
 		pg = PHYS_TO_VM_PAGE(pa);
 		if (pg)
 			uvm_pageunwire(pg);
@@ -1817,5 +1852,4 @@
 	 */
 
 	pmap_pageable(pmap, start, end, TRUE);
-
 }
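Seen apart from the surrounding diff context, the new logic is easier to follow as a small stand-alone model. The types and helpers below are stand-ins invented for illustration (the kernel uses vm_map_entry_t, VM_MAPENT_ISWIRED(), pmap_change_wiring() and uvm_pageunwire()); only the per-page walk over the entry list and the wired-entry test mirror the patch.

/*
 * Simplified user-space model of the new loop: stand-in types, not the
 * kernel's uvm/pmap structures.  It only shows the entry walk and the
 * "is the entry still wired?" test added by this change.
 */
#include <assert.h>
#include <stdio.h>

#define PAGE_SIZE	4096UL

struct map_entry {
	unsigned long start, end;	/* [start, end) covered by entry */
	int wired_count;		/* > 0 means still wired (mlock etc.) */
	struct map_entry *next;
};

/* Stand-in for pmap_change_wiring(pmap, va, FALSE). */
static void
pmap_unwire(unsigned long va)
{
	printf("pmap: drop wiring for va 0x%lx\n", va);
}

/*
 * Model of uvm_fault_unwire_locked(): walk [start, end) a page at a
 * time, advancing through the entry list, and only tell the pmap to
 * unwire a page when the covering entry's wire count has dropped to 0.
 */
static void
fault_unwire(struct map_entry *entry, unsigned long start, unsigned long end)
{
	unsigned long va;

	for (va = start; va < end; va += PAGE_SIZE) {
		assert(va >= entry->start);	/* no holes expected */
		if (va >= entry->end) {
			assert(entry->next != NULL);
			entry = entry->next;
		}
		/* The fix: skip the pmap call while the entry is wired. */
		if (entry->wired_count == 0)
			pmap_unwire(va);
		/* (the real code also calls uvm_pageunwire() here) */
	}
}

int
main(void)
{
	/* One entry still wired by mlock(2), one not wired. */
	struct map_entry e2 = { 0x8000, 0x10000, 0, NULL };
	struct map_entry e1 = { 0x0000, 0x8000, 1, &e2 };

	fault_unwire(&e1, 0x0000, 0x10000);
	return 0;
}

The key design point is that the map entry's wire count, not the unwiring caller, is the authority: uvm_vsunlock() may drop its own wiring, but the pmap-level wiring must survive as long as an mlock(2)-style wiring on the same entry is still in force.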