Source-Changes-HG archive
[src/trunk]: src/sys/uvm Modify uvm_map_pageable() and uvm_map_pageable_all()...
details: https://anonhg.NetBSD.org/src/rev/67c3c74cd58b
branches: trunk
changeset: 473715:67c3c74cd58b
user: thorpej <thorpej%NetBSD.org@localhost>
date: Wed Jun 16 19:34:24 1999 +0000
description:
Modify uvm_map_pageable() and uvm_map_pageable_all() to follow POSIX 1003.1b
semantics: regardless of the number of mlock/mlockall calls made on a region,
a single munlock/munlockall call unlocks it (i.e. sets its wiring count to 0).
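For illustration only (this program is not part of the change), the user-visible
effect can be sketched from userland.  The sketch assumes a NetBSD-like
environment with MAP_ANON and sufficient privilege to call mlock():

#include <sys/mman.h>
#include <err.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	size_t len = (size_t)sysconf(_SC_PAGESIZE);
	void *p;

	p = mmap(NULL, len, PROT_READ | PROT_WRITE,
	    MAP_ANON | MAP_PRIVATE, -1, 0);
	if (p == MAP_FAILED)
		err(1, "mmap");

	/* Lock the same page twice... */
	if (mlock(p, len) == -1 || mlock(p, len) == -1)
		err(1, "mlock");

	/*
	 * ...and unlock it once.  Under POSIX 1003.1b semantics the page
	 * is now unlocked; the kernel keeps no lock reference count for
	 * the caller to unwind with further munlock() calls.
	 */
	if (munlock(p, len) == -1)
		err(1, "munlock");

	printf("a single munlock() unlocked the region\n");
	return 0;
}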
Add a comment describing why uvm_map_pageable() should not be used for
transient page wirings (e.g. for physio); note that it is currently only
(ab)used in this way by a few pieces of code which are known to be
broken, namely the Amiga and Atari pmaps, and i386 and pc532 if PMAP_NEW is
not used.  The i386 GDT code uses uvm_map_pageable(), but in a safe
way, and could be trivially converted to use uvm_fault_wire() instead.
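For a transient wiring (the physio case), the commit message points at
uvm_fault_wire()/uvm_fault_unwire(), the pattern uvm_vslock() follows.  The
fragment below is only a sketch of that pattern, not code from the tree; the
(map, start, end) argument lists are assumptions about the interface of this
era, so check uvm_extern.h for the real prototypes:

#include <sys/param.h>
#include <sys/proc.h>
#include <uvm/uvm_extern.h>

/*
 * Sketch: wire a user buffer around a short-lived I/O transfer and
 * unwire it afterwards.  Unlike uvm_map_pageable(), this does not
 * touch the map entries' wired_count bookkeeping.
 */
static int
wire_user_buffer(struct proc *p, vaddr_t va, vsize_t len)
{
	vaddr_t start = trunc_page(va);
	vaddr_t end = round_page(va + len);

	/* Fault the pages in and wire them down (assumed signature). */
	return (uvm_fault_wire(&p->p_vmspace->vm_map, start, end));
}

static void
unwire_user_buffer(struct proc *p, vaddr_t va, vsize_t len)
{
	vaddr_t start = trunc_page(va);
	vaddr_t end = round_page(va + len);

	/* Drop the transient wiring (assumed signature). */
	uvm_fault_unwire(&p->p_vmspace->vm_map, start, end);
}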
diffstat:
sys/uvm/uvm_map.c | 35 +++++++++++++++--------------------
1 files changed, 15 insertions(+), 20 deletions(-)
diffs (107 lines):
diff -r d2e52e0660d7 -r 67c3c74cd58b sys/uvm/uvm_map.c
--- a/sys/uvm/uvm_map.c Wed Jun 16 18:43:28 1999 +0000
+++ b/sys/uvm/uvm_map.c Wed Jun 16 19:34:24 1999 +0000
@@ -1,4 +1,4 @@
-/* $NetBSD: uvm_map.c,v 1.55 1999/06/16 00:29:04 thorpej Exp $ */
+/* $NetBSD: uvm_map.c,v 1.56 1999/06/16 19:34:24 thorpej Exp $ */
/*
* Copyright (c) 1997 Charles D. Cranor and Washington University.
@@ -599,7 +599,7 @@
prev_entry->advice != advice)
goto step3;
- /* wired_count's must match (new area is unwired) */
+ /* wiring status must match (new area is unwired) */
if (VM_MAPENT_ISWIRED(prev_entry))
goto step3;
@@ -1978,6 +1978,8 @@
/*
* uvm_map_pageable: sets the pageability of a range in a map.
*
+ * => wires map entries. should not be used for transient page locking.
+ * for that, use uvm_fault_wire()/uvm_fault_unwire() (see uvm_vslock()).
* => regions specified as not pageable require lock-down (wired) memory
* and page tables.
* => map must not be locked.
@@ -2024,10 +2026,8 @@
* handle wiring and unwiring separately.
*/
- if (new_pageable) { /* unwire */
-
+ if (new_pageable) { /* unwire */
UVM_MAP_CLIP_START(map, entry, start);
-
/*
* unwiring. first ensure that the range to be unwired is
* really wired down and that there are no holes.
@@ -2046,20 +2046,18 @@
}
/*
- * now decrement the wiring count for each region. if a region
- * becomes completely unwired, unwire its physical pages and
- * mappings.
+ * POSIX 1003.1b - a single munlock call unlocks a region,
+ * regardless of the number of mlock calls made on that
+ * region.
*
* Note, uvm_fault_unwire() (called via uvm_map_entry_unwire())
* does not lock the map, so we don't have to do anything
* special regarding locking here.
*/
-
entry = start_entry;
while ((entry != &map->header) && (entry->start < end)) {
UVM_MAP_CLIP_END(map, entry, end);
- entry->wired_count--;
- if (VM_MAPENT_ISWIRED(entry) == 0)
+ if (VM_MAPENT_ISWIRED(entry))
uvm_map_entry_unwire(map, entry);
entry = entry->next;
}
@@ -2080,7 +2078,7 @@
* be wired and increment its wiring count.
*
* 2: we downgrade to a read lock, and call uvm_fault_wire to fault
- * in the pages for any newly wired area (wired_count is 1).
+ * in the pages for any newly wired area (wired_count == 1).
*
* downgrading to a read lock for uvm_fault_wire avoids a possible
* deadlock with another thread that may have faulted on one of
@@ -2235,8 +2233,8 @@
if (flags == 0) { /* unwire */
/*
- * Decrement the wiring count on the entries. If they
- * reach zero, unwire them.
+ * POSIX 1003.1b -- munlockall unlocks all regions,
+ * regardless of how many times mlockall has been called.
*
* Note, uvm_fault_unwire() (called via uvm_map_entry_unwire())
* does not lock the map, so we don't have to do anything
@@ -2244,11 +2242,8 @@
*/
for (entry = map->header.next; entry != &map->header;
entry = entry->next) {
- if (VM_MAPENT_ISWIRED(entry)) {
- entry->wired_count--;
- if (VM_MAPENT_ISWIRED(entry) == 0)
- uvm_map_entry_unwire(map, entry);
- }
+ if (VM_MAPENT_ISWIRED(entry))
+ uvm_map_entry_unwire(map, entry);
}
map->flags &= ~VM_MAP_WIREFUTURE;
vm_map_unlock(map);
@@ -2286,7 +2281,7 @@
* need to be created. then we increment its wiring count.
*
* 3: we downgrade to a read lock, and call uvm_fault_wire to fault
- * in the pages for any newly wired area (wired count is 1).
+ * in the pages for any newly wired area (wired_count == 1).
*
* downgrading to a read lock for uvm_fault_wire avoids a possible
* deadlock with another thread that may have faulted on one of