Subject: Multiple faulting
To: None <tech-kern@netbsd.org>
From: Charles M. Hannum <root@ihack.net>
List: tech-kern
Date: 03/25/1999 05:32:31
So I went ahead and implemented my suggestion -- that is, passing the
fault/access type through uvm_fault() to pmap_enter(), so that on
systems with R/M emulation we can preset the emulated bits and avoid
taking another fault to set them.
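
For reference, the interface change is a single new argument giving the
access type that caused the fault (condensed from the arm32 patch below;
`ftype' is 0 when the mapping is not being entered from a fault):

	void
	pmap_enter(pmap, va, pa, prot, wired, ftype)
		pmap_t pmap;
		vm_offset_t va;
		vm_offset_t pa;
		vm_prot_t prot;
		boolean_t wired;
		vm_prot_t ftype;	/* new: access type of the fault */

Inside pmap_enter(), the fault type presets the emulation state for
managed pages, so the access that caused the fault can proceed without
faulting again:

	ftype &= prot;
	if (ftype & VM_PROT_WRITE) {
		/* preset `handled' and `modified'; grant write access */
		npte |= L2_SPAGE | PT_AP(AP_W);
		vm_physmem[bank].pmseg.attrs[off] |= PT_H | PT_M;
	} else if (ftype & VM_PROT_ALL) {
		/* preset `handled' only; the first write still faults */
		npte |= L2_SPAGE;
		vm_physmem[bank].pmseg.attrs[off] |= PT_H;
	} else
		npte |= L2_INVAL;	/* emulation fault on first use */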
While doing this, I noticed several things:
* The VM system itself can cause an extra fault during the handling of
copy-on-write. This means that the total possible fault count for a
given access was actually at least *three*: the initial fault, the
copy-on-write fault, and the R/M emulation fault. That's pretty
suckful.
* The copy-on-write checks in uvm_fault() force the page to be mapped
with just PROT_READ. Depending on the pmap implementation, this may
also cause the page to be non-executable. That would be poor; we'd
end up taking another fault and going through uvm_fault() again, even
though no copy-on-write action needs to be performed.
(I don't think there are any pmaps which currently maintain the read
and execute bits separately, though some of them certainly could.)
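
If a pmap did keep them separate, a hypothetical fix would be for
uvm_fault() to mask off only the write bit on the copy-on-write path,
rather than forcing the mapping down to just VM_PROT_READ; e.g., for
the call in the patch below:

	/* hypothetical: drop only write, preserving read and execute */
	pmap_enter(..., VM_PAGE_TO_PHYS(pages[lcv]),
	    UVM_ET_ISCOPYONWRITE(ufi.entry) ?
	    (enter_prot & ~VM_PROT_WRITE) : enter_prot,
	    wired, access_type);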
* Because uvm_fault_wire() and uvm_vslock() have no way to ensure that
the page is not going to take R/M emulation faults, it's actually
possible for such faults to occur inside interrupt handlers, while
munging I/O buffers. This isn't necessarily a problem, but the machine
dependent code needs to take this into account and make sure it
doesn't break anything.
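
On arm32, the existing interrupt-depth check in the fault handler had
to move for this; it now fires only once the R/M emulation cases have
been excluded, roughly (condensed from fault.c in the patch below):

	if (pmap_modified_emulation(map->pmap, va) ||
	    pmap_handled_emulation(map->pmap, va))
		goto out;	/* emulation faults are OK at interrupt time */
	#ifdef DIAGNOSTIC
	/* ... but anything past this point is not. */
	if (current_intr_depth > 0)
		panic("Non-emulated page fault with intr_depth > 0");
	#endif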
* If file system buffers were mapped using pmap_enter(), they would
not actually be accessible yet, due to the R/M emulation. Since
they are then pagemove()d behind the pmap's back, the R/M emulation
state would become highly confused, causing substantial lossage. The
Alpha port slinks around this by using pmap_kenter(), which
explicitly does not do R/M emulation. Using the new `fault type'
argument to pmap_enter(), I have arranged for the buffers to always
be immediately accessible, so it is not necessary to use
pmap_kenter().
(Indeed, this may obviate the `need' for pmap_kenter() completely.)
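
The buffer cache mapping in cpu_startup() then becomes (from the
machdep.c change below):

	pmap_enter(kernel_map->pmap, curbuf,
	    VM_PAGE_TO_PHYS(pg), VM_PROT_READ|VM_PROT_WRITE,
	    TRUE, VM_PROT_READ|VM_PROT_WRITE);

Passing the full access type as the fault type presets the R/M state,
so the pages are immediately readable and writable, and pagemove() no
longer confuses the emulation.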
* There was an absolutely horrible hack in the arm32 code. The space
for the pmseg data was allocated in pmap_init() using
uvm_km_zalloc(). Internally, this uses pmap_enter() to install the
mappings. However, since the pmseg pointers were not yet set up,
this caused the p->v list pointer in the `new mapping' case of
pmap_enter() to point into page 0. This is very bad. The solution
I chose was to allocate the space inside pmap_bootstrap(), using
uvm_pageboot_alloc(). This steals the memory from the physsegs, and
so the pages will always be considered `unmanaged' by pmap_enter()
and mapped immediately.
XXX Currently this causes the pages to be mapped uncacheable. This
should probably be changed. Note that this was already the case for
the vm_pages.
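
The bootstrap-time allocation amounts to the following (from
pmap_bootstrap() in the patch below), with pmap_init() then simply
carving the two arrays up among the physsegs:

	size = npages * sizeof(struct pv_entry);
	boot_pvent = (struct pv_entry *)uvm_pageboot_alloc(size);
	bzero(boot_pvent, size);
	size = npages * sizeof(char);
	boot_attrs = (char *)uvm_pageboot_alloc(size);
	bzero(boot_attrs, size);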
I include below my current implementation (for arm32 only, though it
should be easy to modify the alpha port).
-----8<-----snip-----8<-----snip-----8<-----snip-----8<-----snip-----8<-----
Index: arch/arm32/arm32/bus_dma.c
===================================================================
RCS file: /cvsroot/src/sys/arch/arm32/arm32/bus_dma.c,v
retrieving revision 1.11
diff -c -2 -r1.11 bus_dma.c
*** bus_dma.c 1998/09/21 22:53:35 1.11
--- bus_dma.c 1999/03/25 10:01:50
***************
*** 527,531 ****
panic("_bus_dmamem_map: size botch");
pmap_enter(pmap_kernel(), va, addr,
! VM_PROT_READ | VM_PROT_WRITE, TRUE);
/*
* If the memory must remain coherent with the
--- 527,531 ----
panic("_bus_dmamem_map: size botch");
pmap_enter(pmap_kernel(), va, addr,
! VM_PROT_READ | VM_PROT_WRITE, TRUE, 0);
/*
* If the memory must remain coherent with the
Index: arch/arm32/arm32/fault.c
===================================================================
RCS file: /cvsroot/src/sys/arch/arm32/arm32/fault.c,v
retrieving revision 1.39
diff -c -2 -r1.39 fault.c
*** fault.c 1999/03/23 18:02:02 1.39
--- fault.c 1999/03/25 10:01:50
***************
*** 201,217 ****
}
- #ifdef DIAGNOSTIC
- if (current_intr_depth > 0) {
- #ifdef DDB
- printf("Fault with intr_depth > 0\n");
- report_abort(NULL, fault_status, fault_address, fault_pc);
- kdb_trap(-1, frame);
- return;
- #else
- panic("Fault with intr_depth > 0");
- #endif /* DDB */
- }
- #endif /* DIAGNOSTIC */
-
/* More debug stuff */
--- 201,204 ----
***************
*** 421,424 ****
--- 408,424 ----
pmap_handled_emulation(map->pmap, va))
goto out;
+
+ #ifdef DIAGNOSTIC
+ if (current_intr_depth > 0) {
+ #ifdef DDB
+ printf("Non-emulated page fault with intr_depth > 0\n");
+ report_abort(NULL, fault_status, fault_address, fault_pc);
+ kdb_trap(-1, frame);
+ return;
+ #else
+ panic("Fault with intr_depth > 0");
+ #endif /* DDB */
+ }
+ #endif /* DIAGNOSTIC */
#if defined(UVM)
Index: arch/arm32/arm32/machdep.c
===================================================================
RCS file: /cvsroot/src/sys/arch/arm32/arm32/machdep.c,v
retrieving revision 1.61
diff -c -2 -r1.61 machdep.c
*** machdep.c 1999/03/06 01:29:53 1.61
--- machdep.c 1999/03/25 10:01:51
***************
*** 409,413 ****
pmap_enter(pmap_kernel(),
(vm_offset_t)((caddr_t)msgbufaddr + loop * NBPG),
! msgbufphys + loop * NBPG, VM_PROT_ALL, TRUE);
initmsgbuf(msgbufaddr, round_page(MSGBUFSIZE));
--- 409,414 ----
pmap_enter(pmap_kernel(),
(vm_offset_t)((caddr_t)msgbufaddr + loop * NBPG),
! msgbufphys + loop * NBPG, VM_PROT_READ|VM_PROT_WRITE, TRUE,
! VM_PROT_READ|VM_PROT_WRITE);
initmsgbuf(msgbufaddr, round_page(MSGBUFSIZE));
***************
*** 471,478 ****
while (curbufsize) {
! if ((pg = uvm_pagealloc(NULL, 0, NULL)) == NULL)
! panic("cpu_startup: More RAM needed for buffer cache");
pmap_enter(kernel_map->pmap, curbuf,
! VM_PAGE_TO_PHYS(pg), VM_PROT_ALL, TRUE);
curbuf += PAGE_SIZE;
curbufsize -= PAGE_SIZE;
--- 472,481 ----
while (curbufsize) {
! pg = uvm_pagealloc(NULL, 0, NULL);
! if (pg == NULL)
! panic("cpu_startup: not enough memory for buffer cache");
pmap_enter(kernel_map->pmap, curbuf,
! VM_PAGE_TO_PHYS(pg), VM_PROT_READ|VM_PROT_WRITE,
! TRUE, VM_PROT_READ|VM_PROT_WRITE);
curbuf += PAGE_SIZE;
curbufsize -= PAGE_SIZE;
***************
*** 489,493 ****
* but has no physical memory allocated for it.
*/
-
curbuf = (vm_offset_t)buffers + loop * MAXBSIZE;
curbufsize = CLBYTES * (loop < residual ? base+1 : base);
--- 492,495 ----
***************
*** 535,541 ****
*/
callfree = callout;
-
for (loop = 1; loop < ncallout; ++loop)
callout[loop - 1].c_next = &callout[loop];
#if defined(UVM)
--- 537,543 ----
*/
callfree = callout;
for (loop = 1; loop < ncallout; ++loop)
callout[loop - 1].c_next = &callout[loop];
+ callout[loop - 1].c_next = NULL;
#if defined(UVM)
***************
*** 550,554 ****
* Set up buffers, so they can be used to read disk labels.
*/
-
bufinit();
--- 552,555 ----
Index: arch/arm32/arm32/mem.c
===================================================================
RCS file: /cvsroot/src/sys/arch/arm32/arm32/mem.c,v
retrieving revision 1.8
diff -c -2 -r1.8 mem.c
*** mem.c 1999/02/10 17:03:26 1.8
--- mem.c 1999/03/25 10:01:51
***************
*** 126,130 ****
pmap_enter(pmap_kernel(), (vm_offset_t)memhook,
trunc_page(v), uio->uio_rw == UIO_READ ?
! VM_PROT_READ : VM_PROT_WRITE, TRUE);
o = uio->uio_offset & PGOFSET;
c = min(uio->uio_resid, (int)(NBPG - o));
--- 126,130 ----
pmap_enter(pmap_kernel(), (vm_offset_t)memhook,
trunc_page(v), uio->uio_rw == UIO_READ ?
! VM_PROT_READ : VM_PROT_WRITE, TRUE, 0);
o = uio->uio_offset & PGOFSET;
c = min(uio->uio_resid, (int)(NBPG - o));
Index: arch/arm32/arm32/pmap.c
===================================================================
RCS file: /cvsroot/src/sys/arch/arm32/arm32/pmap.c,v
retrieving revision 1.47
diff -c -2 -r1.47 pmap.c
*** pmap.c 1999/03/24 02:45:27 1.47
--- pmap.c 1999/03/25 10:01:51
***************
*** 60,63 ****
--- 60,64 ----
#include "opt_pmap_debug.h"
#include "opt_uvm.h"
+ #include "opt_ddb.h"
#include <sys/types.h>
***************
*** 599,603 ****
{
while (spa < epa) {
! pmap_enter(pmap_kernel(), va, spa, prot, FALSE);
va += NBPG;
spa += NBPG;
--- 600,604 ----
{
while (spa < epa) {
! pmap_enter(pmap_kernel(), va, spa, prot, FALSE, 0);
va += NBPG;
spa += NBPG;
***************
*** 622,625 ****
--- 623,629 ----
extern vm_offset_t physical_freeend;
+ struct pv_entry *boot_pvent;
+ char *boot_attrs;
+
void
pmap_bootstrap(kernel_l1pt, kernel_ptpt)
***************
*** 633,636 ****
--- 637,641 ----
vm_size_t isize;
#endif
+ vsize_t size;
kernel_pmap = &kernel_pmap_store;
***************
*** 651,654 ****
--- 656,660 ----
#endif
+ npages = 0;
loop = 0;
while (loop < bootconfig.dramblocks) {
***************
*** 677,680 ****
--- 683,687 ----
atop(istart + isize), atop(istart),
atop(istart + isize), VM_FREELIST_ISADMA);
+ npages += atop(istart + isize) - atop(istart);
/*
***************
*** 691,694 ****
--- 698,702 ----
atop(istart), atop(start),
atop(istart), VM_FREELIST_DEFAULT);
+ npages += atop(istart) - atop(start);
}
***************
*** 706,725 ****
--- 714,741 ----
atop(end), atop(istart + isize),
atop(end), VM_FREELIST_DEFAULT);
+ npages += atop(end) - atop(istart + isize);
}
} else {
uvm_page_physload(atop(start), atop(end),
atop(start), atop(end), VM_FREELIST_DEFAULT);
+ npages += atop(end) - atop(start);
}
#else /* NISADMA > 0 */
uvm_page_physload(atop(start), atop(end),
atop(start), atop(end), VM_FREELIST_DEFAULT);
+ npages += atop(end) - atop(start);
#endif /* NISADMA > 0 */
#else /* UVM */
vm_page_physload(atop(start), atop(end),
atop(start), atop(end));
+ npages += atop(end) - atop(start);
#endif /* UVM */
++loop;
}
+ #ifdef MYCROFT_HACK
+ printf("npages = %ld\n", npages);
+ #endif
+
virtual_start = KERNEL_VM_BASE;
virtual_end = virtual_start + KERNEL_VM_SIZE - 1;
***************
*** 747,750 ****
--- 763,774 ----
cpu_tlb_flushD_SE(hydrascratch.virtual);
#endif /* NHYDRABUS */
+
+ size = npages * sizeof(struct pv_entry);
+ boot_pvent = (struct pv_entry *)uvm_pageboot_alloc(size);
+ bzero(boot_pvent, size);
+ size = npages * sizeof(char);
+ boot_attrs = (char *)uvm_pageboot_alloc(size);
+ bzero(boot_attrs, size);
+
cpu_cache_cleanD();
}
***************
*** 763,772 ****
pmap_init()
{
- vm_size_t s;
- vm_offset_t addr;
int lcv;
! npages = physmem;
! printf("Number of pages to handle = %ld\n", npages);
/*
--- 787,795 ----
pmap_init()
{
int lcv;
! #ifdef MYCROFT_HACK
! printf("physmem = %d\n", physmem);
! #endif
/*
***************
*** 777,815 ****
* the memory that is useable in a user process.
*/
-
avail_start = 0;
avail_end = physmem * NBPG;
! npages = 0;
! for (lcv = 0 ; lcv < vm_nphysseg ; lcv++)
! npages += (vm_physmem[lcv].end - vm_physmem[lcv].start);
! s = (vm_size_t) (sizeof(struct pv_entry) * npages + npages);
! s = round_page(s);
! #if defined(UVM)
! addr = (vm_offset_t)uvm_km_zalloc(kernel_map, s);
! #else
! addr = (vm_offset_t)kmem_alloc(kernel_map, s);
! #endif
! if (addr == NULL)
! panic("pmap_init");
!
! /* allocate pv_entry stuff first */
! for (lcv = 0 ; lcv < vm_nphysseg ; lcv++) {
! vm_physmem[lcv].pmseg.pvent = (struct pv_entry *) addr;
! addr = (vm_offset_t)(vm_physmem[lcv].pmseg.pvent +
! (vm_physmem[lcv].end - vm_physmem[lcv].start));
}
! /* allocate attrs next */
for (lcv = 0 ; lcv < vm_nphysseg ; lcv++) {
! vm_physmem[lcv].pmseg.attrs = (char *) addr;
! addr = (vm_offset_t)(vm_physmem[lcv].pmseg.attrs +
! (vm_physmem[lcv].end - vm_physmem[lcv].start));
}
TAILQ_INIT(&pv_page_freelist);
- /*#ifdef DEBUG*/
- printf("pmap_init: %lx bytes (%lx pgs)\n", s, npages);
- /*#endif*/
-
/*
* Now it is safe to enable pv_entry recording.
--- 800,823 ----
* the memory that is useable in a user process.
*/
avail_start = 0;
avail_end = physmem * NBPG;
! /* Set up pmap info for physsegs. */
! for (lcv = 0; lcv < vm_nphysseg; lcv++) {
! vm_physmem[lcv].pmseg.pvent = boot_pvent;
! boot_pvent += vm_physmem[lcv].end - vm_physmem[lcv].start;
! vm_physmem[lcv].pmseg.attrs = boot_attrs;
! boot_attrs += vm_physmem[lcv].end - vm_physmem[lcv].start;
}
! #ifdef MYCROFT_HACK
for (lcv = 0 ; lcv < vm_nphysseg ; lcv++) {
! printf("physseg[%d] pvent=%p attrs=%p start=%ld end=%ld\n",
! lcv,
! vm_physmem[lcv].pmseg.pvent, vm_physmem[lcv].pmseg.attrs,
! vm_physmem[lcv].start, vm_physmem[lcv].end);
}
+ #endif
TAILQ_INIT(&pv_page_freelist);
/*
* Now it is safe to enable pv_entry recording.
***************
*** 959,963 ****
pmap_enter(pmap_kernel(), va, pa,
! VM_PROT_READ | VM_PROT_WRITE, TRUE);
/* Revoke cacheability and bufferability */
--- 967,971 ----
pmap_enter(pmap_kernel(), va, pa,
! VM_PROT_READ | VM_PROT_WRITE, TRUE, 0);
/* Revoke cacheability and bufferability */
***************
*** 1142,1146 ****
/* Map zero page for the pmap. This will also map the L2 for it */
pmap_enter(pmap, 0x00000000, systempage.pv_pa,
! VM_PROT_READ, TRUE);
}
--- 1150,1154 ----
/* Map zero page for the pmap. This will also map the L2 for it */
pmap_enter(pmap, 0x00000000, systempage.pv_pa,
! VM_PROT_READ, TRUE, 0);
}
***************
*** 2015,2019 ****
void
! pmap_enter(pmap, va, pa, prot, wired)
pmap_t pmap;
vm_offset_t va;
--- 2023,2027 ----
void
! pmap_enter(pmap, va, pa, prot, wired, ftype)
pmap_t pmap;
vm_offset_t va;
***************
*** 2021,2024 ****
--- 2029,2033 ----
vm_prot_t prot;
boolean_t wired;
+ vm_prot_t ftype;
{
pt_entry_t *pte;
***************
*** 2027,2031 ****
struct pv_entry *pv = NULL;
u_int cacheable = 0;
! vm_offset_t opa = -1;
PDEBUG(5, printf("pmap_enter: V%08lx P%08lx in pmap %p prot=%08x, wired = %d\n",
--- 2036,2041 ----
struct pv_entry *pv = NULL;
u_int cacheable = 0;
! vm_offset_t opa;
! int flags;
PDEBUG(5, printf("pmap_enter: V%08lx P%08lx in pmap %p prot=%08x, wired = %d\n",
***************
*** 2106,2123 ****
opa = pmap_pte_pa(pte);
/* Are we mapping the same page ? */
if (opa == pa) {
- int flags;
-
/* All we must be doing is changing the protection */
PDEBUG(0, printf("Case 02 in pmap_enter (V%08lx P%08lx)\n",
va, pa));
- if ((bank = vm_physseg_find(atop(pa), &off)) != -1)
- pv = &vm_physmem[bank].pmseg.pvent[off];
- cacheable = (*pte) & PT_C;
-
/* Has the wiring changed ? */
! if (pv) {
flags = pmap_modify_pv(pmap, va, pv, 0, 0) & PT_W;
if (flags && !wired)
--- 2116,2132 ----
opa = pmap_pte_pa(pte);
+ #ifdef MYCROFT_HACK
+ printf("pmap_enter: pmap=%p va=%lx pa=%lx opa=%lx\n", pmap, va, pa, opa);
+ #endif
+
/* Are we mapping the same page ? */
if (opa == pa) {
/* All we must be doing is changing the protection */
PDEBUG(0, printf("Case 02 in pmap_enter (V%08lx P%08lx)\n",
va, pa));
/* Has the wiring changed ? */
! if ((bank = vm_physseg_find(atop(pa), &off)) != -1) {
! pv = &vm_physmem[bank].pmseg.pvent[off];
flags = pmap_modify_pv(pmap, va, pv, 0, 0) & PT_W;
if (flags && !wired)
***************
*** 2127,2131 ****
cacheable = pmap_nightmare1(pmap, pv, va, prot, cacheable);
! }
} else {
/* We are replacing the page with a new one. */
--- 2136,2141 ----
cacheable = pmap_nightmare1(pmap, pv, va, prot, cacheable);
! } else
! cacheable = *pte & PT_C;
} else {
/* We are replacing the page with a new one. */
***************
*** 2140,2170 ****
*/
if ((bank = vm_physseg_find(atop(opa), &off)) != -1) {
! int flags;
! flags = pmap_remove_pv(pmap, va,
! &vm_physmem[bank].pmseg.pvent[off]);
! if (flags & PT_W)
! pmap->pm_stats.wired_count--;
}
- /* Update the wiring stats for the new page */
- if (wired)
- ++pmap->pm_stats.wired_count;
! /*
! * Enter on the PV list if part of our managed memory
! */
! if ((bank = vm_physseg_find(atop(pa), &off)) != -1)
! pv = &vm_physmem[bank].pmseg.pvent[off];
! if (pv) {
! if (pmap_enter_pv(pmap, va, pv, 0))
! cacheable = PT_C;
! else
! cacheable = pmap_nightmare(pmap, pv, va, prot);
! } else
! cacheable = 0;
}
} else {
/* pte is not valid so we must be hooking in a new page */
-
++pmap->pm_stats.resident_count;
if (wired)
++pmap->pm_stats.wired_count;
--- 2150,2167 ----
*/
if ((bank = vm_physseg_find(atop(opa), &off)) != -1) {
! pv = &vm_physmem[bank].pmseg.pvent[off];
! flags = pmap_remove_pv(pmap, va, pv) & PT_W;
! if (flags)
! --pmap->pm_stats.wired_count;
}
! goto enter;
}
} else {
/* pte is not valid so we must be hooking in a new page */
++pmap->pm_stats.resident_count;
+
+ enter:
+ /* Update the wiring stats for the new page */
if (wired)
++pmap->pm_stats.wired_count;
***************
*** 2173,2203 ****
* Enter on the PV list if part of our managed memory
*/
! if ((bank = vm_physseg_find(atop(pa), &off)) != -1)
pv = &vm_physmem[bank].pmseg.pvent[off];
- if (pv) {
if (pmap_enter_pv(pmap, va, pv, 0))
cacheable = PT_C;
else
cacheable = pmap_nightmare(pmap, pv, va, prot);
! } else {
! /*
! * Assumption: if it is not part of our managed
! * memory then it must be device memory which
! * may be volatile.
! */
! if (bank == -1) {
cacheable = 0;
- PDEBUG(0, printf("pmap_enter: non-managed memory mapping va=%08lx pa=%08lx\n",
- va, pa));
- } else
- cacheable = PT_C;
- }
}
/* Construct the pte, giving the correct access */
!
! npte = (pa & PG_FRAME) | cacheable;
! if (pv)
! npte |= PT_B;
#ifdef DIAGNOSTIC
--- 2170,2189 ----
* Enter on the PV list if part of our managed memory
*/
! if ((bank = vm_physseg_find(atop(pa), &off)) != -1) {
pv = &vm_physmem[bank].pmseg.pvent[off];
if (pmap_enter_pv(pmap, va, pv, 0))
cacheable = PT_C;
else
cacheable = pmap_nightmare(pmap, pv, va, prot);
! } else
cacheable = 0;
}
+ #ifdef MYCROFT_HACK
+ printf("pmap_enter: pmap=%p va=%lx pa=%lx opa=%lx bank=%d off=%d pv=%p\n", pmap, va, pa, opa, bank, off, pv);
+ #endif
+
/* Construct the pte, giving the correct access */
! npte = (pa & PG_FRAME);
#ifdef DIAGNOSTIC
***************
*** 2205,2238 ****
printf("va=0 prot=%d\n", prot);
#endif /* DIAGNOSTIC */
-
- /* if (va >= VM_MIN_ADDRESS && va < VM_MAXUSER_ADDRESS && !wired)
- npte |= L2_INVAL;
- else*/
- npte |= L2_SPAGE;
! if (prot & VM_PROT_WRITE)
! npte |= PT_AP(AP_W);
! if (va >= VM_MIN_ADDRESS) {
! if (va < VM_MAXUSER_ADDRESS)
! npte |= PT_AP(AP_U);
! else if (va < VM_MAX_ADDRESS) { /* This must be a page table */
! npte |= PT_AP(AP_W);
! npte &= ~(PT_C | PT_B);
}
}
! if (va >= VM_MIN_ADDRESS && va < VM_MAXUSER_ADDRESS && pv) /* Inhibit write access for user pages */
! *pte = (npte & ~PT_AP(AP_W));
! else
! *pte = npte;
if (*pte == 0)
panic("oopss: *pte = 0 in pmap_enter() npte=%08x\n", npte);
! if (pv) {
! int flags;
!
! flags = npte & (PT_Wr | PT_Us);
if (wired)
flags |= PT_W;
--- 2191,2269 ----
printf("va=0 prot=%d\n", prot);
#endif /* DIAGNOSTIC */
! if (va >= VM_MIN_ADDRESS && va < VM_MAXUSER_ADDRESS)
! npte |= PT_AP(AP_U);
! if (va >= VM_MAXUSER_ADDRESS && va < VM_MAX_ADDRESS) {
! #ifdef MYCROFT_HACK
! printf("entering PT page\n");
! //if (pmap_initialized)
! // console_debugger();
! #endif
! /*
! * This is a page table page.
! * We don't track R/M information for page table pages, and
! * they can never be aliased, so we punt on some of the extra
! * handling below.
! */
! if (!wired)
! panic("pmap_enter: bogon bravo");
! if (!pv)
! panic("pmap_enter: bogon charlie");
! if (~prot & (VM_PROT_READ|VM_PROT_WRITE))
! panic("pmap_enter: bogon delta");
! pv = 0;
! }
! if (bank != -1) {
! #ifdef MYCROFT_HACK
! if (~prot & ftype) {
! printf("pmap_enter: bogon echo");
! console_debugger();
}
+ #endif
+ /*
+ * An obvious question here is why a page would be entered in
+ * response to a fault, but with permissions less than those
+ * requested. This can happen in the case of a copy-on-write
+ * page that's not currently mapped being accessed; the first
+ * fault will map the original page read-only, and another
+ * fault will be taken to do the copy and make it read-write.
+ */
+ ftype &= prot;
+ npte |= PT_B | cacheable;
+ if (ftype & VM_PROT_WRITE) {
+ npte |= L2_SPAGE | PT_AP(AP_W);
+ vm_physmem[bank].pmseg.attrs[off] |= PT_H | PT_M;
+ } else if (ftype & VM_PROT_ALL) {
+ npte |= L2_SPAGE;
+ vm_physmem[bank].pmseg.attrs[off] |= PT_H;
+ } else
+ npte |= L2_INVAL;
+ } else {
+ if (prot & VM_PROT_WRITE)
+ npte |= L2_SPAGE | PT_AP(AP_W);
+ else if (prot & VM_PROT_ALL)
+ npte |= L2_SPAGE;
+ else
+ npte |= L2_INVAL;
}
! #ifdef MYCROFT_HACK
! printf("pmap_enter: pmap=%p va=%lx pa=%lx prot=%x wired=%d ftype=%x npte=%08x\n", pmap, va, pa, prot, wired, ftype, npte);
! //if (pmap_initialized)
! // console_debugger();
! #endif
!
! *pte = npte;
if (*pte == 0)
panic("oopss: *pte = 0 in pmap_enter() npte=%08x\n", npte);
! if (bank != -1) {
! flags = 0;
! if (prot & VM_PROT_WRITE)
! flags |= PT_Wr;
! if (va >= VM_MIN_ADDRESS && va < VM_MAXUSER_ADDRESS)
! flags |= PT_Us;
if (wired)
flags |= PT_W;
***************
*** 2399,2404 ****
* panic if it is not.
*/
! if (!p->p_vmspace || !p->p_vmspace->vm_map.pmap)
! panic("pmap_pte: problem\n");
/*
* The pmap for the current process should be mapped. If it
--- 2430,2438 ----
* panic if it is not.
*/
! if (!p->p_vmspace || !p->p_vmspace->vm_map.pmap) {
! printf("pmap_pte: va=%08lx p=%p vm=%p\n",
! va, p, p->p_vmspace);
! console_debugger();
! }
/*
* The pmap for the current process should be mapped. If it
Index: arch/arm32/arm32/vm_machdep.c
===================================================================
RCS file: /cvsroot/src/sys/arch/arm32/arm32/vm_machdep.c,v
retrieving revision 1.34
diff -c -2 -r1.34 vm_machdep.c
*** vm_machdep.c 1999/01/03 02:23:28 1.34
--- vm_machdep.c 1999/03/25 10:01:52
***************
*** 247,251 ****
/* Map the system page */
pmap_enter(p->p_vmspace->vm_map.pmap, 0x00000000,
! systempage.pv_pa, VM_PROT_READ, TRUE);
}
--- 247,251 ----
/* Map the system page */
pmap_enter(p->p_vmspace->vm_map.pmap, 0x00000000,
! systempage.pv_pa, VM_PROT_READ, TRUE, 0);
}
Index: arch/arm32/ofw/ofrom.c
===================================================================
RCS file: /cvsroot/src/sys/arch/arm32/ofw/ofrom.c,v
retrieving revision 1.5
diff -c -2 -r1.5 ofrom.c
*** ofrom.c 1998/11/19 15:38:21 1.5
--- ofrom.c 1999/03/25 10:01:55
***************
*** 185,189 ****
pmap_enter(pmap_kernel(), (vm_offset_t)memhook,
trunc_page(v), uio->uio_rw == UIO_READ ?
! VM_PROT_READ : VM_PROT_WRITE, TRUE);
o = uio->uio_offset & PGOFSET;
c = min(uio->uio_resid, (int)(NBPG - o));
--- 185,189 ----
pmap_enter(pmap_kernel(), (vm_offset_t)memhook,
trunc_page(v), uio->uio_rw == UIO_READ ?
! VM_PROT_READ : VM_PROT_WRITE, TRUE, 0);
o = uio->uio_offset & PGOFSET;
c = min(uio->uio_resid, (int)(NBPG - o));
Index: uvm/uvm_device.c
===================================================================
RCS file: /cvsroot/src/sys/uvm/uvm_device.c,v
retrieving revision 1.11
diff -c -2 -r1.11 uvm_device.c
*** uvm_device.c 1998/11/19 05:23:26 1.11
--- uvm_device.c 1999/03/25 10:01:55
***************
*** 467,471 ****
" MAPPING: device: pm=0x%x, va=0x%x, pa=0x%x, at=%d",
ufi->orig_map->pmap, curr_va, (int)paddr, access_type);
! pmap_enter(ufi->orig_map->pmap, curr_va, paddr, access_type, 0);
}
--- 467,472 ----
" MAPPING: device: pm=0x%x, va=0x%x, pa=0x%x, at=%d",
ufi->orig_map->pmap, curr_va, (int)paddr, access_type);
! pmap_enter(ufi->orig_map->pmap, curr_va, paddr, access_type, 0,
! access_type);
}
Index: uvm/uvm_fault.c
===================================================================
RCS file: /cvsroot/src/sys/uvm/uvm_fault.c,v
retrieving revision 1.20
diff -c -2 -r1.20 uvm_fault.c
*** uvm_fault.c 1999/01/31 09:27:18 1.20
--- uvm_fault.c 1999/03/25 10:01:56
***************
*** 653,657 ****
* needs_copy is still true
*/
! enter_prot = enter_prot & ~VM_PROT_WRITE;
}
--- 653,657 ----
* needs_copy is still true
*/
! enter_prot &= ~VM_PROT_WRITE;
}
***************
*** 824,828 ****
VM_PAGE_TO_PHYS(anon->u.an_page),
(anon->an_ref > 1) ? VM_PROT_READ : enter_prot,
! (ufi.entry->wired_count != 0));
}
simple_unlock(&anon->an_lock);
--- 824,828 ----
VM_PAGE_TO_PHYS(anon->u.an_page),
(anon->an_ref > 1) ? VM_PROT_READ : enter_prot,
! (ufi.entry->wired_count != 0), 0);
}
simple_unlock(&anon->an_lock);
***************
*** 952,956 ****
VM_PAGE_TO_PHYS(pages[lcv]),
UVM_ET_ISCOPYONWRITE(ufi.entry) ?
! VM_PROT_READ : enter_prot, wired);
/*
--- 952,957 ----
VM_PAGE_TO_PHYS(pages[lcv]),
UVM_ET_ISCOPYONWRITE(ufi.entry) ?
! VM_PROT_READ : enter_prot, wired,
! access_type);
/*
***************
*** 1202,1206 ****
ufi.orig_map->pmap, ufi.orig_rvaddr, pg, 0);
pmap_enter(ufi.orig_map->pmap, ufi.orig_rvaddr, VM_PAGE_TO_PHYS(pg),
! enter_prot, wired);
/*
--- 1203,1207 ----
ufi.orig_map->pmap, ufi.orig_rvaddr, pg, 0);
pmap_enter(ufi.orig_map->pmap, ufi.orig_rvaddr, VM_PAGE_TO_PHYS(pg),
! enter_prot, wired, access_type);
/*
***************
*** 1624,1628 ****
ufi.orig_map->pmap, ufi.orig_rvaddr, pg, promote);
pmap_enter(ufi.orig_map->pmap, ufi.orig_rvaddr, VM_PAGE_TO_PHYS(pg),
! enter_prot, wired);
uvm_lock_pageq();
--- 1625,1629 ----
ufi.orig_map->pmap, ufi.orig_rvaddr, pg, promote);
pmap_enter(ufi.orig_map->pmap, ufi.orig_rvaddr, VM_PAGE_TO_PHYS(pg),
! enter_prot, wired, access_type);
uvm_lock_pageq();
Index: uvm/uvm_glue.c
===================================================================
RCS file: /cvsroot/src/sys/uvm/uvm_glue.c,v
retrieving revision 1.15
diff -c -2 -r1.15 uvm_glue.c
*** uvm_glue.c 1998/10/19 22:21:19 1.15
--- uvm_glue.c 1999/03/25 10:01:56
***************
*** 216,220 ****
if (pa == 0)
panic("chgkprot: invalid page");
! pmap_enter(pmap_kernel(), sva, pa&~1, prot, TRUE);
}
}
--- 216,220 ----
if (pa == 0)
panic("chgkprot: invalid page");
! pmap_enter(pmap_kernel(), sva, pa&~1, prot, TRUE, 0);
}
}
Index: uvm/uvm_km.c
===================================================================
RCS file: /cvsroot/src/sys/uvm/uvm_km.c,v
retrieving revision 1.18
diff -c -2 -r1.18 uvm_km.c
*** uvm_km.c 1998/10/18 23:49:59 1.18
--- uvm_km.c 1999/03/25 10:01:56
***************
*** 747,751 ****
#else
pmap_enter(map->pmap, loopva, VM_PAGE_TO_PHYS(pg),
! UVM_PROT_ALL, TRUE);
#endif
loopva += PAGE_SIZE;
--- 747,751 ----
#else
pmap_enter(map->pmap, loopva, VM_PAGE_TO_PHYS(pg),
! UVM_PROT_ALL, TRUE, 0);
#endif
loopva += PAGE_SIZE;
***************
*** 880,884 ****
#else
pmap_enter(map->pmap, loopva, VM_PAGE_TO_PHYS(pg),
! UVM_PROT_ALL, TRUE);
#endif
loopva += PAGE_SIZE;
--- 880,884 ----
#else
pmap_enter(map->pmap, loopva, VM_PAGE_TO_PHYS(pg),
! UVM_PROT_ALL, TRUE, 0);
#endif
loopva += PAGE_SIZE;
Index: uvm/uvm_page.c
===================================================================
RCS file: /cvsroot/src/sys/uvm/uvm_page.c,v
retrieving revision 1.15
diff -c -2 -r1.15 uvm_page.c
*** uvm_page.c 1998/10/18 23:50:00 1.15
--- uvm_page.c 1999/03/25 10:01:56
***************
*** 431,435 ****
#else
pmap_enter(pmap_kernel(), vaddr, paddr,
! VM_PROT_READ|VM_PROT_WRITE, FALSE);
#endif
--- 431,436 ----
#else
pmap_enter(pmap_kernel(), vaddr, paddr,
! VM_PROT_READ|VM_PROT_WRITE, FALSE,
! VM_PROT_READ|VM_PROT_WRITE);
#endif
Index: uvm/uvm_pager.c
===================================================================
RCS file: /cvsroot/src/sys/uvm/uvm_pager.c,v
retrieving revision 1.14
diff -c -2 -r1.14 uvm_pager.c
*** uvm_pager.c 1999/01/22 08:00:35 1.14
--- uvm_pager.c 1999/03/25 10:01:56
***************
*** 190,194 ****
pmap_enter(vm_map_pmap(pager_map), cva, VM_PAGE_TO_PHYS(pp),
! VM_PROT_DEFAULT, TRUE);
}
--- 190,194 ----
pmap_enter(vm_map_pmap(pager_map), cva, VM_PAGE_TO_PHYS(pp),
! VM_PROT_DEFAULT, TRUE, 0);
}
-----8<-----snip-----8<-----snip-----8<-----snip-----8<-----snip-----8<-----