Re: CVS commit: src/sys/arch
Hi,
On 8 November 2011 05:50, Jean-Yves Migeon <jeanyves.migeon%free.fr@localhost>
wrote:
> On 06.11.2011 16:18, Cherry G. Mathew wrote:
>> Module Name: src
>> Committed By: cherry
>> Date: Sun Nov 6 15:18:19 UTC 2011
>>
>> Modified Files:
>> src/sys/arch/amd64/include: pmap.h
>> src/sys/arch/i386/include: pmap.h
>> src/sys/arch/x86/include: cpu.h
>> src/sys/arch/x86/x86: pmap.c
>> src/sys/arch/xen/x86: cpu.c x86_xpmap.c
>>
>> Log Message:
>> [merging from cherry-xenmp] make pmap_kernel() shadow PMD per-cpu and MP
>> aware.
>
> Some comments.
>
>> -#ifdef PAE
>> -/* Address of the static kernel's L2 page */
>> -pd_entry_t *pmap_kl2pd;
>> -paddr_t pmap_kl2paddr;
>> -#endif
>> -
>> -
> [snip]
>> #ifdef PAE
>> - uint32_t ci_pae_l3_pdirpa; /* PA of L3 PD */
>> + paddr_t ci_pae_l3_pdirpa; /* PA of L3 PD */
>> pd_entry_t * ci_pae_l3_pdir; /* VA pointer to L3 PD */
>> #endif
>>
> [snip]
>> +
>> +#if defined(PAE)
>> + ci->ci_pae_l3_pdir = (paddr_t *)uvm_km_alloc(kernel_map, PAGE_SIZE,
>> 0,
>> + UVM_KMF_WIRED | UVM_KMF_ZERO | UVM_KMF_NOWAIT);
>> +
>> + if (ci->ci_pae_l3_pdir == NULL) {
>> + panic("%s: failed to allocate L3 per-cpu PD for CPU %d\n",
>> + __func__, cpu_index(ci));
>> + }
>
> Please keep ci_pae_l3_pdirpa as a uint32_t and back out its paddr_t type.
>
> Unless you can convince me that your code will do the right thing(tm), the
> PDP for PAE should be allocated below the 4GiB physical boundary; unless I'm
> mistaken, uvm_km_alloc(kernel_map, ...) does not enforce that. Besides,
> %cr3 is still a 32-bit entity under PAE. Trying to stash a 64-bit paddr_t
> in %cr3 is a really bad idea.
>
> See comments http://nxr.netbsd.org/xref/src/sys/arch/x86/x86/pmap.c#1588
>
This doesn't hold true for Xen. See:
http://lxr.xensource.com/lxr/source/xen/arch/x86/mm.c#L509
Xen does the <4GB translation for the VM by maintaining its own
shadow. I just took the easier route (after initially using the x86
pmap <4GB %cr3 code), but if you're thinking of a unified Xen/non-Xen
implementation we'll have to re-think this one.
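
For reference, here's a rough sketch (not the branch code) of how a native
x86 path could pin the PAE L3 PD below 4GiB, assuming the stock
uvm_pglistalloc()/pmap_kenter_pa() interfaces; the function name and error
handling are made up for illustration:

/*
 * Illustration only: allocate one wired, zeroed page below the 4GiB
 * physical boundary for a PAE L3 PD, and map it into kernel VA.
 */
static vaddr_t
cpu_alloc_l3_below_4g(paddr_t *pap)
{
        struct pglist plist;
        struct vm_page *pg;
        vaddr_t va;

        /* Constrain the physical page to [0, 4GiB). */
        if (uvm_pglistalloc(PAGE_SIZE, 0, (paddr_t)0xffffffff,
            PAGE_SIZE, 0, &plist, 1, 0) != 0)
                return 0;
        pg = TAILQ_FIRST(&plist);

        /* Grab KVA and enter a wired kernel mapping for the page. */
        va = uvm_km_alloc(kernel_map, PAGE_SIZE, 0, UVM_KMF_VAONLY);
        if (va == 0) {
                uvm_pglistfree(&plist);
                return 0;
        }
        pmap_kenter_pa(va, VM_PAGE_TO_PHYS(pg),
            VM_PROT_READ | VM_PROT_WRITE, 0);
        pmap_update(pmap_kernel());
        memset((void *)va, 0, PAGE_SIZE);

        *pap = VM_PAGE_TO_PHYS(pg); /* fits in 32 bits by construction */
        return va;
}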
>> -/* flag to be used for kernel mappings: PG_u on Xen/amd64, 0 otherwise */
>> -#if defined(XEN) && defined(__x86_64__)
>> -#define PG_k PG_u
>> -#else
>> -#define PG_k 0
>> -#endif
>> -
>
> Are you sure that all the mapping sites are safe (PT/PD bits), given the
> pmap split between pmap/xen_xpmap.c?
>
No. I haven't reviewed this in that context; however, the use of PG_k in
Xen is exactly the same as it was before this commit.
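
Just so we're talking about the same thing, this is the (illustrative)
shape of how PG_k ends up in a kernel PTE; pte and pa are placeholders
here, and the point is only that the OR-in is identical whether PG_k
expands to PG_u or 0:

        /* Illustration only: PG_k is simply OR'd into kernel mappings. */
        pte = (pa & PG_FRAME) | PG_RW | PG_V | PG_k;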
> On a side note, please stick to KNF, especially for 80 columns.
>
Noted, thanks.
> [snip]
>> +void
>> +pmap_cpu_init_late(struct cpu_info *ci)
>> +{
>> +#if defined(PAE) || defined(__x86_64__)
>> + /*
>> + * The BP has already its own PD page allocated during early
>> + * MD startup.
>> + */
>> +
>> + if (ci == &cpu_info_primary)
>> + return;
>> +
>> + KASSERT(ci != NULL);
>
>> + ci->ci_pae_l3_pdirpa = vtophys((vaddr_t) ci->ci_pae_l3_pdir);
>> + KASSERT(ci->ci_pae_l3_pdirpa != 0);
>> +
>> + /* Initialise L2 entries 0 - 2: Point them to pmap_kernel() */
>> + ci->ci_pae_l3_pdir[0] =
>> + xpmap_ptom_masked(pmap_kernel()->pm_pdirpa[0]) | PG_V;
>> + ci->ci_pae_l3_pdir[1] =
>> + xpmap_ptom_masked(pmap_kernel()->pm_pdirpa[1]) | PG_V;
>> + ci->ci_pae_l3_pdir[2] =
>> + xpmap_ptom_masked(pmap_kernel()->pm_pdirpa[2]) | PG_V;
>> +#endif /* PAE */
>
> - are you sure that these have to be
> "defined(PAE) || defined(__x86_64__)" ?
>
> - please use "for (i = 0; i < PTP_LEVELS - 1; i++) { ... }". I will have to
> identify these blocks later when GENERIC includes both PAE and !PAE
> pmaps. PTP_LEVELS makes it a lot easier.
>
OK, will watch out for it - do you want to do this one yourself?
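
For concreteness, I read the suggestion as something like this
(illustrative only; the literal 3 stands in for whatever constant the
i386 PAE headers provide for the number of kernel L3 slots - the mail
above suggests PTP_LEVELS - 1):

        /* Point the first kernel L3 slots at pmap_kernel(), as above. */
        for (i = 0; i < 3; i++) {
                ci->ci_pae_l3_pdir[i] =
                    xpmap_ptom_masked(pmap_kernel()->pm_pdirpa[i]) | PG_V;
        }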
> That's all for the first quick review :) Thanks for starting the merge of
> your branch.
>
Thanks for this - looking forward to more :-)
--
~Cherry