Source-Changes-D archive
Re: CVS commit: src/sys
On 27/07/2015 20:28, matthew green wrote:
> "Maxime Villard" writes:
>> Module Name: src
>> Committed By: maxv
>> Date: Mon Jul 27 09:24:28 UTC 2015
>>
>> Modified Files:
>> src/sys/kern: subr_kmem.c
>> src/sys/uvm: files.uvm
>> Removed Files:
>> src/sys/uvm: uvm_kmguard.c uvm_kmguard.h
>>
>> Log Message:
>> Several changes and improvements in KMEM_GUARD:
>> - merge uvm_kmguard.{c,h} into subr_kmem.c. It is only used there, and
>> makes it more consistent. Also, it allows us to enable KMEM_GUARD
>> without enabling DEBUG.
>> - rename uvm_kmguard_XXX to kmem_guard_XXX, for consistency
>> - improve kmem_guard_alloc() so that it supports allocations bigger than
>> PAGE_SIZE
>> - remove the canary value, and use directly the kmem header as underflow
>> pattern.
>> - fix some comments
>>
>> (The UAF fifo is disabled for the moment; we actually need to register
>> the va and its size, and add a weight support not to consume too much
>> memory.)
>
> thanks for extending KMEM_GUARD beyond PAGE_SIZE. this will be
> quite helpful, even if it now requires a lot more memory when
> using kmguard :)
Not really. In fact, I don't really understand why the comment in
subr_kmem's header says it is "very expensive", since even on my
slow laptop VM, I see no performance regression with KMEM_GUARD
enabled.
It is true that, now that the allocations can take several pages, the
fifo (which only holds the original va) can queue a lot of memory. My
plan was to add a field to the guard structure to track the weight of
the fifo, and to skip inserting enormous elements into it (roughly
sketched below).
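Something like this, roughly (the field and function names are
hypothetical, this is not committed code):

#include <sys/types.h>
#include <stdbool.h>

struct kmem_guard {
	u_int	kg_depth;	/* number of fifo slots, 0 = no fifo */
	size_t	kg_weight;	/* bytes currently queued in the fifo */
	size_t	kg_maxweight;	/* tunable budget */
	/* ... fifo slots ... */
};

/*
 * Returns false when queueing 'size' bytes would blow the budget; the
 * caller then frees the va immediately instead of queueing it.
 * (Locking/atomics omitted in this sketch.)
 */
static bool
kmem_guard_enqueue(struct kmem_guard *kg, size_t size)
{
	if (kg->kg_depth == 0 ||
	    kg->kg_weight + size > kg->kg_maxweight)
		return false;
	kg->kg_weight += size;
	/* ... actually insert the entry into the fifo ... */
	return true;
}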
I also planned to always enable KMEM_GUARD with DEBUG, even when depth=0.
depth is only used for the fifo; having no fifo does not prevent the
guard page from being there and working. Then, depending on how much
memory you have, you can raise or lower that value to catch more or
fewer UAFs.
>
> was this change presented for review anywhere before commit?
>
> i'm not convinced that moving directly into subr_kmem.c is the
> right thing or the only way to solve the problem of depending
> on DEBUG. could you separate them again, please?
I'm not convinced keeping them separate really makes sense; as I
said, it is kmem-specific, and gathering it with the kmem code
reduces fragmentation (and avoids having to hardcode things when you
want to enable it without enabling DEBUG). It was also a way to
replace the canary with the kmem header.
>
> the failure mode in the new kmem_guard_alloc() is also expensive,
> in that instead of knowing how to unwind itself, it calls into
> UVM to figure out all the details (which involves pmap_extract()
> and a bunch more accounting.) it would probably be best to
> replace this with a call to uvm_pglistalloc() in the new version,
> and then you'll never have a partial allocation failure to deal
> with and you only make a single call to allocate these pages.
Yes, probably. But I'm more concerned about fixing the fifo.
We actually need to register both the va and its size in the fifo, and I
haven't found an atomic operation that can update two words at once.
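One way I can think of to keep a single atomic swap (just an idea, nothing
like this is in the tree, and the names below are made up): the guard va is
page-aligned, so its low PAGE_SHIFT bits are always zero, and the size in
pages is small, so both can be packed into one word:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/atomic.h>

#define	GUARD_PACK(va, npgs)	((u_long)(va) | (u_long)(npgs))
#define	GUARD_VA(w)		((vaddr_t)((w) & ~(PAGE_SIZE - 1)))
#define	GUARD_NPGS(w)		((w) & (PAGE_SIZE - 1))

static void
guard_fifo_rotate(volatile u_long *slot, vaddr_t va, size_t npgs)
{
	u_long old;

	KASSERT((va & (PAGE_SIZE - 1)) == 0);
	KASSERT(npgs > 0 && npgs < PAGE_SIZE);

	/* publish the new (va, npgs) pair, pull out the evicted one */
	old = atomic_swap_ulong(slot, GUARD_PACK(va, npgs));
	if (old != 0) {
		/*
		 * Really unmap/free the evicted allocation here:
		 * GUARD_VA(old), GUARD_NPGS(old) << PAGE_SHIFT bytes.
		 */
	}
}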
>
> also, about removing the canary -- can you explain this a little
> more, i'm not sure i understand this part.
Just take a look at subr_kmem's header:
* |CHUNK|CHUNK|CHUNK|CHUNK|CHUNK|CHUNK|CHUNK|CHUNK|CHUNK|CHUNK|CHUNK|
* +-----+-----+-----+-----+-----+-----+-----+-----+-----+---+-+--+--+
* |/////|     |     |     |     |     |     |     |     |   |*|**|UU|
* |/HSZ/|     |     |     |     |     |     |     |     |   |*|**|UU|
* |/////|     |     |     |     |     |     |     |     |   |*|**|UU|
* +-----+-----+-----+-----+-----+-----+-----+-----+-----+---+-+--+--+
* |Size |    Buffer usable by the caller (requested size)   |RedZ|Unused\
The same applies to kmem_guard_alloc(): right before the returned
pointer, there's a kmem_header structure that indicates the size of the
allocated memory; the kernel compares this size with the one given in
kmem_free().
This header can therefore be used as an underflow pattern: if the caller
writes at p[-1], the size gets modified, and the kernel detects the
mismatch at free time.
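A simplified userland sketch of that mechanism (not the actual kernel
code; guarded_alloc/guarded_free are made-up names):

#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

struct kmem_header {
	size_t size;
} __attribute__((aligned(8)));

static void *
guarded_alloc(size_t size)
{
	struct kmem_header *hd = malloc(sizeof(*hd) + size);

	if (hd == NULL)
		return NULL;
	hd->size = size;
	return hd + 1;	/* caller's buffer starts right after the header */
}

static void
guarded_free(void *p, size_t size)
{
	struct kmem_header *hd = (struct kmem_header *)p - 1;

	/* an underflow that reached the header trips this check */
	assert(hd->size == size);
	free(hd);
}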
But I know there's still an issue here:
struct kmem_header {
	size_t size;
} __aligned(KMEM_ALIGN);
KMEM_ALIGN being 8, on 32-bit systems the compiler adds 4 bytes of
padding at the end of the structure, which means that if the caller
writes at p[-1], the kernel won't detect it.
I guess there's a magic compiler option somewhere that could make the
compiler pad before the field instead (so that the 'size' field sits at
the end of the structure).
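To illustrate the problem (hypothetical layout, not a proposed patch): on
32-bit, sizeof(size_t) is 4 and the 8-byte alignment makes the compiler
put the padding after 'size', exactly where p[-1] lands; declaring the
padding explicitly before the field would move 'size' right below the
caller's pointer:

#include <stddef.h>
#include <stdint.h>

#define KMEM_ALIGN	8

struct kmem_header {			/* current layout, 32-bit */
	size_t size;			/* offsets 0..3 */
					/* offsets 4..7: padding, p[-1] hits this */
} __attribute__((aligned(KMEM_ALIGN)));

struct kmem_header_padded {		/* hypothetical alternative */
#if !defined(_LP64)
	uint32_t pad;			/* explicit padding first ... */
#endif
	size_t size;			/* ... so 'size' sits right below p */
} __attribute__((aligned(KMEM_ALIGN)));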
>
> thanks.
>
>
> .mrg.
>