tech-kern archive
Re: Patch: optimize kmem_alloc for frequent mid-sized allocations
On Sun, Jan 11, 2009 at 01:40:48PM +0000, Andrew Doran wrote:
>
> It is a problem for "mid-sized" allocations, up to PAGE_SIZE, because these
> occur frequently. The below patch introduces an additional level of caching,
> from the maximum size provided by the quantum cache up to PAGE_SIZE. It also
> adds debug code to check that allocated size == freed size.
So my understanding of the old version is that:
1) the minimum allocation size is 128 bytes
2) allocations above PAGE_SIZE directly map pages
3) allocations are done from some allocator with a bitmap free list(?),
   dynamically freeing pages when they are no longer used
With the modified version:
1) the minimum allocation size is 128 bytes
2) allocations above PAGE_SIZE directly map pages
3) allocations are done from free lists which contain power-of-2
   sized items generated by splitting up pages, the correct list
   being found by indexing with the size.
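A minimal user-space sketch of that arrangement (not the actual patch:
km_cache, km_index and friends are made-up names, malloc stands in for
the page-level allocator, and locking is ignored):

#include <stdio.h>
#include <stdlib.h>
#include <stddef.h>

#define KM_MIN_SHIFT	7			/* 128-byte minimum allocation */
#define KM_PAGE_SHIFT	12			/* assume 4k pages */
#define KM_PAGE_SIZE	((size_t)1 << KM_PAGE_SHIFT)
#define KM_NCACHES	(KM_PAGE_SHIFT - KM_MIN_SHIFT + 1)

struct km_item {
	struct km_item *next;
};

/* One free list per power-of-2 size: 128, 256, ..., 4096 bytes. */
static struct km_item *km_cache[KM_NCACHES];

/* Stand-in for the real page-level allocator. */
static void *
km_page_alloc(void)
{
	return malloc(KM_PAGE_SIZE);
}

/* Map a request size to the index of its power-of-2 free list. */
static int
km_index(size_t size)
{
	int idx = 0;

	while (idx < KM_NCACHES - 1 &&
	    ((size_t)1 << (KM_MIN_SHIFT + idx)) < size)
		idx++;
	return idx;
}

/*
 * Allocate: pop an item from the matching list, refilling the list by
 * splitting a whole page when it is empty.  Requests above KM_PAGE_SIZE
 * would be mapped directly and are not handled here.
 */
static void *
km_alloc(size_t size)
{
	int idx = km_index(size);
	size_t itemsz = (size_t)1 << (KM_MIN_SHIFT + idx);
	struct km_item *it;

	if (km_cache[idx] == NULL) {
		char *page = km_page_alloc();
		size_t off;

		if (page == NULL)
			return NULL;
		for (off = 0; off + itemsz <= KM_PAGE_SIZE; off += itemsz) {
			it = (struct km_item *)(page + off);
			it->next = km_cache[idx];
			km_cache[idx] = it;
		}
	}
	it = km_cache[idx];
	km_cache[idx] = it->next;
	return it;
}

int
main(void)
{
	/* Two allocations of different sizes land on different lists. */
	void *a = km_alloc(200);	/* rounded up to 256 */
	void *b = km_alloc(1000);	/* rounded up to 1024 */

	printf("a=%p (list %d), b=%p (list %d)\n",
	    a, km_index(200), b, km_index(1000));
	return 0;
}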
I'm not surprised the former sucks :-)
I think I'd go further :-)
1) Reduce the minimum allocation size further,
   but maybe not the rounding for larger items
   (say, 32-byte rounding below 256 bytes).
   Yes: you might start sharing cache lines, but that is OK unless you
   are unlucky and repeatedly dirty the same line on more than one CPU.
2) Divide PAGE_SIZE repeatedly by all the integers, then round down
   to an appropriate size. With 4k pages and 128-byte resolution
   you have 32 units/page, giving (size in 128-byte units / items per page):
   1/32, 2/16, 3/10, 4/8, 5/6, 6/5, 8/4, 10/3, 16/2, 32/1
   instead of just:
   1/32, 2/16, 4/8, 8/4, 16/2, 32/1
   (see the size-class sketch after this list)
3) Consider splitting multiple pages (e.g. 4) for sizes above (say) 1k
   to reduce wastage (the wastage comparison after this list shows the effect).
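A sketch of how those size classes could be generated (illustrative
numbers only: 4k pages, 128-byte resolution for the page-split classes,
plus the 32-byte rounding below 256 bytes from point 1):

#include <stdio.h>
#include <stddef.h>

#define PAGE_SZ		4096
#define UNIT		128		/* resolution for the page-split classes */
#define SMALL_STEP	32		/* finer rounding below 256 bytes */

int
main(void)
{
	size_t classes[64];
	size_t n = 0, sz, prev = 0;
	int i;

	/* Fine-grained classes: 32, 64, ..., 224 bytes. */
	for (sz = SMALL_STEP; sz < 256; sz += SMALL_STEP)
		classes[n++] = sz;

	/* Divide the page by every item count and round down to UNIT. */
	for (i = PAGE_SZ / 256; i >= 1; i--) {
		sz = (PAGE_SZ / i) & ~(size_t)(UNIT - 1);
		if (sz != prev) {
			classes[n++] = sz;
			prev = sz;
		}
	}

	for (i = 0; i < (int)n; i++)
		printf("%4zu bytes: %3zu per page\n",
		    classes[i], (size_t)PAGE_SZ / classes[i]);
	return 0;
}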
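And a quick wastage comparison for a few illustrative sizes above 1k,
carving single pages versus 4-page slabs (sizes that already divide the
page fairly evenly, like 1280, see little or no gain):

#include <stdio.h>

#define PAGE_SZ	4096

static void
waste(unsigned item, unsigned slab)
{
	unsigned n = slab / item;		/* items fitting in the slab */
	unsigned w = slab - n * item;		/* bytes left over */

	printf("item %4u, slab %5u: %2u items, %4u bytes (%4.1f%%) wasted\n",
	    item, slab, n, w, 100.0 * w / slab);
}

int
main(void)
{
	unsigned sizes[] = { 1152, 1280, 1408, 2176 };
	unsigned i;

	for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
		waste(sizes[i], PAGE_SZ);	/* single-page slab */
		waste(sizes[i], 4 * PAGE_SZ);	/* 4-page slab */
	}
	return 0;
}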
Investigate per-cpu free lists - or does pool_cache_xxxx() use them now?
Possibly allow code to pre-look up the free list for fixed-size items.
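A purely hypothetical sketch of what that pre-lookup could look like:
kmem_cache_lookup/alloc/free and struct foo are invented names, not an
existing kmem(9) interface, and malloc stands in for the real allocator
so the example is self-contained.

#include <stdlib.h>
#include <stddef.h>

struct km_sizecache {
	size_t sc_size;			/* stand-in for a real free-list handle */
};

static struct km_sizecache *
kmem_cache_lookup(size_t size)
{
	struct km_sizecache *sc = malloc(sizeof(*sc));

	if (sc != NULL)
		sc->sc_size = size;
	return sc;
}

static void *
kmem_cache_alloc(struct km_sizecache *sc, int km_flags)
{
	(void)km_flags;			/* KM_SLEEP/KM_NOSLEEP would go here */
	return malloc(sc->sc_size);
}

static void
kmem_cache_free(struct km_sizecache *sc, void *obj)
{
	(void)sc;
	free(obj);
}

/* A subsystem that always allocates the same struct resolves the list once. */
struct foo {
	int	f_refcnt;
	char	f_name[32];
};

static struct km_sizecache *foo_cache;

int
main(void)
{
	struct foo *fp;

	foo_cache = kmem_cache_lookup(sizeof(struct foo));
	if (foo_cache == NULL)
		return 1;
	fp = kmem_cache_alloc(foo_cache, 0);
	kmem_cache_free(foo_cache, fp);
	return 0;
}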
David
--
David Laight: david%l8s.co.uk@localhost