tech-kern archive
Re: mlock() issues
On Fri, 22 Oct 2010 08:13:34 +0200
Michael van Elst <mlelstv%serpens.de@localhost> wrote:
> On Thu, Oct 21, 2010 at 10:40:15PM +0100, Sad Clouds wrote:
>
> > I do realise this reinvents kernel file cache, but it gives you a
> > lot more flexibility over what files get cached in memory and you
> > can plug custom algorithms over how files get evicted from cache.
>
> NIH is the driving force for many such decisions.
You make it sound like that's a really bad thing. In my opinion it's
good to invent, or even reinvent, because sometimes a "one wheel fits
all" solution is not as optimal or flexible as a custom-made one. For
example, take the HTTP protocol, which allows file requests to be
pipelined. A pipelined request for, say, 10 small files can be served
with a single writev() system call, provided those files are cached in
RAM; if you rely on the kernel file cache, you need to issue 10
separate write() calls.
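For illustration, a minimal sketch of that writev() path; the
cache_entry struct and serve_pipelined() are hypothetical names of
mine, only writev() itself is the real interface:

#include <sys/types.h>
#include <sys/uio.h>
#include <limits.h>
#include <stddef.h>

/* Hypothetical application cache entry: the file's contents,
 * already read into (and presumably mlock()ed in) user memory. */
struct cache_entry {
        void   *data;
        size_t  len;
};

/* Serve a batch of pipelined responses with a single system call.
 * Error handling and short-write retries are left to the caller. */
ssize_t
serve_pipelined(int sock, struct cache_entry **files, int nfiles)
{
        struct iovec iov[IOV_MAX];
        int i;

        for (i = 0; i < nfiles && i < IOV_MAX; i++) {
                iov[i].iov_base = files[i]->data;
                iov[i].iov_len  = files[i]->len;
        }
        return writev(sock, iov, i);    /* one syscall for all files */
}

With the kernel cache you would instead loop over the files, issuing
one write() or sendfile() per file.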
I ran some simple benchmarks, and they showed that on NetBSD copying
data from an application file cache was 2 to 4 times faster than
relying on the kernel file cache.
On Linux, copying data from an application file cache was 35 times
faster than using sendfile(). That result looks a bit bogus, but I ran
it a few times and got the same results...
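For clarity, the two Linux code paths being compared would look
roughly like this (a sketch, not the actual benchmark code; the
function names are mine, sendfile() is the Linux variant from
<sys/sendfile.h>):

#include <sys/sendfile.h>       /* Linux-specific */
#include <sys/types.h>
#include <unistd.h>

/* (a) copy from an application cache buffer already in user memory */
ssize_t
send_from_app_cache(int sock, const void *buf, size_t len)
{
        return write(sock, buf, len);
}

/* (b) have the kernel send the file directly from its own cache */
ssize_t
send_from_kernel_cache(int sock, int fd, size_t len)
{
        off_t off = 0;
        return sendfile(sock, fd, &off, len);
}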
Also, as far as I know, the only way to tell whether the kernel has a
file cached in memory is to call mincore(), which is expensive. With
an application cache that locks file pages, a simple hash-table lookup
will indicate whether the file is present in memory.
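A sketch of that mincore() check for comparison; all_resident() is a
made-up helper of mine, and note that Linux declares the vector as
unsigned char * while NetBSD uses char *:

#include <sys/mman.h>
#include <stdlib.h>
#include <unistd.h>

/* Return 1 if every page of the mapped region [addr, addr+len) is
 * resident, 0 if not, -1 on error.  addr must be page-aligned,
 * e.g. the start of an mmap()ed file. */
int
all_resident(void *addr, size_t len)
{
        long psize = sysconf(_SC_PAGESIZE);
        size_t npages = (len + psize - 1) / psize;
        char *vec = malloc(npages);     /* one byte per page */
        size_t i;
        int ret = 1;

        if (vec == NULL)
                return -1;
        if (mincore(addr, len, vec) == -1) {    /* the expensive part */
                free(vec);
                return -1;
        }
        for (i = 0; i < npages; i++)
                if ((vec[i] & 1) == 0) {
                        ret = 0;
                        break;
                }
        free(vec);
        return ret;
}

With the application cache, the same question is answered by a hash
lookup on the file name, with no system call at all.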