Subject: Re: Compressed cache system [Re: Google Summer of Code project ideas]
To: None <tech-embed@netbsd.org>
From: Matt Fleming <mattjfleming@googlemail.com>
List: tech-embed
Date: 04/23/2006 00:45:17
Hubert Feyrer wrote:
> On Sun, 23 Apr 2006, Matt Fleming wrote:
>> Now this point has got me to thinking. Is the way that an object is
>> written to backing store dictated by its uvm_pagerops structure? And
>> if so, would creating a pseudo device with its own pager operations be
>> a reasonable way to go about handling writing to the compressed cache?
>> Surely this way the pseudo device would handle the compression and
>> decompression of the pages for the object.
>>
>> Although I suppose it would have to be in addition to the uvm_pagerops
>> struct that an object already had (or else how would the compressed
>> cache know how to write to the _real_ backing store?).
>>
>> Does this seem like a feasible idea?
>
> I know zero about UVM, so I cannot comment.
>
> But I guess you'll hit a problem that pops up with vnd(4) with
> VND_COMPRESSION, too: how are you going to manage your compressed storage?
> "Input" data is fixed size (== pagesize), but "output" size depends on
> the compression factor, and thus you'll need some way to arrange things
> with a non-fixed offset.
A page of compressed cache consists of 'fragments', each holding one
compressed page. The compressed cache limits the amount of fragmented
space by compacting fragments together. I'm not quite sure whether that
answers your question...
> Worse, if you replace a page with something
> that compresses worse, you may not be able to use available space.
>
I think the pages are tested beforehand to see whether compression is
'worth' the effort, i.e. whether the compressed size will be smaller
than the current (uncompressed) size.