tech-crypto archive
Re: Changes to make /dev/*random better sooner
On Wed, Apr 09, 2014 at 04:36:26PM -0700, Dennis Ferguson wrote:
>
> I'd really like to understand what problem is fixed by this. It
> seems to make the code more expensive (significantly so since it
> precludes using timestamps in their cheapest-to-obtain form) but
> I'm missing the benefit this cost pays for.
It's no more expensive to sample a 64-bit cycle counter than a 32-bit one,
if you have both. Where do we have access to only a 32-bit cycle
counter? I admit the problem exists in theory; I am not at all sure
it exists in practice.
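For concreteness, on x86-64 a single rdtsc already returns the full 64-bit
counter in one instruction, so the wide sample costs exactly what a narrow
one would. An illustrative helper (not code from the tree):

	#include <stdint.h>

	/*
	 * Illustration only: rdtsc puts the low 32 bits in EAX and the
	 * high 32 bits in EDX, so reading the whole 64-bit counter is
	 * still one instruction.
	 */
	static inline uint64_t
	read_cycle_counter(void)
	{
		uint32_t lo, hi;

		__asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
		return ((uint64_t)hi << 32) | lo;
	}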
The problem that is fixed is multiple wraparound of the counter. I
saw this effect with many sources on many systems. Some sources really
deliver samples at very low rates, but those samples are nonetheless
worth collecting.
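To make the wrap problem concrete (the numbers here are made up for
illustration): a 32-bit counter running at 1 GHz wraps about every 4.3
seconds, so a source that fires once a minute sees the counter wrap
roughly 13 times between samples, and the truncated delta says nothing
about how much time actually passed:

	#include <stdint.h>
	#include <stdio.h>

	/*
	 * Illustration only: two samples of a 1 GHz counter taken 60
	 * seconds apart.  The true difference is 60e9 cycles, but the
	 * 32-bit view has wrapped ~13 times, so the visible delta is
	 * just the remainder modulo 2^32.
	 */
	int
	main(void)
	{
		uint64_t t0 = 1000000000ULL;		/* arbitrary start */
		uint64_t t1 = t0 + 60000000000ULL;	/* 60 s later at 1 GHz */

		uint32_t narrow_delta = (uint32_t)t1 - (uint32_t)t0;
		uint64_t wide_delta = t1 - t0;

		printf("wide delta:   %llu cycles\n",
		    (unsigned long long)wide_delta);
		printf("narrow delta: %u cycles (wrapped %llu times)\n",
		    narrow_delta, (unsigned long long)(wide_delta >> 32));
		return 0;
	}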
The multiple-wrap problem is one of several problems with the delta
estimator, which I am beginning to suspect is not really well suited
for entropy estimation of timestamps (it does work well on sequences
like disk block or memory page addresses).
The delta estimator seems to almost _always_ attribute entropy to timestamps
on modern machines, so it is a good thing it attributes only 1 bit. I am
more than open to suggestions of a better entropy estimator for timestamps;
I had some hopes for LZ but the data seem to show clearly that's not it.
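For readers who haven't looked at it, the general shape of a delta
estimator is roughly the following; this is a simplified sketch, not the
actual code in the tree. It credits at most one bit per sample, and only
when the value, its delta, and its second delta are all still changing:

	#include <stdint.h>

	struct delta_state {
		uint32_t last;
		uint32_t last_delta;
		uint32_t last_delta2;
	};

	static unsigned
	delta_estimate(struct delta_state *s, uint32_t v)
	{
		uint32_t delta = v - s->last;
		uint32_t delta2 = delta - s->last_delta;
		uint32_t delta3 = delta2 - s->last_delta2;

		s->last = v;
		s->last_delta = delta;
		s->last_delta2 = delta2;

		/*
		 * Credit a single bit only if the value changes at every
		 * order; a constant or linearly-increasing value gets 0.
		 */
		if (delta == 0 || delta2 == 0 || delta3 == 0)
			return 0;
		return 1;
	}

A free-running timestamp on a modern machine essentially never repeats a
delta exactly, so a test like this passes nearly every time; that is the
"almost always attributes entropy" behaviour above, and why the 1-bit cap
matters.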
It is noteworthy though that this work is not done when the samples are
enqueued; it is done in batches when they are dequeued and processed,
and without locks held. So though there is a cost, it may not be happening
in the context in which you're worried about it.
Thor