tech-security archive
Re: Lightweight support for instruction RNGs
Date: Tue, 22 Dec 2015 12:22:57 -0500
From: Greg Troxel <gdt%ir.bbn.com@localhost>
I am only dimly following this, but I have two thoughts:
I see the point that running randomness tests will not detect a
well-engineered attack. But it probably will detect a large class of
implementation bugs, so it seems worth doing.
Randomness tests on input, not normally accessible, could detect a
further class of bugs.
I think agc's point is that all tests which are reasonably feasible
might as well be run -- not that there is any claim they will detect
intentional attacks.
On-line crypto self-tests with known-answer test vectors are a good
way to catch such implementation bugs. All the crypto code I have
added to the tree has such self-tests. The chance of passing the
self-tests yet failing to function on other inputs is tremendously
slim (unless the compiler optimizes the self-test code away or
something).
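For readers unfamiliar with the idea, a known-answer self-test looks something like the following sketch (mine, not code from the tree), here using SHA-256 via Python's hashlib and the FIPS 180 "abc" test vector:

```python
import hashlib

# Published FIPS 180 test vector: SHA-256("abc").
KAT_INPUT = b"abc"
KAT_DIGEST = bytes.fromhex(
    "ba7816bf8f01cfea414140de5dae2223"
    "b00361a396177a9cb410ff61f20015ad"
)

def sha256_self_test() -> bool:
    """Return True iff the primitive reproduces its known answer."""
    return hashlib.sha256(KAT_INPUT).digest() == KAT_DIGEST

# Run at startup; refuse to use the primitive if the answer is wrong.
assert sha256_self_test(), "SHA-256 self-test failed; refusing to run"
```

The point is that the check runs in the deployed binary, at boot or first use, so a miscompiled or misported primitive is caught before it touches real data.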
It is implausible to me that anyone could fabricate inputs to the
entropy pool so that the composition
pool |---> let k = SHA1(pool), AES128_k(0) || AES128_k(1) || ...
would, even on nonuniform inputs, fail any *generic* statistical tests
with probability higher than a uniform random stream would anyway.
(If/when we switch to using Keccak as I proposed earlier, the
implausibility will increase: indistinguishability from a random
oracle was one of the design criteria for the SHA-3 competition.)
There might be *specialized* distinguishers for this function -- but
nobody in the literature has published a specialized distinguisher for
k |---> AES128_k(0) || AES128_k(1) || ..., nor for pool |--->
SHA1(pool) except for standard Merkle-Damgard length extensions. So I
can't imagine how generic statistical tests on these parts of the
system could possibly find anything that an army of cryptographers for
over a decade hasn't.
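To make the shape of that composition concrete, here is a hypothetical stand-in sketch. Python's standard library has no AES, so HMAC-SHA1 plays the role of AES128_k as the keyed block function; the structure -- hash the pool down to a key, then expand the key over a counter -- is the point, not the exact primitive.

```python
import hashlib
import hmac

def keystream(pool: bytes, nblocks: int) -> bytes:
    """Sketch of: pool |---> let k = SHA1(pool), F_k(0) || F_k(1) || ...

    HMAC-SHA1 stands in for the AES128_k block function, since the
    standard library has no AES.
    """
    k = hashlib.sha1(pool).digest()          # k = SHA1(pool)
    out = b""
    for i in range(nblocks):                 # F_k(0) || F_k(1) || ...
        ctr = i.to_bytes(16, "big")
        out += hmac.new(k, ctr, hashlib.sha1).digest()
    return out

# Even a wildly nonuniform pool (all zeros) yields a stream that no
# generic statistical test distinguishes from uniform; distinguishing
# it would amount to breaking the underlying primitives.
stream = keystream(b"\x00" * 64, 4)
```

Any input change, however small, flips the derived key and hence the whole keystream, which is why fabricating pool inputs to fail a generic test would require a structural break in the hash or cipher.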