On 13/08/2014 09:31, David Laight wrote:
> On Sun, Jun 29, 2014 at 06:08:29PM +0200, Jean-Yves Migeon wrote:
>> IMHO adding sleep(base + rand()) here is not productive. After all,
>> bozo is in the situation of comparing two byte strings (~= hash
>> check), so the legitimate user is already penalized by bozo when it
>> has to validate the entire string. Randomizing the sleep just lowers
>> the signal/noise ratio. IMHO a constant-time check is better.
>
> The 'cost' of any memory compare will be absolutely minimal compared
> to the cost of actually sending a TCP packet. Is the time taken to do
> the password hash check actually measurable on a remote system? Go
> through a couple of routers and you'll get jitter. Even the ethernet
> MAC's interrupt mitigation could well add enough jitter.
>
> Of course, if you have to do another lookup against some database
> server then that will add a measurable delay.
Sorry for such a late answer, I was hiking in places where Internet access is... well, nonexistent. Yes, even in 2014 this still happens :)
All your remarks about jitter and latency added by network components are true; however, counting on them to "hide" side channels is a bit optimistic. We never know how the server might be used, or even how its code could be cargo-culted someplace else.
Hash/HMAC checks should use constant-time code paths for both the valid and invalid cases whenever they serve authentication/signature purposes. Google's keyczar made this mistake a while back, and they fixed it (for good) with the classic "accumulate XORs and return" comparison.
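To make the "accumulate XORs and return" idea concrete, here is a minimal sketch of such a comparison (the function name is mine, not bozo's; NetBSD's consttime_memequal(3) serves the same purpose):

```c
#include <stddef.h>

/*
 * Constant-time comparison: OR together the XOR of every byte pair,
 * so the loop always runs to completion and the timing does not
 * depend on the position of the first mismatch.
 */
static int
consttime_compare(const unsigned char *a, const unsigned char *b,
    size_t len)
{
	unsigned char diff = 0;
	size_t i;

	for (i = 0; i < len; i++)
		diff |= a[i] ^ b[i];

	return diff;	/* 0 iff the two buffers are equal */
}
```

Contrast this with a naive memcmp-style loop that returns at the first mismatch: that early exit is exactly the data-dependent branch a timing attack measures.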
Granted, timing attacks over a network are more complicated than localhost ones, but they are still doable. Usually the noise added by all the network components is roughly comparable to white noise, which can be filtered out when you have enough samples and time at your disposal. See [1] for a good example (a bit outdated though).
[1] http://crypto.stanford.edu/~dabo/pubs/papers/ssl-timing.pdf

-- 
Jean-Yves Migeon