Subject: Re: setitimer() bug?
To: msaitoh <msaitoh@spa.is.uec.ac.jp>
From: Charles M. Hannum <mycroft@MIT.EDU>
List: tech-kern
Date: 07/21/1998 15:15:03
> The only way to really `fix' this is to fetch the current time with
> microtime() and schedule the event from that.
Actually, given that we're still using tick scheduling, that
degenerates to the same thing, unless you happen to be right on the
edge of a tick.
Consider this pseudo-code:
    now = microtime()
    later = now + interval
    hzto(later) = (later - time) / tick

You have the following invariant:

    time + epsilon < now < time + epsilon + tick
Therefore the difference washes out in the division. To ensure that
at least {interval} time has passed, you'd have to wait for at least
{interval / tick + 1} ticks.
But this sucks for a different reason. Consider an application (and I
believe there are *many* of these) that repeatedly calls usleep() or
select() with a specific interval to do its timing. After the first
call, it's more or less synchronized with the clock. This means that
for every iteration after that, it enters the kernel just after a
tick. Using the above algorithm, it would always end up waiting just
slightly less than one extra tick past the specified amount of time,
causing rapid timing skew.
Of course, programs really shouldn't be doing that, but I don't live
in an ideal world.
The real answer here is to punt all the tick garbage and do
high-precision interval timing. (I have an interesting story related
to this, about what I learned from disassembling the NextStep timer
code...)
In the interim, the question is whether we choose to be fascist about
the intervals (and possibly `break' timing in some stupid programs),
or leave it as is.