Subject: DELAY on i386 change?
To: None <port-i386@netbsd.org, tech-kern@netbsd.org>
From: Perry E. Metzger <perry@piermont.com>
List: tech-kern
Date: 03/26/1999 12:25:18
Right now, DELAY() on port-i386 loses badly on delays of less than
5 usec. (To see why, look at the code -- 5 is a magic constant.)
I was bitten badly by this when trying to fix a bug in the line
printer driver a couple of days ago.
I'm proposing (no, not for 1.4 -- for post 1.4) that we
1) calibrate a loop timer at bootup, and
2) use it in DELAY() for low values, like so:
#define DELAY(x) do { \
	volatile int _i; \
	if ((x) < 10) \
		for (_i = 0; _i < delay_table[(x)]; _i++) \
			continue; \
	else \
		delay(x); \
} while (0)
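(The do { ... } while (0) wrapper matters, by the way: with bare
braces, a call site like

if (some_condition)
	DELAY(5);
else
	whatever();

wouldn't compile, since the semicolon after DELAY(5) would leave the
else without a matching if.)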
(Note that delay_table[x] could just be loops_per_usec*x or some such
-- I only thought of a table lookup because it would be faster on
genuine ancient i386es, but it is probably silly. Naturally, either
the table or the loops_per_usec variable is calibrated during boot.)
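For concreteness, here's a rough sketch of the boot-time calibration
I have in mind (calibrate_delay_table, loops_per_usec, and
delay_table are names made up for illustration, and timing against
microtime() is just one obvious approach -- a real version would also
have to worry about interrupts skewing the measurement):

#include <sys/types.h>
#include <sys/time.h>

u_long loops_per_usec;
u_long delay_table[10];

void
calibrate_delay_table(void)
{
	volatile u_long i;
	struct timeval t0, t1;
	u_long n = 1000000, usec;
	int j;

	/* Time a large, fixed number of empty iterations. */
	microtime(&t0);
	for (i = 0; i < n; i++)
		continue;
	microtime(&t1);

	usec = (t1.tv_sec - t0.tv_sec) * 1000000 +
	    (t1.tv_usec - t0.tv_usec);
	if (usec == 0)
		usec = 1;	/* paranoia on very fast machines */

	loops_per_usec = n / usec;

	/* delay_table[x] is the loop count for an x usec spin. */
	for (j = 0; j < 10; j++)
		delay_table[j] = j * loops_per_usec;
}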
Note that if x is a constant, the optimizer will get rid of the if
for you. Most of the time x IS a constant, so there is no added
overhead for the majority of DELAY() calls.
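That is, with a constant argument the dead branch simply vanishes:

DELAY(2);	/* reduces to just the calibrated spin loop */
DELAY(1000);	/* reduces to just the call to delay(1000) */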
Why, you may ask, call delay() instead of always using the loop?
Because in many laptops the processor changes speed depending on
power conditions. Right now, very short delays are completely broken
on those machines anyway, so this causes no additional harm and
actually helps; delay(), on the other hand, works regardless of clock
speed, and I'd rather not break the longer delays.
BTW, this code has the advantage that it actually *can*, in most
cases, give you accurate delays of between 1 and 9 usec if the clock
stays constant. You avoid the procedure call overhead that might cause
you pain on very slow machines, and it should still be reasonably
accurate on fast ones.
Comments, anyone?
Perry