Subject: Re: settimeofday() versus interval tim{ers,ing}
To: Dennis Ferguson <dennis@jnx.com>
From: Jonathan Stone <jonathan@DSG.Stanford.EDU>
List: tech-kern
Date: 09/29/1996 22:00:06
>The machine in question is a router. It can't set its time until it can
>reach an NTP server, which it can't in general do until it is running the
>routing protocols to populate its forwarding table. Once the routing is
>running it can set the time. Unfortunately, if the time is way off from
>boot (like the hardware RTC is set to PDT rather than UTC),
There's not much one can do once things have gone that far. As
someone else is fond of saying, ``don't do that, then''. But if you
want a better example, try NZDT instead of PDT. NZDT is thirteen hours
ahead of UTC.
> setting the
>time hoses the process running the routing due to interval timer death.
I populate routers with sufficient static routes for them to reach a
stratum-1 server when they boot. The router does an ntpdate at boot,
*before* IP forwarding is enabled, and before starting any routing
protocols. The clock runs pretty stably thereafter. These machines
are in a fairly temperature-stable setting. Clock steps (rather than
slews) are rare and small enough that the scenario you've described
hasn't yet been a worry. I think there's a good case that
being hours off, rather than seconds or minutes, is so broken
one *can't* always do the right thing.
That is, I think there's a real architectural issue here. Suppose a
process wants to set an interval timer. The process could want to
sleep for a specified interval of real (oh, let's say UTC and ignore
leap-seconds for now) time. Or the process could want to sleep until
a specific *point* in time.
The desired behaviour of a BSD-style timer implementation, in the
two cases, is rather different. When settimeofday() steps the clock
forward, one *should* have its timer value decremented by the step,
and one should not. I think the underlying problem is that the
interface isn't rich enough to let the kernel distinguish the two
cases and do the "right thing". (This is, btw, quite different from
the real vs. virtual timer distinction. I would like to ignore the
issue of fudging virtual timers across settimeofday() calls; I don't
think it makes sense.)
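To make the distinction concrete, here's a minimal sketch of the two
intents in terms of setitimer(); the function names are mine, purely
for illustration. The point is that both callers end up handing the
kernel a bare interval, so the second caller's real intent (an
absolute deadline) is gone by the time the kernel sees it.

    #include <sys/time.h>
    #include <stddef.h>

    /* Intent 1: sleep for 30 seconds of real time, regardless of
     * what settimeofday() does in the meantime. */
    void
    arm_interval_timer(void)
    {
        struct itimerval itv;

        timerclear(&itv.it_interval);
        itv.it_value.tv_sec = 30;
        itv.it_value.tv_usec = 0;
        setitimer(ITIMER_REAL, &itv, NULL);
    }

    /* Intent 2: sleep until an absolute point in time ('when'),
     * e.g. 5 minutes before a meeting.  The process has to turn
     * that into a delta itself, so the kernel can't know the timer
     * ought to track 'when' if the clock is later stepped. */
    void
    arm_absolute_timer(const struct timeval *when)
    {
        struct timeval now;
        struct itimerval itv;

        gettimeofday(&now, NULL);
        timerclear(&itv.it_interval);
        timersub(when, &now, &itv.it_value);
        setitimer(ITIMER_REAL, &itv, NULL);
    }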
VMS had a cute explicit encoding of time deltas: positive times were
absolute times, and negative time values (i.e., with the high bit set)
were time *deltas*. That's enough information for an in-kernel
timer implementation to decide whether or not to adjust a timer
when the "real" time jumps discontinuously. Since that information
has been lost, forcing intervals to be interpreted one way is
(from an academic perspective) just as wrong as interpreting them
the other.
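To illustrate what that extra bit buys (this is a hypothetical sketch,
with names invented here, not code from any real kernel): when
settimeofday() steps the clock, a timer queue that keeps remaining
intervals only has to touch the entries that were requested as
absolute times.

    #include <stdint.h>

    /*
     * Hypothetical timer record: 'requested' keeps the VMS-style
     * encoding (>= 0 is an absolute expiry time, < 0 is a negated
     * delta); 'remaining' is the interval, in ticks, until the
     * timer fires.
     */
    struct ktimer {
        int64_t requested;
        int64_t remaining;
    };

    /*
     * Called when the real-time clock is stepped forward by 'step'
     * ticks.  Timers armed as deltas keep their remaining interval;
     * timers armed as absolute times keep their target point, so
     * their remaining interval shrinks by the step.
     */
    void
    ktimer_clock_stepped(struct ktimer *kt, int64_t step)
    {
        if (kt->requested >= 0) {
            kt->remaining -= step;
            if (kt->remaining < 0)
                kt->remaining = 0;  /* target already passed */
        }
        /* requested < 0: a delta; leave 'remaining' alone. */
    }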
Or, in other words, a FreeBSD-style "fix" that does the right
thing for your gated will do the *wrong* thing for my process sleeping
until 5 minutes before a meeting; and vice versa.
I think I misunderstood what you wanted to do. Fiddling
boottime to maintain the invariant you want sounds, well, gross.
I'd sooner extend the user/kernel interface to let user processes
indicate whether they want to sleep for a real-time *interval*,
or until a specified point in (the system's idea of) real time
has passed. I think you want to keep the existing
setitimer()/settimeofday() behaviour, but to provide enough information
to let luser-level processes fix things up if they care to.
Is that right?
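For comparison, the POSIX.1b timer interface carries exactly that
missing bit as the TIMER_ABSTIME flag; here's a sketch, offered only
as a point of comparison, not as anything in our tree:

    #include <signal.h>
    #include <string.h>
    #include <time.h>

    /*
     * Sleep-until-a-point-in-time, expressed so the kernel knows it:
     * TIMER_ABSTIME says "this value is a point in time, not an
     * interval", so the expiry tracks the clock even across a
     * settimeofday() step.
     */
    int
    alarm_at(time_t when)
    {
        timer_t tid;
        struct sigevent sev;
        struct itimerspec its;

        memset(&sev, 0, sizeof(sev));
        sev.sigev_notify = SIGEV_SIGNAL;
        sev.sigev_signo = SIGALRM;
        if (timer_create(CLOCK_REALTIME, &sev, &tid) == -1)
            return (-1);

        memset(&its, 0, sizeof(its));
        its.it_value.tv_sec = when;
        return (timer_settime(tid, TIMER_ABSTIME, &its, NULL));
    }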
And, on a forward-looking note: whatever gets done, I think timespecs
are a better proposal than timevals for any new time code. Converting
a timespec to a timeval is easy; the converse runs into not having
the information in the first place.
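To make the information argument concrete (the function names here are
mine, just for illustration):

    #include <sys/time.h>
    #include <time.h>

    /* timespec -> timeval: just discard the sub-microsecond
     * digits. */
    void
    ts_to_tv(const struct timespec *ts, struct timeval *tv)
    {
        tv->tv_sec = ts->tv_sec;
        tv->tv_usec = ts->tv_nsec / 1000;
    }

    /* timeval -> timespec: the low three digits have to be
     * manufactured as zeroes, because the microsecond
     * representation never recorded them. */
    void
    tv_to_ts(const struct timeval *tv, struct timespec *ts)
    {
        ts->tv_sec = tv->tv_sec;
        ts->tv_nsec = tv->tv_usec * 1000;
    }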