Subject: Re: Should Alpha PCI code manage latency timers?
To: None <tls@rek.tjls.com>
From: List Mail User <track@Plectere.com>
List: port-alpha
Date: 01/24/2005 08:05:50
>From bounces-tech-kern-owner-track=Plectere.com@NetBSD.org Mon Jan 24 05:10:52 2005
>X-Original-To: tech-kern@netbsd.org
>Delivered-To: tech-kern@netbsd.org
>Date: Mon, 24 Jan 2005 08:10:07 -0500
>From: Thor Lancelot Simon <tls@rek.tjls.com>
>To: port-alpha@NetBSD.org
>Cc: tech-kern@NetBSD.org
>Subject: Should Alpha PCI code manage latency timers?
>...
>
>The upshot of a rather strange recent thread in netbsd-help (titled
>"got drivers?") was that, at least on some PCI alphas, neither SRM
>nor our MD PCI code set devices' latency timers at all. A user had
>a machine with two tulips, a pciide, and a QL1040 -- only the 1040
>worked reliably, because the isp driver explicitly whacks the latency
>timer value to 0x40 if it finds it at 0x00.
>
>The user adjusted the pciide driver to set the latency timer to 0x40
>and all of a sudden he could use the disk and talk on the network at
>the same time without losing packets.
>
>If SRM isn't going to set the latency timer it seems to me we ought to;
>and not in every device driver, either!
>
>--
> Thor Lancelot Simon tls@rek.tjls.com
>
>"The inconsistency is startling, though admittedly, if consistency is to be
> abandoned or transcended, there is no problem." - Noam Chomsky
>
This is from my *very* old PCI 1.0 and 1.1 drafts and specs, but...
The latency timer, min_gnt/max_lat, and IRQ fields in the PCI config
space are all reserved for whichever of the BIOS or the OS does the initial
resource allocation.  It is/was strictly a violation of the (original)
specification for these to be changed after allocation has occurred.
The fact that several drivers do this (both in the free OS's and in
some MS products) is just plain wrong.  In particular, a "smart" device
can/will use the settings to recognize the worst-case times for bus
lockout and adjust its use of DMA accordingly (yes, I have designed
chips that do exactly this).  The most extreme (mis)example I have seen
is the ath driver's HAL changing the latency field to 168 in some versions
on some platforms.  Simply put, it is (or was) always wrong for a driver to
change these fields: they are writable only so that the allocation code can
notify the device of the worst case imposed by the other devices sharing the
bus, and they should be treated as read-only by any driver or on-chip
circuitry after boot-time allocation.  Changing the value after allocation
defeats the entire purpose.
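For what it's worth, that kind of write belongs in the one-time bus
configuration pass, not in individual drivers.  Here is a minimal sketch of
what it looks like with NetBSD's config-space accessors (PCI_BHLC_REG and
the PCI_LATTIMER macros from pcireg.h, as I remember them; the helper name
and the choice of value are mine, purely for illustration):

#include <dev/pci/pcireg.h>
#include <dev/pci/pcivar.h>

/*
 * Program a function's latency timer (bits 15:8 of the BHLC register at
 * config offset 0x0c) once, during initial bus configuration.  After this
 * point drivers should treat the field as read-only.
 */
static void
pci_set_lattimer(pci_chipset_tag_t pc, pcitag_t tag, int ltim)
{
        pcireg_t bhlc;

        bhlc = pci_conf_read(pc, tag, PCI_BHLC_REG);
        bhlc &= ~(PCI_LATTIMER_MASK << PCI_LATTIMER_SHIFT);
        bhlc |= (ltim & PCI_LATTIMER_MASK) << PCI_LATTIMER_SHIFT;
        pci_conf_write(pc, tag, PCI_BHLC_REG, bhlc);
}
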
In this light it becomes quite clear why the problem mentioned above
occurs: the pciide chip expects to get the bus after a minimal amount of
handshaking and overhead following its grant request, but doesn't actually
get it until some additional time later.  Thus it will either not have
enough time to complete the scheduled DMA transaction (at best) or will be
aborted when it fails to complete its transaction in time; either way, the
intended transfer will sometimes fail (when there is contention for the
bus).  All devices on a PCI bus should have the latency field set to the
maximum value of any device sharing the bus, so that each of them can
account for the worst-case buffering required between issuing a request and
receiving its grant.
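To make that concrete, a rough, untested sketch of a pass the bus
configuration code could make over a single bus: find the largest latency
timer value any function carries (with an arbitrary floor), then program
that same worst case into every function.  It reuses the pci_set_lattimer()
helper sketched above and skips the multifunction/header-type checks for
brevity; the real policy for choosing the value should come from the BIOS
Writer's guide, not from this:

/*
 * Sketch only: scan bus `bus', take the largest latency timer already
 * programmed into any function (floor of 0x20, chosen arbitrarily), then
 * write that same worst-case value into every function on the bus.
 */
static void
pci_fixup_lattimers(pci_chipset_tag_t pc, int bus)
{
        pcitag_t tag;
        pcireg_t id, bhlc;
        int dev, func, maxltim = 0x20;          /* arbitrary floor */

        for (dev = 0; dev < pci_bus_maxdevs(pc, bus); dev++) {
                for (func = 0; func < 8; func++) {
                        tag = pci_make_tag(pc, bus, dev, func);
                        id = pci_conf_read(pc, tag, PCI_ID_REG);
                        if (PCI_VENDOR(id) == 0xffff)   /* empty slot */
                                continue;
                        bhlc = pci_conf_read(pc, tag, PCI_BHLC_REG);
                        if (PCI_LATTIMER(bhlc) > maxltim)
                                maxltim = PCI_LATTIMER(bhlc);
                }
        }
        for (dev = 0; dev < pci_bus_maxdevs(pc, bus); dev++) {
                for (func = 0; func < 8; func++) {
                        tag = pci_make_tag(pc, bus, dev, func);
                        id = pci_conf_read(pc, tag, PCI_ID_REG);
                        if (PCI_VENDOR(id) == 0xffff)
                                continue;
                        pci_set_lattimer(pc, tag, maxltim);
                }
        }
}
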
Simply, someone needs to double-check for any drivers that play with
these fields and fix them.  Also, the initial allocation (when NetBSD does the
PCIBIOS dance itself) should follow the guidelines in the PCI BIOS Writer's
Guide (my own copies are all over ten years old, and I no longer have access
to the current ftp password to get a more recent copy; it has been a very long
time since I was on any of the committees or even a representative to the SIG).
Paul Shupak
P.S. It seems from the description that the SRM is just plain busted for this
Alpha.