Subject: Re: PCI Memory Write and Invalidate
To: Jason R Thorpe <thorpej@wasabisystems.com>
From: Jonathan Stone <jonathan@DSG.Stanford.EDU>
List: tech-kern
Date: 06/28/2002 14:49:46
In message <20020628143003.Z1614@dr-evil.shagadelic.org>, Jason R Thorpe writes:
>I'm wondering if we're missing out on the opportunity to use MWI on
>a lot of devices that support it. The PCI spec says that if a device
>does not support MWI, it must hard-wire the MWI bit in the PCI CSR
>to 0, which means it's probably safe to unconditionally set that bit
>in generic PCI code. Another option is to make sure it is always clear
>and place responsibility on individual device drivers for setting it.
hi Jason,
I've seen hardware where enabling the MWI bit resulted in
screwups and (IIRC) data corruption. Yes, that is not in conformance
with the recent PCI specs (2.1 or 2.2?), but I've still seen it.
I've also seen PCI bridges (or rather, host bridge/CPU bus interfaces)
which claim to do MWI, but where a PCI bus analyzer shows that it makes
absolutely no difference to PCI performance (the analyzer showed the
same rate of stalls, at the same cache-line boundaries).
Some P-Pro era chipsets may actually be marginally slower.
I also vaguely recall some problems with PCI-PCI bridges. (I'd
have to find my lab notebook to get more details on all of these.)
OTOH, write-invalidate does make a significant difference on
well-engineered PCI hardware.
On balance, I'd prefer the second, more conservative approach. If you
add a config-time option to enable the more aggressive approach, then
those who want it can try it. I'm leery of making it the default in
a GENERIC kernel, though.
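
For reference, here is a minimal sketch of what the per-driver approach
might look like with the pci(9) interface. The function name and the
cacheline-size check are my own assumptions for illustration, not
anything proposed in Jason's mail:

	/*
	 * Hypothetical sketch of per-driver MWI enabling (not from the
	 * original mail).  A driver that knows its device handles MWI
	 * correctly could call something like this at attach time.
	 */
	#include <dev/pci/pcireg.h>
	#include <dev/pci/pcivar.h>

	static void
	xxx_enable_mwi(pci_chipset_tag_t pc, pcitag_t tag)
	{
		pcireg_t bhlc, csr;

		/*
		 * MWI is pointless (and some devices misbehave) unless
		 * the cacheline size register has been programmed.
		 */
		bhlc = pci_conf_read(pc, tag, PCI_BHLC_REG);
		if (PCI_CACHELINE(bhlc) == 0)
			return;

		/* Set the Memory Write and Invalidate enable bit. */
		csr = pci_conf_read(pc, tag, PCI_COMMAND_STATUS_REG);
		csr |= PCI_COMMAND_INVALIDATE_ENABLE;
		pci_conf_write(pc, tag, PCI_COMMAND_STATUS_REG, csr);
	}

Keeping this in individual drivers (rather than generic PCI code) means
only devices whose drivers have been verified against the kind of broken
hardware described above ever get the bit set.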