Subject: Re: packet capturing
To: None <tls@rek.tjls.com>
From: Jason Thorpe <thorpej@wasabisystems.com>
List: tech-kern
Date: 01/23/2004 13:53:28
On Jan 23, 2004, at 1:30 PM, Thor Lancelot Simon wrote:
> It's my impression that there are two general design philosophies for
> network interface chips:
>
> 1) "Buffer many packets, trickle them out to memory as necessary"
>
> 2) "Buffer almost nothing, rely on bursting packets into host memory
> before
> the buffer fills". These often even start DMA before they've got
> the
> entire packet off the wire, or try to anyway.
There's a 3rd kind:
3) "Buffer many packets, and burst them out to memory as aggressively
as possible."
I tend to prefer this kind :-) This sort often allows you to tune the
Rx DMA thresholds.
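
To make that concrete, the knob typically looks something like the sketch
below -- the chip name, register offset, and values are invented purely to
illustrate the idea, they're not from any real part or driver:

/* Hypothetical Rx DMA threshold register and values (invented). */
#define FOO_RXDMA_THRESH	0x48	/* invented register offset */
#define FOO_THRESH_64B		0x01	/* start DMA once 64 bytes are buffered */
#define FOO_THRESH_SF		0xff	/* store-and-forward: wait for whole packet */

static void
foo_set_rx_thresh(struct foo_softc *sc, int store_and_forward)
{

	bus_space_write_4(sc->sc_st, sc->sc_sh, FOO_RXDMA_THRESH,
	    store_and_forward ? FOO_THRESH_SF : FOO_THRESH_64B);
}

You pick the low threshold when you can count on getting the bus quickly,
and the store-and-forward setting when you can't.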
> It seems to me if you're going to put a pile of network interface chips
> behind a PCI-PCI bridge (meaning they all share one latency-timer's worth
> of bursting on the primary bus, as I understand it, or sometimes can't
> burst at all) you'd better make sure they're type 1 ("buffer a lot of
> stuff and trickle it out") and that their buffering is set up
> appropriately. For example, the current crop of Marvell "sk" chips can
> be configured for "store and forward" or for "cut-through", even though
> they do have a semi-decent amount of buffering on them (128K). If you
> configure them for cut-through, and they're behind a PCI-PCI bridge,
> they perform *terribly*. This isn't a badly-designed chip, but it _is_
> a chip whose design requires the driver to set it up differently
> depending on some facts about the bus it's on.
Perform terribly...but do they report errors like Rx FIFO overruns? If
so, you can detect this problem at run-time and correct for it.
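
Roughly what I have in mind is the sketch below.  The FOO_* register and
bit names and the softc fields are placeholders (not the sk(4) layout),
and foo_set_rx_thresh() is the hypothetical helper from the sketch above:

#define FOO_ISR			0x10	/* invented interrupt status register */
#define FOO_ISR_RXOFLOW		0x0004	/* invented "Rx FIFO overflow" bit */
#define FOO_OVERRUN_LIMIT	16	/* arbitrary fallback trigger */

static int
foo_intr(void *arg)
{
	struct foo_softc *sc = arg;
	uint32_t isr;

	isr = bus_space_read_4(sc->sc_st, sc->sc_sh, FOO_ISR);
	if (isr == 0)
		return (0);

	if (isr & FOO_ISR_RXOFLOW) {
		/* Rx FIFO overflowed: cut-through isn't keeping up. */
		if (++sc->sc_rx_overruns > FOO_OVERRUN_LIMIT &&
		    sc->sc_store_and_forward == 0) {
			sc->sc_store_and_forward = 1;
			foo_set_rx_thresh(sc, 1);
		}
	}

	/* ... normal Rx/Tx processing ... */

	return (1);
}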
> You can see something similar with the Intel gig chips if you drop them
> onto a bus that's not the one they're tuned for by default. Put a PCI-X
> Intel chip on a 33MHz bus, and note that its performance will be _far_
> worse than that of the previous-generation straight-PCI part, even if the
> bus is lightly loaded. I suspect this is one reason the "em" driver does
> better than our "wm" driver in some cases: it tries to tune the chip
> according to some facts about the PCI bus. Generally, this is a good
> thing to take into account, even though in an ideal world one should not
> have to; our drivers don't really do this at all right now.
I guess the real question is: How do you know you're behind a PCI-PCI
bridge? I suppose we could add a flag to the pci_attach_args that the
ppb driver could set.
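
Something like the following, where PCI_FLAGS_BEHIND_PPB is the
hypothetical new pa_flags bit that ppb would arrange to have set for
devices on its secondary bus; the attach routine and softc fields are
likewise just a sketch:

void
foo_attach(struct device *parent, struct device *self, void *aux)
{
	struct foo_softc *sc = (struct foo_softc *)self;
	struct pci_attach_args *pa = aux;

	/* ... map registers, establish interrupt, etc. ... */

	if (pa->pa_flags & PCI_FLAGS_BEHIND_PPB) {
		/*
		 * We share one latency timer's worth of bursting on the
		 * primary bus, so don't count on getting it in time.
		 */
		sc->sc_store_and_forward = 1;
	} else
		sc->sc_store_and_forward = 0;

	foo_set_rx_thresh(sc, sc->sc_store_and_forward);
}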
--
Jason R. Thorpe <thorpej@wasabisystems.com>