tech-net archive
Re: NetBSD 5.1 TCP performance issue (lots of ACK)
On Fri, Oct 28, 2011 at 12:30:57PM -0400, Thor Lancelot Simon wrote:
> I'd say turn off WM_F_NEWQUEUE -- assuming that works -- and commit.
I'll look at this.
>
> While you're in there, can you see whether we're actually feeding
> the TSO enough data at once that it's actually useful? The tcpdump
> makes me suspect for some reason, we may not be.
I added an sc_ev_txlargetso event counter, which gets incremented by:

	if (m0->m_pkthdr.len > 32768)
		WM_EVCNT_INCR(&sc->sc_ev_txlargetso);

in wm_tx_offload(), at the end of the

	if ((m0->m_pkthdr.csum_flags & (M_CSUM_TSOv4 | M_CSUM_TSOv6)) != 0) {

block.
With
ttcp -t -l65536 -b524288 -D xen1-priv < /glpool/truc
(64k writes but a 512k TCP buffer) I get no "largetso".
With
ttcp -t -l524288 -b524288 -D xen1-priv < /glpool/truc
I get only 16 (the file being sent is 640MB, so I'd expect on the order of
10000 large TSO segments).
So yes, it seems TSO is not very useful here.
When talking to the Linux host,
ttcp -t -l65536 -b524288 -D
gives 615 largetso (still not enough, but better), and 693 with
ttcp -t -l524288 -b524288.
A glusterfs read of the same 640MB file generates no largetso when sending
to the NetBSD client, and 2159 when sending to the linux host.
So there's still something in our TCP that prevents sending large
TCP segments. I suspect we're back to something ACK-related (e.g.,
we ACK a small segment, so the sender sends a small segment to keep
the window full).
--
Manuel Bouyer <bouyer%antioche.eu.org@localhost>
NetBSD: 26 years of experience will always make the difference