tech-net archive
Re: NetBSD 5.1 TCP performance issue (lots of ACK)
On Thu, Oct 27, 2011 at 08:30:12AM -0400, Greg Troxel wrote:
>
> Manuel Bouyer <bouyer%antioche.eu.org@localhost> writes:
>
> > On Wed, Oct 26, 2011 at 08:15:44PM -0400, Greg Troxel wrote:
>
> > Yes, between 40 and 50MB/s
>
> ok, that matches what I see in the trace.
>
> >> What is between these two devices? Is this just a gigabit switch, or
> >> anything more complicated?
> >
> > they're all (the two NetBSD hosts and the Linux host) connected to a Cisco
> > 3750 gigabit switch. I also tested with a single crossover cable; that
> > doesn't change anything.
>
> OK - I've just seen enough things that are supposed to be transparent
> and aren't.
>
> > that's easy. And yes, I get better performance: 77MB/s instead of < 50.
>
> And does gluster then match ttcp, as in both 77?
ttcp gets 108MB/s (so it's also faster without tso4). It looks like there's
definitely a problem with TSO on our side.
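For the record, here's how I'm toggling it (wm0 is just an example; use
whatever interface your driver attaches as):

    # disable TCP segmentation offload for IPv4
    ifconfig wm0 -tso4
    # re-enable it
    ifconfig wm0 tso4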
> [...]
>
> I have no idea. Also, is there receive offload? The receiver has
> packets arriving all together whereas they are showing up more spread
> out at the transmitter. It may be that reordering happens in the
> controller, or it may be that it happens at the receiver when the
> packets are regenerated from the large buffer (and then injected out of
> order).
There is IP/TCP checksum offload on the receiver side, but nothing else.
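You can check that from ifconfig; on NetBSD the "enabled=" line lists the
offloads in use (wm0 and the flags shown here are just an example):

    ifconfig wm0
    # look at the enabled= line, e.g.:
    #   enabled=...<IP4CSUM_Rx,TCP4CSUM_Rx,TCP4CSUM_Tx>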
> >> thrashing. What happens if you change gluster to have smaller buffers
>
> I would do this experiment; that may avoid the problem. I'm not
> suggesting that you run this way forever, but it will help us understand
> what's wrong.
>
> >> (I don't understand why it's OK to have the FS change the TCP socket
> >> buffer options from the system default?)
> >
> > Because it knows the size of its packets, or its internal receive buffers ?
>
> This is TCP, so gluster can have a large buffer in user space
> independently of what the TCP socket buffer is. People set TCP socket
> buffers to control the advertised window and to balance throughput on
> long fat pipes with memory usage. In your case the RTT is only a few ms
> even under load, so it wouldn't seem that huge buffers are necessary.
>
> Do you have actual problems if gluster doesn't force the buffer to be
> large?
That's interesting: I now get 78MB/s with tso4, and 48MB/s without it.
It's just as if the setsockopt were turning tso4 off.
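For comparison, the system defaults can be read back with sysctl (these are
the NetBSD names; the values are whatever the kernel was built with):

    sysctl net.inet.tcp.sendspace net.inet.tcp.recvspace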
>
> (That said, having buffers large enough to allow streaming is generally
> good. But if you need that, it's not really about one user of TCP. I
> have been turning on
>
> net.inet.tcp.recvbuf_auto = 1
> net.inet.tcp.sendbuf_auto = 1
> net.inet6.tcp6.recvbuf_auto = 1
> net.inet6.tcp6.sendbuf_auto = 1
>
> to let buffers get bigger when TCP would be blocked by socket buffer.
> In 5.1, that seems to lead to running out of mbuf clusters rather than
> reclaiming them (when there are lots of connections), but I'm hoping
> this is better in -current (or rather deferring looking into it until I
> jump to current).
I have these too, and no nmbclusters issues.
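For the record, I set them with sysctl -w and watch cluster usage with
netstat while a transfer runs (output format differs a bit between versions):

    sysctl -w net.inet.tcp.recvbuf_auto=1
    sysctl -w net.inet.tcp.sendbuf_auto=1
    # mbuf and cluster usage
    netstat -m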
>
> If you can get ttcp to show the same performance problems (by setting
> buffer sizes, perhaps), then we can debug this without gluster, which
> would help.
I tried ttcp -l524288 (this is what gluster uses), but that doesn't
reproduce the problem either.
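If this ttcp build has the -b option to set the socket buffer size
(SO_SNDBUF/SO_RCVBUF), that should mimic gluster's setsockopt more closely;
something like:

    # receiver
    ttcp -r -s -b524288
    # transmitter, 512KB socket buffer and 512KB writes
    ttcp -t -s -b524288 -l524288 <receiver>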
>
> Also, it would be nice to have a third machine on the switch and run
> tcpdump (without any funky offload behavior) and see what the packets on
> the wire really look like. With the tso behavior I am not confident
> that either trace is exactly what's on the wire.
Playing with RSPAN it should be possible; I'll have a look.
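On the monitor host it would be something like this (the interface name and
address are placeholders):

    # capture full packets off the mirrored port into a file
    tcpdump -i fxp0 -s 0 -w /tmp/span.pcap tcp and host 192.0.2.1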
> Have you seen: http://gnats.netbsd.org/42323
Yes, but I'm not seeing the problems described there.
--
Manuel Bouyer <bouyer%antioche.eu.org@localhost>
NetBSD: 26 years of experience will always make the difference