tech-net archive
Poor TCP performance as latency increases
Hi folks,
There seems to be a performance issue in the TCP stack, and I’m hoping someone else can reproduce my findings, as I’m not an expert in this area.
Simple test: Download a file from a web server. As latency between the client and server increases, there is a drastic reduction in throughput. I don’t see this with other (Linux, macOS) TCP stacks.
To rule out issues with the NIC driver (I’ve tried both wm(4) and ixg(4) here), I fetched a file hosted on a local Linux NAS, and performance there is reasonable. As the latency to the web server increases (20 ms, 30 ms), there is a clear drop-off in performance:
> 0.2ms
berserk$ ping -c1 10.0.0.100
PING aigis (10.0.0.100): 56 data bytes
64 bytes from 10.0.0.100: icmp_seq=0 ttl=64 time=0.213683 ms
berserk$ curl -o /dev/null http://10.0.0.100/files/1Gb.dat
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 1024M  100 1024M    0     0   112M      0  0:00:09  0:00:09 --:--:--  112M
> 20ms
aigis$ ping -c1 cdn.netbsd.org
PING cdn.netbsd.org (151.101.137.6): 56 data bytes
64 bytes from 151.101.137.6: seq=0 ttl=57 time=21.185 ms
berserk$ curl -o /dev/null http://cdn.netbsd.org/pub/NetBSD/images/10.1/NetBSD-10.1-evbarm-aarch64.iso
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  261M  100  261M    0     0  21.2M      0  0:00:12  0:00:12 --:--:-- 26.7M
> 30ms
berserk$ ping -c1 bhs.proof.ovh.ca
PING bhs.proof.ovh.ca (51.222.154.207): 56 data bytes
64 bytes from 51.222.154.207: icmp_seq=0 ttl=50 time=28.642797 ms
berserk$ curl -o /dev/null https://bhs.proof.ovh.ca/files/1Gb.dat
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 1024M  100 1024M    0     0  10.5M      0  0:01:36  0:01:36 --:--:-- 9674k
From the Linux client (on the same network, behind the same router), performance is reasonable:
> 20ms
aigis$ ping -c1 cdn.netbsd.org
PING cdn.netbsd.org (151.101.137.6): 56 data bytes
64 bytes from 151.101.137.6: seq=0 ttl=57 time=21.185 ms
aigis$ curl -o /dev/null http://cdn.netbsd.org/pub/NetBSD/images/10.1/NetBSD-10.1-evbarm-aarch64.iso
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  261M  100  261M    0     0   137M      0  0:00:01  0:00:01 --:--:--  137M
> 30ms
aigis$ ping -c1 bhs.proof.ovh.ca
PING bhs.proof.ovh.ca (51.222.154.207): 56 data bytes
64 bytes from 51.222.154.207: seq=0 ttl=50 time=28.453 ms
aigis$ curl -o /dev/null https://bhs.proof.ovh.ca/files/1Gb.dat
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 1024M  100 1024M    0     0  85.5M      0  0:00:11  0:00:11 --:--:-- 95.7M
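For anyone trying to reproduce this, it might also be worth watching the handshake and the advertised window while one of the slow transfers runs. Something along these lines should do it (wm0 is just an example interface name, adjust for your NIC; the SYN/SYN-ACK lines show the wscale option, and the later lines show the advertised window as "win NNNN"):

# Capture the handshake and the first data packets of a transfer to
# cdn.netbsd.org, then start the curl in another terminal.
berserk$ tcpdump -n -i wm0 -c 40 'tcp and host cdn.netbsd.org and port 80'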
I’ve tried tweaking some sysctls; a few have given small gains, but nothing has given a radical improvement. The tests above were run with these settings, which have worked best for me so far:
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.init_win=10
net.inet.tcp.init_win_local=10
net.inet.ip.ifq.maxlen=4096
net.inet.tcp.congctl.selected=cubic
kern.sbmax=4194304
net.inet.tcp.sendspace=262144
net.inet.tcp.recvspace=262144
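As a rough sanity check (my assumption here being that bulk throughput is bounded by roughly the effective window divided by the RTT), a 256 KB window at the ~29 ms RTT of the OVH test works out to about 9 MB/s, which is at least in the same ballpark as the ~10M/s curl reports above:

# Rough window/RTT bound: 262144 bytes in flight per 0.0286 s round trip.
berserk$ echo 'scale=1; 262144 / 0.0286' | bc -l
9165874.1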
I can reproduce this on 10.1 and -current.
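If anyone reproducing this wants to rule the congestion control choice in or out, it can be switched at runtime; if I remember right, the available algorithms are listed alongside the selected one:

# List the registered congestion control algorithms, then switch back
# to reno for a comparison run.
berserk$ sysctl net.inet.tcp.congctl
berserk$ sysctl -w net.inet.tcp.congctl.selected=reno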
Take care,
Jared