Port-xen archive
Re: Dom0 bad network performance (was: NetBSD/Xen samba performance low (compared to NetBSD/amd64))
Hi Cherry,
On 13.02.21 at 05:11, Mathew, Cherry G. wrote:
> Hi Matthias!
>
> how big are your packet frames, and are the tests doing large numbers
> of parallel connections?
As far as I know, iperf3 uses a single TCP connection unless told
otherwise, so my test is a single connection only.
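(For comparison I could also try several streams at once; if I read the
iperf3 manual correctly, -P sets the number of parallel client streams,
so something like the following should show whether the limit is per
connection or per host:

  iperf3 -c 192.168.2.50 -P 4

The target address is just the one from my tests below.)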
The hint about the frames (I assume this is synonymous with the window
size) was a good one. I actually get very different results here (all
done with NetBSD/Xen as Dom0):
1) 32K Window Size
mpeterma@nuc:~> iperf3 -w 32K -c 192.168.2.50
Connecting to host 192.168.2.50, port 5201
[ 5] local 192.168.2.40 port 44520 connected to 192.168.2.50 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 20.7 MBytes 173 Mbits/sec 0 35.4 KBytes
[ 5] 1.00-2.00 sec 21.1 MBytes 177 Mbits/sec 0 35.4 KBytes
[ 5] 2.00-3.00 sec 21.1 MBytes 177 Mbits/sec 0 35.4 KBytes
2) 64K Window Size
mpeterma@nuc:~> iperf3 -w 64K -c 192.168.2.50
Connecting to host 192.168.2.50, port 5201
[ 5] local 192.168.2.40 port 44388 connected to 192.168.2.50 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 25.3 MBytes 212 Mbits/sec 0 67.9 KBytes
[ 5] 1.00-2.00 sec 24.9 MBytes 209 Mbits/sec 0 67.9 KBytes
[ 5] 2.00-3.00 sec 25.2 MBytes 211 Mbits/sec 0 67.9 KBytes
3) 128K Window Size
mpeterma@nuc:~> iperf3 -w 128K -c 192.168.2.50
Connecting to host 192.168.2.50, port 5201
[ 5] local 192.168.2.40 port 44464 connected to 192.168.2.50 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 31.5 MBytes 265 Mbits/sec 0 158 KBytes
[ 5] 1.00-2.00 sec 31.7 MBytes 266 Mbits/sec 0 165 KBytes
[ 5] 2.00-3.00 sec 31.9 MBytes 268 Mbits/sec 0 165 KBytes
4) 160K Window Size
mpeterma@nuc:~> iperf3 -w 160K -c 192.168.2.50
Connecting to host 192.168.2.50, port 5201
[ 5] local 192.168.2.40 port 44560 connected to 192.168.2.50 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 31.6 MBytes 265 Mbits/sec 0 163 KBytes
[ 5] 1.00-2.00 sec 31.7 MBytes 266 Mbits/sec 0 163 KBytes
[ 5] 2.00-3.00 sec 31.8 MBytes 267 Mbits/sec 0 163 KBytes
5) 192K Window Size
mpeterma@nuc:~> iperf3 -w 192K -c 192.168.2.50
Connecting to host 192.168.2.50, port 5201
[ 5] local 192.168.2.40 port 44492 connected to 192.168.2.50 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 32.5 MBytes 273 Mbits/sec 0 192 KBytes
[ 5] 1.00-2.00 sec 32.3 MBytes 271 Mbits/sec 0 202 KBytes
[ 5] 2.00-3.00 sec 32.1 MBytes 270 Mbits/sec 0 202 KBytes
[ 5] 3.00-4.00 sec 31.9 MBytes 268 Mbits/sec 62 185 KBytes
It looks like the optimum in this case is a window size of 128K. As a
comparison, I repeated the 128K measurement against the "pure" NetBSD
kernel without Xen:
mpeterma@nuc:~> iperf3 -w 128K -c 192.168.2.50
Connecting to host 192.168.2.50, port 5201
[ 5] local 192.168.2.40 port 44802 connected to 192.168.2.50 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 111 MBytes 932 Mbits/sec 4 130 KBytes
[ 5] 1.00-2.00 sec 110 MBytes 922 Mbits/sec 27 164 KBytes
[ 5] 2.00-3.00 sec 111 MBytes 930 Mbits/sec 50 194 KBytes
This means that with a window size of 128K the difference is even more
significant: under Xen the throughput is only about a third of what the
"pure" NetBSD kernel achieves (roughly 267 vs. 930 Mbit/s).
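As a rough sanity check (my own back-of-the-envelope, assuming the usual
single-stream relation throughput ~ cwnd / RTT and taking 1 KByte = 1024
bytes, with the cwnd and bitrate values from the runs above):

  Xen Dom0:     165 KBytes * 8 / 268 Mbit/s  ->  implied RTT ~5.0 ms
  plain kernel: 194 KBytes * 8 / 930 Mbit/s  ->  implied RTT ~1.7 ms

So with the window pinned at 128K, the numbers would be consistent with
the Xen network path simply adding a few milliseconds of per-packet
latency, which then caps a single TCP stream.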
Can we already conclude something from this, or did I misunderstand the
question about the packet frames?
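(If more data points would help, I could also collect statistics on the
Dom0 while iperf3 is running, e.g.:

  netstat -i            # per-interface packet/error counters
  vmstat -i             # interrupt counts
  sysctl net.inet.tcp   # current TCP buffer/window settings

Just let me know which of these, if any, would be useful.)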
Kind regards
Matthias