NetBSD-Users archive
Re: SCP file transfer speed
I performed a quick benchmark with netio, and it showed the best
possible speed over TCP.
NETIO - Network Throughput Benchmark, Version 1.26
(C) 1997-2005 Kai Uwe Rommel
TCP connection established.
Packet size 1k bytes: 11506 KByte/s Tx, 11116 KByte/s Rx.
Packet size 2k bytes: 11512 KByte/s Tx, 11469 KByte/s Rx.
Packet size 4k bytes: 11513 KByte/s Tx, 11469 KByte/s Rx.
Packet size 8k bytes: 11513 KByte/s Tx, 11100 KByte/s Rx.
Packet size 16k bytes: 11513 KByte/s Tx, 11469 KByte/s Rx.
Packet size 32k bytes: 11513 KByte/s Tx, 11470 KByte/s Rx.
Done.
UDP, however, does not work out of the box; it continuously prints
this error:
sendto(): No buffer space available
This might be tweakable through sysctl, but it is irrelevant in this
case. During an (uncompressed) scp copy, CPU usage on the receiver
side is well below 10 % and disk utilization is about 50 % (which
makes me wonder, because this disk can do about 100 MB/s on sequential
writes). I can copy from the NetBSD boxes to other Linux and Solaris
boxes on this network at about 10 - 11 MB/s - only receiving is slow.
2015-03-19 14:40 GMT+01:00 Greg Troxel <gdt%ir.bbn.com@localhost>:
>
> Stephan <stephanwib%googlemail.com@localhost> writes:
>
>> When I copy large files through scp to a NetBSD box, I get transfer
>> speeds of only 7 MB/s on a 100 MBit connection. This should be around
>> 11 MB/s. I´ve seen this on different x86/amd64 hardware with NetBSD 5,
>> 6 and 7-BETA. The NICs are largely wm, fxp and bnx.
>
> I tested with the most bloated distfile I had handy -- qt4, at 230 MB.
> With compression off, I got 10 MB/s to three different machines
> (including a Xen DOM0), 9.0 MB/s to a xen domU, and 6.7 MB/s to another
> dom0 that was bought in 2005. These are all in use with a network with
> other traffic, all 100 Mb/s interfaces. I also saw 5 MB/s to another
> machine (many switches away in a different building), but it's a build
> server that's usually very heavily loaded.
>
> There are multiple things going on; you're testing packet delivery and
> how TCP reacts to it, ssh overhead (CPU time, expansion), and slowdowns
> from writing to the filesystem. I agree that one would expect to be
> limited by the 100 Mb/s Ethernet itself with modern hardware.
>
> You could use ttcp to look at just TCP throughput.
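A ttcp-style TCP throughput probe can also be sketched in a few lines of Python, to measure raw TCP without ssh or the filesystem in the way (the port number and 64 KiB buffer below are arbitrary choices, not ttcp's actual defaults):

```python
import socket
import threading
import time

BUF = 64 * 1024  # 64 KiB per send/recv call

def receive(port):
    """Accept one connection, read until EOF, return (bytes, seconds)."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", port))
    srv.listen(1)
    conn, _ = srv.accept()
    total = 0
    start = time.monotonic()
    while True:
        data = conn.recv(BUF)
        if not data:
            break
        total += len(data)
    elapsed = time.monotonic() - start
    conn.close()
    srv.close()
    return total, elapsed

def transmit(host, port, nbytes):
    """Send nbytes of zeros to host:port, then close."""
    sock = socket.create_connection((host, port))
    chunk = b"\0" * BUF
    sent = 0
    while sent < nbytes:
        n = min(BUF, nbytes - sent)
        sock.sendall(chunk[:n])
        sent += n
    sock.close()

if __name__ == "__main__":
    # Loopback demo; in practice run receive() on the target host and
    # transmit() on the sender.
    result = {}
    t = threading.Thread(target=lambda: result.update(
        zip(("bytes", "secs"), receive(5001))))
    t.start()
    time.sleep(0.2)   # crude: give the listener time to bind
    transmit("127.0.0.1", 5001, 16 * 1024 * 1024)
    t.join()
    print("%.2f MB/s" % (result["bytes"] / result["secs"] / 1e6))
```

Running the receiver on the NetBSD box and the sender elsewhere would show whether plain TCP already hits ~11 MB/s, separating network from ssh overhead.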
>
> Check cpu usage with top during all this.
>
> Note that ssh is encrypting/macing the plaintext, which results in
> larger ciphertext plus ssh headers, plus there are TCP and IP headers,
> plus ethernet headers and inter-frame spacing. So you will not see 100
> Mb/s of payload.
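The header arithmetic above can be made concrete. Assuming standard IPv4/TCP headers without options and ordinary Ethernet framing (textbook values, not measured on this link), the upper bound on TCP payload over 100 Mb/s Ethernet, before ssh's own encryption and framing costs, works out to:

```python
# Rough upper bound on TCP goodput over 100 Mb/s Ethernet.
MTU          = 1500             # IP packet bytes per Ethernet frame
IP_TCP_HDR   = 20 + 20          # IPv4 + TCP headers, no options
ETH_OVERHEAD = 14 + 4 + 8 + 12  # header + FCS + preamble + inter-frame gap

wire_bits_per_frame = (MTU + ETH_OVERHEAD) * 8
tcp_payload = MTU - IP_TCP_HDR  # 1460 bytes of payload per frame

frames_per_sec = 100_000_000 / wire_bits_per_frame
goodput = frames_per_sec * tcp_payload  # payload bytes/s before ssh overhead
print("max TCP goodput: %.2f MB/s" % (goodput / 1e6))
# -> max TCP goodput: 11.87 MB/s
```

So ~11 MB/s after ssh's additional overhead is about what a saturated 100 Mb/s link should deliver, which matches the figures quoted earlier in the thread.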
>
> gson's result of 16 MB/s is interesting; that's 128 Mb/s of payload,
> which is definitely not limited by 100 Mb/s Ethernet. manu@ has
> reported speeds that are most of GbE (800ish Mb/s) with gluster.
>
> I wonder if ssh's internal flow control could be having an effect.