Stephan <stephanwib%googlemail.com@localhost> writes:

> When I copy large files through scp to a NetBSD box, I get transfer
> speeds of only 7 MB/s on a 100 MBit connection. This should be around
> 11 MB/s. I've seen this on different x86/amd64 hardware with NetBSD 5,
> 6 and 7-BETA. The NICs are largely wm, fxp and bnx.

I tested with the most bloated distfile I had handy -- qt4, at 230 MB.
With compression off, I got 10 MB/s to three different machines
(including a Xen dom0), 9.0 MB/s to a Xen domU, and 6.7 MB/s to another
dom0 that was bought in 2005.  These are all on a network carrying
other traffic, all 100 Mb/s interfaces.  I also saw 5 MB/s to another
machine (many switches away in a different building), but it's a build
server that's usually very heavily loaded.

There are multiple things going on: you're testing packet delivery and
how TCP reacts to it, ssh overhead (CPU time, byte expansion), and
slowdowns from writing to the filesystem.  I agree that one would
expect to be limited by the 100 Mb/s Ethernet itself with modern
hardware.

You could use ttcp to look at just TCP throughput.  Check CPU usage
with top during all this.

Note that ssh is encrypting/MACing the plaintext, which results in
larger ciphertext plus ssh headers, plus there are TCP and IP headers,
plus Ethernet headers and inter-frame spacing.  So you will not see
100 Mb/s of payload.

gson's result of 16 MB/s is interesting; that's 128 Mb/s of payload,
which definitely seems limited by the Ethernet.  manu@ has reported
speeds that are most of GbE (800ish Mb/s) with gluster.  I wonder if
ssh's internal flow control could be having an effect.
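For what it's worth, a back-of-the-envelope calculation of the best
case on 100 Mb/s Ethernet (assuming a standard 1500-byte MTU, full-size
frames, and ignoring ssh's own expansion) lands right around the
11 MB/s figure:

```shell
# Per full-size frame on the wire (assumed 1500-byte MTU):
#   7 preamble + 1 SFD + 14 Ethernet header + 1500 payload + 4 FCS + 12 IFG
#   = 1538 bytes
# TCP payload per frame: 1500 - 20 (IP header) - 20 (TCP header) = 1460 bytes
awk 'BEGIN { printf "%.2f MB/s\n", (100e6 / 8) * (1460 / 1538) / 1e6 }'
# prints 11.87 MB/s
```

To take ssh out of the picture entirely, something like `ttcp -r -s`
on the receiver and `ttcp -t -s <host>` on the sender (flags as in the
classic ttcp) measures raw TCP payload rate with no encryption or
filesystem writes involved.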