NetBSD-Users archive
IPsec over GRE: slow performance
Hi,
I have set up an IPsec over GRE connection with a remote host; both are NetBSD
6.1 based. The "client" is connected to the Internet through a 400Mbps fiber
connection. The "server" is located on a 10Gbps network. Both machines have
1Gbps NICs which behave perfectly, meaning they both reach the link speed limit
when transferring data outside the IPsec tunnel.
When doing a transfer through the tunnel, speed drops by a factor of 5 to 10:
direct connection:
/dev/null 27%[====> ] 503.19M 45.3MB/s eta 83s
IPsec connection:
/dev/null 2%[ ] 47.76M 6.05MB/s eta 5m 3s
The tunnel is set up this way:
On the server, which is a NetBSD domU running on a Debian/amd64 dom0:
$ cat /etc/ifconfig.xennet0
# server interface
up
inet 192.168.1.2 netmask 255.255.255.0
inet 172.16.1.1 netmask 0xfffffffc alias
$ cat /etc/ifconfig.gre0
create
tunnel 172.16.1.1 172.16.1.2 up
inet 172.16.1.5 172.16.1.6 netmask 255.255.255.252
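The SPD itself is loaded separately with setkey (generate_policy is off in
racoon, see below) and isn't shown here; for reference, a generic sketch of
that kind of policy (tunnel-mode ESP wrapping the GRE endpoint addresses,
using the placeholder peer names from the racoon configs below; the real file
may differ) could look like:

# hypothetical sketch, loaded with e.g. `setkey -f /etc/ipsec.conf`
spdadd 172.16.1.1 172.16.1.2 any -P out ipsec
    esp/tunnel/192.168.1.2-office.public.ip/require;
spdadd 172.16.1.2 172.16.1.1 any -P in ipsec
    esp/tunnel/office.public.ip-192.168.1.2/require;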
IPsec traffic is forwarded from the dom0's public IP to the domU's xennet0
interface through iptables NAT rules:
-A PREROUTING -i eth0 -p udp -m udp --dport 500 -j DNAT --to-destination 192.168.1.2:500
-A PREROUTING -i eth0 -p esp -j DNAT --to-destination 192.168.1.2
-A PREROUTING -i eth0 -p ah -j DNAT --to-destination 192.168.1.2
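Note that only ISAKMP (UDP/500), ESP and AH are forwarded here; if the peers
were ever to fall back to NAT-Traversal (UDP-encapsulated ESP), port 4500 would
need an equivalent rule along these lines (hypothetical, not currently in place):

-A PREROUTING -i eth0 -p udp -m udp --dport 4500 -j DNAT --to-destination 192.168.1.2:4500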
On the client:
$ cat /etc/ifconfig.vlan8
# client public interface
create
vlan 8 vlanif re0
!dhcpcd -i $int
inet 172.16.1.2 netmask 0xfffffffc alias
$ cat /etc/ifconfig.gre1
create
tunnel 172.16.1.2 172.16.1.1 up
inet 172.16.1.6 172.16.1.5 netmask 255.255.255.252
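As on the server, the SPD is loaded separately with setkey; the client side
would be the mirror image of the sketch above (again a generic example, not
the literal file):

spdadd 172.16.1.2 172.16.1.1 any -P out ipsec
    esp/tunnel/office.public.ip-node.public.ip/require;
spdadd 172.16.1.1 172.16.1.2 any -P in ipsec
    esp/tunnel/node.public.ip-office.public.ip/require;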
On the racoon side, I tried various hash/encryption algorithm combinations,
even enc_null, but nothing really changes: transfer is still stuck at 6MB/s max.
Here's the racoon setup:
On the server:
remote office.public.ip {
    exchange_mode main;
    lifetime time 28800 seconds;
    proposal {
        encryption_algorithm blowfish;
        hash_algorithm sha1;
        authentication_method pre_shared_key;
        dh_group 2;
    }
    generate_policy off;
}

sainfo address 172.16.1.1/30 any address 172.16.1.2/30 any {
    pfs_group 2;
    encryption_algorithm blowfish;
    authentication_algorithm hmac_sha1;
    compression_algorithm deflate;
    lifetime time 3600 seconds;
}
On the client:
remote node.public.ip {
    exchange_mode main;
    lifetime time 28800 seconds;
    proposal {
        encryption_algorithm blowfish;
        hash_algorithm sha1;
        authentication_method pre_shared_key;
        dh_group 2;
    }
    generate_policy off;
}

sainfo address 172.16.1.2/30 any address 172.16.1.1/30 any {
    pfs_group 2;
    encryption_algorithm blowfish;
    authentication_algorithm hmac_sha1;
    compression_algorithm deflate;
    lifetime time 3600 seconds;
}
The tunnel establishes with no issue; the only problem here is the drop in
transfer speed. Again, when transferring between the server and the client
without the tunnel, speed is optimal: the drop occurs _only_ through IPsec.
Both machines have Intel CPUs running at 2+GHz and plenty of memory, with very
little CPU time consumed by anything other than forwarding/NAT.
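(A rough way to double-check that while a transfer is running, for anyone
reproducing this:)

$ top -s 1          # per-process CPU usage, refreshed every second
$ systat vmstat 1   # interrupt and system time at a glance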
Has anyone witnessed such behaviour? Any idea where to look further?
Thanks,
----------------------------------------------------------------
Emile `iMil' Heitor * <imil@{home.imil.net,NetBSD.org,gcu.info}>
                                                  _
| http://imil.net        | ASCII ribbon campaign ( )
| http://www.NetBSD.org  |  - against HTML email  X
| http://gcu.info        |  & vCards             / \