Wednesday, February 22, 2012

Tuning for a high-latency, high-speed link.

So we got a link from Amsterdam to the US, a fiber link with about 100 ms RTT.
Initial speed testing (don't use SSH/SCP for this; OpenSSH's fixed internal channel windows cap throughput on high-latency paths no matter how TCP is tuned) gave us 25.9 MB/s.
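A tool like iperf measures raw TCP throughput instead. A minimal run, assuming iperf is installed on both ends (the hostname below is just a placeholder):

# on the US end:
iperf -s
# from the Amsterdam end, a 30-second test:
iperf -c us-host.example.com -t 30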
That's with a default CentOS 5 Linux install and the following settings:

ipv4/tcp_mem: 196608 262144 393216
ipv4/tcp_rmem: 4096 87380 262144
ipv4/tcp_wmem: 4096 87380 262144
core/rmem_default: 262144
core/rmem_max: 262144
core/wmem_default: 129024
core/wmem_max: 131071
core/optmem_max: 20480
ipv4/tcp_window_scaling: 1
ipv4/tcp_timestamps: 1
ipv4/tcp_sack: 1
core/netdev_max_backlog: 1000
interface txqueuelen: 1000
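These paths live under /proc/sys/net; to read the current values back (eth0 is just an example interface name):

sysctl net.ipv4.tcp_rmem net.core.rmem_max
ifconfig eth0 | grep txqueuelen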

Tuning this, I get the full 111 MB/s (a saturated gigabit link). I used these settings:

ipv4/tcp_mem: 196608 262144 393216
ipv4/tcp_rmem: 4096 87380 26843546
ipv4/tcp_wmem: 4096 87380 26843546
core/rmem_default: 262144
core/rmem_max: 26843546
core/wmem_default: 129024
core/wmem_max: 26843546
core/optmem_max: 20480
ipv4/tcp_window_scaling: 1
ipv4/tcp_timestamps: 1
ipv4/tcp_sack: 0
core/netdev_max_backlog: 10000
interface txqueuelen: 10000
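To apply the changed values at runtime, something like this (eth0 is an assumption for your interface; put the net.* lines in /etc/sysctl.conf to persist across reboots):

sysctl -w net.ipv4.tcp_rmem="4096 87380 26843546"
sysctl -w net.ipv4.tcp_wmem="4096 87380 26843546"
sysctl -w net.core.rmem_max=26843546
sysctl -w net.core.wmem_max=26843546
sysctl -w net.ipv4.tcp_sack=0
sysctl -w net.core.netdev_max_backlog=10000
ifconfig eth0 txqueuelen 10000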

For the max TCP buffer settings, I used 2 * 1024 * 1024 * 1024 * 0.1 / 8 ≈ 26843546 bytes: twice the bandwidth-delay product, i.e. 2 times the 1 Gbit/s bandwidth times the 0.1 s RTT, divided by 8 to convert bits to bytes.
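The same arithmetic as a one-liner, handy if your RTT differs:

echo "scale=1; 2 * 1024^3 * 0.1 / 8" | bc   # 26843545.6, rounded up to 26843546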
Now it takes about 15 seconds to get to full speed, as the TCP congestion control is still set to the kernel default (BIC). When changed to H-TCP (Hamilton TCP), this only takes 8 seconds (echo htcp > /proc/sys/net/ipv4/tcp_congestion_control).
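To check which algorithms your kernel exposes before switching (on CentOS 5, H-TCP ships as a loadable module):

cat /proc/sys/net/ipv4/tcp_available_congestion_control
modprobe tcp_htcp   # only needed if htcp isn't listed
echo htcp > /proc/sys/net/ipv4/tcp_congestion_control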

How cool is this? A 1 Gb/s connection overseas? :)
