In an effort to explain why a 1M default buffer size works better than the more canonical 87k that the system sets by default and that every network optimization site suggests, as I wrote in this post about sonar tests to BNL, I tried the ss command suggested by John Green. Below are two sets of results, with an 87k and a 1M default buffer size respectively. Two things jump out at me:
1) cwnd doesn't go beyond 30 in the first test, while in the second test cwnd is at least 556 and up to 1783. To my Wikipedia-level understanding of how this all really works, it means the TCP window is not scaling with the first settings, and it is with the second.
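As a sanity check on those numbers, the send figure ss reports is roughly cwnd × MSS × 8 / RTT. A minimal sketch, assuming a 1448-byte MSS (the MSS is not shown in the output, so that value is an assumption), using the cwnd and rtt from one of the 87k samples:

```shell
# Rough send-rate check: send ≈ cwnd * MSS * 8 / RTT
# MSS of 1448 bytes is an assumption; ss reports rtt in milliseconds
awk 'BEGIN {
    mss = 1448          # assumed MSS in bytes
    cwnd = 30; rtt = 92 # values taken from the 87k test output below
    printf "%.1f Mbps\n", cwnd * mss * 8 / (rtt / 1000) / 1e6
}'
```

This prints 3.8 Mbps, which matches the send field ss reports for that sample, so the low throughput really is pinned by the small cwnd at ~92 ms RTT.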
2) The memory field is completely different. In the first case we have bytes in memory in some state on one stream (more often two, sometimes three) and nothing else. In the second case the memory fields are filled equally across streams. This mirrors what I observed with netstat and the Send-Q field.
Connections in different states may report additional fields. For example, when a connection is in the CLOSE_WAIT state an ato (ack timeout) parameter appears. Also, in CLOSE_WAIT the memory fields are different: r and f are uniformly filled instead of w and f.
All of this confirms that with the fasterdata settings the TCP window doesn't scale (at least not beyond a very small value such as 30 segments).
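For reference, a 1M default buffer corresponds to sysctls along these lines. The triplets are min/default/max in bytes; the min and max values here are illustrative assumptions, not my exact production settings:

```shell
# Illustrative sysctls for a 1M default socket buffer
# (min/default/max are assumptions except for the 1048576 default)
sysctl -w net.core.rmem_default=1048576
sysctl -w net.core.wmem_default=1048576
sysctl -w net.ipv4.tcp_rmem="4096 1048576 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 1048576 16777216"
```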
Some interesting links I found that explain the fields:
mem r,w,f,t values
cwnd or slow start and congestion control
wscale value
rto retransmission timeout
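To pull a single field such as cwnd out of the ss output for watching over time, a small awk filter works. The sample line below is from the 87k test; matching the token itself rather than a fixed field position is deliberate, since the field layout varies between iproute2 versions:

```shell
# Extract the cwnd value from an ss -i info line; match the cwnd: token
# rather than a fixed column, as the field order varies across versions
echo "mem:(r0,w0,f0,t0) ts sack htcp wscale:7,9 rto:292 rtt:92/0.75 cwnd:31 send 3.9Mbps rcv_space:14600" |
awk '{ for (i = 1; i <= NF; i++) if ($i ~ /^cwnd:/) { sub(/^cwnd:/, "", $i); print $i } }'
```

This prints 31; piping the live `ss -timeo` output through the same awk instead of the echo gives one cwnd value per connection.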
87k settings
ss -timeo|grep -A1 192.12.15.235|grep -v 192.12
mem:(r0,w0,f0,t0) ts sack htcp wscale:7,9 rto:292 rtt:92/0.75 cwnd:31 send 3.9Mbps rcv_space:14600
mem:(r0,w0,f0,t0) ts sack htcp wscale:7,9 rto:292 rtt:92/0.75 cwnd:30 send 3.8Mbps rcv_space:14600
mem:(r0,w0,f0,t0) ts sack htcp wscale:7,9 rto:307 rtt:104.125/0.75 cwnd:30 send 3.3Mbps rcv_space:14600
mem:(r0,w106648,f73576,t0) ts sack htcp wscale:7,9 rto:307 rtt:104/0.75 cwnd:21 send 2.3Mbps rcv_space:14600
mem:(r0,w0,f0,t0) ts sack htcp wscale:7,9 rto:307 rtt:104.125/0.75 cwnd:30 send 3.3Mbps rcv_space:14600
mem:(r0,w0,f0,t0) ts sack htcp wscale:7,9 rto:307 rtt:104/0.75 cwnd:30 send 3.3Mbps rcv_space:14600
mem:(r0,w0,f0,t0) ts sack htcp wscale:7,9 rto:292 rtt:92.125/0.75 cwnd:29 send 3.6Mbps rcv_space:14600
mem:(r0,w0,f0,t0) ts sack htcp wscale:7,9 rto:291 rtt:91.875/0.75 cwnd:30 send 3.8Mbps rcv_space:14600
mem:(r0,w0,f0,t0) ts sack htcp wscale:7,9 rto:292 rtt:92/0.75 cwnd:28 send 3.5Mbps rcv_space:14600
1MB settings
ss -timeo|grep -A1 192.12.15.226|grep -v 192.12
mem:(r0,w1620945,f267311,t0) ts sack htcp wscale:9,8 rto:312 rtt:112.5/26 cwnd:621 send 63.9Mbps rcv_space:14600
mem:(r0,w5074901,f213035,t0) ts sack htcp wscale:9,8 rto:320 rtt:120.5/34.5 cwnd:638 send 61.3Mbps rcv_space:14600
mem:(r0,w269841,f495,t0) ts sack htcp wscale:9,8 rto:333 rtt:130.625/18 cwnd:356 send 31.6Mbps rcv_space:14600
mem:(r0,w269841,f495,t0) ts sack htcp wscale:9,8 rto:319 rtt:119.5/18.75 cwnd:345 send 33.4Mbps rcv_space:14600
mem:(r0,w2944464,f266800,t0) ts sack htcp wscale:9,8 rto:317 rtt:117.125/34.5 cwnd:1236 send 122.2Mbps rcv_space:14600
mem:(r0,w269841,f495,t0) ts sack htcp wscale:9,8 rto:319 rtt:119.75/18.75 cwnd:320 send 31.0Mbps rcv_space:14600
mem:(r0,w5621967,f239409,t0) ts sack htcp wscale:9,8 rto:313 rtt:113.125/24.25 cwnd:624 send 63.9Mbps rcv_space:14600
mem:(r0,w269841,f495,t0) ts sack htcp wscale:9,8 rto:322 rtt:122.25/17.25 cwnd:318 send 30.1Mbps rcv_space:14600
mem:(r0,w2943432,f800312,t0) ts sack htcp wscale:9,8 rto:314 rtt:114.375/21.75 cwnd:655 send 66.3Mbps rcv_space:14600
Connections in different states (additional fields)
ss -timeo|grep -A1 192.12.15
ESTAB 0 1587665 195.194.108.50:43684 192.12.15.233:41628 timer:(on,299ms,0) uid:19536 ino:20107501 sk:ffff88010b09cb00
mem:(r0,w1620945,f267311,t0) ts sack htcp wscale:9,8 rto:312 rtt:112.5/26 cwnd:621 send 63.9Mbps rcv_space:14600
[...]
ss -timeo|grep -A1 192.12.15
CLOSE-WAIT 1 0 195.194.108.50:43684 192.12.15.233:41628 uid:19536 ino:20107501 sk:ffff88010b09cb00
mem:(r4352,w0,f3840,t0) ts sack htcp wscale:9,8 rto:300 rtt:100.5/9 ato:40 cwnd:1924 send 221.8Mbps rcv_space:14600
[....]