From: Rick Jones <rick.jones2@hp.com>
To: Ivan Novick <novickivan@gmail.com>
Cc: Eric Dumazet <eric.dumazet@gmail.com>,
netdev@vger.kernel.org, Tim Heath <theath@greenplum.com>
Subject: Re: Choppy TCP send performance
Date: Fri, 28 May 2010 15:08:37 -0700 [thread overview]
Message-ID: <4C003EE5.6070602@hp.com> (raw)
In-Reply-To: <AANLkTilZxoSEGOhmWdk2RbIGMmCMi2yNJm9rUwIzDIoL@mail.gmail.com>
Unless you think your application will run over 10GbE, or over a WAN, you
shouldn't need anywhere near the socket buffer size you are getting via
autotuning to achieve "link-rate" - link rate on a 1GbE LAN
connection can be achieved quite easily with a 256KB socket buffer.
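As a rough sanity check on that 256KB figure: the socket buffer needed to keep a link busy is the bandwidth-delay product. A sketch, where the ~1 ms round-trip time is an illustrative assumption for a LAN, not a value measured in these runs:

```python
# Bandwidth-delay product: the buffering needed to keep the pipe full.
link_bps = 1_000_000_000   # 1GbE
rtt_s = 0.001              # assumed ~1 ms LAN round-trip time (illustrative)
bdp_bytes = link_bps / 8 * rtt_s
print(f"BDP: {bdp_bytes / 1024:.0f} KB")   # ~122 KB, comfortably under 256KB
```

With a sub-millisecond RTT, typical of a LAN, the product only shrinks further.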
The first test here is with autotuning in effect - disregard what netperf reports
for the socket buffer sizes here, since it calls getsockopt() before connect()
and before the end of the connection:
raj@spec-ptd2:~/netperf2_trunk$ src/netperf -H s9 -v 2 -l 30 -- -m 128K
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to s9.cup.hp.com
(16.89.132.29) port 0 AF_INET : histogram
Recv    Send    Send
Socket  Socket  Message  Elapsed
Size    Size    Size     Time     Throughput
bytes   bytes   bytes    secs.    10^6bits/sec

 87380   16384  131072   30.01       911.50

Alignment      Offset         Bytes     Bytes       Sends   Bytes      Recvs
Local  Remote  Local  Remote  Xfered    Per                 Per
Send   Recv    Send   Recv              Send (avg)          Recv (avg)
    8      8      0      0    3.42e+09  131074.49    26090  11624.79   294176
Maximum
Segment
Size (bytes)
1448
Histogram of time spent in send() call.
UNIT_USEC : 0: 0: 0: 0: 0: 0: 0: 0: 0: 0
TEN_USEC : 0: 0: 0: 0: 0: 0: 0: 0: 0: 0
HUNDRED_USEC : 0: 3: 21578: 378: 94: 20: 3: 2: 0: 4
UNIT_MSEC : 0: 4: 2: 0: 0: 780: 3215: 6: 0: 1
TEN_MSEC : 0: 0: 0: 0: 0: 0: 0: 0: 0: 0
HUNDRED_MSEC : 0: 0: 0: 0: 0: 0: 0: 0: 0: 0
UNIT_SEC : 0: 0: 0: 0: 0: 0: 0: 0: 0: 0
TEN_SEC : 0: 0: 0: 0: 0: 0: 0: 0: 0: 0
>100_SECS: 0
HIST_TOTAL: 26090
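Reading the histograms: each row is a decade and each column a bucket within it, so the HUNDRED_USEC columns cover 0-99, 100-199, ... 900-999 microseconds, and likewise for UNIT_MSEC in milliseconds. A sketch of recovering an approximate mean send() time from the two non-zero rows above, using a bucket-midpoint approximation:

```python
# netperf histogram rows: bucket i of a row with width w covers [i*w, (i+1)*w).
rows_usec = {
    100:  [0, 3, 21578, 378, 94, 20, 3, 2, 0, 4],   # HUNDRED_USEC
    1000: [0, 4, 2, 0, 0, 780, 3215, 6, 0, 1],      # UNIT_MSEC
}
total = 0
weighted = 0.0
for width, counts in rows_usec.items():
    for i, n in enumerate(counts):
        total += n
        weighted += n * (i + 0.5) * width   # bucket midpoint
print(f"{total} sends, mean ~{weighted / total:.0f} usec")  # 26090 sends, mean ~1182 usec
```

That ~1.18 ms mean is consistent with the summary line's pacing: 26090 sends in 30.01 seconds is roughly 1150 microseconds per send().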
Next, we have netperf make an explicit setsockopt() call asking for 128KB socket
buffers, which will get us 256K. Notice that the bandwidth remains the same,
but the distribution of the time spent in send() changes - compared with before,
it is squeezed more towards the middle of the range.
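The jump from 128K requested to 256K reported is Linux behaviour: the kernel doubles the SO_SNDBUF value passed to setsockopt() to allow for bookkeeping overhead (see socket(7)). A minimal sketch; the value read back may differ on other systems, or if net.core.wmem_max caps the request:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 128 * 1024)
# On Linux the kernel stores double the requested size.
sndbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
print(sndbuf)   # typically 262144 on Linux
s.close()
```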
raj@spec-ptd2:~/netperf2_trunk$ src/netperf -H s9 -v 2 -l 30 -- -m 128K -s 128K -S 128K
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to s9.cup.hp.com
(16.89.132.29) port 0 AF_INET : histogram
Recv    Send    Send
Socket  Socket  Message  Elapsed
Size    Size    Size     Time     Throughput
bytes   bytes   bytes    secs.    10^6bits/sec

262142  262144  131072   30.00       911.91

Alignment      Offset         Bytes     Bytes       Sends   Bytes      Recvs
Local  Remote  Local  Remote  Xfered    Per                 Per
Send   Recv    Send   Recv              Send (avg)          Recv (avg)
    8      8      0      0    3.42e+09  131074.74    26091  11361.40   301008
Maximum
Segment
Size (bytes)
1448
Histogram of time spent in send() call.
UNIT_USEC : 0: 0: 0: 0: 0: 0: 0: 0: 0: 0
TEN_USEC : 0: 0: 0: 0: 0: 0: 0: 0: 0: 0
HUNDRED_USEC : 0: 401: 64: 10: 0: 0: 1: 279: 10237: 3914
UNIT_MSEC : 0: 11149: 31: 0: 4: 0: 0: 0: 0: 0
TEN_MSEC : 0: 1: 0: 0: 0: 0: 0: 0: 0: 0
HUNDRED_MSEC : 0: 0: 0: 0: 0: 0: 0: 0: 0: 0
UNIT_SEC : 0: 0: 0: 0: 0: 0: 0: 0: 0: 0
TEN_SEC : 0: 0: 0: 0: 0: 0: 0: 0: 0: 0
>100_SECS: 0
HIST_TOTAL: 26091
happy benchmarking,
rick jones
Just for grins, netperf asking for 64K socket buffers:
raj@spec-ptd2:~/netperf2_trunk$ src/netperf -H s9 -v 2 -l 30 -- -m 128K -s 64K -S 64K
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to s9.cup.hp.com
(16.89.132.29) port 0 AF_INET : histogram
Recv    Send    Send
Socket  Socket  Message  Elapsed
Size    Size    Size     Time     Throughput
bytes   bytes   bytes    secs.    10^6bits/sec

131072  131072  131072   30.00       921.12

Alignment      Offset         Bytes     Bytes       Sends   Bytes      Recvs
Local  Remote  Local  Remote  Xfered    Per                 Per
Send   Recv    Send   Recv              Send (avg)          Recv (avg)
    8      8      0      0   3.454e+09  131074.33    26354  11399.69   303020
Maximum
Segment
Size (bytes)
1448
Histogram of time spent in send() call.
UNIT_USEC : 0: 0: 0: 0: 0: 0: 0: 0: 0: 0
TEN_USEC : 0: 0: 0: 0: 0: 0: 0: 0: 0: 0
HUNDRED_USEC : 0: 2: 0: 0: 0: 7: 2672: 1831: 19: 1
UNIT_MSEC : 0: 21811: 6: 0: 4: 0: 0: 0: 0: 0
TEN_MSEC : 0: 1: 0: 0: 0: 0: 0: 0: 0: 0
HUNDRED_MSEC : 0: 0: 0: 0: 0: 0: 0: 0: 0: 0
UNIT_SEC : 0: 0: 0: 0: 0: 0: 0: 0: 0: 0
TEN_SEC : 0: 0: 0: 0: 0: 0: 0: 0: 0: 0
>100_SECS: 0
HIST_TOTAL: 26354
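As a cross-check, the reported throughput follows directly from the byte count and elapsed time; a sketch using this last run's summary figures (the small difference from 921.12 is rounding in the displayed byte count):

```python
bytes_xfered = 3.454e9   # "Bytes Xfered" from the summary above (rounded)
elapsed_s = 30.00
throughput_mbps = bytes_xfered * 8 / elapsed_s / 1e6
print(f"{throughput_mbps:.2f} 10^6bits/sec")   # 921.07 10^6bits/sec
```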
Thread overview: 8+ messages
2010-05-28 20:38 Choppy TCP send performance Ivan Novick
2010-05-28 21:16 ` Eric Dumazet
2010-05-28 21:35 ` Ivan Novick
2010-05-28 22:00 ` Eric Dumazet
2010-05-28 22:23 ` Ivan Novick
2010-05-28 22:08 ` Rick Jones [this message]
2010-05-28 22:28 ` Ivan Novick
2010-05-28 22:57 ` Rick Jones