From: Rick Jones
Subject: Re: Choppy TCP send performance
Date: Fri, 28 May 2010 15:08:37 -0700
Message-ID: <4C003EE5.6070602@hp.com>
To: Ivan Novick
Cc: Eric Dumazet, netdev@vger.kernel.org, Tim Heath

Unless you think your application will run over 10G, or over a WAN, you shouldn't need anywhere near the size of socket buffer you are getting via autotuning to be able to achieve "link-rate" - link-rate with a 1GbE LAN connection can be achieved quite easily with a 256KB socket buffer.

The first test here is with autotuning going - disregard what netperf reports for the socket buffer sizes here - it is calling getsockopt() before connect() and before the end of the connection:

raj@spec-ptd2:~/netperf2_trunk$ src/netperf -H s9 -v 2 -l 30 -- -m 128K
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to s9.cup.hp.com (16.89.132.29) port 0 AF_INET : histogram
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  131072   30.01       911.50

Alignment      Offset         Bytes    Bytes       Sends   Bytes    Recvs
Local  Remote  Local  Remote  Xfered   Per                 Per
Send   Recv    Send   Recv             Send (avg)          Recv (avg)
    8       8      0      0 3.42e+09  131074.49    26090   11624.79  294176

Maximum
Segment
Size (bytes)
  1448

Histogram of time spent in send() call.
UNIT_USEC     :    0:    0:    0:    0:    0:    0:    0:    0:    0:    0
TEN_USEC      :    0:    0:    0:    0:    0:    0:    0:    0:    0:    0
HUNDRED_USEC  :    0:    3: 21578:  378:   94:   20:    3:    2:    0:    4
UNIT_MSEC     :    0:    4:    2:    0:    0:  780: 3215:    6:    0:    1
TEN_MSEC      :    0:    0:    0:    0:    0:    0:    0:    0:    0:    0
HUNDRED_MSEC  :    0:    0:    0:    0:    0:    0:    0:    0:    0:    0
UNIT_SEC      :    0:    0:    0:    0:    0:    0:    0:    0:    0:    0
TEN_SEC       :    0:    0:    0:    0:    0:    0:    0:    0:    0:    0
>100_SECS: 0
HIST_TOTAL: 26090

Next, we have netperf make an explicit setsockopt() call asking for 128KB socket buffers, which will get us 256KB. Notice that the bandwidth remains the same, but the distribution of the time spent in send() changes - it gets squeezed more towards the middle of the range than before.

>100_SECS: 0
HIST_TOTAL: 26091

happy benchmarking,

rick jones

Just for grins, netperf asking for 64KB socket buffers:

>100_SECS: 0
HIST_TOTAL: 26354
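Netperf's -v 2 histogram buckets each send() call's duration by decade (UNIT_USEC, TEN_USEC, HUNDRED_USEC, and so on). A minimal sketch of the same bucketing idea - it times UDP sends to the loopback discard port so it is self-contained, whereas netperf times send() on a real TCP data socket:

```python
import socket
import time

# Decade buckets in the spirit of netperf's -v 2 histogram rows.
NAMES = ["UNIT_USEC", "TEN_USEC", "HUNDRED_USEC", "UNIT_MSEC", "TEN_MSEC"]
buckets = [0] * len(NAMES)

# Unconnected UDP to 127.0.0.1:9 (discard): sends complete immediately
# and need no listener, so the loop only measures send-side latency.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
payload = bytes(1024)
total = 0

for _ in range(1000):
    t0 = time.perf_counter_ns()
    s.sendto(payload, ("127.0.0.1", 9))
    us = (time.perf_counter_ns() - t0) // 1000   # elapsed microseconds
    b = 0
    while us >= 10 and b < len(NAMES) - 1:       # find the decade bucket
        us //= 10
        b += 1
    buckets[b] += 1
    total += 1

for name, count in zip(NAMES, buckets):
    print(f"{name:13s}: {count}")
print(f"HIST_TOTAL: {total}")
s.close()
```

As in the netperf output above, HIST_TOTAL equals the number of timed send calls; a distribution spread into the millisecond rows is what "choppy" send() behavior looks like.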
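The "ask for 128KB, get 256KB" behavior above is the documented Linux semantic for SO_SNDBUF: the kernel doubles the requested value to leave room for bookkeeping overhead, and getsockopt() reports the doubled size. A quick sketch:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Request a 128KB send buffer, as netperf does when given an
# explicit socket-buffer-size option.
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 128 * 1024)

# On Linux the kernel doubles the request, so this reports
# 262144 bytes (256KB).
sndbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
print(sndbuf)
s.close()
```

Note that an explicit setsockopt() also switches that socket's buffer off autotuning entirely, which is exactly why the fixed 256KB value changes the send()-time distribution.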