public inbox for linux-kernel@vger.kernel.org
From: Mel Gorman <mgorman@suse.de>
To: Rick Jones <rick.jones2@hp.com>
Cc: Mike Galbraith <efault@gmx.de>,
	Peter Zijlstra <a.p.zijlstra@chello.nl>,
	Suresh Siddha <suresh.b.siddha@intel.com>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: Netperf UDP_STREAM regression due to not sending IPIs in ttwu_queue()
Date: Fri, 5 Oct 2012 10:54:08 +0100	[thread overview]
Message-ID: <20121005095408.GB29125@suse.de> (raw)
In-Reply-To: <506C7E20.4090103@hp.com>

On Wed, Oct 03, 2012 at 11:04:16AM -0700, Rick Jones wrote:
> On 10/03/2012 02:47 AM, Mel Gorman wrote:
> >On Tue, Oct 02, 2012 at 03:48:57PM -0700, Rick Jones wrote:
> >>On 10/02/2012 01:45 AM, Mel Gorman wrote:
> >>
> >>>SIZE=64
> >>>taskset -c 0 netserver
> >>>taskset -c 1 netperf -t UDP_STREAM -i 50,6 -I 99,1 -l 20 -H 127.0.0.1 -- -P 15895 -s 32768 -S 32768 -m $SIZE -M $SIZE
> >>
> >>Just FYI, unless you are running a hacked version of netperf, the
> >>"50" in "-i 50,6" will be silently truncated to 30.
> >>
> >
> >I'm not using a hacked version of netperf. The 50,6 has been there a long
> >time so I'm not sure where I took it from any more. It might have been an
> >older version or me being over-zealous at the time.
> 
> No version has ever gone past 30.  It has been that way since the
> confidence interval code was contributed.  It doesn't change
> anything, so it hasn't messed up any results.  It would be good to
> fix but not critical.
> 

It's fixed already. I don't know where the 50 came from in that case.
I was probably thinking "many iterations" at the time without reading
the docs properly.
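
For the record, the corrected invocation is the same command as before
with the iteration count brought down to netperf's actual maximum of 30:

```shell
# Same benchmark as at the top of the thread, but with -i capped at
# netperf's hard maximum of 30 iterations; anything above 30 is
# silently clamped, so requesting 50 never had any effect.
SIZE=64
taskset -c 0 netserver
taskset -c 1 netperf -t UDP_STREAM -i 30,6 -I 99,1 -l 20 -H 127.0.0.1 \
    -- -P 15895 -s 32768 -S 32768 -m $SIZE -M $SIZE
```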

> >>PS - I trust it is the receive-side throughput being reported/used
> >>with UDP_STREAM :)
> >
> >Good question. Now that I examine the scripts, it is in fact the sending
> >side that is being reported which is flawed. Granted I'm not expecting any
> >UDP loss on loopback and looking through a range of results, the
> >difference is marginal. It's still wrong to report just the sending side
> >for UDP_STREAM and I'll correct the scripts for it in the future.
> 
> Switching from sending to receiving throughput in UDP_STREAM could
> be a non-trivial disconnect in throughputs.  As Eric mentions, the
> receiver could be dropping lots of datagrams if it cannot keep-up,
> and netperf makes no attempt to provide any application-layer
> flow-control.
> 

I'll bear that in mind in the future: if there is a sudden change in
throughput, I'll determine whether it is due to this send/receive
disconnect or something else. The raw data is recorded in either case,
so it should be manageable.
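
For anyone reprocessing the raw data later, a rough sketch of pulling
the receive-side figure out of the classic (pre-omni) UDP_STREAM output
— the first data row is the local (send) side, the second is the remote
(receive) side, so the wanted value is the last field of the last
non-blank line. The sample numbers below are invented purely for
illustration:

```shell
# Extract the receive-side throughput from classic UDP_STREAM output.
# Row 1 of the stats is the local/send side, row 2 the remote/receive
# side; take the final field of the last non-blank line. The sample
# output here is fabricated for illustration only.
sample='Socket  Message  Elapsed      Messages
Size    Size     Time         Okay Errors   Throughput
bytes   bytes    secs            #      #   10^6bits/sec

229376      64   20.00     4380000      0     112.13
229376           20.00     4371000            111.90'

recv_tput=$(printf '%s\n' "$sample" | awk 'NF { last = $NF } END { print last }')
echo "$recv_tput"
```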

> Not sure which version of netperf you are using to know whether or
> not it has gone to the "omni" code path.  If you aren't using 2.5.0
> or 2.6.0 then the confidence intervals will have been computed based
> on the receive side throughput, so you will at least know that it
> was stable, even if it wasn't the same as the sending side.
> 

I'm using 2.4.5 because there was "no reason" to upgrade. Based on this
comment I'll take a closer look at upgrading soon, because the
confidence interval detection may be slightly broken in the version
I'm using.

> The top of trunk will use the remote's receive stats for the omni
> migration of a UDP_STREAM test too.  I think it is that way in 2.5.0
> and 2.6.0 as well but I've not gone into the repository to check.
> 
> Of course, that means you don't necessarily know that the sending
> throughput met your confidence intervals :)
> 

:)

> If you are on 2.5.0 or later, you may find:
> 
> http://www.netperf.org/svn/netperf2/trunk/doc/netperf.html#Omni-Output-Selection
> 
> helpful when looking to parse results.
> 

Thanks.
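
If it helps anyone following along: with 2.5.0 or later, the omni
output selectors should allow asking for both sides explicitly, along
the lines of the following. The selector names are from my reading of
that manual section, so double-check them against the installed version:

```shell
# With netperf >= 2.5.0, request specific omni output columns so the
# local send and remote receive throughput are both reported.
# Selector names taken from the manual's Omni Output Selection section;
# verify against the version actually installed.
netperf -t UDP_STREAM -l 20 -H 127.0.0.1 -- \
    -o LOCAL_SEND_THROUGHPUT,REMOTE_RECV_THROUGHPUT,THROUGHPUT_UNITS
```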

> One more, little thing - taskset may indeed be better for what you
> are doing (it will happen "sooner" certainly), but there is also the
> global -T option to bind netperf/netserver to the specified CPU id.
> http://www.netperf.org/svn/netperf2/trunk/doc/netperf.html#index-g_t_002dT_002c-Global-41
> 

I was aware of the option but had avoided using it; I think the option
didn't exist when I initially worked with taskset.
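
For completeness, my understanding is that the taskset pinning above
would map onto the global -T option roughly like this (untested
translation on my side):

```shell
# Equivalent binding using netperf's global -T option instead of
# taskset. With -T L,R netperf binds itself to CPU L and asks the
# remote netserver to bind to CPU R, so this mirrors the earlier
# "netserver on CPU 0, netperf on CPU 1" setup.
netserver
netperf -T 1,0 -t UDP_STREAM -l 20 -H 127.0.0.1 -- -m 64 -M 64
```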

Thanks a lot Rick for the suggestions, they are very helpful.

-- 
Mel Gorman
SUSE Labs

Thread overview: 17+ messages
2012-10-02  6:51 Netperf UDP_STREAM regression due to not sending IPIs in ttwu_queue() Mel Gorman
2012-10-02  7:49 ` Mike Galbraith
2012-10-02  8:45   ` Mel Gorman
2012-10-02  9:31     ` Mike Galbraith
2012-10-02 13:14       ` Mel Gorman
2012-10-02 14:33         ` Mike Galbraith
2012-10-03  6:50         ` Mike Galbraith
2012-10-03  8:13           ` Mike Galbraith
2012-10-03 13:30             ` Mike Galbraith
2012-10-10 12:29               ` Mel Gorman
2012-10-10 13:02                 ` Mike Galbraith
2012-10-10 13:05                 ` Peter Zijlstra
2012-10-02 22:48     ` Rick Jones
2012-10-03  9:47       ` Mel Gorman
2012-10-03 10:22         ` Eric Dumazet
2012-10-03 18:04         ` Rick Jones
2012-10-05  9:54           ` Mel Gorman [this message]
