From: Mel Gorman <mgorman@suse.de>
To: Fengguang Wu <fengguang.wu@intel.com>
Cc: LKML <linux-kernel@vger.kernel.org>, lkp@01.org
Subject: Re: [mm] f7b5d647946: -3.0% dbench.throughput-MB/sec
Date: Tue, 19 Aug 2014 20:11:51 +0100
Message-ID: <20140819191139.GG10146@suse.de>
In-Reply-To: <20140819154351.GA2697@localhost>

On Tue, Aug 19, 2014 at 11:43:51PM +0800, Fengguang Wu wrote:
> On Tue, Aug 19, 2014 at 03:34:28PM +0100, Mel Gorman wrote:
> > On Tue, Aug 19, 2014 at 12:41:34PM +0800, Fengguang Wu wrote:
> > > Hi Mel,
> > > 
> > > We noticed a minor dbench throughput regression on commit
> > > f7b5d647946aae1647bf5cd26c16b3a793c1ac49 ("mm: page_alloc: abort fair
> > > zone allocation policy when remotes nodes are encountered").
> > > 
> > > testcase: ivb44/dbench/100%
> > > 
> > > bb0b6dffa2ccfbd  f7b5d647946aae1647bf5cd26
> > > ---------------  -------------------------
> > >      25692 ± 0%      -3.0%      24913 ± 0%  dbench.throughput-MB/sec
> > >    6974259 ± 6%     -12.1%    6127616 ± 0%  meminfo.DirectMap2M
> > >      18.43 ± 0%      -4.6%      17.59 ± 0%  turbostat.RAM_W
> > >       9302 ± 0%      -3.6%       8965 ± 1%  time.user_time
> > >    1425791 ± 1%      -2.0%    1396598 ± 0%  time.involuntary_context_switches
> > > 
> > > Disclaimer:
> > > Results have been estimated based on internal Intel analysis and are provided
> > > for informational purposes only. Any difference in system hardware or software
> > > design or configuration may affect actual performance.
> > > 
> > 
> > DirectMap2M changing is a major surprise and doesn't make sense for this
> > machine.
> 
> The ivb44's hardware configuration is
> 
>         model: Ivytown Ivy Bridge-EP
>         nr_cpu: 48
>         memory: 64G
> 
> And note that this is an in-memory dbench run, which is why
> dbench.throughput-MB/sec is so high.
> 

Ok, it's a NUMA machine. I expect that prior to the patch more local memory
would have been used on node 0 because the fair zone allocation policy skipped
remote nodes. The patch corrects the zonelist behaviour, but the downside is
more remote accesses for processes running on node 0. The behaviour is correct,
although not necessarily desirable from a performance point of view. Users
should boot with numa_zonelist_order=node if this is a problem.
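
For reference, a minimal sketch of how that ordering can be selected, assuming
a kernel built with NUMA support that exposes the vm.numa_zonelist_order
tunable (the exact interface may vary by kernel version):

  # On the kernel command line at boot:
  #   numa_zonelist_order=node
  #
  # Or at runtime through procfs, if the tunable is present:
  cat /proc/sys/vm/numa_zonelist_order            # report the current ordering
  echo node > /proc/sys/vm/numa_zonelist_order    # order zonelists by node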

-- 
Mel Gorman
SUSE Labs
