From: Mel Gorman <mgorman@suse.de>
To: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>,
	Andrea Arcangeli <aarcange@redhat.com>,
	Rik van Riel <riel@redhat.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Hugh Dickins <hughd@google.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Paul Turner <pjt@google.com>,
	Lee Schermerhorn <Lee.Schermerhorn@hp.com>,
	Alex Shi <lkml.alex@gmail.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Linux-MM <linux-mm@kvack.org>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 00/46] Automatic NUMA Balancing V4
Date: Thu, 22 Nov 2012 09:32:30 +0000	[thread overview]
Message-ID: <20121122093230.GR8218@suse.de>
In-Reply-To: <20121121232715.GA4638@gmail.com>

On Thu, Nov 22, 2012 at 12:27:15AM +0100, Ingo Molnar wrote:
> 
> * Mel Gorman <mgorman@suse.de> wrote:
> 
> > > I did a quick SPECjbb 32-warehouses run as well:
> > > 
> > >                                 numa/core      balancenuma-v4
> > >       SPECjbb  +THP:               655 k/sec      607 k/sec
> > > 
> > 
> > Cool. Lets see what we have here. I have some questions;
> > 
> > You say you ran with 32 warehouses. Was this a single run with 
> > just 32 warehouses or you did a specjbb run up to 32 
> > warehouses and use the figure specjbb spits out? [...]
> 
> "32 warehouses" obviously means single instance...
> 

Considering the amount of flak you gave me over the THP problem, it is
not unreasonable to ask a clarifying question.

On running just 32 warehouses, please remember what I said about specjbb
benchmarks. MMTests reports the figure for each warehouse count because the
indications are that the low warehouse counts regressed while the higher
counts showed performance improvements. Further, specjbb itself only uses
figures from around the expected peak it estimates unless that is overridden
by the config file (I expect you left it at the default).

So, you've answered my first question. You did not run for multiple
warehouse counts, so you do not know how the lower counts behaved.
That's ok, the comparison is still valid. Can you now answer my other
questions please? They were:

	What is the comparison with a baseline kernel?

	You say you ran with balancenuma-v4. Was that the full series
	including the broken placement policy or did you test with just
	patches 1-37 as I asked in the patch leader?

I'll also reiterate my final point. The objective of balancenuma is to be
better than mainline and, at worst, no worse than mainline (which, given the
cost of the PTE updates, may be impossible, but it's the bar). It puts in
place a *basic* placement policy that could be summarised as "migrate on
reference with a two stage filter". It is a common foundation that either
the policies of numacore *or* autonuma could be rebased upon so they can be
compared in terms of placement policy, shared page identification, scheduler
policy and load balance policy. Where they share policies (e.g. scheduler
accounting and load balance), we'd agree on those patches and move on until
the two approaches differ only in the parts that genuinely need comparing.

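To illustrate what I mean by that filter, here is a minimal, self-contained
sketch of the idea. It is not the code in the series; the simplified
structure and helper below are stand-ins, and the only thing it takes from
the series is the last_nid tracking that patch 35 introduces.

	/*
	 * Sketch only: a simplified stand-in for the page frame carrying
	 * the last_nid tracking added in patch 35.
	 */
	struct page_info {
		int nid;	/* node the page currently resides on */
		int last_nid;	/* node of the last NUMA hinting fault */
	};

	/*
	 * Called from the hinting fault path with the node of the faulting
	 * CPU.  Stage one records who referenced the page last; stage two
	 * only migrates on a repeated reference from the same remote node,
	 * filtering out one-off references from unlikely task<->node
	 * relationships.
	 */
	static int should_migrate_on_fault(struct page_info *page, int this_nid)
	{
		int last_nid = page->last_nid;

		page->last_nid = this_nid;

		if (page->nid == this_nid)
			return 0;	/* already local, nothing to migrate */

		return last_nid == this_nid;
	}
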
Of course, a rebase may require changes to the task_numa_fault() interface
between the VM and the scheduler, depending on the information the policies
are interested in. There might also be differing requirements of the PTE
scanner but they should be marginal.
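
As an illustration only, the kind of hook I am talking about is roughly of
the following shape; the exact arguments are not the point, and the ones
shown here are an assumption rather than the interface as it stands in the
series.

	/*
	 * Roughly the shape of the VM -> scheduler hook; the arguments are
	 * an assumption for illustration only.  A different placement
	 * policy might want additional information carried across it, such
	 * as whether the fault triggered a migration.
	 */
	void task_numa_fault(int node, int pages);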

balancenuma is not expected to beat a smart placement policy but when it
does, the question becomes whether the difference is due to the underlying
mechanics, such as how it updates PTEs and traps faults, or to the scheduler
and placement policies built on top. If we can eliminate the possibility
that it's the underlying mechanics, our lives will become a lot easier.

Is there a fundamental reason why the scheduler modifications, placement
policies, shared page identification etc. from numacore cannot be rebased
on top of balancenuma? If there is no fundamental reason, then why will you
not rebase so that we can compare the policies directly, and potentially
compare autonuma's policies as well if it gets rebased? That would tell us
whether autonuma's policies (placement, scheduler, load balancer) are really
better or whether they actually depend on its implementation of the
underlying mechanics (its use of a kernel thread to do the PTE updates, for
example).

> Any multi-instance configuration is explicitly referred to as 
> multi-instance. In my numbers I sometimes tabulate them as "4x8 
> multi-JVM", that means the obvious as well: 4 instances, 8 
> warehouses each.
> 

Understood.

-- 
Mel Gorman
SUSE Labs


Thread overview: 66+ messages
2012-11-21 10:21 [PATCH 00/46] Automatic NUMA Balancing V4 Mel Gorman
2012-11-21 10:21 ` [PATCH 01/46] x86: mm: only do a local tlb flush in ptep_set_access_flags() Mel Gorman
2012-11-21 10:21 ` [PATCH 02/46] x86: mm: drop TLB flush from ptep_set_access_flags Mel Gorman
2012-11-21 10:21 ` [PATCH 03/46] mm,generic: only flush the local TLB in ptep_set_access_flags Mel Gorman
2012-11-21 10:21 ` [PATCH 04/46] x86/mm: Introduce pte_accessible() Mel Gorman
2012-11-21 10:21 ` [PATCH 05/46] mm: Only flush the TLB when clearing an accessible pte Mel Gorman
2012-11-21 10:21 ` [PATCH 06/46] mm: Count the number of pages affected in change_protection() Mel Gorman
2012-11-21 10:21 ` [PATCH 07/46] mm: Optimize the TLB flush of sys_mprotect() and change_protection() users Mel Gorman
2012-11-21 10:21 ` [PATCH 08/46] mm: compaction: Move migration fail/success stats to migrate.c Mel Gorman
2012-11-21 10:21 ` [PATCH 09/46] mm: migrate: Add a tracepoint for migrate_pages Mel Gorman
2012-11-21 10:21 ` [PATCH 10/46] mm: compaction: Add scanned and isolated counters for compaction Mel Gorman
2012-11-21 10:21 ` [PATCH 11/46] mm: numa: define _PAGE_NUMA Mel Gorman
2012-11-21 10:21 ` [PATCH 12/46] mm: numa: pte_numa() and pmd_numa() Mel Gorman
2012-11-21 10:21 ` [PATCH 13/46] mm: numa: Support NUMA hinting page faults from gup/gup_fast Mel Gorman
2012-11-21 10:21 ` [PATCH 14/46] mm: numa: split_huge_page: transfer the NUMA type from the pmd to the pte Mel Gorman
2012-11-21 10:21 ` [PATCH 15/46] mm: numa: Create basic numa page hinting infrastructure Mel Gorman
2012-11-21 10:21 ` [PATCH 16/46] mm: mempolicy: Make MPOL_LOCAL a real policy Mel Gorman
2012-11-21 10:21 ` [PATCH 17/46] mm: mempolicy: Add MPOL_MF_NOOP Mel Gorman
2012-11-21 10:21 ` [PATCH 18/46] mm: mempolicy: Check for misplaced page Mel Gorman
2012-11-21 10:21 ` [PATCH 19/46] mm: migrate: Introduce migrate_misplaced_page() Mel Gorman
2012-11-21 10:21 ` [PATCH 20/46] mm: mempolicy: Use _PAGE_NUMA to migrate pages Mel Gorman
2012-11-21 10:21 ` [PATCH 21/46] mm: mempolicy: Add MPOL_MF_LAZY Mel Gorman
2012-11-21 10:21 ` [PATCH 22/46] mm: mempolicy: Implement change_prot_numa() in terms of change_protection() Mel Gorman
2012-11-21 10:21 ` [PATCH 23/46] mm: mempolicy: Hide MPOL_NOOP and MPOL_MF_LAZY from userspace for now Mel Gorman
2012-11-21 10:21 ` [PATCH 24/46] mm: numa: Add fault driven placement and migration Mel Gorman
2012-11-21 10:21 ` [PATCH 25/46] mm: sched: numa: Implement constant, per task Working Set Sampling (WSS) rate Mel Gorman
2012-11-21 10:21 ` [PATCH 26/46] sched, numa, mm: Count WS scanning against present PTEs, not virtual memory ranges Mel Gorman
2012-11-21 10:21 ` [PATCH 27/46] mm: sched: numa: Implement slow start for working set sampling Mel Gorman
2012-11-21 10:21 ` [PATCH 28/46] mm: numa: Add pte updates, hinting and migration stats Mel Gorman
2012-11-21 10:21 ` [PATCH 29/46] mm: numa: Migrate on reference policy Mel Gorman
2012-11-21 10:21 ` [PATCH 30/46] mm: numa: Migrate pages handled during a pmd_numa hinting fault Mel Gorman
2012-11-21 10:21 ` [PATCH 31/46] mm: numa: Structures for Migrate On Fault per NUMA migration rate limiting Mel Gorman
2012-11-21 10:21 ` [PATCH 32/46] mm: numa: Rate limit the amount of memory that is migrated between nodes Mel Gorman
2012-11-21 10:21 ` [PATCH 33/46] mm: numa: Rate limit setting of pte_numa if node is saturated Mel Gorman
2012-11-21 10:21 ` [PATCH 34/46] sched: numa: Slowly increase the scanning period as NUMA faults are handled Mel Gorman
2012-11-21 10:21 ` [PATCH 35/46] mm: numa: Introduce last_nid to the page frame Mel Gorman
2012-11-21 10:21 ` [PATCH 36/46] mm: numa: Use a two-stage filter to restrict pages being migrated for unlikely task<->node relationships Mel Gorman
2012-11-21 18:25   ` Ingo Molnar
2012-11-21 19:15     ` Mel Gorman
2012-11-21 19:39       ` Mel Gorman
2012-11-21 19:46       ` Rik van Riel
2012-11-22  0:05         ` Ingo Molnar
2012-11-21 10:21 ` [PATCH 37/46] mm: numa: Add THP migration for the NUMA working set scanning fault case Mel Gorman
2012-11-21 11:24   ` Mel Gorman
2012-11-21 12:21   ` Mel Gorman
2012-11-21 10:21 ` [PATCH 38/46] sched: numa: Introduce tsk_home_node() Mel Gorman
2012-11-21 10:21 ` [PATCH 39/46] sched: numa: Make find_busiest_queue() a method Mel Gorman
2012-11-21 10:21 ` [PATCH 40/46] sched: numa: Implement home-node awareness Mel Gorman
2012-11-21 10:21 ` [PATCH 41/46] sched: numa: Introduce per-mm and per-task structures Mel Gorman
2012-11-21 10:21 ` [PATCH 42/46] sched: numa: CPU follows memory Mel Gorman
2012-11-21 10:21 ` [PATCH 43/46] sched: numa: Rename mempolicy to HOME Mel Gorman
2012-11-21 10:21 ` [PATCH 44/46] sched: numa: Consider only one CPU per node for CPU-follows-memory Mel Gorman
2012-11-21 10:21 ` [PATCH 45/46] balancenuma: no task swap in finding placement Mel Gorman
2012-11-21 10:21 ` [PATCH 46/46] Simple CPU follow Mel Gorman
2012-11-21 16:53 ` [PATCH 00/46] Automatic NUMA Balancing V4 Mel Gorman
2012-11-21 17:03   ` Ingo Molnar
2012-11-21 17:20     ` Mel Gorman
2012-11-21 17:33       ` Ingo Molnar
2012-11-21 18:02         ` Mel Gorman
2012-11-21 18:21           ` Ingo Molnar
2012-11-21 19:01             ` Mel Gorman
2012-11-21 23:27           ` Ingo Molnar
2012-11-22  9:32             ` Mel Gorman [this message]
2012-11-22  9:05         ` Ingo Molnar
2012-11-22  9:43           ` Mel Gorman
2012-11-22 12:56   ` Mel Gorman
