linux-mm.kvack.org archive mirror
From: Mel Gorman <mel@csn.ul.ie>
To: Nick Piggin <npiggin@suse.de>,
	Pekka Enberg <penberg@cs.helsinki.fi>,
	Christoph Lameter <cl@linux-foundation.org>
Cc: heiko.carstens@de.ibm.com, sachinp@in.ibm.com,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	Mel Gorman <mel@csn.ul.ie>, Tejun Heo <tj@kernel.org>,
	Benjamin Herrenschmidt <benh@kernel.crashing.org>
Subject: [RFC PATCH 0/3] Fix SLQB on memoryless configurations V2
Date: Mon, 21 Sep 2009 17:10:23 +0100	[thread overview]
Message-ID: <1253549426-917-1-git-send-email-mel@csn.ul.ie> (raw)

Currently SLQB is not allowed to be configured on PPC and S390 machines as
CPUs can belong to memoryless nodes. SLQB does not deal with this very well
and crashes reliably.

These patches fix the problem on PPC64 and the result appears to be fairly
stable. At least, basic actions that previously silently halted the machine
now complete successfully. There might still be per-cpu problems, as Sachin
reported that the stability problems on this machine did not depend on SLQB.

Patch 1 notes that the per-node hack in SLQB only works if every node in
	the system has a CPU of the same ID. If that is not the case,
	the per-node areas are not necessarily allocated. This fix only
	applies to ppc64 and it's possible that s390 needs a similar hack.
	The alternative is to statically allocate the per-node structures,
	but that is sub-optimal in terms of both performance and memory
	usage.
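
To make the failure mode concrete, here is a minimal userspace sketch. It
is not the SLQB or powerpc code; the topology and helper names are invented
for illustration. It shows that if node N's structure lives in the per-cpu
area of CPU N, any node without a possible CPU of that exact ID ends up
with no backing memory for its "per-node" area.

/*
 * Hedged userspace sketch, not the actual SLQB code. Pretend only CPUs
 * 0 and 1 have per-cpu areas allocated, while the system has 4 nodes.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_NODES	4
#define NR_POSSIBLE	2	/* only CPUs 0 and 1 have per-cpu areas */

static bool cpu_possible(int cpu)
{
	return cpu >= 0 && cpu < NR_POSSIBLE;
}

int main(void)
{
	int node;

	for (node = 0; node < NR_NODES; node++) {
		/*
		 * The "hack": node N's structure is kept in the per-cpu
		 * area of CPU N, so it is only backed by memory if a CPU
		 * with ID N actually exists.
		 */
		if (cpu_possible(node))
			printf("node %d: per-node area backed by CPU %d\n",
			       node, node);
		else
			printf("node %d: no CPU of that ID, area missing\n",
			       node);
	}
	return 0;
}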

Patch 2 notes that on memoryless configurations, allocations are always
	attempted locally, falling back to the page allocator on failure,
	but frees are always remote. This is effectively a memory leak.
	The patch records in kmem_cache_cpu which node it considers local,
	either the real local node or the closest node available.
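
Below is a minimal userspace sketch of that idea. It is not the actual
patch; kmem_cache_cpu_sketch, pick_local_node and the topology tables are
invented for illustration. Each CPU records which node it treats as local,
using the real node when it has memory and otherwise the closest node that
does, so allocations and frees agree on what "local" means.

/*
 * Hedged sketch of the patch 2 idea with an invented topology:
 * node 1 is memoryless and node 2 is its nearest memory-bearing node.
 */
#include <limits.h>
#include <stdbool.h>
#include <stdio.h>

#define NR_NODES 3

static const bool node_has_memory[NR_NODES] = { true, false, true };
static const int node_distance[NR_NODES][NR_NODES] = {
	{ 10, 20, 40 },
	{ 20, 10, 20 },
	{ 40, 20, 10 },
};

struct kmem_cache_cpu_sketch {
	int local_nid;	/* node this CPU treats as local */
};

/* Use the real node if it has memory, else the closest node that does. */
static int pick_local_node(int nid)
{
	int best = nid, best_dist = INT_MAX, n;

	if (node_has_memory[nid])
		return nid;

	for (n = 0; n < NR_NODES; n++) {
		if (!node_has_memory[n])
			continue;
		if (node_distance[nid][n] < best_dist) {
			best_dist = node_distance[nid][n];
			best = n;
		}
	}
	return best;
}

int main(void)
{
	int nid;

	for (nid = 0; nid < NR_NODES; nid++) {
		struct kmem_cache_cpu_sketch c = {
			.local_nid = pick_local_node(nid),
		};

		printf("CPU on node %d treats node %d as local\n",
		       nid, c.local_nid);
	}
	return 0;
}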

Patch 3 allows SLQB to be configured on PPC again. It's not enabled on
	S390 because I can't test for sure on a suitable configuration there.

This is not ready for merging just yet.

It needs a sign-off from the powerpc side because it is now potentially
allocating more memory (Ben?). An alternative to this patch is in V1, which
statically declares the per-node structures, but that is potentially
sub-optimal from a performance and memory utilisation perspective.

From an SLQB side, how does patch 2 now look from a potential list-corruption
point of view (Christoph, Nick, Pekka)? Certainly this version seems a lot
more sensible than the patch in V1 because the per-cpu list is now always
being used for pages from the closest node.

It would also be nice if the S390 guys could retest with SLQB to see if
special handling of the per-cpu areas is still needed.

 arch/powerpc/kernel/setup_64.c |   20 ++++++++++++++++++++
 include/linux/slqb_def.h       |    3 +++
 init/Kconfig                   |    2 +-
 mm/slqb.c                      |   23 +++++++++++++++++------
 4 files changed, 41 insertions(+), 7 deletions(-)

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org


Thread overview: 35+ messages
2009-09-21 16:10 Mel Gorman [this message]
2009-09-21 16:10 ` [PATCH 1/3] powerpc: Allocate per-cpu areas for node IDs for SLQB to use as per-node areas Mel Gorman
2009-09-21 17:17   ` Daniel Walker
2009-09-21 17:24     ` Randy Dunlap
2009-09-21 17:29       ` Daniel Walker
2009-09-21 17:42     ` Mel Gorman
2009-09-22  0:01   ` Tejun Heo
2009-09-22  9:32     ` Mel Gorman
2009-09-21 16:10 ` [PATCH 2/3] slqb: Record what node is local to a kmem_cache_cpu Mel Gorman
2009-09-21 16:10 ` [PATCH 3/3] slqb: Allow SLQB to be used on PPC Mel Gorman
2009-09-22  9:30   ` Heiko Carstens
2009-09-22  9:32     ` Mel Gorman
2009-09-21 17:46 ` [RFC PATCH 0/3] Fix SLQB on memoryless configurations V2 Mel Gorman
2009-09-21 17:54   ` Christoph Lameter
2009-09-21 18:05     ` Pekka Enberg
2009-09-21 18:07     ` Mel Gorman
2009-09-21 18:17       ` Christoph Lameter
2009-09-22 10:05         ` Mel Gorman
2009-09-22 10:21           ` Pekka Enberg
2009-09-22 10:24             ` Mel Gorman
2009-09-22  5:03       ` Sachin Sant
2009-09-22 10:07         ` Mel Gorman
2009-09-22 12:55         ` Mel Gorman
2009-09-22 13:05           ` Sachin Sant
2009-09-22 13:20             ` Mel Gorman
     [not found]               ` <363172900909220629j2f5174cbo9fe027354948d37@mail.gmail.com>
2009-09-22 13:38                 ` Mel Gorman
2009-09-22 23:07                 ` Christoph Lameter
2009-09-22  0:00 ` Benjamin Herrenschmidt
2009-09-22  0:19   ` David Rientjes
2009-09-22  6:30     ` Christoph Lameter
2009-09-22  7:59       ` David Rientjes
2009-09-22  8:11         ` Benjamin Herrenschmidt
2009-09-22  8:44           ` David Rientjes
2009-09-22 15:26   ` Mel Gorman
2009-09-22 17:31     ` David Rientjes

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=1253549426-917-1-git-send-email-mel@csn.ul.ie \
    --to=mel@csn.ul.ie \
    --cc=benh@kernel.crashing.org \
    --cc=cl@linux-foundation.org \
    --cc=heiko.carstens@de.ibm.com \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linux-mm@kvack.org \
    --cc=npiggin@suse.de \
    --cc=penberg@cs.helsinki.fi \
    --cc=sachinp@in.ibm.com \
    --cc=tj@kernel.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line
before the message body.

This is a public inbox; see the mirroring instructions for how to clone
and mirror all data and code used for this inbox, as well as URLs for
NNTP newsgroup(s).