linux-mm.kvack.org archive mirror
From: Sachin Sant <sachinp@in.ibm.com>
To: Mel Gorman <mel@csn.ul.ie>
Cc: Christoph Lameter <cl@linux-foundation.org>,
	Nick Piggin <npiggin@suse.de>,
	Pekka Enberg <penberg@cs.helsinki.fi>,
	heiko.carstens@de.ibm.com, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, Tejun Heo <tj@kernel.org>,
	Benjamin Herrenschmidt <benh@kernel.crashing.org>
Subject: Re: [RFC PATCH 0/3] Fix SLQB on memoryless configurations V2
Date: Tue, 22 Sep 2009 10:33:11 +0530	[thread overview]
Message-ID: <4AB85A8F.6010106@in.ibm.com> (raw)
In-Reply-To: <20090921180739.GT12726@csn.ul.ie>

Mel Gorman wrote:
> On Mon, Sep 21, 2009 at 01:54:12PM -0400, Christoph Lameter wrote:
>   
>> Let's just keep SLQB back until the basic issues with memoryless nodes are
>> resolved.
>>     
>
> It's not even super-clear that the memoryless nodes issues are entirely
> related to SLQB. Sachin for example says that there was a stall issue
> with memoryless nodes that could be triggered without SLQB. Sachin, is
> that still accurate?
>   
I think there are two different problems that we are dealing with.

The first is SLQB not working on a ppc64 box, which seems to be specific to
one machine; I haven't seen it on other POWER boxes. The patches you posted
seem to allow that box to boot, but it eventually hits the stall issue
(related to the dynamic percpu allocator not working on ppc64), which is the
second problem we are dealing with.
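
(As an aside, my rough understanding of the idea in those patches -- this is
only a sketch built on the generic node_state()/node_distance() helpers, not
the actual patch code -- is that a CPU sitting on a memoryless node should
fall back to the nearest node that does have memory and treat that as local:)

	#include <linux/kernel.h>
	#include <linux/nodemask.h>
	#include <linux/topology.h>

	/* Sketch only: pick the closest node with memory to use as "local". */
	static int nearest_node_with_memory(int nid)
	{
		int n, best_node = nid;
		int best_dist = INT_MAX;

		/* The node already has memory, nothing to do. */
		if (node_state(nid, N_HIGH_MEMORY))
			return nid;

		/* Otherwise walk the nodes with memory and keep the closest. */
		for_each_node_state(n, N_HIGH_MEMORY) {
			int dist = node_distance(nid, n);

			if (dist < best_dist) {
				best_dist = dist;
				best_node = n;
			}
		}

		return best_node;
	}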

The stall issue seems much more critical, as it affects almost all of the
POWER boxes I have tested with (4 in all). It is present in Linus' tree as
well and was first seen with 2.6.31-git5 (0cb583fd..)

The stall issue was reported here:
http://lists.ozlabs.org/pipermail/linuxppc-dev/2009-September/075791.html
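
(For context, the dynamic percpu allocator in question is the one behind the
alloc_percpu()/per_cpu_ptr() interface. A trivial usage sketch -- hypothetical
structure name, nothing taken from SLQB itself -- that is enough to exercise
it would be:)

	#include <linux/percpu.h>
	#include <linux/cpumask.h>
	#include <linux/errno.h>
	#include <linux/init.h>

	/* Hypothetical per-cpu structure, purely for illustration. */
	struct foo_stats {
		unsigned long count;
	};

	static struct foo_stats *stats;

	static int __init foo_init(void)
	{
		int cpu;

		/* One instance is allocated dynamically per possible CPU. */
		stats = alloc_percpu(struct foo_stats);
		if (!stats)
			return -ENOMEM;

		for_each_possible_cpu(cpu)
			per_cpu_ptr(stats, cpu)->count = 0;

		return 0;
	}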

Thanks
-Sachin


> If so, it's possible that SLQB somehow exacerbates the problem in some
> unknown fashion.
>
>   
>> There does not seem to be an easy way to deal with this. Some
>> thought needs to go into how memoryless node handling relates to per-cpu
>> lists and locking. List handling issues need to be addressed before SLQB
>> can work reliably. The same issues can surface on x86 platforms with weird
>> NUMA memory setups.
>>
>>     
>
> Can you spot if there is something fundamentally wrong with patch 2? I.e. what
> is wrong with treating the closest node with memory as local, instead of only
> the node the CPU actually resides on?
>
>   
>> Or just allow SLQB for !NUMA configurations and merge it now.
>>
>>     
>
> Forcing SLQB !NUMA will not rattle out any existing list issues
> unfortunately :(.
>
>   


-- 

---------------------------------
Sachin Sant
IBM Linux Technology Center
India Systems and Technology Labs
Bangalore, India
---------------------------------

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org


Thread overview: 35+ messages
2009-09-21 16:10 [RFC PATCH 0/3] Fix SLQB on memoryless configurations V2 Mel Gorman
2009-09-21 16:10 ` [PATCH 1/3] powerpc: Allocate per-cpu areas for node IDs for SLQB to use as per-node areas Mel Gorman
2009-09-21 17:17   ` Daniel Walker
2009-09-21 17:24     ` Randy Dunlap
2009-09-21 17:29       ` Daniel Walker
2009-09-21 17:42     ` Mel Gorman
2009-09-22  0:01   ` Tejun Heo
2009-09-22  9:32     ` Mel Gorman
2009-09-21 16:10 ` [PATCH 2/3] slqb: Record what node is local to a kmem_cache_cpu Mel Gorman
2009-09-21 16:10 ` [PATCH 3/3] slqb: Allow SLQB to be used on PPC Mel Gorman
2009-09-22  9:30   ` Heiko Carstens
2009-09-22  9:32     ` Mel Gorman
2009-09-21 17:46 ` [RFC PATCH 0/3] Fix SLQB on memoryless configurations V2 Mel Gorman
2009-09-21 17:54   ` Christoph Lameter
2009-09-21 18:05     ` Pekka Enberg
2009-09-21 18:07     ` Mel Gorman
2009-09-21 18:17       ` Christoph Lameter
2009-09-22 10:05         ` Mel Gorman
2009-09-22 10:21           ` Pekka Enberg
2009-09-22 10:24             ` Mel Gorman
2009-09-22  5:03       ` Sachin Sant [this message]
2009-09-22 10:07         ` Mel Gorman
2009-09-22 12:55         ` Mel Gorman
2009-09-22 13:05           ` Sachin Sant
2009-09-22 13:20             ` Mel Gorman
     [not found]               ` <363172900909220629j2f5174cbo9fe027354948d37@mail.gmail.com>
2009-09-22 13:38                 ` Mel Gorman
2009-09-22 23:07                 ` Christoph Lameter
2009-09-22  0:00 ` Benjamin Herrenschmidt
2009-09-22  0:19   ` David Rientjes
2009-09-22  6:30     ` Christoph Lameter
2009-09-22  7:59       ` David Rientjes
2009-09-22  8:11         ` Benjamin Herrenschmidt
2009-09-22  8:44           ` David Rientjes
2009-09-22 15:26   ` Mel Gorman
2009-09-22 17:31     ` David Rientjes
