From: Mel Gorman <mgorman@suse.de>
To: Linus Torvalds <torvalds@linux-foundation.org>
Cc: David Rientjes <rientjes@google.com>,
Ingo Molnar <mingo@kernel.org>,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
linux-mm <linux-mm@kvack.org>,
Peter Zijlstra <a.p.zijlstra@chello.nl>,
Paul Turner <pjt@google.com>,
Lee Schermerhorn <Lee.Schermerhorn@hp.com>,
Christoph Lameter <cl@linux.com>, Rik van Riel <riel@redhat.com>,
Andrew Morton <akpm@linux-foundation.org>,
Andrea Arcangeli <aarcange@redhat.com>,
Thomas Gleixner <tglx@linutronix.de>,
Johannes Weiner <hannes@cmpxchg.org>,
Hugh Dickins <hughd@google.com>,
Sasha Levin <levinsasha928@gmail.com>,
KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Subject: Re: [patch] mm, mempolicy: Introduce spinlock to read shared policy tree
Date: Fri, 21 Dec 2012 19:58:17 +0000
Message-ID: <20121221195817.GE13367@suse.de>
In-Reply-To: <CA+55aFxrdPpMWLD8LF0NNqgJqmB-L-HW3Xyxht6e5AwnoaueTw@mail.gmail.com>
On Fri, Dec 21, 2012 at 08:53:33AM -0800, Linus Torvalds wrote:
> > Jeez, I'm still not keen on that approach for the reasons that are explained
> > in the changelog for b22d127a39dd.
>
> Christ, Mel.
>
> Your reasons in b22d127a39dd are weak as hell, and then you come up
> with *THIS* shit instead:
>
The complaint about duplicated code was based on the fact that the mempolicy
code was a complete mess and duplicating code did not help. I'll accept
that it's weak.
> > That leads to this third *ugly* option that conditionally drops the lock
> > and it's up to the caller to figure out what happened. Fooling around with
> > how it conditionally releases the lock results in different sorts of ugly.
> > We now have three ugly sister patches for this. Who wants to be Cinderella?
> >
> > ---8<---
> > mm: numa: Release the PTL if calling vm_ops->get_policy during NUMA hinting faults
>
> Heck no. In fact, not a f*cking way in hell. Look yourself in the
> mirror, Mel.
I could do with a shave, a glass of wine and a holiday in that order.
> This patch is ugly, and *guaranteed* to result in subtle
> locking issues, and then you have the *gall* to quote the "uhh, that's
> a bit ugly due to some trivial duplication" thing in commit
> b22d127a39dd.
>
No argument.
> Reverting commit b22d127a39dd and just having a "ok, if we need to
> allocate, then drop the lock, allocate, re-get the lock, and see if we
> still need the new allocation" is *beautiful* code compared to the
> diseased abortion you just posted.
>
> Seriously. Conditional locking is error-prone, and about a million
> times worse than the trivial fix that Kosaki suggested.
>
Kosaki's patch does not fix the actual problem with NUMA hinting
faults. Converting to a spinlock is nice but we'd still hold the PTL at
the time sp_alloc is called and would potentially be allocating GFP_KERNEL
with a spinlock held.
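To spell out the hazard with a naive conversion: the old code in
shared_policy_replace() allocates under the policy lock, which becomes the
classic might-sleep-under-spinlock pattern. Roughly (illustrative only,
condensed from the code this patch replaces):

	spin_lock(&sp->lock);
	...
	/* sp_alloc() does kmem_cache_alloc(sn_cache, GFP_KERNEL), may sleep */
	new2 = sp_alloc(end, n->end, n->policy);
	...
	spin_unlock(&sp->lock);

Hence the patch below drops sp->lock, allocates, retakes the lock and
rechecks whether the allocation is still needed.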
At the risk of making your head explode, here is another patch. It does
the conversion to a spinlock as Kosaki originally did. That part is
unnecessary for the actual problem at hand, but I felt that leaving it out
would piss you off more. The actual fix is changing how the PTL is handled
by the NUMA hinting fault handler. It still locks conditionally, but only
at the one location where it matters and where it will be obvious. As
before, we could unconditionally unlock, but then a PMD fault potentially
releases and reacquires the PTL a large number of times.
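In shorthand, the PMD fault loop ends up with this shape (condensed from
the patch below, not a drop-in snippet):

	policy_vma = (vma->vm_ops && vma->vm_ops->get_policy);
	...
	/* do not call into vm_ops->get_policy with the PTL held */
	if (policy_vma)
		pte_unmap_unlock(pte, ptl);
	target_nid = mpol_misplaced(page, vma, addr);
	if (target_nid == -1) {
		if (policy_vma)
			pte = pte_offset_map_lock(mm, pmdp, addr, &ptl);
		put_page(page);
		continue;
	}

so the PTL is only released and reacquired for VMAs that actually have a
get_policy() hook.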
This survived a trinity fuzz test of mbind running for 5 minutes and an
autonumabench run. CONFIG_SLUB_DEBUG, CONFIG_DEBUG_MUTEXES,
CONFIG_DEBUG_SPINLOCK and CONFIG_NUMA_BALANCING were enabled. Unfortunately,
as part of the same test I also checked slabinfo and saw that
shared_policy_nodes is continually increasing, indicating that it is
leaking again. This also happens with current git, so it is another
regression.
---8<---
mm: mempolicy: Convert shared_policy mutex to spinlock and do not hold the PTL across shmem's vm_ops->get_policy().
Sasha was fuzzing with trinity and reported the following problem:
BUG: sleeping function called from invalid context at kernel/mutex.c:269
in_atomic(): 1, irqs_disabled(): 0, pid: 6361, name: trinity-main
2 locks held by trinity-main/6361:
#0: (&mm->mmap_sem){++++++}, at: [<ffffffff810aa314>] __do_page_fault+0x1e4/0x4f0
#1: (&(&mm->page_table_lock)->rlock){+.+...}, at: [<ffffffff8122f017>] handle_pte_fault+0x3f7/0x6a0
Pid: 6361, comm: trinity-main Tainted: G W 3.7.0-rc2-next-20121024-sasha-00001-gd95ef01-dirty #74
Call Trace:
[<ffffffff8114e393>] __might_sleep+0x1c3/0x1e0
[<ffffffff83ae5209>] mutex_lock_nested+0x29/0x50
[<ffffffff8124fc3e>] mpol_shared_policy_lookup+0x2e/0x90
[<ffffffff81219ebe>] shmem_get_policy+0x2e/0x30
[<ffffffff8124e99a>] get_vma_policy+0x5a/0xa0
[<ffffffff8124fce1>] mpol_misplaced+0x41/0x1d0
[<ffffffff8122f085>] handle_pte_fault+0x465/0x6a0
This was triggered by a different version of automatic NUMA balancing but
in theory the current version is vulnerable to the same problem.
do_numa_page
  -> numa_migrate_prep
    -> mpol_misplaced
      -> get_vma_policy
        -> shmem_get_policy
It's very unlikely this will happen as shared pages are not marked
pte_numa -- see the page_mapcount() check in change_pte_range() -- but
it is possible.
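(For reference, the guard in change_pte_range() is roughly of this shape --
paraphrased, not the exact code:

	page = vm_normal_page(vma, addr, oldpte);
	/* only mark non-shared pages pte_numa */
	if (page && page_mapcount(page) == 1)
		ptent = pte_mknuma(ptent);

so a page with multiple mappings is normally never marked pte_numa.)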
To address this, the patch is in two parts. First, it restores sp->lock as
originally implemented by Kosaki Motohiro. This is not strictly necessary
at this point, but the related flames were such that I felt hand-waving at
it would result in a second kick in the arse.
The second part alters how the PTL is acquired and released during a NUMA
hinting fault. numa_migrate_prep() now only takes a reference to the page;
it is up to the caller to call mpol_misplaced() and decide how to handle
the PTL. In the case of do_numa_page(), it simply releases the PTL first.
For PMDs, the PTL is held across the call if there is no
vm_ops->get_policy(). Otherwise the PTL is released beforehand and
reacquired if necessary.
Signed-off-by: Mel Gorman <mgorman@suse.de>
---
include/linux/mempolicy.h | 2 +-
mm/memory.c | 27 +++++++++++++-----
mm/mempolicy.c | 68 ++++++++++++++++++++++++++++++++-------------
3 files changed, 69 insertions(+), 28 deletions(-)
diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
index 9adc270..cc51d17 100644
--- a/include/linux/mempolicy.h
+++ b/include/linux/mempolicy.h
@@ -123,7 +123,7 @@ struct sp_node {
struct shared_policy {
struct rb_root root;
- struct mutex mutex;
+ spinlock_t lock;
};
void mpol_shared_policy_init(struct shared_policy *sp, struct mempolicy *mpol);
diff --git a/mm/memory.c b/mm/memory.c
index e0a9b0c..d8c2a5c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3431,7 +3431,7 @@ static int do_nonlinear_fault(struct mm_struct *mm, struct vm_area_struct *vma,
return __do_fault(mm, vma, address, pmd, pgoff, flags, orig_pte);
}
-int numa_migrate_prep(struct page *page, struct vm_area_struct *vma,
+static void numa_migrate_prep(struct page *page, struct vm_area_struct *vma,
unsigned long addr, int current_nid)
{
get_page(page);
@@ -3439,8 +3439,6 @@ int numa_migrate_prep(struct page *page, struct vm_area_struct *vma,
count_vm_numa_event(NUMA_HINT_FAULTS);
if (current_nid == numa_node_id())
count_vm_numa_event(NUMA_HINT_FAULTS_LOCAL);
-
- return mpol_misplaced(page, vma, addr);
}
int do_numa_page(struct mm_struct *mm, struct vm_area_struct *vma,
@@ -3479,8 +3477,10 @@ int do_numa_page(struct mm_struct *mm, struct vm_area_struct *vma,
}
current_nid = page_to_nid(page);
- target_nid = numa_migrate_prep(page, vma, addr, current_nid);
+ numa_migrate_prep(page, vma, addr, current_nid);
pte_unmap_unlock(ptep, ptl);
+
+ target_nid = mpol_misplaced(page, vma, addr);
if (target_nid == -1) {
/*
* Account for the fault against the current node if it not
@@ -3513,6 +3513,7 @@ static int do_pmd_numa_page(struct mm_struct *mm, struct vm_area_struct *vma,
unsigned long offset;
spinlock_t *ptl;
bool numa = false;
+ bool policy_vma = (vma->vm_ops && vma->vm_ops->get_policy);
int local_nid = numa_node_id();
spin_lock(&mm->page_table_lock);
@@ -3566,15 +3567,27 @@ static int do_pmd_numa_page(struct mm_struct *mm, struct vm_area_struct *vma,
* migrated to.
*/
curr_nid = local_nid;
- target_nid = numa_migrate_prep(page, vma, addr,
- page_to_nid(page));
+ numa_migrate_prep(page, vma, addr, page_to_nid(page));
+
+ /*
+ * If there is a possibility that mpol_misplaced will need
+ * to allocate for a shared memory policy then we have to
+ * release the PTL now and reacquire later if necessary.
+ */
+ if (policy_vma)
+ pte_unmap_unlock(pte, ptl);
+
+ target_nid = mpol_misplaced(page, vma, addr);
if (target_nid == -1) {
+ if (policy_vma)
+ pte = pte_offset_map_lock(mm, pmdp, addr, &ptl);
put_page(page);
continue;
}
/* Migrate to the requested node */
- pte_unmap_unlock(pte, ptl);
+ if (!policy_vma)
+ pte_unmap_unlock(pte, ptl);
migrated = migrate_misplaced_page(page, target_nid);
if (migrated)
curr_nid = target_nid;
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index d1b315e..ed8ebbf 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2132,7 +2132,7 @@ bool __mpol_equal(struct mempolicy *a, struct mempolicy *b)
*/
/* lookup first element intersecting start-end */
-/* Caller holds sp->mutex */
+/* Caller holds sp->lock */
static struct sp_node *
sp_lookup(struct shared_policy *sp, unsigned long start, unsigned long end)
{
@@ -2196,13 +2196,13 @@ mpol_shared_policy_lookup(struct shared_policy *sp, unsigned long idx)
if (!sp->root.rb_node)
return NULL;
- mutex_lock(&sp->mutex);
+ spin_lock(&sp->lock);
sn = sp_lookup(sp, idx, idx+1);
if (sn) {
mpol_get(sn->policy);
pol = sn->policy;
}
- mutex_unlock(&sp->mutex);
+ spin_unlock(&sp->lock);
return pol;
}
@@ -2328,6 +2328,14 @@ static void sp_delete(struct shared_policy *sp, struct sp_node *n)
sp_free(n);
}
+static void sp_node_init(struct sp_node *node, unsigned long start,
+ unsigned long end, struct mempolicy *pol)
+{
+ node->start = start;
+ node->end = end;
+ node->policy = pol;
+}
+
static struct sp_node *sp_alloc(unsigned long start, unsigned long end,
struct mempolicy *pol)
{
@@ -2344,10 +2352,7 @@ static struct sp_node *sp_alloc(unsigned long start, unsigned long end,
return NULL;
}
newpol->flags |= MPOL_F_SHARED;
-
- n->start = start;
- n->end = end;
- n->policy = newpol;
+ sp_node_init(n, start, end, newpol);
return n;
}
@@ -2357,9 +2362,12 @@ static int shared_policy_replace(struct shared_policy *sp, unsigned long start,
unsigned long end, struct sp_node *new)
{
struct sp_node *n;
+ struct sp_node *n_new = NULL;
+ struct mempolicy *mpol_new = NULL;
int ret = 0;
- mutex_lock(&sp->mutex);
+restart:
+ spin_lock(&sp->lock);
n = sp_lookup(sp, start, end);
/* Take care of old policies in the same range. */
while (n && n->start < end) {
@@ -2372,14 +2380,16 @@ static int shared_policy_replace(struct shared_policy *sp, unsigned long start,
} else {
/* Old policy spanning whole new range. */
if (n->end > end) {
- struct sp_node *new2;
- new2 = sp_alloc(end, n->end, n->policy);
- if (!new2) {
- ret = -ENOMEM;
- goto out;
- }
+ if (!n_new)
+ goto alloc_new;
+
+ *mpol_new = *n->policy;
+ atomic_set(&mpol_new->refcnt, 1);
+ sp_node_init(n_new, end, n->end, mpol_new);
n->end = start;
- sp_insert(sp, new2);
+ sp_insert(sp, n_new);
+ n_new = NULL;
+ mpol_new = NULL;
break;
} else
n->end = start;
@@ -2390,9 +2400,27 @@ static int shared_policy_replace(struct shared_policy *sp, unsigned long start,
}
if (new)
sp_insert(sp, new);
-out:
- mutex_unlock(&sp->mutex);
+ spin_unlock(&sp->lock);
+ ret = 0;
+
+err_out:
+ if (mpol_new)
+ mpol_put(mpol_new);
+ if (n_new)
+ kmem_cache_free(sn_cache, n_new);
+
return ret;
+
+alloc_new:
+ spin_unlock(&sp->lock);
+ ret = -ENOMEM;
+ n_new = kmem_cache_alloc(sn_cache, GFP_KERNEL);
+ if (!n_new)
+ goto err_out;
+ mpol_new = kmem_cache_alloc(policy_cache, GFP_KERNEL);
+ if (!mpol_new)
+ goto err_out;
+ goto restart;
}
/**
@@ -2410,7 +2438,7 @@ void mpol_shared_policy_init(struct shared_policy *sp, struct mempolicy *mpol)
int ret;
sp->root = RB_ROOT; /* empty tree == default mempolicy */
- mutex_init(&sp->mutex);
+ spin_lock_init(&sp->lock);
if (mpol) {
struct vm_area_struct pvma;
@@ -2476,14 +2504,14 @@ void mpol_free_shared_policy(struct shared_policy *p)
if (!p->root.rb_node)
return;
- mutex_lock(&p->mutex);
+ spin_lock(&p->lock);
next = rb_first(&p->root);
while (next) {
n = rb_entry(next, struct sp_node, nd);
next = rb_next(&n->nd);
sp_delete(p, n);
}
- mutex_unlock(&p->mutex);
+ spin_unlock(&p->lock);
}
#ifdef CONFIG_NUMA_BALANCING
--