linux-mm.kvack.org archive mirror
 help / color / mirror / Atom feed
From: Mel Gorman <mel@csn.ul.ie>
To: Minchan Kim <minchan.kim@gmail.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
	Christoph Lameter <cl@linux-foundation.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Andrea Arcangeli <aarcange@redhat.com>,
	Adam Litke <agl@us.ibm.com>, Avi Kivity <avi@redhat.com>,
	David Rientjes <rientjes@google.com>,
	KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>,
	Rik van Riel <riel@redhat.com>,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH 04/14] mm,migration: Allow the migration of PageSwapCache pages
Date: Thu, 22 Apr 2010 16:40:04 +0100	[thread overview]
Message-ID: <20100422154003.GC30306@csn.ul.ie> (raw)
In-Reply-To: <p2y28c262361004220718m3a5e3e2ekee1fef7ebdae8e73@mail.gmail.com>

On Thu, Apr 22, 2010 at 11:18:14PM +0900, Minchan Kim wrote:
> On Thu, Apr 22, 2010 at 11:14 PM, Mel Gorman <mel@csn.ul.ie> wrote:
> > On Thu, Apr 22, 2010 at 07:51:53PM +0900, KAMEZAWA Hiroyuki wrote:
> >> On Thu, 22 Apr 2010 19:31:06 +0900
> >> KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> wrote:
> >>
> >> > On Thu, 22 Apr 2010 19:13:12 +0900
> >> > Minchan Kim <minchan.kim@gmail.com> wrote:
> >> >
> >> > > On Thu, Apr 22, 2010 at 6:46 PM, KAMEZAWA Hiroyuki
> >> > > <kamezawa.hiroyu@jp.fujitsu.com> wrote:
> >> >
> >> > > > Hmm..in my test, the case was.
> >> > > >
> >> > > > Before try_to_unmap:
> >> > > >        mapcount=1, SwapCache, remap_swapcache=1
> >> > > > After remap
> >> > > >        mapcount=0, SwapCache, rc=0.
> >> > > >
> >> > > > So, I think there may be some race in rmap_walk() and vma handling or
> >> > > > anon_vma handling. migration_entry isn't found by rmap_walk.
> >> > > >
> >> > > > Hmm..it seems this kind patch will be required for debug.
> >> > >
> >>
> >> Ok, here is my patch for _fix_. But still testing...
> >> Running well at least for 30 minutes, where I can see bug in 10minutes.
> >> But this patch is too naive. please think about something better fix.
> >>
> >> ==
> >> From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
> >>
> >> At adjust_vma(), vma's start address and pgoff is updated under
> >> write lock of mmap_sem. This means the vma's rmap information
> >> update is atoimic only under read lock of mmap_sem.
> >>
> >>
> >> Even if it's not atomic, in usual case, try_to_ummap() etc...
> >> just fails to decrease mapcount to be 0. no problem.
> >>
> >> But at page migration's rmap_walk(), it requires to know all
> >> migration_entry in page tables and recover mapcount.
> >>
> >> So, this race in vma's address is critical. When rmap_walk meet
> >> the race, rmap_walk will mistakenly get -EFAULT and don't call
> >> rmap_one(). This patch adds a lock for vma's rmap information.
> >> But, this is _very slow_.
> >
> > Ok wow. That is exceptionally well-spotted. This looks like a proper bug
> > that compaction exposes as opposed to a bug that compaction introduces.
> >
> >> We need something sophisitcated, light-weight update for this..
> >>
> >
> > In the event the VMA is backed by a file, the mapping i_mmap_lock is taken for
> > the duration of the update and is  taken elsewhere where the VMA information
> > is read such as rmap_walk_file()
> >
> > In the event the VMA is anon, vma_adjust currently talks no locks and your
> > patch introduces a new one but why not use the anon_vma lock here? Am I
> > missing something that requires the new lock?
> 
> rmap_walk_anon doesn't hold vma's anon_vma->lock.
> It holds page->anon_vma->lock.
> 

Of course, thank you for pointing out my error. With multiple
anon_vma's, the locking is a bit of a mess. We cannot hold spinlocks on
two vma's in the same list at the same time without potentially causing
a livelock. The problem becomes how we can safely drop one anon_vma and
acquire the other without them disappearing from under us.

See the XXX mark in the following incomplete patch for example. It's
incomplete because the list traversal is also not safe once the lock has
been dropped and -EFAULT is returned by vma_address.

==== CUT HERE ====
mm: Take the vma anon_vma lock in vma_adjust and during rmap_walk

vma_adjust() is updating anon VMA information without any locks taken.
In constract, file-backed mappings use the i_mmap_lock. This lack of
locking can result in races with page migration. During rmap_walk(),
vma_address() can return -EFAULT for an address that will soon be valid.
This leaves a dangling migration PTE behind which can later cause a
BUG_ON to trigger when the page is faulted in.

This patch takes the anon_vma->lock during vma_adjust to avoid such
races. During rmap_walk, the page anon_vma is locked but as it walks the
VMA list, it'll lock the VMA->anon_vma if they differ as well.

---
 mm/mmap.c |    6 ++++++
 mm/rmap.c |   48 ++++++++++++++++++++++++++++++++++++++++--------
 2 files changed, 46 insertions(+), 8 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index f90ea92..61d6f1d 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -578,6 +578,9 @@ again:			remove_next = 1 + (end > next->vm_end);
 		}
 	}
 
+	if (vma->anon_vma)
+		spin_lock(&vma->anon_vma->lock);
+
 	if (root) {
 		flush_dcache_mmap_lock(mapping);
 		vma_prio_tree_remove(vma, root);
@@ -620,6 +623,9 @@ again:			remove_next = 1 + (end > next->vm_end);
 	if (mapping)
 		spin_unlock(&mapping->i_mmap_lock);
 
+	if (vma->anon_vma)
+		spin_unlock(&vma->anon_vma->lock);
+
 	if (remove_next) {
 		if (file) {
 			fput(file);
diff --git a/mm/rmap.c b/mm/rmap.c
index 85f203e..1ea0cae 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1358,7 +1358,7 @@ int try_to_munlock(struct page *page)
 static int rmap_walk_anon(struct page *page, int (*rmap_one)(struct page *,
 		struct vm_area_struct *, unsigned long, void *), void *arg)
 {
-	struct anon_vma *anon_vma;
+	struct anon_vma *page_avma;
 	struct anon_vma_chain *avc;
 	int ret = SWAP_AGAIN;
 
@@ -1368,20 +1368,52 @@ static int rmap_walk_anon(struct page *page, int (*rmap_one)(struct page *,
 	 * are holding mmap_sem. Users without mmap_sem are required to
 	 * take a reference count to prevent the anon_vma disappearing
 	 */
-	anon_vma = page_anon_vma(page);
-	if (!anon_vma)
+	page_avma = page_anon_vma(page);
+	if (!page_avma)
 		return ret;
-	spin_lock(&anon_vma->lock);
-	list_for_each_entry(avc, &anon_vma->head, same_anon_vma) {
+	spin_lock(&page_avma->lock);
+restart:
+	list_for_each_entry(avc, &page_avma->head, same_anon_vma) {
+		struct anon_vma *vma_avma;
 		struct vm_area_struct *vma = avc->vma;
 		unsigned long address = vma_address(page, vma);
-		if (address == -EFAULT)
-			continue;
+		if (address == -EFAULT) {
+			/*
+			 * If the pages anon_vma and the VMAs anon_vma differ,
+			 * vma_address was called without the lock being held
+			 * but we cannot hold more than one lock on the anon_vma
+			 * list at a time without potentially causing a livelock.
+			 * Drop the page anon_vma lock, acquire the vma one and
+			 * then restart the whole operation
+			 */
+			if (vma->anon_vma != page_avma) {
+				vma_avma = vma->anon_vma;
+				spin_unlock(&page_avma->lock);
+
+				/*
+				 * XXX: rcu_read_lock will ensure that the
+				 *      anon_vma still exists but how can we be
+				 *      sure it has not been freed and reused?
+				 */
+				spin_lock(&vma_avma->lock);
+				address = vma_address(page, vma);
+				spin_unlock(&vma_avma->lock);
+
+				/* page_avma with elevated external_refcount exists */
+				spin_lock(&page_avma->lock);
+				if (address == -EFAULT)
+					continue;
+			}
+		}
 		ret = rmap_one(page, vma, address, arg);
 		if (ret != SWAP_AGAIN)
 			break;
+
+		/* Restart the whole list walk if the lock was dropped */
+		if (vma_avma)
+			goto restart;
 	}
-	spin_unlock(&anon_vma->lock);
+	spin_unlock(&page_avma->lock);
 	return ret;
 }
 

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

  reply	other threads:[~2010-04-22 15:40 UTC|newest]

Thread overview: 69+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2010-04-20 21:01 [PATCH 0/14] Memory Compaction v8 Mel Gorman
2010-04-20 21:01 ` [PATCH 01/14] mm,migration: Take a reference to the anon_vma before migrating Mel Gorman
2010-04-21  2:49   ` KAMEZAWA Hiroyuki
2010-04-20 21:01 ` [PATCH 02/14] mm,migration: Share the anon_vma ref counts between KSM and page migration Mel Gorman
2010-04-20 21:01 ` [PATCH 03/14] mm,migration: Do not try to migrate unmapped anonymous pages Mel Gorman
2010-04-20 21:01 ` [PATCH 04/14] mm,migration: Allow the migration of PageSwapCache pages Mel Gorman
2010-04-21 14:30   ` Christoph Lameter
2010-04-21 15:00     ` Mel Gorman
2010-04-21 15:05       ` Christoph Lameter
2010-04-21 15:14         ` Mel Gorman
2010-04-21 15:31           ` Christoph Lameter
2010-04-21 15:34             ` Mel Gorman
2010-04-21 15:46               ` Christoph Lameter
2010-04-22  9:28                 ` Mel Gorman
2010-04-22  9:46                   ` KAMEZAWA Hiroyuki
2010-04-22 10:13                     ` Minchan Kim
2010-04-22 10:31                       ` KAMEZAWA Hiroyuki
2010-04-22 10:51                         ` KAMEZAWA Hiroyuki
2010-04-22 14:14                           ` Mel Gorman
2010-04-22 14:18                             ` Minchan Kim
2010-04-22 15:40                               ` Mel Gorman [this message]
2010-04-22 16:13                                 ` Mel Gorman
2010-04-22 19:29                                 ` Mel Gorman
2010-04-22 19:40                                   ` Christoph Lameter
2010-04-22 23:52                                     ` KAMEZAWA Hiroyuki
2010-04-23  9:03                                       ` Mel Gorman
2010-04-22 14:23                           ` Minchan Kim
2010-04-22 14:40                             ` Minchan Kim
2010-04-22 15:44                               ` Mel Gorman
2010-04-23 18:31                                 ` Andrea Arcangeli
2010-04-23 19:23                                   ` Mel Gorman
2010-04-23 19:39                                     ` Andrea Arcangeli
2010-04-23 21:35                                       ` Andrea Arcangeli
2010-04-24 10:52                                         ` Mel Gorman
2010-04-24 11:13                                           ` Andrea Arcangeli
2010-04-24 11:59                                             ` Mel Gorman
2010-04-24 14:30                                               ` Andrea Arcangeli
2010-04-26 21:54                                               ` Rik van Riel
2010-04-26 22:11                                                 ` Mel Gorman
2010-04-26 22:26                                                   ` Andrea Arcangeli
2010-04-25 14:41                                           ` Andrea Arcangeli
2010-04-27  9:40                                             ` Mel Gorman
2010-04-27 10:41                                               ` KAMEZAWA Hiroyuki
2010-04-27 11:12                                                 ` Mel Gorman
2010-04-27 15:42                                               ` Andrea Arcangeli
2010-04-24 10:50                                       ` Mel Gorman
2010-04-22 15:14                             ` Christoph Lameter
2010-04-23  3:39                               ` Paul E. McKenney
2010-04-23  4:55                               ` Minchan Kim
2010-04-21 23:59     ` KAMEZAWA Hiroyuki
2010-04-22  0:11       ` Minchan Kim
2010-04-20 21:01 ` [PATCH 05/14] mm: Allow CONFIG_MIGRATION to be set without CONFIG_NUMA or memory hot-remove Mel Gorman
2010-04-20 21:01 ` [PATCH 06/14] mm: Export unusable free space index via debugfs Mel Gorman
2010-04-20 21:01 ` [PATCH 07/14] mm: Export fragmentation " Mel Gorman
2010-04-20 21:01 ` [PATCH 08/14] mm: Move definition for LRU isolation modes to a header Mel Gorman
2010-04-20 21:01 ` [PATCH 09/14] mm,compaction: Memory compaction core Mel Gorman
2010-04-20 21:01 ` [PATCH 10/14] mm,compaction: Add /proc trigger for memory compaction Mel Gorman
2010-04-20 21:01 ` [PATCH 11/14] mm,compaction: Add /sys trigger for per-node " Mel Gorman
2010-04-20 21:01 ` [PATCH 12/14] mm,compaction: Direct compact when a high-order allocation fails Mel Gorman
2010-05-05 12:19   ` [PATCH] fix count_vm_event preempt in memory compaction direct reclaim Andrea Arcangeli
2010-05-05 12:51     ` Mel Gorman
2010-05-05 13:11       ` Andrea Arcangeli
2010-05-05 13:55         ` Mel Gorman
2010-05-05 14:48           ` Andrea Arcangeli
2010-05-05 15:14             ` Mel Gorman
2010-05-05 15:25               ` Andrea Arcangeli
2010-05-05 15:32                 ` Mel Gorman
2010-04-20 21:01 ` [PATCH 13/14] mm,compaction: Add a tunable that decides when memory should be compacted and when it should be reclaimed Mel Gorman
2010-04-20 21:01 ` [PATCH 14/14] mm,compaction: Defer compaction using an exponential backoff when compaction fails Mel Gorman

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20100422154003.GC30306@csn.ul.ie \
    --to=mel@csn.ul.ie \
    --cc=aarcange@redhat.com \
    --cc=agl@us.ibm.com \
    --cc=akpm@linux-foundation.org \
    --cc=avi@redhat.com \
    --cc=cl@linux-foundation.org \
    --cc=kamezawa.hiroyu@jp.fujitsu.com \
    --cc=kosaki.motohiro@jp.fujitsu.com \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linux-mm@kvack.org \
    --cc=minchan.kim@gmail.com \
    --cc=riel@redhat.com \
    --cc=rientjes@google.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).