* [RFC] rmap: fix "race" between do_wp_page and shrink_active_list
@ 2015-05-11  7:51 Vladimir Davydov
From: Vladimir Davydov @ 2015-05-11  7:51 UTC (permalink / raw)
  To: Rik van Riel, Hugh Dickins, Christoph Lameter, Paul E. McKenney,
	Peter Zijlstra, Andrew Morton
  Cc: Minchan Kim, Johannes Weiner, Michal Hocko, Greg Thelen,
	Michel Lespinasse, David Rientjes, Pavel Emelyanov,
	Cyrill Gorcunov, linux-mm, linux-kernel

Hi,

I've been arguing with Minchan for a while about whether store-tearing
is possible while setting page->mapping in __page_set_anon_rmap and
friends, see

  http://thread.gmane.org/gmane.linux.kernel.mm/131949/focus=132132

This patch is intended to draw attention to this discussion. It fixes a
race that could happen if store-tearing were possible. The race is as
follows.

In do_wp_page() we can call page_move_anon_rmap(), which sets
page->mapping as follows:

        anon_vma = (void *) anon_vma + PAGE_MAPPING_ANON;
        page->mapping = (struct address_space *) anon_vma;

The page in question may be on an LRU list, because nowhere in
do_wp_page() do we remove it from the list, nor do we take any
LRU-related locks. Although the page is locked, shrink_active_list() can
still call page_referenced() on it concurrently, because the latter does
not require an anonymous page to be locked.
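
For reference, the check at the top of page_referenced() that lets it
skip the page lock for anonymous pages looks roughly like this
(simplified):

        /*
         * Simplified from page_referenced() in mm/rmap.c: only non-anon
         * (or KSM) pages are trylocked, so for a regular anon page the
         * rmap walk proceeds without the page lock.
         */
        if (!is_locked && (!PageAnon(page) || PageKsm(page))) {
                we_locked = trylock_page(page);
                if (!we_locked)
                        return 1;
        }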

If the store tearing described in the thread were possible, we could
face the following race, resulting in a kernel panic:

  CPU0                          CPU1
  ----                          ----
  do_wp_page                    shrink_active_list
   lock_page                     page_referenced
                                  PageAnon->yes, so skip trylock_page
   page_move_anon_rmap
    page->mapping = anon_vma
                                  rmap_walk
                                   PageAnon->no
                                   rmap_walk_file
                                    BUG
    page->mapping += PAGE_MAPPING_ANON
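
For illustration only (this is not part of the patch), here is a
self-contained user-space model of the scenario. The writer stores
"mapping" the way a hypothetical tearing compiler could emit the
page->mapping assignment (anon_vma first, the PAGE_MAPPING_ANON bit
afterwards), while the reader models PageAnon() with a plain load and
observes a non-NULL mapping without the anon bit set, which in the
kernel would send rmap_walk() down the file path:

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#define PAGE_MAPPING_ANON	1UL

static volatile uintptr_t mapping;	/* models page->mapping */
static volatile int stop;

static void *writer(void *arg)
{
	uintptr_t anon_vma = 0x100000UL;	/* fake anon_vma address */

	(void)arg;
	while (!stop) {
		/*
		 * The source says
		 *     mapping = anon_vma + PAGE_MAPPING_ANON;
		 * but here the store is written out as the two steps a
		 * tearing compiler could emit:
		 */
		mapping = anon_vma;		/* anon bit not set yet */
		mapping += PAGE_MAPPING_ANON;	/* now it is */
		anon_vma += 0x1000;
	}
	return NULL;
}

static void *reader(void *arg)
{
	(void)arg;
	while (!stop) {
		uintptr_t m = mapping;	/* models a plain PageAnon() load */

		if (m && !(m & PAGE_MAPPING_ANON)) {
			printf("saw mapping %#lx without the anon bit\n",
			       (unsigned long)m);
			return NULL;
		}
	}
	return NULL;
}

int main(void)
{
	pthread_t w, r;

	pthread_create(&w, NULL, writer, NULL);
	pthread_create(&r, NULL, reader, NULL);
	sleep(1);
	stop = 1;
	pthread_join(w, NULL);
	pthread_join(r, NULL);
	return 0;
}

Built with "gcc -pthread", the reader fires almost immediately, but only
because the tearing is hand-coded in writer(); the question is whether a
compiler may legally produce such code from the single assignment.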

This patch fixes the race by explicitly forbidding the compiler from
splitting the page->mapping store in __page_set_anon_rmap() and friends,
as well as the load in PageAnon(), with the aid of WRITE_ONCE/READ_ONCE.
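
For a pointer-sized field these boil down to volatile accesses,
essentially what the old ACCESS_ONCE() did; roughly (a simplification,
the real macros also handle other access sizes):

        /*
         * Model of what WRITE_ONCE()/READ_ONCE() guarantee here: the
         * access is performed as a single full-width volatile store or
         * load, which compilers in practice do not tear.
         */
        #define MODEL_WRITE_ONCE(x, val)  (*(volatile typeof(x) *)&(x) = (val))
        #define MODEL_READ_ONCE(x)        (*(volatile typeof(x) *)&(x))

In the user-space model above the tearing is written out by hand in
writer(); in the kernel there is a single assignment, and wrapping it in
WRITE_ONCE() (and the PageAnon() load in READ_ONCE()) forbids the
compiler from splitting it that way.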

Personally, I don't believe this can ever happen with any sane compiler,
because such an "optimization" would only result in two stores instead
of one (note that anon_vma is not a constant), but since I may be
mistaken I would like to hear what synchronization experts think about
it.

Thanks,
Vladimir
---
 include/linux/page-flags.h |    3 ++-
 mm/rmap.c                  |    6 +++---
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 5e7c4f50a644..a529e0a35fe9 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -320,7 +320,8 @@ PAGEFLAG(Idle, idle)
 
 static inline int PageAnon(struct page *page)
 {
-	return ((unsigned long)page->mapping & PAGE_MAPPING_ANON) != 0;
+	return ((unsigned long)READ_ONCE(page->mapping) &
+		PAGE_MAPPING_ANON) != 0;
 }
 
 #ifdef CONFIG_KSM
diff --git a/mm/rmap.c b/mm/rmap.c
index eca7416f55d7..aa60c63704e6 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -958,7 +958,7 @@ void page_move_anon_rmap(struct page *page,
 	VM_BUG_ON_PAGE(page->index != linear_page_index(vma, address), page);
 
 	anon_vma = (void *) anon_vma + PAGE_MAPPING_ANON;
-	page->mapping = (struct address_space *) anon_vma;
+	WRITE_ONCE(page->mapping, (struct address_space *) anon_vma);
 }
 
 /**
@@ -987,7 +987,7 @@ static void __page_set_anon_rmap(struct page *page,
 		anon_vma = anon_vma->root;
 
 	anon_vma = (void *) anon_vma + PAGE_MAPPING_ANON;
-	page->mapping = (struct address_space *) anon_vma;
+	WRITE_ONCE(page->mapping, (struct address_space *) anon_vma);
 	page->index = linear_page_index(vma, address);
 }
 
@@ -1579,7 +1579,7 @@ static void __hugepage_set_anon_rmap(struct page *page,
 		anon_vma = anon_vma->root;
 
 	anon_vma = (void *) anon_vma + PAGE_MAPPING_ANON;
-	page->mapping = (struct address_space *) anon_vma;
+	WRITE_ONCE(page->mapping, (struct address_space *) anon_vma);
 	page->index = linear_page_index(vma, address);
 }
 
-- 
1.7.10.4

