* [PATCHv2 1/2] mm: try to detect that page->ptl is in use
@ 2013-10-14 14:32 Kirill A. Shutemov
From: Kirill A. Shutemov @ 2013-10-14 14:32 UTC (permalink / raw)
  To: Andrew Morton, Peter Zijlstra
  Cc: Max Filippov, Chris Zankel, Christoph Lameter, Pekka Enberg,
	Matt Mackall, linux-kernel, linux-mm, linux-arch, linux-xtensa,
	Kirill A. Shutemov

prep_new_page() initializes page->private (and therefore page->ptl) to
0. Make sure nobody has taken it into use between allocation of the page
and the page table constructor.

This can happen if an architecture tries to use the slab allocator for
page table allocation: slab code uses page->slab_cache and
page->first_page (for tail pages), which share storage with page->ptl.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
v2:
 - fix typo;

 Documentation/vm/split_page_table_lock | 4 ++++
 include/linux/mm.h                     | 9 +++++++++
 2 files changed, 13 insertions(+)

diff --git a/Documentation/vm/split_page_table_lock b/Documentation/vm/split_page_table_lock
index e2f617b732..7521d367f2 100644
--- a/Documentation/vm/split_page_table_lock
+++ b/Documentation/vm/split_page_table_lock
@@ -53,6 +53,10 @@ There's no need in special enabling of PTE split page table lock:
 everything required is done by pgtable_page_ctor() and pgtable_page_dtor(),
 which must be called on PTE table allocation / freeing.
 
+Make sure the architecture doesn't use the slab allocator for page table
+allocation: slab uses page->slab_cache and page->first_page for its pages.
+These fields share storage with page->ptl.
+
 PMD split lock only makes sense if you have more than two page table
 levels.
 
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 658e8b317f..9a4a873b2f 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1262,6 +1262,15 @@ static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd)
 
 static inline bool ptlock_init(struct page *page)
 {
+	/*
+	 * prep_new_page() initialize page->private (and therefore page->ptl)
+	 * with 0. Make sure nobody took it in use in between.
+	 *
+	 * It can happen if arch try to use slab for page table allocation:
+	 * slab code uses page->slab_cache and page->first_page (for tail
+	 * pages), which share storage with page->ptl.
+	 */
+	VM_BUG_ON(page->ptl);
 	if (!ptlock_alloc(page))
 		return false;
 	spin_lock_init(ptlock_ptr(page));
-- 
1.8.4.rc3

