From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>,
Hugh Dickins <hughd@google.com>,
Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>,
Sasha Levin <sasha.levin@oracle.com>,
Minchan Kim <minchan@kernel.org>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Subject: [PATCH 3/4] thp: fix split vs. unmap race
Date: Tue, 3 Nov 2015 17:26:14 +0200
Message-ID: <1446564375-72143-4-git-send-email-kirill.shutemov@linux.intel.com>
In-Reply-To: <1446564375-72143-1-git-send-email-kirill.shutemov@linux.intel.com>
To stabilize the compound page during split we use migration entries.
The code implementing this is buggy: I wrongly assumed the kernel would
wait for migration to finish before zapping ptes.
It turns out that's not true.
As a result, if zap_pte_range() races with split_huge_page(), we can end
up with a page which is not mapped anymore, but has _count and _mapcount
elevated. The page is on the LRU too, so it's still reachable by vmscan
and by pfn scanners. It's likely that page->mapping in this case would
point to a freed anon_vma.
BOOM!
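For illustration, the interleaving looks roughly like this (a simplified
sketch of the race with the old code, not code from the patch):

	/*
	 * CPU A: split_huge_page()       CPU B: zap_pte_range()
	 *
	 * freeze_page()
	 *   installs migration entries,
	 *   but keeps the pins and the
	 *   rmap references
	 *                                 clears the migration entries
	 *                                 without waiting for the split
	 *                                 to finish
	 * unfreeze_page()
	 *   finds no migration entries
	 *   to restore, so _count and
	 *   _mapcount stay elevated and
	 *   page->mapping may already
	 *   point to a freed anon_vma
	 */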
The patch modifies the freeze/unfreeze_page() code to match normal
migration entry logic: on setup we remove the page from rmap and drop the
pin; on teardown we take the pin back and put the page back on rmap. This
way, even if the migration entry is removed under us, we don't corrupt
the page's state.
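In outline, the new pairing mirrors normal migration entries (a condensed
sketch of the hunks below, not the literal patch context):

	/* freeze side: hand the rmap reference and the pin over to the
	 * migration entry */
	set_pte_at(vma->vm_mm, address, pte + i, swp_pte);
	page_remove_rmap(page, false);
	put_page(page);

	/* unfreeze side: take them back, but only if the migration entry
	 * survived; if zap_pte_range() already removed it, there is
	 * nothing to undo */
	if (!is_swap_pte(pte[i]))
		continue;
	get_page(page);
	page_add_anon_rmap(page, vma, address, false);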
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reported-by: Minchan Kim <minchan@kernel.org>
Reported-by: Sasha Levin <sasha.levin@oracle.com>
---
mm/huge_memory.c | 22 ++++++++++++++++++----
mm/rmap.c        | 19 +++++--------------
2 files changed, 23 insertions(+), 18 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 5009f68786d0..3700981f8035 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2934,6 +2934,13 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 	smp_wmb(); /* make pte visible before pmd */
 	pmd_populate(mm, pmd, pgtable);
+
+	if (freeze) {
+		for (i = 0; i < HPAGE_PMD_NR; i++, haddr += PAGE_SIZE) {
+			page_remove_rmap(page + i, false);
+			put_page(page + i);
+		}
+	}
 }
 
 void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
@@ -3079,6 +3086,8 @@ static void freeze_page_vma(struct vm_area_struct *vma, struct page *page,
 		if (pte_soft_dirty(entry))
 			swp_pte = pte_swp_mksoft_dirty(swp_pte);
 		set_pte_at(vma->vm_mm, address, pte + i, swp_pte);
+		page_remove_rmap(page, false);
+		put_page(page);
 	}
 	pte_unmap_unlock(pte, ptl);
 }
@@ -3117,8 +3126,6 @@ static void unfreeze_page_vma(struct vm_area_struct *vma, struct page *page,
 		return;
 	pte = pte_offset_map_lock(vma->vm_mm, pmd, address, &ptl);
 	for (i = 0; i < HPAGE_PMD_NR; i++, address += PAGE_SIZE, page++) {
-		if (!page_mapped(page))
-			continue;
 		if (!is_swap_pte(pte[i]))
 			continue;
@@ -3128,6 +3135,9 @@ static void unfreeze_page_vma(struct vm_area_struct *vma, struct page *page,
 		if (migration_entry_to_page(swp_entry) != page)
 			continue;
+		get_page(page);
+		page_add_anon_rmap(page, vma, address, false);
+
 		entry = pte_mkold(mk_pte(page, vma->vm_page_prot));
 		entry = pte_mkdirty(entry);
 		if (is_write_migration_entry(swp_entry))
@@ -3195,8 +3205,6 @@ static int __split_huge_page_tail(struct page *head, int tail,
 	 */
 	atomic_add(mapcount + 1, &page_tail->_count);
 
-	/* after clearing PageTail the gup refcount can be released */
-	smp_mb__after_atomic();
 	page_tail->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
 	page_tail->flags |= (head->flags &
@@ -3209,6 +3217,12 @@ static int __split_huge_page_tail(struct page *head, int tail,
 			 (1L << PG_unevictable)));
 	page_tail->flags |= (1L << PG_dirty);
 
+	/*
+	 * After clearing PageTail the gup refcount can be released.
+	 * Page flags also must be visible before we make the page non-compound.
+	 */
+	smp_wmb();
+
 	clear_compound_head(page_tail);
 
 	if (page_is_young(head))
diff --git a/mm/rmap.c b/mm/rmap.c
index 288622f5f34d..ad9af8b3a381 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1135,20 +1135,12 @@ void do_page_add_anon_rmap(struct page *page,
 	bool compound = flags & RMAP_COMPOUND;
 	bool first;
 
-	if (PageTransCompound(page)) {
+	if (compound) {
+		atomic_t *mapcount;
 		VM_BUG_ON_PAGE(!PageLocked(page), page);
-		if (compound) {
-			atomic_t *mapcount;
-
-			VM_BUG_ON_PAGE(!PageTransHuge(page), page);
-			mapcount = compound_mapcount_ptr(page);
-			first = atomic_inc_and_test(mapcount);
-		} else {
-			/* Anon THP always mapped first with PMD */
-			first = 0;
-			VM_BUG_ON_PAGE(!page_mapcount(page), page);
-			atomic_inc(&page->_mapcount);
-		}
+		VM_BUG_ON_PAGE(!PageTransHuge(page), page);
+		mapcount = compound_mapcount_ptr(page);
+		first = atomic_inc_and_test(mapcount);
 	} else {
 		VM_BUG_ON_PAGE(compound, page);
 		first = atomic_inc_and_test(&page->_mapcount);
@@ -1163,7 +1155,6 @@ void do_page_add_anon_rmap(struct page *page,
 	 * disabled.
 	 */
 	if (compound) {
-		VM_BUG_ON_PAGE(!PageTransHuge(page), page);
 		__inc_zone_page_state(page,
 				NR_ANON_TRANSPARENT_HUGEPAGES);
 	}
--
2.6.1