From: David Hildenbrand <david@redhat.com>
To: Vlastimil Babka <vbabka@suse.cz>, linux-kernel@vger.kernel.org
Cc: Andrew Morton <akpm@linux-foundation.org>,
Hugh Dickins <hughd@google.com>,
Linus Torvalds <torvalds@linux-foundation.org>,
David Rientjes <rientjes@google.com>,
Shakeel Butt <shakeelb@google.com>,
John Hubbard <jhubbard@nvidia.com>,
Jason Gunthorpe <jgg@nvidia.com>,
Mike Kravetz <mike.kravetz@oracle.com>,
Mike Rapoport <rppt@linux.ibm.com>,
Yang Shi <shy828301@gmail.com>,
"Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>,
Matthew Wilcox <willy@infradead.org>,
Jann Horn <jannh@google.com>, Michal Hocko <mhocko@kernel.org>,
Nadav Amit <namit@vmware.com>, Rik van Riel <riel@surriel.com>,
Roman Gushchin <guro@fb.com>,
Andrea Arcangeli <aarcange@redhat.com>,
Peter Xu <peterx@redhat.com>, Donald Dutile <ddutile@redhat.com>,
Christoph Hellwig <hch@lst.de>, Oleg Nesterov <oleg@redhat.com>,
Jan Kara <jack@suse.cz>, Liang Zhang <zhangliang5@huawei.com>,
Pedro Gomes <pedrodemargomes@gmail.com>,
Oded Gabbay <oded.gabbay@gmail.com>,
linux-mm@kvack.org
Subject: Re: [PATCH v3 12/16] mm: remember exclusively mapped anonymous pages with PG_anon_exclusive
Date: Tue, 19 Apr 2022 18:46:43 +0200
Message-ID: <219bd2d0-92ef-bcac-458a-0df6190fa387@redhat.com>
In-Reply-To: <5fc7d007-e59b-de8d-4d88-3f1b5adfa95b@suse.cz>
On 13.04.22 20:28, Vlastimil Babka wrote:
> On 4/13/22 18:39, David Hildenbrand wrote:
>>>> @@ -3035,10 +3083,19 @@ void set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
>>>>
>>>> flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);
>>>> pmdval = pmdp_invalidate(vma, address, pvmw->pmd);
>>>> +
>>>> + anon_exclusive = PageAnon(page) && PageAnonExclusive(page);
>>>> + if (anon_exclusive && page_try_share_anon_rmap(page)) {
>>>> + set_pmd_at(mm, address, pvmw->pmd, pmdval);
>>>> + return;
>>>
>>> I am admittedly not too familiar with this code, but looks like this means
>>> we fail to migrate the THP, right? But we don't seem to be telling the
>>> caller, which is try_to_migrate_one(), so it will continue and not terminate
>>> the walk and return false?
>>
>> Right, we're not returning "false". Returning "false" would be an
>> optimization to make rmap_walk_anon() fail faster.
>
> Ah right, that's what I missed, it's an optimization and we will realize
> elsewhere afterwards that the page still has mappings and we can't migrate...
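Right. For reference, a rough sketch of how the caller side currently handles
that (paraphrased from the migrate_pages() -> __unmap_and_move() flow in
mm/migrate.c; simplified, with locking and error handling omitted):

	if (page_mapped(page)) {
		/*
		 * Walk the rmap and replace mappings by migration
		 * entries. Returns void today: if replacing one mapping
		 * fails (e.g., page_try_share_anon_rmap() fails on a
		 * pinned anon THP), the walk simply keeps going.
		 */
		try_to_migrate(folio, 0);
		page_was_mapped = 1;
	}

	if (!page_mapped(page))
		rc = move_to_new_page(newpage, page, mode);

	/*
	 * Still mapped -> migration failed; restore any migration
	 * entries we managed to install.
	 */
	if (page_was_mapped)
		remove_migration_ptes(folio,
			rc == MIGRATEPAGE_SUCCESS ? dst : folio, false);

So terminating the rmap walk early merely avoids installing migration entries
that would be restored right away.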
I'll include the following patch in v4 (still to be tested):
From 08fb0e45404e3d0f85c2ad23a473e95053396376 Mon Sep 17 00:00:00 2001
From: David Hildenbrand <david@redhat.com>
Date: Tue, 19 Apr 2022 18:39:23 +0200
Subject: [PATCH] mm/rmap: fail try_to_migrate() early when setting a PMD
migration entry fails
Let's fail right away in case we cannot clear PG_anon_exclusive because
the anon THP may be pinned. Right now, we continue trying to
install migration entries and the caller of try_to_migrate() will
realize that the page is still mapped and has to restore the migration
entries. Let's fail fast instead, just like we already do for PTE migration
entries.
Suggested-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
include/linux/swapops.h | 4 ++--
mm/huge_memory.c | 8 +++++---
mm/rmap.c | 6 +++++-
3 files changed, 12 insertions(+), 6 deletions(-)
diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index 06280fc1c99b..8b6e4cd1fab8 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -299,7 +299,7 @@ static inline bool is_pfn_swap_entry(swp_entry_t entry)
struct page_vma_mapped_walk;
#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
-extern void set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
+extern int set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
struct page *page);
extern void remove_migration_pmd(struct page_vma_mapped_walk *pvmw,
@@ -332,7 +332,7 @@ static inline int is_pmd_migration_entry(pmd_t pmd)
return !pmd_present(pmd) && is_migration_entry(pmd_to_swp_entry(pmd));
}
#else
-static inline void set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
+static inline int set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
struct page *page)
{
BUILD_BUG();
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index c7ac1b462543..390f22334ee9 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3080,7 +3080,7 @@ late_initcall(split_huge_pages_debugfs);
#endif
#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
-void set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
+int set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
struct page *page)
{
struct vm_area_struct *vma = pvmw->vma;
@@ -3092,7 +3092,7 @@ void set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
pmd_t pmdswp;
if (!(pvmw->pmd && !pvmw->pte))
- return;
+ return 0;
flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);
pmdval = pmdp_invalidate(vma, address, pvmw->pmd);
@@ -3100,7 +3100,7 @@ void set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
anon_exclusive = PageAnon(page) && PageAnonExclusive(page);
if (anon_exclusive && page_try_share_anon_rmap(page)) {
set_pmd_at(mm, address, pvmw->pmd, pmdval);
- return;
+ return -EBUSY;
}
if (pmd_dirty(pmdval))
@@ -3118,6 +3118,8 @@ void set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
page_remove_rmap(page, vma, true);
put_page(page);
trace_set_migration_pmd(address, pmd_val(pmdswp));
+
+ return 0;
}
void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
diff --git a/mm/rmap.c b/mm/rmap.c
index 00418faaf4ce..68c2f61bf212 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1814,7 +1814,11 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
VM_BUG_ON_FOLIO(folio_test_hugetlb(folio) ||
!folio_test_pmd_mappable(folio), folio);
- set_pmd_migration_entry(&pvmw, subpage);
+ if (set_pmd_migration_entry(&pvmw, subpage)) {
+ ret = false;
+ page_vma_mapped_walk_done(&pvmw);
+ break;
+ }
continue;
}
#endif
--
2.35.1
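For context, page_try_share_anon_rmap() is the helper added by this patch
(#12 of the series); roughly, and not verbatim, it does:

	static inline int page_try_share_anon_rmap(struct page *page)
	{
		VM_BUG_ON_PAGE(!PageAnon(page) || !PageAnonExclusive(page), page);

		/* See page_try_dup_anon_rmap(). */
		if (likely(!is_device_private_page(page) &&
		    unlikely(page_maybe_dma_pinned(page))))
			return -EBUSY;

		ClearPageAnonExclusive(page);
		return 0;
	}

It refuses to clear PG_anon_exclusive whenever the anonymous page may be
pinned via GUP, which is what makes set_pmd_migration_entry() fail with
-EBUSY above.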
--
Thanks,
David / dhildenb