From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Miaohe Lin <linmiaohe@huawei.com>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
linux-mm@kvack.org, Jane Chu <jane.chu@oracle.com>
Subject: [PATCH v3 08/11] mm/memory-failure: Convert hwpoison_user_mappings to take a folio
Date: Fri, 12 Apr 2024 20:35:05 +0100 [thread overview]
Message-ID: <20240412193510.2356957-9-willy@infradead.org> (raw)
In-Reply-To: <20240412193510.2356957-1-willy@infradead.org>
Pass the folio from the callers, and use it throughout instead of hpage.
Saves dozens of calls to compound_head().
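
To see where those calls come from: a PageFoo() test may be handed any
page, head or tail, so it has to resolve the compound head each time,
whereas a folio is a head page by definition and folio_test_foo() can
read the flags directly. A simplified, illustrative sketch of the
difference (the _model names below are stand-ins, not the kernel's real
helpers, which are generated by macros in include/linux/page-flags.h):

	#include <stdbool.h>

	#define PG_DIRTY	0x1UL

	struct page  { unsigned long flags; struct page *compound_head; };
	struct folio { struct page page; };	/* a folio is always a head page */

	static inline bool PageDirty_model(struct page *page)
	{
		/* every legacy page-flag test pays this head lookup */
		if (page->compound_head)
			page = page->compound_head;
		return page->flags & PG_DIRTY;
	}

	static inline bool folio_test_dirty_model(struct folio *folio)
	{
		/* the folio is the head page by construction: no lookup */
		return folio->page.flags & PG_DIRTY;
	}

With hwpoison_user_mappings() taking a struct folio * up front, every
flag test in the function follows the second, cheaper pattern.
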
Acked-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: Jane Chu <jane.chu@oracle.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
mm/memory-failure.c | 30 +++++++++++++++---------------
1 file changed, 15 insertions(+), 15 deletions(-)
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 2c12bcc9c550..0fcf749682ab 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1553,24 +1553,24 @@ static int get_hwpoison_page(struct page *p, unsigned long flags)
* Do all that is necessary to remove user space mappings. Unmap
* the pages and send SIGBUS to the processes if the data was dirty.
*/
-static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
- int flags, struct page *hpage)
+static bool hwpoison_user_mappings(struct folio *folio, struct page *p,
+ unsigned long pfn, int flags)
{
- struct folio *folio = page_folio(hpage);
enum ttu_flags ttu = TTU_IGNORE_MLOCK | TTU_SYNC | TTU_HWPOISON;
struct address_space *mapping;
LIST_HEAD(tokill);
bool unmap_success;
int forcekill;
- bool mlocked = PageMlocked(hpage);
+ bool mlocked = folio_test_mlocked(folio);
/*
* Here we are interested only in user-mapped pages, so skip any
* other types of pages.
*/
- if (PageReserved(p) || PageSlab(p) || PageTable(p) || PageOffline(p))
+ if (folio_test_reserved(folio) || folio_test_slab(folio) ||
+ folio_test_pgtable(folio) || folio_test_offline(folio))
return true;
- if (!(PageLRU(hpage) || PageHuge(p)))
+ if (!(folio_test_lru(folio) || folio_test_hugetlb(folio)))
return true;
/*
@@ -1580,7 +1580,7 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
if (!page_mapped(p))
return true;
- if (PageSwapCache(p)) {
+ if (folio_test_swapcache(folio)) {
pr_err("%#lx: keeping poisoned page in swap cache\n", pfn);
ttu &= ~TTU_HWPOISON;
}
@@ -1591,11 +1591,11 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
* XXX: the dirty test could be racy: set_page_dirty() may not always
* be called inside page lock (it's recommended but not enforced).
*/
- mapping = page_mapping(hpage);
- if (!(flags & MF_MUST_KILL) && !PageDirty(hpage) && mapping &&
+ mapping = folio_mapping(folio);
+ if (!(flags & MF_MUST_KILL) && !folio_test_dirty(folio) && mapping &&
mapping_can_writeback(mapping)) {
- if (page_mkclean(hpage)) {
- SetPageDirty(hpage);
+ if (folio_mkclean(folio)) {
+ folio_set_dirty(folio);
} else {
ttu &= ~TTU_HWPOISON;
pr_info("%#lx: corrupted page was clean: dropped without side effects\n",
@@ -1610,7 +1610,7 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
*/
collect_procs(folio, p, &tokill, flags & MF_ACTION_REQUIRED);
- if (PageHuge(hpage) && !PageAnon(hpage)) {
+ if (folio_test_hugetlb(folio) && !folio_test_anon(folio)) {
/*
* For hugetlb pages in shared mappings, try_to_unmap
* could potentially call huge_pmd_unshare. Because of
@@ -1650,7 +1650,7 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
* use a more force-full uncatchable kill to prevent
* any accesses to the poisoned memory.
*/
- forcekill = PageDirty(hpage) || (flags & MF_MUST_KILL) ||
+ forcekill = folio_test_dirty(folio) || (flags & MF_MUST_KILL) ||
!unmap_success;
kill_procs(&tokill, forcekill, !unmap_success, pfn, flags);
@@ -2094,7 +2094,7 @@ static int try_memory_failure_hugetlb(unsigned long pfn, int flags, int *hugetlb
page_flags = folio->flags;
- if (!hwpoison_user_mappings(p, pfn, flags, &folio->page)) {
+ if (!hwpoison_user_mappings(folio, p, pfn, flags)) {
folio_unlock(folio);
return action_result(pfn, MF_MSG_UNMAP_FAILED, MF_IGNORED);
}
@@ -2361,7 +2361,7 @@ int memory_failure(unsigned long pfn, int flags)
* Now take care of user space mappings.
* Abort on fail: __filemap_remove_folio() assumes unmapped page.
*/
- if (!hwpoison_user_mappings(p, pfn, flags, p)) {
+ if (!hwpoison_user_mappings(folio, p, pfn, flags)) {
res = action_result(pfn, MF_MSG_UNMAP_FAILED, MF_IGNORED);
goto unlock_page;
}
--
2.43.0
Thread overview: 17+ messages
2024-04-12 19:34 [PATCH v3 00/11] Some cleanups for memory-failure Matthew Wilcox (Oracle)
2024-04-12 19:34 ` [PATCH v3 01/11] mm/memory-failure: Remove fsdax_pgoff argument from __add_to_kill Matthew Wilcox (Oracle)
2024-04-12 19:34 ` [PATCH v3 02/11] mm/memory-failure: Pass addr to __add_to_kill() Matthew Wilcox (Oracle)
2024-04-12 19:35 ` [PATCH v3 03/11] mm: Return the address from page_mapped_in_vma() Matthew Wilcox (Oracle)
2024-04-12 19:35 ` [PATCH v3 04/11] mm: Make page_mapped_in_vma conditional on CONFIG_MEMORY_FAILURE Matthew Wilcox (Oracle)
2024-04-18 8:31 ` Miaohe Lin
2024-04-12 19:35 ` [PATCH v3 05/11] mm/memory-failure: Convert shake_page() to shake_folio() Matthew Wilcox (Oracle)
2024-04-18 8:44 ` Miaohe Lin
2024-04-12 19:35 ` [PATCH v3 06/11] mm: Convert hugetlb_page_mapping_lock_write to folio Matthew Wilcox (Oracle)
2024-04-12 19:35 ` [PATCH v3 07/11] mm/memory-failure: Convert memory_failure() to use a folio Matthew Wilcox (Oracle)
2024-04-18 9:24 ` Miaohe Lin
2024-04-12 19:35 ` Matthew Wilcox (Oracle) [this message]
2024-04-12 19:35 ` [PATCH v3 09/11] mm/memory-failure: Add some folio conversions to unpoison_memory Matthew Wilcox (Oracle)
2024-04-12 19:35 ` [PATCH v3 10/11] mm/memory-failure: Use folio functions throughout collect_procs() Matthew Wilcox (Oracle)
2024-04-18 9:31 ` Miaohe Lin
2024-04-12 19:35 ` [PATCH v3 11/11] mm/memory-failure: Pass the folio to collect_procs_ksm() Matthew Wilcox (Oracle)
2024-04-18 12:31 ` Miaohe Lin