* [PATCH v3 00/11] Some cleanups for memory-failure
@ 2024-04-12 19:34 Matthew Wilcox (Oracle)
2024-04-12 19:34 ` [PATCH v3 01/11] mm/memory-failure: Remove fsdax_pgoff argument from __add_to_kill Matthew Wilcox (Oracle)
` (10 more replies)
0 siblings, 11 replies; 17+ messages in thread
From: Matthew Wilcox (Oracle) @ 2024-04-12 19:34 UTC (permalink / raw)
To: Miaohe Lin; +Cc: Matthew Wilcox (Oracle), linux-mm
A lot of folio conversions, plus some other simplifications.
v3:
- Rebase on next-20240412
- Reinstate missing hunks of "Return the address from page_mapped_in_vma()"
- Accumulate R-b tags
Matthew Wilcox (Oracle) (11):
mm/memory-failure: Remove fsdax_pgoff argument from __add_to_kill
mm/memory-failure: Pass addr to __add_to_kill()
mm: Return the address from page_mapped_in_vma()
mm: Make page_mapped_in_vma conditional on CONFIG_MEMORY_FAILURE
mm/memory-failure: Convert shake_page() to shake_folio()
mm: Convert hugetlb_page_mapping_lock_write to folio
mm/memory-failure: Convert memory_failure() to use a folio
mm/memory-failure: Convert hwpoison_user_mappings to take a folio
mm/memory-failure: Add some folio conversions to unpoison_memory
mm/memory-failure: Use folio functions throughout collect_procs()
mm/memory-failure: Pass the folio to collect_procs_ksm()
include/linux/hugetlb.h | 6 +-
include/linux/ksm.h | 14 +---
include/linux/mm.h | 1 -
include/linux/rmap.h | 2 +-
mm/hugetlb.c | 6 +-
mm/hwpoison-inject.c | 11 +--
mm/internal.h | 1 +
mm/ksm.c | 5 +-
mm/memory-failure.c | 150 +++++++++++++++++++++-------------------
mm/migrate.c | 2 +-
mm/page_vma_mapped.c | 18 +++--
11 files changed, 108 insertions(+), 108 deletions(-)
--
2.43.0
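For readers new to folios, the sketch below shows the general shape of the conversions this series performs: resolve the folio once with page_folio() and then use the folio_* helpers, instead of letting each Page*() test and lock_page() call resolve the compound head again. It is an illustrative fragment only, not code from any of the patches; before() and after() are invented names, while the helpers themselves (compound_head(), page_folio(), folio_test_lru(), folio_test_hugetlb(), lock_page(), folio_lock()) are the real mm APIs the series moves between.

/* Illustrative only: the general page -> folio conversion pattern. */

/* Before: many Page*()/lock_page() calls resolve compound_head() internally. */
static void before(struct page *page)
{
	struct page *hpage = compound_head(page);

	if (PageLRU(hpage) || PageHuge(hpage))
		lock_page(hpage);
}

/* After: resolve the folio once, then use the folio_* helpers directly. */
static void after(struct page *page)
{
	struct folio *folio = page_folio(page);

	if (folio_test_lru(folio) || folio_test_hugetlb(folio))
		folio_lock(folio);
}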
* [PATCH v3 01/11] mm/memory-failure: Remove fsdax_pgoff argument from __add_to_kill
2024-04-12 19:34 [PATCH v3 00/11] Some cleanups for memory-failure Matthew Wilcox (Oracle)
@ 2024-04-12 19:34 ` Matthew Wilcox (Oracle)
2024-04-12 19:34 ` [PATCH v3 02/11] mm/memory-failure: Pass addr to __add_to_kill() Matthew Wilcox (Oracle)
` (9 subsequent siblings)
10 siblings, 0 replies; 17+ messages in thread
From: Matthew Wilcox (Oracle) @ 2024-04-12 19:34 UTC (permalink / raw)
To: Miaohe Lin
Cc: Matthew Wilcox (Oracle), linux-mm, Jane Chu, Dan Williams,
Oscar Salvador
Unify the KSM and DAX codepaths by calculating the addr in
add_to_kill_fsdax() instead of telling __add_to_kill() to calculate it.
Acked-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: Jane Chu <jane.chu@oracle.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
mm/memory-failure.c | 27 +++++++++------------------
1 file changed, 9 insertions(+), 18 deletions(-)
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index ee2f4b8905ef..8adf233837bf 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -421,21 +421,13 @@ static unsigned long dev_pagemap_mapping_shift(struct vm_area_struct *vma,
* not much we can do. We just print a message and ignore otherwise.
*/
-#define FSDAX_INVALID_PGOFF ULONG_MAX
-
/*
* Schedule a process for later kill.
* Uses GFP_ATOMIC allocations to avoid potential recursions in the VM.
- *
- * Note: @fsdax_pgoff is used only when @p is a fsdax page and a
- * filesystem with a memory failure handler has claimed the
- * memory_failure event. In all other cases, page->index and
- * page->mapping are sufficient for mapping the page back to its
- * corresponding user virtual address.
*/
static void __add_to_kill(struct task_struct *tsk, struct page *p,
struct vm_area_struct *vma, struct list_head *to_kill,
- unsigned long ksm_addr, pgoff_t fsdax_pgoff)
+ unsigned long addr)
{
struct to_kill *tk;
@@ -445,12 +437,10 @@ static void __add_to_kill(struct task_struct *tsk, struct page *p,
return;
}
- tk->addr = ksm_addr ? ksm_addr : page_address_in_vma(p, vma);
- if (is_zone_device_page(p)) {
- if (fsdax_pgoff != FSDAX_INVALID_PGOFF)
- tk->addr = vma_address(vma, fsdax_pgoff, 1);
+ tk->addr = addr ? addr : page_address_in_vma(p, vma);
+ if (is_zone_device_page(p))
tk->size_shift = dev_pagemap_mapping_shift(vma, tk->addr);
- } else
+ else
tk->size_shift = page_shift(compound_head(p));
/*
@@ -480,7 +470,7 @@ static void add_to_kill_anon_file(struct task_struct *tsk, struct page *p,
struct vm_area_struct *vma,
struct list_head *to_kill)
{
- __add_to_kill(tsk, p, vma, to_kill, 0, FSDAX_INVALID_PGOFF);
+ __add_to_kill(tsk, p, vma, to_kill, 0);
}
#ifdef CONFIG_KSM
@@ -498,10 +488,10 @@ static bool task_in_to_kill_list(struct list_head *to_kill,
}
void add_to_kill_ksm(struct task_struct *tsk, struct page *p,
struct vm_area_struct *vma, struct list_head *to_kill,
- unsigned long ksm_addr)
+ unsigned long addr)
{
if (!task_in_to_kill_list(to_kill, tsk))
- __add_to_kill(tsk, p, vma, to_kill, ksm_addr, FSDAX_INVALID_PGOFF);
+ __add_to_kill(tsk, p, vma, to_kill, addr);
}
#endif
/*
@@ -675,7 +665,8 @@ static void add_to_kill_fsdax(struct task_struct *tsk, struct page *p,
struct vm_area_struct *vma,
struct list_head *to_kill, pgoff_t pgoff)
{
- __add_to_kill(tsk, p, vma, to_kill, 0, pgoff);
+ unsigned long addr = vma_address(vma, pgoff, 1);
+ __add_to_kill(tsk, p, vma, to_kill, addr);
}
/*
--
2.43.0
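As a rough illustration of what vma_address(vma, pgoff, 1) does for add_to_kill_fsdax() above, here is a small self-contained userspace model of the arithmetic: turn a file page offset into the user virtual address inside a VMA, or report a fault when the offset lies outside the mapping. struct vma_model, model_vma_address() and the constants are invented for this example; the real helper lives in mm/internal.h and additionally handles multi-page ranges and wrap-around.

#include <stdio.h>

#define MODEL_PAGE_SHIFT 12
#define MODEL_EFAULT ((unsigned long)-14)	/* how -EFAULT looks in an unsigned long */

/* Minimal stand-in for the vm_area_struct fields used here. */
struct vma_model {
	unsigned long vm_start;	/* first virtual address of the mapping */
	unsigned long vm_end;	/* one past the last mapped address */
	unsigned long vm_pgoff;	/* file page offset that vm_start maps */
};

/* Map a file page offset to its virtual address in this VMA, or "fault". */
static unsigned long model_vma_address(const struct vma_model *vma,
				       unsigned long pgoff)
{
	unsigned long addr;

	if (pgoff < vma->vm_pgoff)
		return MODEL_EFAULT;
	addr = vma->vm_start + ((pgoff - vma->vm_pgoff) << MODEL_PAGE_SHIFT);
	if (addr >= vma->vm_end)
		return MODEL_EFAULT;
	return addr;
}

int main(void)
{
	struct vma_model vma = {
		.vm_start = 0x7f0000000000UL,
		.vm_end   = 0x7f0000100000UL,	/* 256 pages long */
		.vm_pgoff = 16,			/* backed by file pages 16..271 */
	};

	printf("pgoff 20 -> %#lx\n", model_vma_address(&vma, 20));	/* inside  */
	printf("pgoff  3 -> %#lx\n", model_vma_address(&vma, 3));	/* outside */
	return 0;
}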
* [PATCH v3 02/11] mm/memory-failure: Pass addr to __add_to_kill()
2024-04-12 19:34 [PATCH v3 00/11] Some cleanups for memory-failure Matthew Wilcox (Oracle)
2024-04-12 19:34 ` [PATCH v3 01/11] mm/memory-failure: Remove fsdax_pgoff argument from __add_to_kill Matthew Wilcox (Oracle)
@ 2024-04-12 19:34 ` Matthew Wilcox (Oracle)
2024-04-12 19:35 ` [PATCH v3 03/11] mm: Return the address from page_mapped_in_vma() Matthew Wilcox (Oracle)
` (8 subsequent siblings)
10 siblings, 0 replies; 17+ messages in thread
From: Matthew Wilcox (Oracle) @ 2024-04-12 19:34 UTC (permalink / raw)
To: Miaohe Lin; +Cc: Matthew Wilcox (Oracle), linux-mm, Jane Chu, Oscar Salvador
Handle anon/file folios the same way as KSM & DAX folios by passing in
the address.
Acked-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: Jane Chu <jane.chu@oracle.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
mm/memory-failure.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 8adf233837bf..aec407788df1 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -437,7 +437,7 @@ static void __add_to_kill(struct task_struct *tsk, struct page *p,
return;
}
- tk->addr = addr ? addr : page_address_in_vma(p, vma);
+ tk->addr = addr;
if (is_zone_device_page(p))
tk->size_shift = dev_pagemap_mapping_shift(vma, tk->addr);
else
@@ -470,7 +470,8 @@ static void add_to_kill_anon_file(struct task_struct *tsk, struct page *p,
struct vm_area_struct *vma,
struct list_head *to_kill)
{
- __add_to_kill(tsk, p, vma, to_kill, 0);
+ unsigned long addr = page_address_in_vma(p, vma);
+ __add_to_kill(tsk, p, vma, to_kill, addr);
}
#ifdef CONFIG_KSM
@@ -486,6 +487,7 @@ static bool task_in_to_kill_list(struct list_head *to_kill,
return false;
}
+
void add_to_kill_ksm(struct task_struct *tsk, struct page *p,
struct vm_area_struct *vma, struct list_head *to_kill,
unsigned long addr)
--
2.43.0
* [PATCH v3 03/11] mm: Return the address from page_mapped_in_vma()
2024-04-12 19:34 [PATCH v3 00/11] Some cleanups for memory-failure Matthew Wilcox (Oracle)
2024-04-12 19:34 ` [PATCH v3 01/11] mm/memory-failure: Remove fsdax_pgoff argument from __add_to_kill Matthew Wilcox (Oracle)
2024-04-12 19:34 ` [PATCH v3 02/11] mm/memory-failure: Pass addr to __add_to_kill() Matthew Wilcox (Oracle)
@ 2024-04-12 19:35 ` Matthew Wilcox (Oracle)
2024-04-12 19:35 ` [PATCH v3 04/11] mm: Make page_mapped_in_vma conditional on CONFIG_MEMORY_FAILURE Matthew Wilcox (Oracle)
` (7 subsequent siblings)
10 siblings, 0 replies; 17+ messages in thread
From: Matthew Wilcox (Oracle) @ 2024-04-12 19:35 UTC (permalink / raw)
To: Miaohe Lin; +Cc: Matthew Wilcox (Oracle), linux-mm, Jane Chu
The only user of page_mapped_in_vma() calls page_address_in_vma()
immediately afterwards, recalculating the address that
page_mapped_in_vma() has already computed and used only to return
true/false. Return the address instead, allowing memory-failure to
skip the call to page_address_in_vma().
Acked-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: Jane Chu <jane.chu@oracle.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
include/linux/rmap.h | 2 +-
mm/memory-failure.c | 22 +++++++++++++---------
mm/page_vma_mapped.c | 16 +++++++++-------
3 files changed, 23 insertions(+), 17 deletions(-)
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 0f906dc6d280..7229b9baf20d 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -730,7 +730,7 @@ int pfn_mkclean_range(unsigned long pfn, unsigned long nr_pages, pgoff_t pgoff,
void remove_migration_ptes(struct folio *src, struct folio *dst, bool locked);
-int page_mapped_in_vma(struct page *page, struct vm_area_struct *vma);
+unsigned long page_mapped_in_vma(struct page *page, struct vm_area_struct *vma);
/*
* rmap_walk_control: To control rmap traversing for specific needs
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index aec407788df1..0ad6b8936512 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -467,10 +467,11 @@ static void __add_to_kill(struct task_struct *tsk, struct page *p,
}
static void add_to_kill_anon_file(struct task_struct *tsk, struct page *p,
- struct vm_area_struct *vma,
- struct list_head *to_kill)
+ struct vm_area_struct *vma, struct list_head *to_kill,
+ unsigned long addr)
{
- unsigned long addr = page_address_in_vma(p, vma);
+ if (addr == -EFAULT)
+ return;
__add_to_kill(tsk, p, vma, to_kill, addr);
}
@@ -595,7 +596,6 @@ struct task_struct *task_early_kill(struct task_struct *tsk, int force_early)
static void collect_procs_anon(struct folio *folio, struct page *page,
struct list_head *to_kill, int force_early)
{
- struct vm_area_struct *vma;
struct task_struct *tsk;
struct anon_vma *av;
pgoff_t pgoff;
@@ -607,8 +607,10 @@ static void collect_procs_anon(struct folio *folio, struct page *page,
pgoff = page_to_pgoff(page);
rcu_read_lock();
for_each_process(tsk) {
+ struct vm_area_struct *vma;
struct anon_vma_chain *vmac;
struct task_struct *t = task_early_kill(tsk, force_early);
+ unsigned long addr;
if (!t)
continue;
@@ -617,9 +619,8 @@ static void collect_procs_anon(struct folio *folio, struct page *page,
vma = vmac->vma;
if (vma->vm_mm != t->mm)
continue;
- if (!page_mapped_in_vma(page, vma))
- continue;
- add_to_kill_anon_file(t, page, vma, to_kill);
+ addr = page_mapped_in_vma(page, vma);
+ add_to_kill_anon_file(t, page, vma, to_kill, addr);
}
}
rcu_read_unlock();
@@ -642,6 +643,7 @@ static void collect_procs_file(struct folio *folio, struct page *page,
pgoff = page_to_pgoff(page);
for_each_process(tsk) {
struct task_struct *t = task_early_kill(tsk, force_early);
+ unsigned long addr;
if (!t)
continue;
@@ -654,8 +656,10 @@ static void collect_procs_file(struct folio *folio, struct page *page,
* Assume applications who requested early kill want
* to be informed of all such data corruptions.
*/
- if (vma->vm_mm == t->mm)
- add_to_kill_anon_file(t, page, vma, to_kill);
+ if (vma->vm_mm != t->mm)
+ continue;
+ addr = page_address_in_vma(page, vma);
+ add_to_kill_anon_file(t, page, vma, to_kill, addr);
}
}
rcu_read_unlock();
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index 53b8868ede61..c202eab84936 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -319,11 +319,12 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
* @page: the page to test
* @vma: the VMA to test
*
- * Returns 1 if the page is mapped into the page tables of the VMA, 0
- * if the page is not mapped into the page tables of this VMA. Only
- * valid for normal file or anonymous VMAs.
+ * Return: The address the page is mapped at if the page is in the range
+ * covered by the VMA and present in the page table. If the page is
+ * outside the VMA or not present, returns -EFAULT.
+ * Only valid for normal file or anonymous VMAs.
*/
-int page_mapped_in_vma(struct page *page, struct vm_area_struct *vma)
+unsigned long page_mapped_in_vma(struct page *page, struct vm_area_struct *vma)
{
struct folio *folio = page_folio(page);
pgoff_t pgoff = folio->index + folio_page_idx(folio, page);
@@ -336,9 +337,10 @@ int page_mapped_in_vma(struct page *page, struct vm_area_struct *vma)
pvmw.address = vma_address(vma, pgoff, 1);
if (pvmw.address == -EFAULT)
- return 0;
+ goto out;
if (!page_vma_mapped_walk(&pvmw))
- return 0;
+ return -EFAULT;
page_vma_mapped_walk_done(&pvmw);
- return 1;
+out:
+ return pvmw.address;
}
--
2.43.0
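A note on the new calling convention: page_mapped_in_vma() now returns the address in an unsigned long yet may also return -EFAULT, and add_to_kill_anon_file() tests addr == -EFAULT directly. The toy program below, with the invented helper lookup_address(), only demonstrates why that comparison is safe in C: the negative errno converts to the same unsigned long value on both sides of the comparison.

#include <errno.h>
#include <stdio.h>

/* Invented stand-in for a lookup that returns an address or -EFAULT. */
static unsigned long lookup_address(int mapped)
{
	if (!mapped)
		return -EFAULT;	/* stored as ULONG_MAX - 13 */
	return 0x7f0000002000UL;
}

int main(void)
{
	unsigned long addr = lookup_address(0);

	/*
	 * -EFAULT (an int) is converted to unsigned long by the usual
	 * arithmetic conversions, so it compares equal to the value
	 * the function returned through the same conversion.
	 */
	if (addr == -EFAULT)
		printf("not mapped\n");

	addr = lookup_address(1);
	if (addr != -EFAULT)
		printf("mapped at %#lx\n", addr);
	return 0;
}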
* [PATCH v3 04/11] mm: Make page_mapped_in_vma conditional on CONFIG_MEMORY_FAILURE
2024-04-12 19:34 [PATCH v3 00/11] Some cleanups for memory-failure Matthew Wilcox (Oracle)
` (2 preceding siblings ...)
2024-04-12 19:35 ` [PATCH v3 03/11] mm: Return the address from page_mapped_in_vma() Matthew Wilcox (Oracle)
@ 2024-04-12 19:35 ` Matthew Wilcox (Oracle)
2024-04-18 8:31 ` Miaohe Lin
2024-04-12 19:35 ` [PATCH v3 05/11] mm/memory-failure: Convert shake_page() to shake_folio() Matthew Wilcox (Oracle)
` (6 subsequent siblings)
10 siblings, 1 reply; 17+ messages in thread
From: Matthew Wilcox (Oracle) @ 2024-04-12 19:35 UTC (permalink / raw)
To: Miaohe Lin; +Cc: Matthew Wilcox (Oracle), linux-mm, Jane Chu, Oscar Salvador
This function is currently only used by the memory-failure code, so
we can omit it when the memory-failure code is not compiled in.
Suggested-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: Jane Chu <jane.chu@oracle.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
mm/page_vma_mapped.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index c202eab84936..ae5cc42aa208 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -314,6 +314,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
return false;
}
+#ifdef CONFIG_MEMORY_FAILURE
/**
* page_mapped_in_vma - check whether a page is really mapped in a VMA
* @page: the page to test
@@ -344,3 +345,4 @@ unsigned long page_mapped_in_vma(struct page *page, struct vm_area_struct *vma)
out:
return pvmw.address;
}
+#endif
--
2.43.0
* [PATCH v3 05/11] mm/memory-failure: Convert shake_page() to shake_folio()
2024-04-12 19:34 [PATCH v3 00/11] Some cleanups for memory-failure Matthew Wilcox (Oracle)
` (3 preceding siblings ...)
2024-04-12 19:35 ` [PATCH v3 04/11] mm: Make page_mapped_in_vma conditional on CONFIG_MEMORY_FAILURE Matthew Wilcox (Oracle)
@ 2024-04-12 19:35 ` Matthew Wilcox (Oracle)
2024-04-18 8:44 ` Miaohe Lin
2024-04-12 19:35 ` [PATCH v3 06/11] mm: Convert hugetlb_page_mapping_lock_write to folio Matthew Wilcox (Oracle)
` (5 subsequent siblings)
10 siblings, 1 reply; 17+ messages in thread
From: Matthew Wilcox (Oracle) @ 2024-04-12 19:35 UTC (permalink / raw)
To: Miaohe Lin; +Cc: Matthew Wilcox (Oracle), linux-mm, Jane Chu
Remove two calls to compound_head(), and move the prototype to
internal.h; we definitely don't want code outside mm using it.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Jane Chu <jane.chu@oracle.com>
---
include/linux/mm.h | 1 -
mm/hwpoison-inject.c | 11 ++++++-----
mm/internal.h | 1 +
mm/memory-failure.c | 15 ++++++++++-----
4 files changed, 17 insertions(+), 11 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 326a4ce0cff8..b636504c176e 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -4028,7 +4028,6 @@ int mf_dax_kill_procs(struct address_space *mapping, pgoff_t index,
extern int memory_failure(unsigned long pfn, int flags);
extern void memory_failure_queue_kick(int cpu);
extern int unpoison_memory(unsigned long pfn);
-extern void shake_page(struct page *p);
extern atomic_long_t num_poisoned_pages __read_mostly;
extern int soft_offline_page(unsigned long pfn, int flags);
#ifdef CONFIG_MEMORY_FAILURE
diff --git a/mm/hwpoison-inject.c b/mm/hwpoison-inject.c
index d0548e382b6b..c9d653f51e45 100644
--- a/mm/hwpoison-inject.c
+++ b/mm/hwpoison-inject.c
@@ -15,7 +15,7 @@ static int hwpoison_inject(void *data, u64 val)
{
unsigned long pfn = val;
struct page *p;
- struct page *hpage;
+ struct folio *folio;
int err;
if (!capable(CAP_SYS_ADMIN))
@@ -25,16 +25,17 @@ static int hwpoison_inject(void *data, u64 val)
return -ENXIO;
p = pfn_to_page(pfn);
- hpage = compound_head(p);
+ folio = page_folio(p);
if (!hwpoison_filter_enable)
goto inject;
- shake_page(hpage);
+ shake_folio(folio);
/*
* This implies unable to support non-LRU pages except free page.
*/
- if (!PageLRU(hpage) && !PageHuge(p) && !is_free_buddy_page(p))
+ if (!folio_test_lru(folio) && !folio_test_hugetlb(folio) &&
+ !is_free_buddy_page(p))
return 0;
/*
@@ -42,7 +43,7 @@ static int hwpoison_inject(void *data, u64 val)
* the targeted owner (or on a free page).
* memory_failure() will redo the check reliably inside page lock.
*/
- err = hwpoison_filter(hpage);
+ err = hwpoison_filter(&folio->page);
if (err)
return 0;
diff --git a/mm/internal.h b/mm/internal.h
index ab8fcdeaf6eb..b20c736e05f2 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1044,6 +1044,7 @@ static inline int find_next_best_node(int node, nodemask_t *used_node_mask)
/*
* mm/memory-failure.c
*/
+void shake_folio(struct folio *folio);
extern int hwpoison_filter(struct page *p);
extern u32 hwpoison_filter_dev_major;
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 0ad6b8936512..bf9da2b46426 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -363,20 +363,25 @@ static int kill_proc(struct to_kill *tk, unsigned long pfn, int flags)
* Unknown page type encountered. Try to check whether it can turn PageLRU by
* lru_add_drain_all.
*/
-void shake_page(struct page *p)
+void shake_folio(struct folio *folio)
{
- if (PageHuge(p))
+ if (folio_test_hugetlb(folio))
return;
/*
* TODO: Could shrink slab caches here if a lightweight range-based
* shrinker will be available.
*/
- if (PageSlab(p))
+ if (folio_test_slab(folio))
return;
lru_add_drain_all();
}
-EXPORT_SYMBOL_GPL(shake_page);
+EXPORT_SYMBOL_GPL(shake_folio);
+
+static void shake_page(struct page *page)
+{
+ shake_folio(page_folio(page));
+}
static unsigned long dev_pagemap_mapping_shift(struct vm_area_struct *vma,
unsigned long address)
@@ -1633,7 +1638,7 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
* shake_page() again to ensure that it's flushed.
*/
if (mlocked)
- shake_page(hpage);
+ shake_folio(folio);
/*
* Now that the dirty bit has been propagated to the
--
2.43.0
* [PATCH v3 06/11] mm: Convert hugetlb_page_mapping_lock_write to folio
2024-04-12 19:34 [PATCH v3 00/11] Some cleanups for memory-failure Matthew Wilcox (Oracle)
` (4 preceding siblings ...)
2024-04-12 19:35 ` [PATCH v3 05/11] mm/memory-failure: Convert shake_page() to shake_folio() Matthew Wilcox (Oracle)
@ 2024-04-12 19:35 ` Matthew Wilcox (Oracle)
2024-04-12 19:35 ` [PATCH v3 07/11] mm/memory-failure: Convert memory_failure() to use a folio Matthew Wilcox (Oracle)
` (4 subsequent siblings)
10 siblings, 0 replies; 17+ messages in thread
From: Matthew Wilcox (Oracle) @ 2024-04-12 19:35 UTC (permalink / raw)
To: Miaohe Lin
Cc: Matthew Wilcox (Oracle), linux-mm, Jane Chu, Oscar Salvador
The page is only used to get the mapping, so the folio will do just
as well. Both callers already have a folio available, so this saves
a call to compound_head().
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Jane Chu <jane.chu@oracle.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Acked-by: Miaohe Lin <linmiaohe@huawei.com>
---
include/linux/hugetlb.h | 6 +++---
mm/hugetlb.c | 6 +++---
mm/memory-failure.c | 2 +-
mm/migrate.c | 2 +-
4 files changed, 8 insertions(+), 8 deletions(-)
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 8c97ac48ae6d..aaa11eeb939d 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -178,7 +178,7 @@ bool hugetlbfs_pagecache_present(struct hstate *h,
struct vm_area_struct *vma,
unsigned long address);
-struct address_space *hugetlb_page_mapping_lock_write(struct page *hpage);
+struct address_space *hugetlb_folio_mapping_lock_write(struct folio *folio);
extern int sysctl_hugetlb_shm_group;
extern struct list_head huge_boot_pages[MAX_NUMNODES];
@@ -297,8 +297,8 @@ static inline unsigned long hugetlb_total_pages(void)
return 0;
}
-static inline struct address_space *hugetlb_page_mapping_lock_write(
- struct page *hpage)
+static inline struct address_space *hugetlb_folio_mapping_lock_write(
+ struct folio *folio)
{
return NULL;
}
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index a8536349de13..001016993cca 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2155,13 +2155,13 @@ static bool prep_compound_gigantic_folio_for_demote(struct folio *folio,
/*
* Find and lock address space (mapping) in write mode.
*
- * Upon entry, the page is locked which means that page_mapping() is
+ * Upon entry, the folio is locked which means that folio_mapping() is
* stable. Due to locking order, we can only trylock_write. If we can
* not get the lock, simply return NULL to caller.
*/
-struct address_space *hugetlb_page_mapping_lock_write(struct page *hpage)
+struct address_space *hugetlb_folio_mapping_lock_write(struct folio *folio)
{
- struct address_space *mapping = page_mapping(hpage);
+ struct address_space *mapping = folio_mapping(folio);
if (!mapping)
return mapping;
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index bf9da2b46426..52feaa9de9b4 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1618,7 +1618,7 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
* TTU_RMAP_LOCKED to indicate we have taken the lock
* at this higher level.
*/
- mapping = hugetlb_page_mapping_lock_write(hpage);
+ mapping = hugetlb_folio_mapping_lock_write(folio);
if (mapping) {
try_to_unmap(folio, ttu|TTU_RMAP_LOCKED);
i_mmap_unlock_write(mapping);
diff --git a/mm/migrate.c b/mm/migrate.c
index d87ce32645d4..fb0c83438fa8 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1425,7 +1425,7 @@ static int unmap_and_move_huge_page(new_folio_t get_new_folio,
* semaphore in write mode here and set TTU_RMAP_LOCKED
* to let lower levels know we have taken the lock.
*/
- mapping = hugetlb_page_mapping_lock_write(&src->page);
+ mapping = hugetlb_folio_mapping_lock_write(src);
if (unlikely(!mapping))
goto unlock_put_anon;
--
2.43.0
* [PATCH v3 07/11] mm/memory-failure: Convert memory_failure() to use a folio
2024-04-12 19:34 [PATCH v3 00/11] Some cleanups for memory-failure Matthew Wilcox (Oracle)
` (5 preceding siblings ...)
2024-04-12 19:35 ` [PATCH v3 06/11] mm: Convert hugetlb_page_mapping_lock_write to folio Matthew Wilcox (Oracle)
@ 2024-04-12 19:35 ` Matthew Wilcox (Oracle)
2024-04-18 9:24 ` Miaohe Lin
2024-04-12 19:35 ` [PATCH v3 08/11] mm/memory-failure: Convert hwpoison_user_mappings to take " Matthew Wilcox (Oracle)
` (3 subsequent siblings)
10 siblings, 1 reply; 17+ messages in thread
From: Matthew Wilcox (Oracle) @ 2024-04-12 19:35 UTC (permalink / raw)
To: Miaohe Lin; +Cc: Matthew Wilcox (Oracle), linux-mm
Saves dozens of calls to compound_head().
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
mm/memory-failure.c | 40 +++++++++++++++++++++-------------------
1 file changed, 21 insertions(+), 19 deletions(-)
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 52feaa9de9b4..2c12bcc9c550 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -2183,7 +2183,7 @@ static int memory_failure_dev_pagemap(unsigned long pfn, int flags,
int memory_failure(unsigned long pfn, int flags)
{
struct page *p;
- struct page *hpage;
+ struct folio *folio;
struct dev_pagemap *pgmap;
int res = 0;
unsigned long page_flags;
@@ -2271,8 +2271,8 @@ int memory_failure(unsigned long pfn, int flags)
}
}
- hpage = compound_head(p);
- if (PageTransHuge(hpage)) {
+ folio = page_folio(p);
+ if (folio_test_large(folio)) {
/*
* The flag must be set after the refcount is bumped
* otherwise it may race with THP split.
@@ -2286,12 +2286,13 @@ int memory_failure(unsigned long pfn, int flags)
* or unhandlable page. The refcount is bumped iff the
* page is a valid handlable page.
*/
- SetPageHasHWPoisoned(hpage);
+ folio_set_has_hwpoisoned(folio);
if (try_to_split_thp_page(p) < 0) {
res = action_result(pfn, MF_MSG_UNSPLIT_THP, MF_IGNORED);
goto unlock_mutex;
}
VM_BUG_ON_PAGE(!page_count(p), p);
+ folio = page_folio(p);
}
/*
@@ -2302,9 +2303,9 @@ int memory_failure(unsigned long pfn, int flags)
* The check (unnecessarily) ignores LRU pages being isolated and
* walked by the page reclaim code, however that's not a big loss.
*/
- shake_page(p);
+ shake_folio(folio);
- lock_page(p);
+ folio_lock(folio);
/*
* We're only intended to deal with the non-Compound page here.
@@ -2312,11 +2313,11 @@ int memory_failure(unsigned long pfn, int flags)
* race window. If this happens, we could try again to hopefully
* handle the page next round.
*/
- if (PageCompound(p)) {
+ if (folio_test_large(folio)) {
if (retry) {
ClearPageHWPoison(p);
- unlock_page(p);
- put_page(p);
+ folio_unlock(folio);
+ folio_put(folio);
flags &= ~MF_COUNT_INCREASED;
retry = false;
goto try_again;
@@ -2332,29 +2333,29 @@ int memory_failure(unsigned long pfn, int flags)
* folio_remove_rmap_*() in try_to_unmap_one(). So to determine page
* status correctly, we save a copy of the page flags at this time.
*/
- page_flags = p->flags;
+ page_flags = folio->flags;
if (hwpoison_filter(p)) {
ClearPageHWPoison(p);
- unlock_page(p);
- put_page(p);
+ folio_unlock(folio);
+ folio_put(folio);
res = -EOPNOTSUPP;
goto unlock_mutex;
}
/*
- * __munlock_folio() may clear a writeback page's LRU flag without
- * page_lock. We need wait writeback completion for this page or it
- * may trigger vfs BUG while evict inode.
+ * __munlock_folio() may clear a writeback folio's LRU flag without
+ * the folio lock. We need to wait for writeback completion for this
+ * folio or it may trigger a vfs BUG while evicting inode.
*/
- if (!PageLRU(p) && !PageWriteback(p))
+ if (!folio_test_lru(folio) && !folio_test_writeback(folio))
goto identify_page_state;
/*
* It's very difficult to mess with pages currently under IO
* and in many cases impossible, so we just avoid it here.
*/
- wait_on_page_writeback(p);
+ folio_wait_writeback(folio);
/*
* Now take care of user space mappings.
@@ -2368,7 +2369,8 @@ int memory_failure(unsigned long pfn, int flags)
/*
* Torn down by someone else?
*/
- if (PageLRU(p) && !PageSwapCache(p) && p->mapping == NULL) {
+ if (folio_test_lru(folio) && !folio_test_swapcache(folio) &&
+ folio->mapping == NULL) {
res = action_result(pfn, MF_MSG_TRUNCATED_LRU, MF_IGNORED);
goto unlock_page;
}
@@ -2378,7 +2380,7 @@ int memory_failure(unsigned long pfn, int flags)
mutex_unlock(&mf_mutex);
return res;
unlock_page:
- unlock_page(p);
+ folio_unlock(folio);
unlock_mutex:
mutex_unlock(&mf_mutex);
return res;
--
2.43.0
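One detail of this patch worth highlighting is the added folio = page_folio(p); after try_to_split_thp_page(): once a large folio is split, the page belongs to a new order-0 folio and the earlier pointer is stale. The fragment below is only an illustrative sketch of that pattern; example_split_and_refetch() is an invented name and the error handling is simplified, but try_to_split_thp_page() and the folio helpers are those used in mm/memory-failure.c.

/* Illustrative sketch, in the style of mm/memory-failure.c. */
static int example_split_and_refetch(struct page *p)
{
	struct folio *folio = page_folio(p);	/* may be a large folio */

	if (folio_test_large(folio)) {
		if (try_to_split_thp_page(p) < 0)
			return -EBUSY;		/* simplified error handling */
		/*
		 * After a successful split, p is an order-0 page in its
		 * own folio; the pointer taken above no longer covers it.
		 */
		folio = page_folio(p);
	}

	folio_lock(folio);
	/* ... the real code inspects and unmaps the folio here ... */
	folio_unlock(folio);
	return 0;
}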
* [PATCH v3 08/11] mm/memory-failure: Convert hwpoison_user_mappings to take a folio
2024-04-12 19:34 [PATCH v3 00/11] Some cleanups for memory-failure Matthew Wilcox (Oracle)
` (6 preceding siblings ...)
2024-04-12 19:35 ` [PATCH v3 07/11] mm/memory-failure: Convert memory_failure() to use a folio Matthew Wilcox (Oracle)
@ 2024-04-12 19:35 ` Matthew Wilcox (Oracle)
2024-04-12 19:35 ` [PATCH v3 09/11] mm/memory-failure: Add some folio conversions to unpoison_memory Matthew Wilcox (Oracle)
` (2 subsequent siblings)
10 siblings, 0 replies; 17+ messages in thread
From: Matthew Wilcox (Oracle) @ 2024-04-12 19:35 UTC (permalink / raw)
To: Miaohe Lin; +Cc: Matthew Wilcox (Oracle), linux-mm, Jane Chu
Pass the folio from the callers, and use it throughout instead of hpage.
Saves dozens of calls to compound_head().
Acked-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: Jane Chu <jane.chu@oracle.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
mm/memory-failure.c | 30 +++++++++++++++---------------
1 file changed, 15 insertions(+), 15 deletions(-)
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 2c12bcc9c550..0fcf749682ab 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1553,24 +1553,24 @@ static int get_hwpoison_page(struct page *p, unsigned long flags)
* Do all that is necessary to remove user space mappings. Unmap
* the pages and send SIGBUS to the processes if the data was dirty.
*/
-static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
- int flags, struct page *hpage)
+static bool hwpoison_user_mappings(struct folio *folio, struct page *p,
+ unsigned long pfn, int flags)
{
- struct folio *folio = page_folio(hpage);
enum ttu_flags ttu = TTU_IGNORE_MLOCK | TTU_SYNC | TTU_HWPOISON;
struct address_space *mapping;
LIST_HEAD(tokill);
bool unmap_success;
int forcekill;
- bool mlocked = PageMlocked(hpage);
+ bool mlocked = folio_test_mlocked(folio);
/*
* Here we are interested only in user-mapped pages, so skip any
* other types of pages.
*/
- if (PageReserved(p) || PageSlab(p) || PageTable(p) || PageOffline(p))
+ if (folio_test_reserved(folio) || folio_test_slab(folio) ||
+ folio_test_pgtable(folio) || folio_test_offline(folio))
return true;
- if (!(PageLRU(hpage) || PageHuge(p)))
+ if (!(folio_test_lru(folio) || folio_test_hugetlb(folio)))
return true;
/*
@@ -1580,7 +1580,7 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
if (!page_mapped(p))
return true;
- if (PageSwapCache(p)) {
+ if (folio_test_swapcache(folio)) {
pr_err("%#lx: keeping poisoned page in swap cache\n", pfn);
ttu &= ~TTU_HWPOISON;
}
@@ -1591,11 +1591,11 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
* XXX: the dirty test could be racy: set_page_dirty() may not always
* be called inside page lock (it's recommended but not enforced).
*/
- mapping = page_mapping(hpage);
- if (!(flags & MF_MUST_KILL) && !PageDirty(hpage) && mapping &&
+ mapping = folio_mapping(folio);
+ if (!(flags & MF_MUST_KILL) && !folio_test_dirty(folio) && mapping &&
mapping_can_writeback(mapping)) {
- if (page_mkclean(hpage)) {
- SetPageDirty(hpage);
+ if (folio_mkclean(folio)) {
+ folio_set_dirty(folio);
} else {
ttu &= ~TTU_HWPOISON;
pr_info("%#lx: corrupted page was clean: dropped without side effects\n",
@@ -1610,7 +1610,7 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
*/
collect_procs(folio, p, &tokill, flags & MF_ACTION_REQUIRED);
- if (PageHuge(hpage) && !PageAnon(hpage)) {
+ if (folio_test_hugetlb(folio) && !folio_test_anon(folio)) {
/*
* For hugetlb pages in shared mappings, try_to_unmap
* could potentially call huge_pmd_unshare. Because of
@@ -1650,7 +1650,7 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
* use a more force-full uncatchable kill to prevent
* any accesses to the poisoned memory.
*/
- forcekill = PageDirty(hpage) || (flags & MF_MUST_KILL) ||
+ forcekill = folio_test_dirty(folio) || (flags & MF_MUST_KILL) ||
!unmap_success;
kill_procs(&tokill, forcekill, !unmap_success, pfn, flags);
@@ -2094,7 +2094,7 @@ static int try_memory_failure_hugetlb(unsigned long pfn, int flags, int *hugetlb
page_flags = folio->flags;
- if (!hwpoison_user_mappings(p, pfn, flags, &folio->page)) {
+ if (!hwpoison_user_mappings(folio, p, pfn, flags)) {
folio_unlock(folio);
return action_result(pfn, MF_MSG_UNMAP_FAILED, MF_IGNORED);
}
@@ -2361,7 +2361,7 @@ int memory_failure(unsigned long pfn, int flags)
* Now take care of user space mappings.
* Abort on fail: __filemap_remove_folio() assumes unmapped page.
*/
- if (!hwpoison_user_mappings(p, pfn, flags, p)) {
+ if (!hwpoison_user_mappings(folio, p, pfn, flags)) {
res = action_result(pfn, MF_MSG_UNMAP_FAILED, MF_IGNORED);
goto unlock_page;
}
--
2.43.0
* [PATCH v3 09/11] mm/memory-failure: Add some folio conversions to unpoison_memory
2024-04-12 19:34 [PATCH v3 00/11] Some cleanups for memory-failure Matthew Wilcox (Oracle)
` (7 preceding siblings ...)
2024-04-12 19:35 ` [PATCH v3 08/11] mm/memory-failure: Convert hwpoison_user_mappings to take " Matthew Wilcox (Oracle)
@ 2024-04-12 19:35 ` Matthew Wilcox (Oracle)
2024-04-12 19:35 ` [PATCH v3 10/11] mm/memory-failure: Use folio functions throughout collect_procs() Matthew Wilcox (Oracle)
2024-04-12 19:35 ` [PATCH v3 11/11] mm/memory-failure: Pass the folio to collect_procs_ksm() Matthew Wilcox (Oracle)
10 siblings, 0 replies; 17+ messages in thread
From: Matthew Wilcox (Oracle) @ 2024-04-12 19:35 UTC (permalink / raw)
To: Miaohe Lin; +Cc: Matthew Wilcox (Oracle), linux-mm, Jane Chu
Some of these folio APIs didn't exist when the unpoison_memory()
conversion was done originally.
Acked-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: Jane Chu <jane.chu@oracle.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
mm/memory-failure.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 0fcf749682ab..e42c5f2179dc 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -2550,8 +2550,8 @@ int unpoison_memory(unsigned long pfn)
goto unlock_mutex;
}
- if (folio_test_slab(folio) || PageTable(&folio->page) ||
- folio_test_reserved(folio) || PageOffline(&folio->page))
+ if (folio_test_slab(folio) || folio_test_pgtable(folio) ||
+ folio_test_reserved(folio) || folio_test_offline(folio))
goto unlock_mutex;
/*
@@ -2572,7 +2572,7 @@ int unpoison_memory(unsigned long pfn)
ghp = get_hwpoison_page(p, MF_UNPOISON);
if (!ghp) {
- if (PageHuge(p)) {
+ if (folio_test_hugetlb(folio)) {
huge = true;
count = folio_free_raw_hwp(folio, false);
if (count == 0)
@@ -2588,7 +2588,7 @@ int unpoison_memory(unsigned long pfn)
pfn, &unpoison_rs);
}
} else {
- if (PageHuge(p)) {
+ if (folio_test_hugetlb(folio)) {
huge = true;
count = folio_free_raw_hwp(folio, false);
if (count == 0) {
--
2.43.0
* [PATCH v3 10/11] mm/memory-failure: Use folio functions throughout collect_procs()
2024-04-12 19:34 [PATCH v3 00/11] Some cleanups for memory-failure Matthew Wilcox (Oracle)
` (8 preceding siblings ...)
2024-04-12 19:35 ` [PATCH v3 09/11] mm/memory-failure: Add some folio conversions to unpoison_memory Matthew Wilcox (Oracle)
@ 2024-04-12 19:35 ` Matthew Wilcox (Oracle)
2024-04-18 9:31 ` Miaohe Lin
2024-04-12 19:35 ` [PATCH v3 11/11] mm/memory-failure: Pass the folio to collect_procs_ksm() Matthew Wilcox (Oracle)
10 siblings, 1 reply; 17+ messages in thread
From: Matthew Wilcox (Oracle) @ 2024-04-12 19:35 UTC (permalink / raw)
To: Miaohe Lin; +Cc: Matthew Wilcox (Oracle), linux-mm, Jane Chu
Saves a couple of calls to compound_head().
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Jane Chu <jane.chu@oracle.com>
---
mm/memory-failure.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index e42c5f2179dc..a9fa5901b48c 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -722,9 +722,9 @@ static void collect_procs(struct folio *folio, struct page *page,
{
if (!folio->mapping)
return;
- if (unlikely(PageKsm(page)))
+ if (unlikely(folio_test_ksm(folio)))
collect_procs_ksm(page, tokill, force_early);
- else if (PageAnon(page))
+ else if (folio_test_anon(folio))
collect_procs_anon(folio, page, tokill, force_early);
else
collect_procs_file(folio, page, tokill, force_early);
--
2.43.0
* [PATCH v3 11/11] mm/memory-failure: Pass the folio to collect_procs_ksm()
2024-04-12 19:34 [PATCH v3 00/11] Some cleanups for memory-failure Matthew Wilcox (Oracle)
` (9 preceding siblings ...)
2024-04-12 19:35 ` [PATCH v3 10/11] mm/memory-failure: Use folio functions throughout collect_procs() Matthew Wilcox (Oracle)
@ 2024-04-12 19:35 ` Matthew Wilcox (Oracle)
2024-04-18 12:31 ` Miaohe Lin
10 siblings, 1 reply; 17+ messages in thread
From: Matthew Wilcox (Oracle) @ 2024-04-12 19:35 UTC (permalink / raw)
To: Miaohe Lin; +Cc: Matthew Wilcox (Oracle), linux-mm, Jane Chu
The caller has already looked up the folio, so pass it in instead of
recalculating it in collect_procs_ksm().
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Jane Chu <jane.chu@oracle.com>
---
include/linux/ksm.h | 14 +++-----------
mm/ksm.c | 5 ++---
mm/memory-failure.c | 2 +-
3 files changed, 6 insertions(+), 15 deletions(-)
diff --git a/include/linux/ksm.h b/include/linux/ksm.h
index 358803cfd4d5..52c63a9c5a9c 100644
--- a/include/linux/ksm.h
+++ b/include/linux/ksm.h
@@ -81,15 +81,9 @@ struct folio *ksm_might_need_to_copy(struct folio *folio,
void rmap_walk_ksm(struct folio *folio, struct rmap_walk_control *rwc);
void folio_migrate_ksm(struct folio *newfolio, struct folio *folio);
-
-#ifdef CONFIG_MEMORY_FAILURE
-void collect_procs_ksm(struct page *page, struct list_head *to_kill,
- int force_early);
-#endif
-
-#ifdef CONFIG_PROC_FS
+void collect_procs_ksm(struct folio *folio, struct page *page,
+ struct list_head *to_kill, int force_early);
long ksm_process_profit(struct mm_struct *);
-#endif /* CONFIG_PROC_FS */
#else /* !CONFIG_KSM */
@@ -120,12 +114,10 @@ static inline void ksm_might_unmap_zero_page(struct mm_struct *mm, pte_t pte)
{
}
-#ifdef CONFIG_MEMORY_FAILURE
-static inline void collect_procs_ksm(struct page *page,
+static inline void collect_procs_ksm(struct folio *folio, struct page *page,
struct list_head *to_kill, int force_early)
{
}
-#endif
#ifdef CONFIG_MMU
static inline int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
diff --git a/mm/ksm.c b/mm/ksm.c
index 108a4d167824..0bdd4d8b4c17 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -3172,12 +3172,11 @@ void rmap_walk_ksm(struct folio *folio, struct rmap_walk_control *rwc)
/*
* Collect processes when the error hit an ksm page.
*/
-void collect_procs_ksm(struct page *page, struct list_head *to_kill,
- int force_early)
+void collect_procs_ksm(struct folio *folio, struct page *page,
+ struct list_head *to_kill, int force_early)
{
struct ksm_stable_node *stable_node;
struct ksm_rmap_item *rmap_item;
- struct folio *folio = page_folio(page);
struct vm_area_struct *vma;
struct task_struct *tsk;
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index a9fa5901b48c..c7cce73333f6 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -723,7 +723,7 @@ static void collect_procs(struct folio *folio, struct page *page,
if (!folio->mapping)
return;
if (unlikely(folio_test_ksm(folio)))
- collect_procs_ksm(page, tokill, force_early);
+ collect_procs_ksm(folio, page, tokill, force_early);
else if (folio_test_anon(folio))
collect_procs_anon(folio, page, tokill, force_early);
else
--
2.43.0
* Re: [PATCH v3 04/11] mm: Make page_mapped_in_vma conditional on CONFIG_MEMORY_FAILURE
2024-04-12 19:35 ` [PATCH v3 04/11] mm: Make page_mapped_in_vma conditional on CONFIG_MEMORY_FAILURE Matthew Wilcox (Oracle)
@ 2024-04-18 8:31 ` Miaohe Lin
0 siblings, 0 replies; 17+ messages in thread
From: Miaohe Lin @ 2024-04-18 8:31 UTC (permalink / raw)
To: Matthew Wilcox (Oracle); +Cc: linux-mm, Jane Chu, Oscar Salvador
On 2024/4/13 3:35, Matthew Wilcox (Oracle) wrote:
> This function is only currently used by the memory-failure code, so
> we can omit it if we're not compiling in the memory-failure code.
>
> Suggested-by: Miaohe Lin <linmiaohe@huawei.com>
> Reviewed-by: Jane Chu <jane.chu@oracle.com>
> Reviewed-by: Oscar Salvador <osalvador@suse.de>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Miaohe Lin <linmiaohe@huawei.com>
Thanks.
.
> ---
> mm/page_vma_mapped.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> index c202eab84936..ae5cc42aa208 100644
> --- a/mm/page_vma_mapped.c
> +++ b/mm/page_vma_mapped.c
> @@ -314,6 +314,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
> return false;
> }
>
> +#ifdef CONFIG_MEMORY_FAILURE
> /**
> * page_mapped_in_vma - check whether a page is really mapped in a VMA
> * @page: the page to test
> @@ -344,3 +345,4 @@ unsigned long page_mapped_in_vma(struct page *page, struct vm_area_struct *vma)
> out:
> return pvmw.address;
> }
> +#endif
>
* Re: [PATCH v3 05/11] mm/memory-failure: Convert shake_page() to shake_folio()
2024-04-12 19:35 ` [PATCH v3 05/11] mm/memory-failure: Convert shake_page() to shake_folio() Matthew Wilcox (Oracle)
@ 2024-04-18 8:44 ` Miaohe Lin
0 siblings, 0 replies; 17+ messages in thread
From: Miaohe Lin @ 2024-04-18 8:44 UTC (permalink / raw)
To: Matthew Wilcox (Oracle); +Cc: linux-mm, Jane Chu
On 2024/4/13 3:35, Matthew Wilcox (Oracle) wrote:
> Removes two calls to compound_head(). Move the prototype to
> internal.h; we definitely don't want code outside mm using it.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Reviewed-by: Jane Chu <jane.chu@oracle.com>
Acked-by: Miaohe Lin <linmiaohe@huawei.com>
Thanks.
.
* Re: [PATCH v3 07/11] mm/memory-failure: Convert memory_failure() to use a folio
2024-04-12 19:35 ` [PATCH v3 07/11] mm/memory-failure: Convert memory_failure() to use a folio Matthew Wilcox (Oracle)
@ 2024-04-18 9:24 ` Miaohe Lin
0 siblings, 0 replies; 17+ messages in thread
From: Miaohe Lin @ 2024-04-18 9:24 UTC (permalink / raw)
To: Matthew Wilcox (Oracle); +Cc: linux-mm
On 2024/4/13 3:35, Matthew Wilcox (Oracle) wrote:
>
> /*
> * Now take care of user space mappings.
> @@ -2368,7 +2369,8 @@ int memory_failure(unsigned long pfn, int flags)
> /*
> * Torn down by someone else?
> */
> - if (PageLRU(p) && !PageSwapCache(p) && p->mapping == NULL) {
> + if (folio_test_lru(folio) && !folio_test_swapcache(folio) &&
> + folio->mapping == NULL) {
> res = action_result(pfn, MF_MSG_TRUNCATED_LRU, MF_IGNORED);
> goto unlock_page;
> }
> @@ -2378,7 +2380,7 @@ int memory_failure(unsigned long pfn, int flags)
> mutex_unlock(&mf_mutex);
> return res;
> unlock_page:
This label might be renamed to unlock_folio:? Anyway, this patch looks good to me.
Acked-by: Miaohe Lin <linmiaohe@huawei.com>
Thanks.
.
* Re: [PATCH v3 10/11] mm/memory-failure: Use folio functions throughout collect_procs()
2024-04-12 19:35 ` [PATCH v3 10/11] mm/memory-failure: Use folio functions throughout collect_procs() Matthew Wilcox (Oracle)
@ 2024-04-18 9:31 ` Miaohe Lin
0 siblings, 0 replies; 17+ messages in thread
From: Miaohe Lin @ 2024-04-18 9:31 UTC (permalink / raw)
To: Matthew Wilcox (Oracle); +Cc: linux-mm, Jane Chu
On 2024/4/13 3:35, Matthew Wilcox (Oracle) wrote:
> Saves a couple of calls to compound_head().
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Reviewed-by: Jane Chu <jane.chu@oracle.com>
Acked-by: Miaohe Lin <linmiaohe@huawei.com>
Thanks.
.
> ---
> mm/memory-failure.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index e42c5f2179dc..a9fa5901b48c 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -722,9 +722,9 @@ static void collect_procs(struct folio *folio, struct page *page,
> {
> if (!folio->mapping)
> return;
> - if (unlikely(PageKsm(page)))
> + if (unlikely(folio_test_ksm(folio)))
> collect_procs_ksm(page, tokill, force_early);
> - else if (PageAnon(page))
> + else if (folio_test_anon(folio))
> collect_procs_anon(folio, page, tokill, force_early);
> else
> collect_procs_file(folio, page, tokill, force_early);
>
* Re: [PATCH v3 11/11] mm/memory-failure: Pass the folio to collect_procs_ksm()
2024-04-12 19:35 ` [PATCH v3 11/11] mm/memory-failure: Pass the folio to collect_procs_ksm() Matthew Wilcox (Oracle)
@ 2024-04-18 12:31 ` Miaohe Lin
0 siblings, 0 replies; 17+ messages in thread
From: Miaohe Lin @ 2024-04-18 12:31 UTC (permalink / raw)
To: Matthew Wilcox (Oracle); +Cc: linux-mm, Jane Chu
On 2024/4/13 3:35, Matthew Wilcox (Oracle) wrote:
> We've already calculated it, so pass it in instead of recalculating it
> in collect_procs_ksm().
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Reviewed-by: Jane Chu <jane.chu@oracle.com>
> ---
> include/linux/ksm.h | 14 +++-----------
> mm/ksm.c | 5 ++---
> mm/memory-failure.c | 2 +-
> 3 files changed, 6 insertions(+), 15 deletions(-)
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
Thanks.
.