* [PATCH v2 0/4] mm: page_ext: Introduce new iteration API
@ 2025-02-24 21:59 Luiz Capitulino
2025-02-24 21:59 ` [PATCH v2 1/4] mm: page_ext: make lookup_page_ext() public Luiz Capitulino
` (3 more replies)
0 siblings, 4 replies; 15+ messages in thread
From: Luiz Capitulino @ 2025-02-24 21:59 UTC (permalink / raw)
To: linux-kernel, linux-mm, david, yuzhao, pasha.tatashin
Cc: akpm, hannes, muchun.song, luizcap
This series is against v6.14-rc4. The introduction is after the changelog.
v1 -> v2
========
- Drop for_each_page_ext_order() (David)
- Make page_ext_iter_begin() and page_ext_iter_next() inline functions
(David)
- Move loop logic into page_ext_iter_begin() and page_ext_iter_next()
(David)
RFC -> v1
=========
- Revamped the API by introducing for_each_page_ext macros
- Implemented various suggestions from David Hildenbrand, including page_ext
lookup optimization
- Fixed changelogs
Introduction
============
[ Thanks to David Hildenbrand for identifying the root cause of this
issue and providing guidance on how to fix it. The new API idea, bugs
and misconceptions are all mine though ]
Currently, trying to reserve 1G pages with page_owner=on and sparsemem
causes a crash. The reproducer is very simple:
1. Build the kernel with CONFIG_SPARSEMEM=y and page_owner and
page_table_check support
2. Pass 'default_hugepagesz=1G page_owner=on' in the kernel command-line
3. Reserve one 1G page at run-time; this should crash (see patch 2 for
the backtrace)
[ A crash with page_table_check is also possible, but harder to trigger ]
Apparently, starting with commit cf54f310d0d3 ("mm/hugetlb: use __GFP_COMP
for gigantic folios") we now pass the full allocation order to page
extension clients and the page extension implementation assumes that all
PFNs of an allocation range will be stored in the same memory section (which
is not true for 1G pages).
To fix this, this series introduces a new iteration API for page extension
objects. The API checks whether the next page extension object can be
retrieved from the current section or whether it needs to be looked up in
another section.
Please find all details in patch 2.
Luiz Capitulino (4):
mm: page_ext: make lookup_page_ext() public
mm: page_ext: add an iteration API for page extensions
mm: page_table_check: use new iteration API
mm: page_owner: use new iteration API
include/linux/page_ext.h | 93 ++++++++++++++++++++++++++++++++++++++++
mm/page_ext.c | 4 +-
mm/page_owner.c | 61 +++++++++++++-------------
mm/page_table_check.c | 39 ++++++-----------
4 files changed, 136 insertions(+), 61 deletions(-)
--
2.48.1
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH v2 1/4] mm: page_ext: make lookup_page_ext() public
2025-02-24 21:59 [PATCH v2 0/4] mm: page_ext: Introduce new iteration API Luiz Capitulino
@ 2025-02-24 21:59 ` Luiz Capitulino
2025-02-25 16:38 ` David Hildenbrand
2025-02-24 21:59 ` [PATCH v2 2/4] mm: page_ext: add an iteration API for page extensions Luiz Capitulino
` (2 subsequent siblings)
3 siblings, 1 reply; 15+ messages in thread
From: Luiz Capitulino @ 2025-02-24 21:59 UTC (permalink / raw)
To: linux-kernel, linux-mm, david, yuzhao, pasha.tatashin
Cc: akpm, hannes, muchun.song, luizcap
The next commit will use it.
Signed-off-by: Luiz Capitulino <luizcap@redhat.com>
---
include/linux/page_ext.h | 1 +
mm/page_ext.c | 4 ++--
2 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/include/linux/page_ext.h b/include/linux/page_ext.h
index e4b48a0dda244..d6fb891c51d1d 100644
--- a/include/linux/page_ext.h
+++ b/include/linux/page_ext.h
@@ -79,6 +79,7 @@ static inline void page_ext_init(void)
extern struct page_ext *page_ext_get(const struct page *page);
extern void page_ext_put(struct page_ext *page_ext);
+extern struct page_ext *lookup_page_ext(const struct page *page);
static inline void *page_ext_data(struct page_ext *page_ext,
struct page_ext_operations *ops)
diff --git a/mm/page_ext.c b/mm/page_ext.c
index 641d93f6af4c1..23ad30597c05c 100644
--- a/mm/page_ext.c
+++ b/mm/page_ext.c
@@ -165,7 +165,7 @@ void __meminit pgdat_page_ext_init(struct pglist_data *pgdat)
pgdat->node_page_ext = NULL;
}
-static struct page_ext *lookup_page_ext(const struct page *page)
+struct page_ext *lookup_page_ext(const struct page *page)
{
unsigned long pfn = page_to_pfn(page);
unsigned long index;
@@ -245,7 +245,7 @@ static bool page_ext_invalid(struct page_ext *page_ext)
return !page_ext || (((unsigned long)page_ext & PAGE_EXT_INVALID) == PAGE_EXT_INVALID);
}
-static struct page_ext *lookup_page_ext(const struct page *page)
+struct page_ext *lookup_page_ext(const struct page *page)
{
unsigned long pfn = page_to_pfn(page);
struct mem_section *section = __pfn_to_section(pfn);
--
2.48.1
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH v2 2/4] mm: page_ext: add an iteration API for page extensions
2025-02-24 21:59 [PATCH v2 0/4] mm: page_ext: Introduce new iteration API Luiz Capitulino
2025-02-24 21:59 ` [PATCH v2 1/4] mm: page_ext: make lookup_page_ext() public Luiz Capitulino
@ 2025-02-24 21:59 ` Luiz Capitulino
2025-02-25 16:39 ` David Hildenbrand
2025-02-24 21:59 ` [PATCH v2 3/4] mm: page_table_check: use new iteration API Luiz Capitulino
2025-02-24 21:59 ` [PATCH v2 4/4] mm: page_owner: " Luiz Capitulino
3 siblings, 1 reply; 15+ messages in thread
From: Luiz Capitulino @ 2025-02-24 21:59 UTC (permalink / raw)
To: linux-kernel, linux-mm, david, yuzhao, pasha.tatashin
Cc: akpm, hannes, muchun.song, luizcap
The page extension implementation assumes that all page extensions of
a given page order are stored in the same memory section. The function
page_ext_next() relies on this assumption by adding an offset to the
current object to return the next adjacent page extension.
This behavior works as expected for flatmem but fails for sparsemem when
using 1G pages. The commit cf54f310d0d3 ("mm/hugetlb: use __GFP_COMP for
gigantic folios") exposes this issue, making it possible for a crash when
using page_owner or page_table_check page extensions.
The problem is that for 1G pages, the page extensions may span memory
section boundaries and be stored in different memory sections. This issue
was not visible before commit cf54f310d0d3 ("mm/hugetlb: use __GFP_COMP
for gigantic folios") because alloc_contig_pages() never passed more than
MAX_PAGE_ORDER to post_alloc_hook(). However, the series introducing the
mentioned commit changed this behavior, allowing the full 1G page order
to be passed.
Reproducer:
1. Build the kernel with CONFIG_SPARSEMEM=y and page_owner and
page_table_check support
2. Pass 'default_hugepagesz=1G page_owner=on' in the kernel command-line
3. Reserve one 1G page at run-time; this should crash (backtrace below)
To address this issue, this commit introduces a new API for iterating
through page extensions. The main iteration macro is for_each_page_ext()
and it must be called with the RCU read lock taken. Here's a usage
example:
"""
struct page_ext_iter iter;
struct page_ext *page_ext;
...
rcu_read_lock();
for_each_page_ext(page, 1 << order, page_ext, iter) {
struct my_page_ext *obj = get_my_page_ext_obj(page_ext);
...
}
rcu_read_unlock();
"""
The loop construct uses page_ext_iter_next(), which checks whether the
iteration has crossed a section boundary. If it has, page_ext_iter_next()
retrieves the next page_ext object from the new section.
Thanks to David Hildenbrand for helping identify the root cause and
providing suggestions on how to fix and optimize the solution (final
implementation and bugs are all mine though).
Lastly, here's the backtrace (without KASAN you may get random crashes
instead):
[ 76.052526] BUG: KASAN: slab-out-of-bounds in __update_page_owner_handle+0x238/0x298
[ 76.060283] Write of size 4 at addr ffff07ff96240038 by task tee/3598
[ 76.066714]
[ 76.068203] CPU: 88 UID: 0 PID: 3598 Comm: tee Kdump: loaded Not tainted 6.13.0-rep1 #3
[ 76.076202] Hardware name: WIWYNN Mt.Jade Server System B81.030Z1.0007/Mt.Jade Motherboard, BIOS 2.10.20220810 (SCP: 2.10.20220810) 2022/08/10
[ 76.088972] Call trace:
[ 76.091411] show_stack+0x20/0x38 (C)
[ 76.095073] dump_stack_lvl+0x80/0xf8
[ 76.098733] print_address_description.constprop.0+0x88/0x398
[ 76.104476] print_report+0xa8/0x278
[ 76.108041] kasan_report+0xa8/0xf8
[ 76.111520] __asan_report_store4_noabort+0x20/0x30
[ 76.116391] __update_page_owner_handle+0x238/0x298
[ 76.121259] __set_page_owner+0xdc/0x140
[ 76.125173] post_alloc_hook+0x190/0x1d8
[ 76.129090] alloc_contig_range_noprof+0x54c/0x890
[ 76.133874] alloc_contig_pages_noprof+0x35c/0x4a8
[ 76.138656] alloc_gigantic_folio.isra.0+0x2c0/0x368
[ 76.143616] only_alloc_fresh_hugetlb_folio.isra.0+0x24/0x150
[ 76.149353] alloc_pool_huge_folio+0x11c/0x1f8
[ 76.153787] set_max_huge_pages+0x364/0xca8
[ 76.157961] __nr_hugepages_store_common+0xb0/0x1a0
[ 76.162829] nr_hugepages_store+0x108/0x118
[ 76.167003] kobj_attr_store+0x3c/0x70
[ 76.170745] sysfs_kf_write+0xfc/0x188
[ 76.174492] kernfs_fop_write_iter+0x274/0x3e0
[ 76.178927] vfs_write+0x64c/0x8e0
[ 76.182323] ksys_write+0xf8/0x1f0
[ 76.185716] __arm64_sys_write+0x74/0xb0
[ 76.189630] invoke_syscall.constprop.0+0xd8/0x1e0
[ 76.194412] do_el0_svc+0x164/0x1e0
[ 76.197891] el0_svc+0x40/0xe0
[ 76.200939] el0t_64_sync_handler+0x144/0x168
[ 76.205287] el0t_64_sync+0x1ac/0x1b0
Fixes: cf54f310d0d3 ("mm/hugetlb: use __GFP_COMP for gigantic folios")
Signed-off-by: Luiz Capitulino <luizcap@redhat.com>
---
include/linux/page_ext.h | 92 ++++++++++++++++++++++++++++++++++++++++
1 file changed, 92 insertions(+)
diff --git a/include/linux/page_ext.h b/include/linux/page_ext.h
index d6fb891c51d1d..33a118b31a222 100644
--- a/include/linux/page_ext.h
+++ b/include/linux/page_ext.h
@@ -2,7 +2,9 @@
#ifndef __LINUX_PAGE_EXT_H
#define __LINUX_PAGE_EXT_H
+#include <linux/pgtable.h>
#include <linux/types.h>
+#include <linux/mmzone.h>
#include <linux/stacktrace.h>
struct pglist_data;
@@ -69,12 +71,26 @@ extern void page_ext_init(void);
static inline void page_ext_init_flatmem_late(void)
{
}
+
+static inline bool page_ext_iter_next_fast_possible(unsigned long next_pfn)
+{
+ /*
+ * page_ext is allocated per memory section. Once we cross a
+ * memory section, we have to fetch the new pointer.
+ */
+ return next_pfn % PAGES_PER_SECTION;
+}
#else
extern void page_ext_init_flatmem(void);
extern void page_ext_init_flatmem_late(void);
static inline void page_ext_init(void)
{
}
+
+static inline bool page_ext_iter_next_fast_possible(unsigned long next_pfn)
+{
+ return true;
+}
#endif
extern struct page_ext *page_ext_get(const struct page *page);
@@ -94,6 +110,82 @@ static inline struct page_ext *page_ext_next(struct page_ext *curr)
return next;
}
+struct page_ext_iter {
+ unsigned long index;
+ unsigned long start_pfn;
+ struct page_ext *page_ext;
+};
+
+/**
+ * page_ext_iter_begin() - Prepare for iterating through page extensions.
+ * @iter: page extension iterator.
+ * @page: The page we're interested in.
+ *
+ * Must be called with RCU read lock taken.
+ *
+ * Return: NULL if no page_ext exists for this page.
+ */
+static inline struct page_ext *page_ext_iter_begin(struct page_ext_iter *iter, struct page *page)
+{
+ iter->index = 0;
+ iter->start_pfn = page_to_pfn(page);
+ iter->page_ext = lookup_page_ext(page);
+
+ return iter->page_ext;
+}
+
+/**
+ * page_ext_iter_next() - Get next page extension
+ * @iter: page extension iterator.
+ *
+ * Must be called with RCU read lock taken.
+ *
+ * Return: NULL if no next page_ext exists.
+ */
+static inline struct page_ext *page_ext_iter_next(struct page_ext_iter *iter)
+{
+ unsigned long pfn;
+
+ if (WARN_ON_ONCE(!iter->page_ext))
+ return NULL;
+
+ iter->index++;
+ pfn = iter->start_pfn + iter->index;
+
+ if (page_ext_iter_next_fast_possible(pfn))
+ iter->page_ext = page_ext_next(iter->page_ext);
+ else
+ iter->page_ext = lookup_page_ext(pfn_to_page(pfn));
+
+ return iter->page_ext;
+}
+
+/**
+ * page_ext_iter_get() - Get current page extension
+ * @iter: page extension iterator.
+ *
+ * Return: NULL if no page_ext exists for this iterator.
+ */
+static inline struct page_ext *page_ext_iter_get(const struct page_ext_iter *iter)
+{
+ return iter->page_ext;
+}
+
+/**
+ * for_each_page_ext(): iterate through page_ext objects.
+ * @__page: the page we're interested in
+ * @__pgcount: how many pages to iterate through
+ * @__page_ext: struct page_ext pointer where the current page_ext
+ * object is returned
+ * @__iter: struct page_ext_iter object (defined in the stack)
+ *
+ * IMPORTANT: must be called with RCU read lock taken.
+ */
+#define for_each_page_ext(__page, __pgcount, __page_ext, __iter) \
+ for (__page_ext = page_ext_iter_begin(&__iter, __page); \
+ __page_ext && __iter.index < __pgcount; \
+ __page_ext = page_ext_iter_next(&__iter))
+
#else /* !CONFIG_PAGE_EXTENSION */
struct page_ext;
--
2.48.1
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH v2 3/4] mm: page_table_check: use new iteration API
2025-02-24 21:59 [PATCH v2 0/4] mm: page_ext: Introduce new iteration API Luiz Capitulino
2025-02-24 21:59 ` [PATCH v2 1/4] mm: page_ext: make lookup_page_ext() public Luiz Capitulino
2025-02-24 21:59 ` [PATCH v2 2/4] mm: page_ext: add an iteration API for page extensions Luiz Capitulino
@ 2025-02-24 21:59 ` Luiz Capitulino
2025-02-25 16:40 ` David Hildenbrand
2025-02-24 21:59 ` [PATCH v2 4/4] mm: page_owner: " Luiz Capitulino
3 siblings, 1 reply; 15+ messages in thread
From: Luiz Capitulino @ 2025-02-24 21:59 UTC (permalink / raw)
To: linux-kernel, linux-mm, david, yuzhao, pasha.tatashin
Cc: akpm, hannes, muchun.song, luizcap
The page_ext_next() function assumes that page extension objects for a
page order allocation always reside in the same memory section, which
may not be true and could lead to crashes. Use the new page_ext
iteration API instead.
Fixes: cf54f310d0d3 ("mm/hugetlb: use __GFP_COMP for gigantic folios")
Signed-off-by: Luiz Capitulino <luizcap@redhat.com>
---
mm/page_table_check.c | 39 ++++++++++++---------------------------
1 file changed, 12 insertions(+), 27 deletions(-)
diff --git a/mm/page_table_check.c b/mm/page_table_check.c
index 509c6ef8de400..e11bebf23e36f 100644
--- a/mm/page_table_check.c
+++ b/mm/page_table_check.c
@@ -62,24 +62,20 @@ static struct page_table_check *get_page_table_check(struct page_ext *page_ext)
*/
static void page_table_check_clear(unsigned long pfn, unsigned long pgcnt)
{
+ struct page_ext_iter iter;
struct page_ext *page_ext;
struct page *page;
- unsigned long i;
bool anon;
if (!pfn_valid(pfn))
return;
page = pfn_to_page(pfn);
- page_ext = page_ext_get(page);
-
- if (!page_ext)
- return;
-
BUG_ON(PageSlab(page));
anon = PageAnon(page);
- for (i = 0; i < pgcnt; i++) {
+ rcu_read_lock();
+ for_each_page_ext(page, pgcnt, page_ext, iter) {
struct page_table_check *ptc = get_page_table_check(page_ext);
if (anon) {
@@ -89,9 +85,8 @@ static void page_table_check_clear(unsigned long pfn, unsigned long pgcnt)
BUG_ON(atomic_read(&ptc->anon_map_count));
BUG_ON(atomic_dec_return(&ptc->file_map_count) < 0);
}
- page_ext = page_ext_next(page_ext);
}
- page_ext_put(page_ext);
+ rcu_read_unlock();
}
/*
@@ -102,24 +97,20 @@ static void page_table_check_clear(unsigned long pfn, unsigned long pgcnt)
static void page_table_check_set(unsigned long pfn, unsigned long pgcnt,
bool rw)
{
+ struct page_ext_iter iter;
struct page_ext *page_ext;
struct page *page;
- unsigned long i;
bool anon;
if (!pfn_valid(pfn))
return;
page = pfn_to_page(pfn);
- page_ext = page_ext_get(page);
-
- if (!page_ext)
- return;
-
BUG_ON(PageSlab(page));
anon = PageAnon(page);
- for (i = 0; i < pgcnt; i++) {
+ rcu_read_lock();
+ for_each_page_ext(page, pgcnt, page_ext, iter) {
struct page_table_check *ptc = get_page_table_check(page_ext);
if (anon) {
@@ -129,9 +120,8 @@ static void page_table_check_set(unsigned long pfn, unsigned long pgcnt,
BUG_ON(atomic_read(&ptc->anon_map_count));
BUG_ON(atomic_inc_return(&ptc->file_map_count) < 0);
}
- page_ext = page_ext_next(page_ext);
}
- page_ext_put(page_ext);
+ rcu_read_unlock();
}
/*
@@ -140,24 +130,19 @@ static void page_table_check_set(unsigned long pfn, unsigned long pgcnt,
*/
void __page_table_check_zero(struct page *page, unsigned int order)
{
+ struct page_ext_iter iter;
struct page_ext *page_ext;
- unsigned long i;
BUG_ON(PageSlab(page));
- page_ext = page_ext_get(page);
-
- if (!page_ext)
- return;
-
- for (i = 0; i < (1ul << order); i++) {
+ rcu_read_lock();
+ for_each_page_ext(page, 1 << order, page_ext, iter) {
struct page_table_check *ptc = get_page_table_check(page_ext);
BUG_ON(atomic_read(&ptc->anon_map_count));
BUG_ON(atomic_read(&ptc->file_map_count));
- page_ext = page_ext_next(page_ext);
}
- page_ext_put(page_ext);
+ rcu_read_unlock();
}
void __page_table_check_pte_clear(struct mm_struct *mm, pte_t pte)
--
2.48.1
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH v2 4/4] mm: page_owner: use new iteration API
2025-02-24 21:59 [PATCH v2 0/4] mm: page_ext: Introduce new iteration API Luiz Capitulino
` (2 preceding siblings ...)
2025-02-24 21:59 ` [PATCH v2 3/4] mm: page_table_check: use new iteration API Luiz Capitulino
@ 2025-02-24 21:59 ` Luiz Capitulino
2025-02-25 16:44 ` David Hildenbrand
3 siblings, 1 reply; 15+ messages in thread
From: Luiz Capitulino @ 2025-02-24 21:59 UTC (permalink / raw)
To: linux-kernel, linux-mm, david, yuzhao, pasha.tatashin
Cc: akpm, hannes, muchun.song, luizcap
The page_ext_next() function assumes that page extension objects for a
page order allocation always reside in the same memory section, which
may not be true and could lead to crashes. Use the new page_ext
iteration API instead.
Fixes: cf54f310d0d3 ("mm/hugetlb: use __GFP_COMP for gigantic folios")
Signed-off-by: Luiz Capitulino <luizcap@redhat.com>
---
mm/page_owner.c | 61 +++++++++++++++++++++++--------------------------
1 file changed, 29 insertions(+), 32 deletions(-)
diff --git a/mm/page_owner.c b/mm/page_owner.c
index 2d6360eaccbb6..c9d2c688eb981 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -229,17 +229,19 @@ static void dec_stack_record_count(depot_stack_handle_t handle,
handle);
}
-static inline void __update_page_owner_handle(struct page_ext *page_ext,
+static inline void __update_page_owner_handle(struct page *page,
depot_stack_handle_t handle,
unsigned short order,
gfp_t gfp_mask,
short last_migrate_reason, u64 ts_nsec,
pid_t pid, pid_t tgid, char *comm)
{
- int i;
+ struct page_ext_iter iter;
+ struct page_ext *page_ext;
struct page_owner *page_owner;
- for (i = 0; i < (1 << order); i++) {
+ rcu_read_lock();
+ for_each_page_ext(page, 1 << order, page_ext, iter) {
page_owner = get_page_owner(page_ext);
page_owner->handle = handle;
page_owner->order = order;
@@ -252,20 +254,22 @@ static inline void __update_page_owner_handle(struct page_ext *page_ext,
sizeof(page_owner->comm));
__set_bit(PAGE_EXT_OWNER, &page_ext->flags);
__set_bit(PAGE_EXT_OWNER_ALLOCATED, &page_ext->flags);
- page_ext = page_ext_next(page_ext);
}
+ rcu_read_unlock();
}
-static inline void __update_page_owner_free_handle(struct page_ext *page_ext,
+static inline void __update_page_owner_free_handle(struct page *page,
depot_stack_handle_t handle,
unsigned short order,
pid_t pid, pid_t tgid,
u64 free_ts_nsec)
{
- int i;
+ struct page_ext_iter iter;
+ struct page_ext *page_ext;
struct page_owner *page_owner;
- for (i = 0; i < (1 << order); i++) {
+ rcu_read_lock();
+ for_each_page_ext(page, 1 << order, page_ext, iter) {
page_owner = get_page_owner(page_ext);
/* Only __reset_page_owner() wants to clear the bit */
if (handle) {
@@ -275,8 +279,8 @@ static inline void __update_page_owner_free_handle(struct page_ext *page_ext,
page_owner->free_ts_nsec = free_ts_nsec;
page_owner->free_pid = current->pid;
page_owner->free_tgid = current->tgid;
- page_ext = page_ext_next(page_ext);
}
+ rcu_read_unlock();
}
void __reset_page_owner(struct page *page, unsigned short order)
@@ -293,11 +297,11 @@ void __reset_page_owner(struct page *page, unsigned short order)
page_owner = get_page_owner(page_ext);
alloc_handle = page_owner->handle;
+ page_ext_put(page_ext);
handle = save_stack(GFP_NOWAIT | __GFP_NOWARN);
- __update_page_owner_free_handle(page_ext, handle, order, current->pid,
+ __update_page_owner_free_handle(page, handle, order, current->pid,
current->tgid, free_ts_nsec);
- page_ext_put(page_ext);
if (alloc_handle != early_handle)
/*
@@ -313,19 +317,13 @@ void __reset_page_owner(struct page *page, unsigned short order)
noinline void __set_page_owner(struct page *page, unsigned short order,
gfp_t gfp_mask)
{
- struct page_ext *page_ext;
u64 ts_nsec = local_clock();
depot_stack_handle_t handle;
handle = save_stack(gfp_mask);
-
- page_ext = page_ext_get(page);
- if (unlikely(!page_ext))
- return;
- __update_page_owner_handle(page_ext, handle, order, gfp_mask, -1,
+ __update_page_owner_handle(page, handle, order, gfp_mask, -1,
ts_nsec, current->pid, current->tgid,
current->comm);
- page_ext_put(page_ext);
inc_stack_record_count(handle, gfp_mask, 1 << order);
}
@@ -344,26 +342,24 @@ void __set_page_owner_migrate_reason(struct page *page, int reason)
void __split_page_owner(struct page *page, int old_order, int new_order)
{
- int i;
- struct page_ext *page_ext = page_ext_get(page);
+ struct page_ext_iter iter;
+ struct page_ext *page_ext;
struct page_owner *page_owner;
- if (unlikely(!page_ext))
- return;
-
- for (i = 0; i < (1 << old_order); i++) {
+ rcu_read_lock();
+ for_each_page_ext(page, 1 << old_order, page_ext, iter) {
page_owner = get_page_owner(page_ext);
page_owner->order = new_order;
- page_ext = page_ext_next(page_ext);
}
- page_ext_put(page_ext);
+ rcu_read_unlock();
}
void __folio_copy_owner(struct folio *newfolio, struct folio *old)
{
- int i;
struct page_ext *old_ext;
struct page_ext *new_ext;
+ struct page_ext *page_ext;
+ struct page_ext_iter iter;
struct page_owner *old_page_owner;
struct page_owner *new_page_owner;
depot_stack_handle_t migrate_handle;
@@ -381,7 +377,7 @@ void __folio_copy_owner(struct folio *newfolio, struct folio *old)
old_page_owner = get_page_owner(old_ext);
new_page_owner = get_page_owner(new_ext);
migrate_handle = new_page_owner->handle;
- __update_page_owner_handle(new_ext, old_page_owner->handle,
+ __update_page_owner_handle(&newfolio->page, old_page_owner->handle,
old_page_owner->order, old_page_owner->gfp_mask,
old_page_owner->last_migrate_reason,
old_page_owner->ts_nsec, old_page_owner->pid,
@@ -391,7 +387,7 @@ void __folio_copy_owner(struct folio *newfolio, struct folio *old)
* will be freed after migration. Keep them until then as they may be
* useful.
*/
- __update_page_owner_free_handle(new_ext, 0, old_page_owner->order,
+ __update_page_owner_free_handle(&newfolio->page, 0, old_page_owner->order,
old_page_owner->free_pid,
old_page_owner->free_tgid,
old_page_owner->free_ts_nsec);
@@ -400,11 +396,12 @@ void __folio_copy_owner(struct folio *newfolio, struct folio *old)
* for the new one and the old folio otherwise there will be an imbalance
* when subtracting those pages from the stack.
*/
- for (i = 0; i < (1 << new_page_owner->order); i++) {
+ rcu_read_lock();
+ for_each_page_ext(&old->page, 1 << new_page_owner->order, page_ext, iter) {
+ old_page_owner = get_page_owner(page_ext);
old_page_owner->handle = migrate_handle;
- old_ext = page_ext_next(old_ext);
- old_page_owner = get_page_owner(old_ext);
}
+ rcu_read_unlock();
page_ext_put(new_ext);
page_ext_put(old_ext);
@@ -813,7 +810,7 @@ static void init_pages_in_zone(pg_data_t *pgdat, struct zone *zone)
goto ext_put_continue;
/* Found early allocated page */
- __update_page_owner_handle(page_ext, early_handle, 0, 0,
+ __update_page_owner_handle(page, early_handle, 0, 0,
-1, local_clock(), current->pid,
current->tgid, current->comm);
count++;
--
2.48.1
^ permalink raw reply related [flat|nested] 15+ messages in thread
* Re: [PATCH v2 1/4] mm: page_ext: make lookup_page_ext() public
2025-02-24 21:59 ` [PATCH v2 1/4] mm: page_ext: make lookup_page_ext() public Luiz Capitulino
@ 2025-02-25 16:38 ` David Hildenbrand
2025-02-25 22:29 ` Luiz Capitulino
0 siblings, 1 reply; 15+ messages in thread
From: David Hildenbrand @ 2025-02-25 16:38 UTC (permalink / raw)
To: Luiz Capitulino, linux-kernel, linux-mm, yuzhao, pasha.tatashin
Cc: akpm, hannes, muchun.song
On 24.02.25 22:59, Luiz Capitulino wrote:
> The next commit will use it.
This should likely be squashed into the next patch, where people also
have the context why this is required.
--
Cheers,
David / dhildenb
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH v2 2/4] mm: page_ext: add an iteration API for page extensions
2025-02-24 21:59 ` [PATCH v2 2/4] mm: page_ext: add an iteration API for page extensions Luiz Capitulino
@ 2025-02-25 16:39 ` David Hildenbrand
0 siblings, 0 replies; 15+ messages in thread
From: David Hildenbrand @ 2025-02-25 16:39 UTC (permalink / raw)
To: Luiz Capitulino, linux-kernel, linux-mm, yuzhao, pasha.tatashin
Cc: akpm, hannes, muchun.song
On 24.02.25 22:59, Luiz Capitulino wrote:
> The page extension implementation assumes that all page extensions of
> a given page order are stored in the same memory section. The function
> page_ext_next() relies on this assumption by adding an offset to the
> current object to return the next adjacent page extension.
>
> This behavior works as expected for flatmem but fails for sparsemem when
> using 1G pages. The commit cf54f310d0d3 ("mm/hugetlb: use __GFP_COMP for
> gigantic folios") exposes this issue, making it possible for a crash when
> using page_owner or page_table_check page extensions.
>
> The problem is that for 1G pages, the page extensions may span memory
> section boundaries and be stored in different memory sections. This issue
> was not visible before commit cf54f310d0d3 ("mm/hugetlb: use __GFP_COMP
> for gigantic folios") because alloc_contig_pages() never passed more than
> MAX_PAGE_ORDER to post_alloc_hook(). However, the series introducing the
> mentioned commit changed this behavior, allowing the full 1G page order
> to be passed.
>
> Reproducer:
>
> 1. Build the kernel with CONFIG_SPARSEMEM=y and page_owner and
> page_table_check support
> 2. Pass 'default_hugepagesz=1G page_owner=on' in the kernel command-line
> 3. Reserve one 1G page at run-time; this should crash (backtrace below)
>
> To address this issue, this commit introduces a new API for iterating
> through page extensions. The main iteration macro is for_each_page_ext()
> and it must be called with the RCU read lock taken. Here's a usage
> example:
>
> """
> struct page_ext_iter iter;
> struct page_ext *page_ext;
>
> ...
>
> rcu_read_lock();
> for_each_page_ext(page, 1 << order, page_ext, iter) {
> struct my_page_ext *obj = get_my_page_ext_obj(page_ext);
> ...
> }
> rcu_read_unlock();
> """
>
> The loop construct uses page_ext_iter_next(), which checks whether the
> iteration has crossed a section boundary. If it has, page_ext_iter_next()
> retrieves the next page_ext object from the new section.
>
> Thanks to David Hildenbrand for helping identify the root cause and
> providing suggestions on how to fix and optimize the solution (final
> implementation and bugs are all mine though).
>
> Lastly, here's the backtrace (without KASAN you may get random crashes
> instead):
>
> [ 76.052526] BUG: KASAN: slab-out-of-bounds in __update_page_owner_handle+0x238/0x298
> [ 76.060283] Write of size 4 at addr ffff07ff96240038 by task tee/3598
> [ 76.066714]
> [ 76.068203] CPU: 88 UID: 0 PID: 3598 Comm: tee Kdump: loaded Not tainted 6.13.0-rep1 #3
> [ 76.076202] Hardware name: WIWYNN Mt.Jade Server System B81.030Z1.0007/Mt.Jade Motherboard, BIOS 2.10.20220810 (SCP: 2.10.20220810) 2022/08/10
> [ 76.088972] Call trace:
> [ 76.091411] show_stack+0x20/0x38 (C)
> [ 76.095073] dump_stack_lvl+0x80/0xf8
> [ 76.098733] print_address_description.constprop.0+0x88/0x398
> [ 76.104476] print_report+0xa8/0x278
> [ 76.108041] kasan_report+0xa8/0xf8
> [ 76.111520] __asan_report_store4_noabort+0x20/0x30
> [ 76.116391] __update_page_owner_handle+0x238/0x298
> [ 76.121259] __set_page_owner+0xdc/0x140
> [ 76.125173] post_alloc_hook+0x190/0x1d8
> [ 76.129090] alloc_contig_range_noprof+0x54c/0x890
> [ 76.133874] alloc_contig_pages_noprof+0x35c/0x4a8
> [ 76.138656] alloc_gigantic_folio.isra.0+0x2c0/0x368
> [ 76.143616] only_alloc_fresh_hugetlb_folio.isra.0+0x24/0x150
> [ 76.149353] alloc_pool_huge_folio+0x11c/0x1f8
> [ 76.153787] set_max_huge_pages+0x364/0xca8
> [ 76.157961] __nr_hugepages_store_common+0xb0/0x1a0
> [ 76.162829] nr_hugepages_store+0x108/0x118
> [ 76.167003] kobj_attr_store+0x3c/0x70
> [ 76.170745] sysfs_kf_write+0xfc/0x188
> [ 76.174492] kernfs_fop_write_iter+0x274/0x3e0
> [ 76.178927] vfs_write+0x64c/0x8e0
> [ 76.182323] ksys_write+0xf8/0x1f0
> [ 76.185716] __arm64_sys_write+0x74/0xb0
> [ 76.189630] invoke_syscall.constprop.0+0xd8/0x1e0
> [ 76.194412] do_el0_svc+0x164/0x1e0
> [ 76.197891] el0_svc+0x40/0xe0
> [ 76.200939] el0t_64_sync_handler+0x144/0x168
> [ 76.205287] el0t_64_sync+0x1ac/0x1b0
>
> Fixes: cf54f310d0d3 ("mm/hugetlb: use __GFP_COMP for gigantic folios")
> Signed-off-by: Luiz Capitulino <luizcap@redhat.com>
> ---
Acked-by: David Hildenbrand <david@redhat.com>
--
Cheers,
David / dhildenb
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH v2 3/4] mm: page_table_check: use new iteration API
2025-02-24 21:59 ` [PATCH v2 3/4] mm: page_table_check: use new iteration API Luiz Capitulino
@ 2025-02-25 16:40 ` David Hildenbrand
0 siblings, 0 replies; 15+ messages in thread
From: David Hildenbrand @ 2025-02-25 16:40 UTC (permalink / raw)
To: Luiz Capitulino, linux-kernel, linux-mm, yuzhao, pasha.tatashin
Cc: akpm, hannes, muchun.song
On 24.02.25 22:59, Luiz Capitulino wrote:
> The page_ext_next() function assumes that page extension objects for a
> page order allocation always reside in the same memory section, which
> may not be true and could lead to crashes. Use the new page_ext
> iteration API instead.
>
> Fixes: cf54f310d0d3 ("mm/hugetlb: use __GFP_COMP for gigantic folios")
> Signed-off-by: Luiz Capitulino <luizcap@redhat.com>
> ---
Acked-by: David Hildenbrand <david@redhat.com>
--
Cheers,
David / dhildenb
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH v2 4/4] mm: page_owner: use new iteration API
2025-02-24 21:59 ` [PATCH v2 4/4] mm: page_owner: " Luiz Capitulino
@ 2025-02-25 16:44 ` David Hildenbrand
2025-02-25 22:30 ` Luiz Capitulino
0 siblings, 1 reply; 15+ messages in thread
From: David Hildenbrand @ 2025-02-25 16:44 UTC (permalink / raw)
To: Luiz Capitulino, linux-kernel, linux-mm, yuzhao, pasha.tatashin
Cc: akpm, hannes, muchun.song
On 24.02.25 22:59, Luiz Capitulino wrote:
> The page_ext_next() function assumes that page extension objects for a
> page order allocation always reside in the same memory section, which
> may not be true and could lead to crashes. Use the new page_ext
> iteration API instead.
>
> Fixes: cf54f310d0d3 ("mm/hugetlb: use __GFP_COMP for gigantic folios")
> Signed-off-by: Luiz Capitulino <luizcap@redhat.com>
> ---
> mm/page_owner.c | 61 +++++++++++++++++++++++--------------------------
> 1 file changed, 29 insertions(+), 32 deletions(-)
>
[...]
> void __reset_page_owner(struct page *page, unsigned short order)
> @@ -293,11 +297,11 @@ void __reset_page_owner(struct page *page, unsigned short order)
>
> page_owner = get_page_owner(page_ext);
> alloc_handle = page_owner->handle;
> + page_ext_put(page_ext);
>
> handle = save_stack(GFP_NOWAIT | __GFP_NOWARN);
> - __update_page_owner_free_handle(page_ext, handle, order, current->pid,
> + __update_page_owner_free_handle(page, handle, order, current->pid,
> current->tgid, free_ts_nsec);
> - page_ext_put(page_ext);
I assume moving that is fine ...
but I'll note that ...
> - for (i = 0; i < (1 << new_page_owner->order); i++) {
> + rcu_read_lock();
> + for_each_page_ext(&old->page, 1 << new_page_owner->order, page_ext, iter) {
> + old_page_owner = get_page_owner(page_ext);
> old_page_owner->handle = migrate_handle;
> - old_ext = page_ext_next(old_ext);
> - old_page_owner = get_page_owner(old_ext);
> }
> + rcu_read_unlock();
>
> page_ext_put(new_ext);
> page_ext_put(old_ext);
... here you are not moving it?
In general, LGTM, only the remaining page_ext_put() are a bit confusing.
--
Cheers,
David / dhildenb
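[Editorial aside: the section-crossing problem this patch fixes can be sketched as a toy userspace model. Everything below is invented for illustration — the names (`toy_lookup_page_ext`, `toy_iter_next`) and the tiny section size are not the kernel's code. The point is only that a plain pointer increment (the `page_ext_next()` pattern) is valid while iteration stays inside one section's array, whereas the new iterator redoes the lookup at each section boundary.]

```c
#include <assert.h>
#include <stdlib.h>

#define PFNS_PER_SECTION 4	/* tiny on purpose; real sections are much larger */
#define NR_SECTIONS      4

struct toy_page_ext { int data; };

/* Each "memory section" owns a separately allocated page_ext array,
 * so the arrays of neighboring sections are not contiguous. */
static struct toy_page_ext *section_ext[NR_SECTIONS];

static struct toy_page_ext *toy_lookup_page_ext(unsigned long pfn)
{
	unsigned long s = pfn / PFNS_PER_SECTION;

	if (!section_ext[s])	/* lazily allocate this section's array */
		section_ext[s] = calloc(PFNS_PER_SECTION,
					sizeof(struct toy_page_ext));
	return &section_ext[s][pfn % PFNS_PER_SECTION];
}

/* Advance like for_each_page_ext(): a cheap pointer step while inside
 * the section, a fresh lookup when the next pfn lands in another one. */
static struct toy_page_ext *toy_iter_next(unsigned long *pfn,
					  struct toy_page_ext *cur)
{
	(*pfn)++;
	if (*pfn % PFNS_PER_SECTION == 0)
		return toy_lookup_page_ext(*pfn);
	return cur + 1;		/* the old page_ext_next()-style step */
}

/* Walk nr_pages page_ext objects from pfn 0 and count how many the
 * iterator lands on correctly (checked against a fresh lookup). */
static int toy_walk(int nr_pages)
{
	unsigned long pfn = 0;
	struct toy_page_ext *ext = toy_lookup_page_ext(pfn);
	int visited = 0;
	int n;

	for (n = 0; n < nr_pages; n++) {
		if (ext != toy_lookup_page_ext(pfn))
			break;	/* a bare ext++ would trip here at a boundary */
		visited++;
		if (n < nr_pages - 1)
			ext = toy_iter_next(&pfn, ext);
	}
	return visited;
}
```

Drop the boundary check in `toy_iter_next()` and, because the per-section arrays are separate allocations, the walk would almost certainly stop (or read stray memory) at the first section boundary — the userspace analogue of the crash with 1G pages.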
* Re: [PATCH v2 1/4] mm: page_ext: make lookup_page_ext() public
2025-02-25 16:38 ` David Hildenbrand
@ 2025-02-25 22:29 ` Luiz Capitulino
2025-02-26 17:07 ` David Hildenbrand
0 siblings, 1 reply; 15+ messages in thread
From: Luiz Capitulino @ 2025-02-25 22:29 UTC (permalink / raw)
To: David Hildenbrand, linux-kernel, linux-mm, yuzhao, pasha.tatashin
Cc: akpm, hannes, muchun.song
On 2025-02-25 11:38, David Hildenbrand wrote:
> On 24.02.25 22:59, Luiz Capitulino wrote:
>> The next commit will use it.
>
> This should likely be squashed into the next patch, where people also have the context why this is required.
Would you mind if I only squash it if I'm required to send v3?
* Re: [PATCH v2 4/4] mm: page_owner: use new iteration API
2025-02-25 16:44 ` David Hildenbrand
@ 2025-02-25 22:30 ` Luiz Capitulino
2025-02-27 13:50 ` David Hildenbrand
0 siblings, 1 reply; 15+ messages in thread
From: Luiz Capitulino @ 2025-02-25 22:30 UTC (permalink / raw)
To: David Hildenbrand, linux-kernel, linux-mm, yuzhao, pasha.tatashin
Cc: akpm, hannes, muchun.song
On 2025-02-25 11:44, David Hildenbrand wrote:
> On 24.02.25 22:59, Luiz Capitulino wrote:
>> The page_ext_next() function assumes that page extension objects for a
>> page order allocation always reside in the same memory section, which
>> may not be true and could lead to crashes. Use the new page_ext
>> iteration API instead.
>>
>> Fixes: cf54f310d0d3 ("mm/hugetlb: use __GFP_COMP for gigantic folios")
>> Signed-off-by: Luiz Capitulino <luizcap@redhat.com>
>> ---
>> mm/page_owner.c | 61 +++++++++++++++++++++++--------------------------
>> 1 file changed, 29 insertions(+), 32 deletions(-)
>>
>
> [...]
>
>> void __reset_page_owner(struct page *page, unsigned short order)
>> @@ -293,11 +297,11 @@ void __reset_page_owner(struct page *page, unsigned short order)
>> page_owner = get_page_owner(page_ext);
>> alloc_handle = page_owner->handle;
>> + page_ext_put(page_ext);
>> handle = save_stack(GFP_NOWAIT | __GFP_NOWARN);
>> - __update_page_owner_free_handle(page_ext, handle, order, current->pid,
>> + __update_page_owner_free_handle(page, handle, order, current->pid,
>> current->tgid, free_ts_nsec);
>> - page_ext_put(page_ext);
>
> I assume moving that is fine ...
>
> but I'll note that ...
>
>> - for (i = 0; i < (1 << new_page_owner->order); i++) {
>> + rcu_read_lock();
>> + for_each_page_ext(&old->page, 1 << new_page_owner->order, page_ext, iter) {
>> + old_page_owner = get_page_owner(page_ext);
>> old_page_owner->handle = migrate_handle;
>> - old_ext = page_ext_next(old_ext);
>> - old_page_owner = get_page_owner(old_ext);
>> }
>> + rcu_read_unlock();
>> page_ext_put(new_ext);
>> page_ext_put(old_ext);
>
> ... here you are not moving it?
>
>
> In general, LGTM, only the remaining page_ext_put() are a bit confusing.
Which part did you find confusing: the fact that I'm not moving them up, or that
we still make use of them?
For this hunk, I decided to keep them where they are because 'new_page_owner',
which points into 'new_ext', is still used in the last loop. So
I decided to free them all at the end for simplicity.
The other point is that page_ext_get() and page_ext_put() are still valid
for getting specific page extensions outside of loops, and the usage in
__folio_copy_owner() (and a few other cases) seems valid to me.
* Re: [PATCH v2 1/4] mm: page_ext: make lookup_page_ext() public
2025-02-25 22:29 ` Luiz Capitulino
@ 2025-02-26 17:07 ` David Hildenbrand
0 siblings, 0 replies; 15+ messages in thread
From: David Hildenbrand @ 2025-02-26 17:07 UTC (permalink / raw)
To: Luiz Capitulino, linux-kernel, linux-mm, yuzhao, pasha.tatashin
Cc: akpm, hannes, muchun.song
On 25.02.25 23:29, Luiz Capitulino wrote:
> On 2025-02-25 11:38, David Hildenbrand wrote:
>> On 24.02.25 22:59, Luiz Capitulino wrote:
>>> The next commit will use it.
>>
>> This should likely be squashed into the next patch, where people also have the context why this is required.
>
> Would you mind if I only squash it if I'm required to send v3?
Sure, likely Andrew could also just squash them in his tree.
--
Cheers,
David / dhildenb
* Re: [PATCH v2 4/4] mm: page_owner: use new iteration API
2025-02-25 22:30 ` Luiz Capitulino
@ 2025-02-27 13:50 ` David Hildenbrand
2025-02-27 20:50 ` Luiz Capitulino
0 siblings, 1 reply; 15+ messages in thread
From: David Hildenbrand @ 2025-02-27 13:50 UTC (permalink / raw)
To: Luiz Capitulino, linux-kernel, linux-mm, yuzhao, pasha.tatashin
Cc: akpm, hannes, muchun.song
On 25.02.25 23:30, Luiz Capitulino wrote:
> On 2025-02-25 11:44, David Hildenbrand wrote:
>> On 24.02.25 22:59, Luiz Capitulino wrote:
>>> The page_ext_next() function assumes that page extension objects for a
>>> page order allocation always reside in the same memory section, which
>>> may not be true and could lead to crashes. Use the new page_ext
>>> iteration API instead.
>>>
>>> Fixes: cf54f310d0d3 ("mm/hugetlb: use __GFP_COMP for gigantic folios")
>>> Signed-off-by: Luiz Capitulino <luizcap@redhat.com>
>>> ---
>>> mm/page_owner.c | 61 +++++++++++++++++++++++--------------------------
>>> 1 file changed, 29 insertions(+), 32 deletions(-)
>>>
>>
>> [...]
>>
>>> void __reset_page_owner(struct page *page, unsigned short order)
>>> @@ -293,11 +297,11 @@ void __reset_page_owner(struct page *page, unsigned short order)
>>> page_owner = get_page_owner(page_ext);
>>> alloc_handle = page_owner->handle;
>>> + page_ext_put(page_ext);
>>> handle = save_stack(GFP_NOWAIT | __GFP_NOWARN);
>>> - __update_page_owner_free_handle(page_ext, handle, order, current->pid,
>>> + __update_page_owner_free_handle(page, handle, order, current->pid,
>>> current->tgid, free_ts_nsec);
>>> - page_ext_put(page_ext);
>>
>> I assume moving that is fine ...
>>
>> but I'll note that ...
>>
>>> - for (i = 0; i < (1 << new_page_owner->order); i++) {
>>> + rcu_read_lock();
>>> + for_each_page_ext(&old->page, 1 << new_page_owner->order, page_ext, iter) {
>>> + old_page_owner = get_page_owner(page_ext);
>>> old_page_owner->handle = migrate_handle;
>>> - old_ext = page_ext_next(old_ext);
>>> - old_page_owner = get_page_owner(old_ext);
>>> }
>>> + rcu_read_unlock();
>>> page_ext_put(new_ext);
>>> page_ext_put(old_ext);
>>
>> ... here you are not moving it?
>>
>>
>> In general, LGTM, only the remaining page_ext_put() are a bit confusing.
>
> Which part did you find confusing: the fact that I'm not moving them up, or that
> we still make use of them?
How we are deferring page_ext_put() when not actually working on these
values anymore. The page_owner itself should not go away here unless we
have a serious bug.
To be precise, can't we simply do the following on top?
diff --git a/mm/page_owner.c b/mm/page_owner.c
index c9d2c688eb981..12044340adf89 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -356,26 +356,24 @@ void __split_page_owner(struct page *page, int old_order, int new_order)
void __folio_copy_owner(struct folio *newfolio, struct folio *old)
{
- struct page_ext *old_ext;
- struct page_ext *new_ext;
struct page_ext *page_ext;
struct page_ext_iter iter;
struct page_owner *old_page_owner;
struct page_owner *new_page_owner;
depot_stack_handle_t migrate_handle;
- old_ext = page_ext_get(&old->page);
- if (unlikely(!old_ext))
+ page_ext = page_ext_get(&old->page);
+ if (unlikely(!page_ext))
return;
+ old_page_owner = get_page_owner(page_ext);
+ page_ext_put(page_ext);
- new_ext = page_ext_get(&newfolio->page);
- if (unlikely(!new_ext)) {
- page_ext_put(old_ext);
+ page_ext = page_ext_get(&newfolio->page);
+ if (unlikely(!page_ext))
return;
- }
+ new_page_owner = get_page_owner(page_ext);
+ page_ext_put(page_ext);
- old_page_owner = get_page_owner(old_ext);
- new_page_owner = get_page_owner(new_ext);
migrate_handle = new_page_owner->handle;
__update_page_owner_handle(&newfolio->page, old_page_owner->handle,
old_page_owner->order, old_page_owner->gfp_mask,
@@ -402,9 +400,6 @@ void __folio_copy_owner(struct folio *newfolio, struct folio *old)
old_page_owner->handle = migrate_handle;
}
rcu_read_unlock();
-
- page_ext_put(new_ext);
- page_ext_put(old_ext);
}
void pagetypeinfo_showmixedcount_print(struct seq_file *m,
--
Cheers,
David / dhildenb
* Re: [PATCH v2 4/4] mm: page_owner: use new iteration API
2025-02-27 13:50 ` David Hildenbrand
@ 2025-02-27 20:50 ` Luiz Capitulino
2025-02-28 9:09 ` David Hildenbrand
0 siblings, 1 reply; 15+ messages in thread
From: Luiz Capitulino @ 2025-02-27 20:50 UTC (permalink / raw)
To: David Hildenbrand, linux-kernel, linux-mm, yuzhao, pasha.tatashin
Cc: akpm, hannes, muchun.song
On 2025-02-27 08:50, David Hildenbrand wrote:
> On 25.02.25 23:30, Luiz Capitulino wrote:
>> On 2025-02-25 11:44, David Hildenbrand wrote:
>>> On 24.02.25 22:59, Luiz Capitulino wrote:
>>>> The page_ext_next() function assumes that page extension objects for a
>>>> page order allocation always reside in the same memory section, which
>>>> may not be true and could lead to crashes. Use the new page_ext
>>>> iteration API instead.
>>>>
>>>> Fixes: cf54f310d0d3 ("mm/hugetlb: use __GFP_COMP for gigantic folios")
>>>> Signed-off-by: Luiz Capitulino <luizcap@redhat.com>
>>>> ---
>>>> mm/page_owner.c | 61 +++++++++++++++++++++++--------------------------
>>>> 1 file changed, 29 insertions(+), 32 deletions(-)
>>>>
>>>
>>> [...]
>>>
>>>> void __reset_page_owner(struct page *page, unsigned short order)
>>>> @@ -293,11 +297,11 @@ void __reset_page_owner(struct page *page, unsigned short order)
>>>> page_owner = get_page_owner(page_ext);
>>>> alloc_handle = page_owner->handle;
>>>> + page_ext_put(page_ext);
>>>> handle = save_stack(GFP_NOWAIT | __GFP_NOWARN);
>>>> - __update_page_owner_free_handle(page_ext, handle, order, current->pid,
>>>> + __update_page_owner_free_handle(page, handle, order, current->pid,
>>>> current->tgid, free_ts_nsec);
>>>> - page_ext_put(page_ext);
>>>
>>> I assume moving that is fine ...
>>>
>>> but I'll note that ...
>>>
>>>> - for (i = 0; i < (1 << new_page_owner->order); i++) {
>>>> + rcu_read_lock();
>>>> + for_each_page_ext(&old->page, 1 << new_page_owner->order, page_ext, iter) {
>>>> + old_page_owner = get_page_owner(page_ext);
>>>> old_page_owner->handle = migrate_handle;
>>>> - old_ext = page_ext_next(old_ext);
>>>> - old_page_owner = get_page_owner(old_ext);
>>>> }
>>>> + rcu_read_unlock();
>>>> page_ext_put(new_ext);
>>>> page_ext_put(old_ext);
>>>
>>> ... here you are not moving it?
>>>
>>>
>>> In general, LGTM, only the remaining page_ext_put() are a bit confusing.
>>
>> Which part did you find confusing: the fact that I'm not moving them up, or that
>> we still make use of them?
>
> How we are deferring page_ext_put() when not actually working on these
> values anymore. The page_owner itself should not go away here unless we
> have a serious bug.
>
> To be precise, can't we simply do the following on top?
Yes, that looks good and I like how the new API allows for simpler code.
My only concern is that if the user is not familiar with the page_ext
internals, it might not be clear what page_ext_put() is actually
protecting, in which case it looks wrong that we're using a reference
returned by get_page_owner() after releasing the lock. If you think
that's not an issue then I can apply this change on top.
>
> diff --git a/mm/page_owner.c b/mm/page_owner.c
> index c9d2c688eb981..12044340adf89 100644
> --- a/mm/page_owner.c
> +++ b/mm/page_owner.c
> @@ -356,26 +356,24 @@ void __split_page_owner(struct page *page, int old_order, int new_order)
>
> void __folio_copy_owner(struct folio *newfolio, struct folio *old)
> {
> - struct page_ext *old_ext;
> - struct page_ext *new_ext;
> struct page_ext *page_ext;
> struct page_ext_iter iter;
> struct page_owner *old_page_owner;
> struct page_owner *new_page_owner;
> depot_stack_handle_t migrate_handle;
>
> - old_ext = page_ext_get(&old->page);
> - if (unlikely(!old_ext))
> + page_ext = page_ext_get(&old->page);
> + if (unlikely(!page_ext))
> return;
> + old_page_owner = get_page_owner(page_ext);
> + page_ext_put(page_ext);
>
> - new_ext = page_ext_get(&newfolio->page);
> - if (unlikely(!new_ext)) {
> - page_ext_put(old_ext);
> + page_ext = page_ext_get(&newfolio->page);
> + if (unlikely(!page_ext))
> return;
> - }
> + new_page_owner = get_page_owner(page_ext);
> + page_ext_put(page_ext);
>
> - old_page_owner = get_page_owner(old_ext);
> - new_page_owner = get_page_owner(new_ext);
> migrate_handle = new_page_owner->handle;
> __update_page_owner_handle(&newfolio->page, old_page_owner->handle,
> old_page_owner->order, old_page_owner->gfp_mask,
> @@ -402,9 +400,6 @@ void __folio_copy_owner(struct folio *newfolio, struct folio *old)
> old_page_owner->handle = migrate_handle;
> }
> rcu_read_unlock();
> -
> - page_ext_put(new_ext);
> - page_ext_put(old_ext);
> }
>
> void pagetypeinfo_showmixedcount_print(struct seq_file *m,
>
>
* Re: [PATCH v2 4/4] mm: page_owner: use new iteration API
2025-02-27 20:50 ` Luiz Capitulino
@ 2025-02-28 9:09 ` David Hildenbrand
0 siblings, 0 replies; 15+ messages in thread
From: David Hildenbrand @ 2025-02-28 9:09 UTC (permalink / raw)
To: Luiz Capitulino, linux-kernel, linux-mm, yuzhao, pasha.tatashin
Cc: akpm, hannes, muchun.song
>> To be precise, can't we simply do the following on top?
>
> Yes, that looks good and I like how the new API allows for simpler code.
>
> My only concern is that if the user is not familiar with the page_ext
> internals, it might not be clear what page_ext_put() is actually
> protecting in which case it looks wrong that we're using a reference
> returned by get_page_owner() after releasing the lock. If you think
> that that's not an issue then I can apply this change on top.
The page_ext stuff only protects the page_ext itself, not any data
stored in there. So I assume this should be just fine.
(most of these cases shouldn't need any protection, because the page_ext
should not actually ever vanish here for memory that we are holding in
our hands; but we decided to just add it everywhere for consistency)
--
Cheers,
David / dhildenb
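[Editorial aside: the protection-scope point David makes above can be shown as a toy userspace model. All names here are invented for illustration — this is not the kernel API, only the shape of the argument: the get/put pair pins only the lookup, while the data reached through the owner pointer stays backed by live memory, so reading it after the put is fine.]

```c
#include <assert.h>

struct toy_page_owner { int handle; };
struct toy_page_ext { struct toy_page_owner owner; };

/* Backing storage for one page's extension; in the kernel this memory
 * stays alive as long as the page itself does. */
static struct toy_page_ext storage = { .owner = { .handle = 42 } };

/* Models whatever page_ext_get() pins (the RCU read side in the
 * kernel): it protects the *lookup* of the page_ext, nothing more. */
static int lookup_pinned;

static struct toy_page_ext *toy_page_ext_get(void)
{
	lookup_pinned = 1;
	return &storage;
}

static void toy_page_ext_put(void)
{
	lookup_pinned = 0;
}

static struct toy_page_owner *toy_get_page_owner(struct toy_page_ext *ext)
{
	/* Just a pointer into the same allocation -- no extra reference. */
	return &ext->owner;
}

/* The pattern from the diff above: look up the page_ext, grab the
 * page_owner pointer, drop the pin early, keep using the data. */
static int read_handle_after_put(void)
{
	struct toy_page_ext *ext = toy_page_ext_get();
	struct toy_page_owner *owner = toy_get_page_owner(ext);

	toy_page_ext_put();
	return owner->handle;	/* still valid: the storage didn't vanish */
}
```

The early put is safe precisely because nothing reached through the owner pointer depends on the pin — which is the consistency argument made in the message above.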
end of thread [~2025-02-28 9:09 UTC]
Thread overview: 15+ messages
2025-02-24 21:59 [PATCH v2 0/4] mm: page_ext: Introduce new iteration API Luiz Capitulino
2025-02-24 21:59 ` [PATCH v2 1/4] mm: page_ext: make lookup_page_ext() public Luiz Capitulino
2025-02-25 16:38 ` David Hildenbrand
2025-02-25 22:29 ` Luiz Capitulino
2025-02-26 17:07 ` David Hildenbrand
2025-02-24 21:59 ` [PATCH v2 2/4] mm: page_ext: add an iteration API for page extensions Luiz Capitulino
2025-02-25 16:39 ` David Hildenbrand
2025-02-24 21:59 ` [PATCH v2 3/4] mm: page_table_check: use new iteration API Luiz Capitulino
2025-02-25 16:40 ` David Hildenbrand
2025-02-24 21:59 ` [PATCH v2 4/4] mm: page_owner: " Luiz Capitulino
2025-02-25 16:44 ` David Hildenbrand
2025-02-25 22:30 ` Luiz Capitulino
2025-02-27 13:50 ` David Hildenbrand
2025-02-27 20:50 ` Luiz Capitulino
2025-02-28 9:09 ` David Hildenbrand