* [PATCH] mm/page_alloc: add tracepoint for PCP refills
@ 2026-04-25 9:13 Bunyod Suvonov
From: Bunyod Suvonov @ 2026-04-25 9:13 UTC (permalink / raw)
To: akpm, vbabka, linux-mm
Cc: rostedt, mhiramat, mathieu.desnoyers, linux-trace-kernel,
linux-kernel, surenb, mhocko, jackmanb, hannes, ziy,
Bunyod Suvonov
The page allocator already has mm_page_pcpu_drain to trace pages
drained from the per-cpu page lists back to the buddy allocator. There
is no matching tracepoint for the opposite direction, where
rmqueue_bulk() refills a PCP list from the buddy allocator.
mm_page_alloc_zone_locked is not a good substitute for this. It is
emitted from __rmqueue_smallest(), which is used both by rmqueue_bulk()
and by the direct buddy allocation path. Its percpu_refill field is
derived from the allocation order and migratetype, so it does not
reliably identify whether the allocation came from a PCP refill.
Add mm_page_pcpu_refill and emit it from rmqueue_bulk() for each page
added to the PCP list. The new tracepoint uses the same page, order and
migratetype fields as mm_page_pcpu_drain, making refill and drain
activity directly comparable.
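As a rough illustration of that comparability (not part of the patch itself), refill and drain counts can be tallied from saved trace output. The sketch below is userspace post-processing, assumes trace_pipe-style text, and invents its sample lines; only the event names and the migratetype= field come from the TP_printk format above:

```python
import re
from collections import Counter

# Matches the event name and the migratetype field emitted by the shared
# TP_printk format of mm_page_pcpu_refill / mm_page_pcpu_drain.
EVENT_RE = re.compile(r"(mm_page_pcpu_refill|mm_page_pcpu_drain):.*\bmigratetype=(-?\d+)")

def tally(trace_lines):
    """Count events per (event name, migratetype) pair."""
    counts = Counter()
    for line in trace_lines:
        m = EVENT_RE.search(line)
        if m:
            counts[(m.group(1), int(m.group(2)))] += 1
    return counts

# Invented sample lines in trace_pipe style, for illustration only.
sample = [
    "kswapd0-85 [000] 100.1: mm_page_pcpu_refill: page=00000006 pfn=0x1a0 order=0 migratetype=1",
    "kswapd0-85 [000] 100.2: mm_page_pcpu_refill: page=00000007 pfn=0x1a1 order=0 migratetype=1",
    "cc1-212 [001] 100.9: mm_page_pcpu_drain: page=00000006 pfn=0x1a0 order=0 migratetype=1",
]
print(tally(sample))
```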
Signed-off-by: Bunyod Suvonov <b.suvonov@sjtu.edu.cn>
---
include/trace/events/kmem.h | 23 +++++++++++++++++++++++
mm/page_alloc.c | 1 +
2 files changed, 24 insertions(+)
diff --git a/include/trace/events/kmem.h b/include/trace/events/kmem.h
index cd7920c81f85..16985604fc51 100644
--- a/include/trace/events/kmem.h
+++ b/include/trace/events/kmem.h
@@ -243,6 +243,29 @@ DEFINE_EVENT(mm_page, mm_page_alloc_zone_locked,
TP_ARGS(page, order, migratetype, percpu_refill)
);
+TRACE_EVENT(mm_page_pcpu_refill,
+
+ TP_PROTO(struct page *page, unsigned int order, int migratetype),
+
+ TP_ARGS(page, order, migratetype),
+
+ TP_STRUCT__entry(
+ __field( unsigned long, pfn )
+ __field( unsigned int, order )
+ __field( int, migratetype )
+ ),
+
+ TP_fast_assign(
+ __entry->pfn = page ? page_to_pfn(page) : -1UL;
+ __entry->order = order;
+ __entry->migratetype = migratetype;
+ ),
+
+ TP_printk("page=%p pfn=0x%lx order=%d migratetype=%d",
+ __entry->pfn != -1UL ? pfn_to_page(__entry->pfn) : NULL,
+ __entry->pfn, __entry->order, __entry->migratetype)
+);
+
TRACE_EVENT(mm_page_pcpu_drain,
TP_PROTO(struct page *page, unsigned int order, int migratetype),
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 65e205111553..a60b73ed39a4 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2544,6 +2544,7 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
* pages are ordered properly.
*/
list_add_tail(&page->pcp_list, list);
+ trace_mm_page_pcpu_refill(page, order, migratetype);
}
spin_unlock_irqrestore(&zone->lock, flags);
--
2.53.0
* Re: [PATCH] mm/page_alloc: add tracepoint for PCP refills
@ 2026-04-26 22:06 Vishal Moola
From: Vishal Moola @ 2026-04-26 22:06 UTC (permalink / raw)
To: Bunyod Suvonov
Cc: akpm, vbabka, linux-mm, rostedt, mhiramat, mathieu.desnoyers,
linux-trace-kernel, linux-kernel, surenb, mhocko, jackmanb,
hannes, ziy
On Sat, Apr 25, 2026 at 05:13:35PM +0800, Bunyod Suvonov wrote:
> The page allocator already has mm_page_pcpu_drain to trace pages
> drained from the per-cpu page lists back to the buddy allocator. There
> is no matching tracepoint for the opposite direction, where
> rmqueue_bulk() refills a PCP list from the buddy allocator.
This sounds like a reasonable idea. Does this tracepoint show us
something that a workload might care about? Not opposed, just curious.
For future versions, would you mind including documentation about it
in Documentation/trace/events-kmem.rst?
> mm_page_alloc_zone_locked is not a good substitute for this. It is
> emitted from __rmqueue_smallest(), which is used both by rmqueue_bulk()
> and by the direct buddy allocation path. Its percpu_refill field is
> derived from the allocation order and migratetype, so it does not
> reliably identify whether the allocation came from a PCP refill.
>
> Add mm_page_pcpu_refill and emit it from rmqueue_bulk() for each page
> added to the PCP list. The new tracepoint uses the same page, order and
> migratetype fields as mm_page_pcpu_drain, making refill and drain
> activity directly comparable.
>
> Signed-off-by: Bunyod Suvonov <b.suvonov@sjtu.edu.cn>
> ---
> include/trace/events/kmem.h | 23 +++++++++++++++++++++++
> mm/page_alloc.c | 1 +
> 2 files changed, 24 insertions(+)
>
> diff --git a/include/trace/events/kmem.h b/include/trace/events/kmem.h
> index cd7920c81f85..16985604fc51 100644
> --- a/include/trace/events/kmem.h
> +++ b/include/trace/events/kmem.h
> @@ -243,6 +243,29 @@ DEFINE_EVENT(mm_page, mm_page_alloc_zone_locked,
> TP_ARGS(page, order, migratetype, percpu_refill)
> );
>
> +TRACE_EVENT(mm_page_pcpu_refill,
> +
> + TP_PROTO(struct page *page, unsigned int order, int migratetype),
> +
> + TP_ARGS(page, order, migratetype),
> +
> + TP_STRUCT__entry(
> + __field( unsigned long, pfn )
> + __field( unsigned int, order )
> + __field( int, migratetype )
> + ),
> +
> + TP_fast_assign(
> + __entry->pfn = page ? page_to_pfn(page) : -1UL;
> + __entry->order = order;
> + __entry->migratetype = migratetype;
> + ),
> +
> + TP_printk("page=%p pfn=0x%lx order=%d migratetype=%d",
> + __entry->pfn != -1UL ? pfn_to_page(__entry->pfn) : NULL,
> + __entry->pfn, __entry->order, __entry->migratetype)
> +);
> +
> TRACE_EVENT(mm_page_pcpu_drain,
>
> TP_PROTO(struct page *page, unsigned int order, int migratetype),
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 65e205111553..a60b73ed39a4 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2544,6 +2544,7 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
> * pages are ordered properly.
> */
> list_add_tail(&page->pcp_list, list);
> + trace_mm_page_pcpu_refill(page, order, migratetype);
If you're trying to trace all pages as they come onto the pcp lists,
should you also account for the free_frozen_page_commit() path?
> }
> spin_unlock_irqrestore(&zone->lock, flags);
>
> --
> 2.53.0
>