linux-mm.kvack.org archive mirror
* [RFC PATCH v1 0/7] A subsystem for hot page detection and promotion
@ 2025-08-14 13:48 Bharata B Rao
  2025-08-14 13:48 ` [RFC PATCH v1 1/7] mm: migrate: Allow misplaced migration without VMA too Bharata B Rao
                   ` (7 more replies)
  0 siblings, 8 replies; 16+ messages in thread
From: Bharata B Rao @ 2025-08-14 13:48 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: Jonathan.Cameron, dave.hansen, gourry, hannes, mgorman, mingo,
	peterz, raghavendra.kt, riel, rientjes, sj, weixugc, willy,
	ying.huang, ziy, dave, nifan.cxl, xuezhengchu, yiannis, akpm,
	david, byungchul, kinseyho, joshua.hahnjy, yuanchu, balbirs,
	Bharata B Rao

Hi,

This patchset adds a dedicated sub-system for maintaining hot page
information from the lower tiers and promoting the hot pages to the
top tiers. It exposes an API that other sub-systems which detect
accesses can use to report those accesses for further processing.
Further processing includes system-wide accumulation of memory access
info at PFN granularity, classification of PFNs as hot and promotion
of hot pages using per-node kernel threads. This is a continuation of
the earlier kpromoted work [1] that I posted a while back.

Kernel thread based async batch migration [2] was an off-shoot of
this effort that attempted to batch the migrations from NUMA
balancing by creating a separate kernel thread for migration.
Per-page hotness information was stored as part of extended page
flags. The kernel thread then scanned the entire PFN space to pick
the PFNs that are classified as hot.

The challenges observed with the previous approaches were:

1. Too many PFNs need to be scanned to identify the hot PFNs in
   approach [2].
2. Hot page records stored in hash lists become unwieldy for
   extracting the required hot pages in approach [1].
3. Dynamic allocation vs static availability of space to store
   per-page hotness information.

This series tries to address challenges 1 and 2 by maintaining the
hot page records in hash lists for quick lookup, and by maintaining a
separate per-target-node max heap that stores the ready-to-migrate
hot page records. The records in the heap are priority-ordered based
on the "hotness" of the page.

The API for reporting page accesses remains unchanged from [1].
When a page access is recorded, the hotness data of the page is
updated and, if it crosses a threshold, the page gets tracked in the
heap as well. The heaps are per-target-node and the corresponding
migrate threads periodically extract the top records from them and
do batch migration.
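
To make the flow concrete, below is a simplified sketch of what
happens per reported access. The helper names (lookup_or_alloc(),
window_expired(), heap_add_or_adjust()) are illustrative only; the
real code in patch 3 also takes the hash-bucket and heap locks that
are omitted here:

	int pghot_record_access(u64 pfn, int nid, int src, unsigned long now)
	{
		/* Hash list lookup keyed by PFN; allocates on first access */
		struct pghot_info *phi = lookup_or_alloc(pfn);

		if (!phi)
			return 0;

		if (window_expired(phi, now))
			phi->frequency = 1;	/* start a new window */
		else
			phi->frequency++;
		phi->last_update = now;

		if (phi->frequency >= 2)	/* threshold used in this series */
			heap_add_or_adjust(phi);	/* per-target-node max heap */
		return 0;
	}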

In the current series, two page temperature sources are included
as examples.

1. IBS based memory access profiler.
2. PTE-A bit based access profiler for MGLRU. (from Kinsey Ho)

TODOs:

- Currently only access frequency is used to calculate the hotness.
  We could have a scalar hotness indicator based on both frequency
  of access and time of access.
- There could be millions of allocations and freeings of records,
  some from atomic contexts too. Need to understand how problematic
  this could be. Approach [2] mitigated this by having pre-allocated
  hotness records for each page as part of extended page flags.
- The amount of data needed for tracking hotness is also a concern.
  There is scope for packing the three parameters (nid, time, frequency)
  more compactly, which I will attempt in the next iterations (a rough
  packing sketch follows this list).
- Migration rate-limiting needs to be added.
- Very lightly tested at the moment as the current focus is to get
  the hot data arrangement right.
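
A rough sketch of the packing hinted at above. The field widths
(3 bits of frequency, 10 bits of nid, 19 bits of a quantized
timestamp) follow the TODO comment in patch 3 and are assumptions,
not a final layout:

	#define PGHOT_FREQ_BITS	3
	#define PGHOT_NID_BITS	10
	#define PGHOT_TIME_BITS	19

	static inline u32 pghot_pack(u32 freq, u32 nid, u32 time)
	{
		return (freq & ((1U << PGHOT_FREQ_BITS) - 1)) |
		       ((nid & ((1U << PGHOT_NID_BITS) - 1)) << PGHOT_FREQ_BITS) |
		       ((time & ((1U << PGHOT_TIME_BITS) - 1)) <<
				(PGHOT_FREQ_BITS + PGHOT_NID_BITS));
	}

	static inline u32 pghot_unpack_freq(u32 packed)
	{
		return packed & ((1U << PGHOT_FREQ_BITS) - 1);
	}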

Regards,
Bharata.

[1] Kpromoted - https://lore.kernel.org/linux-mm/20250306054532.221138-1-bharata@amd.com/
[2] Kmigrated - https://lore.kernel.org/linux-mm/20250616133931.206626-1-bharata@amd.com/

Bharata B Rao (4):
  mm: migrate: Allow misplaced migration without VMA too
  mm: Hot page tracking and promotion
  x86: ibs: In-kernel IBS driver for memory access profiling
  x86: ibs: Enable IBS profiling for memory accesses

Gregory Price (1):
  migrate: implement migrate_misplaced_folios_batch

Kinsey Ho (2):
  mm: mglru: generalize page table walk
  mm: klruscand: use mglru scanning for page promotion

 arch/x86/events/amd/ibs.c           |  11 +
 arch/x86/include/asm/entry-common.h |   3 +
 arch/x86/include/asm/hardirq.h      |   2 +
 arch/x86/include/asm/ibs.h          |   9 +
 arch/x86/include/asm/msr-index.h    |  16 +
 arch/x86/mm/Makefile                |   3 +-
 arch/x86/mm/ibs.c                   | 343 +++++++++++++++++++
 include/linux/migrate.h             |   6 +
 include/linux/mmzone.h              |  16 +
 include/linux/pghot.h               |  87 +++++
 include/linux/vm_event_item.h       |  26 ++
 mm/Kconfig                          |  19 ++
 mm/Makefile                         |   2 +
 mm/internal.h                       |   4 +
 mm/klruscand.c                      | 118 +++++++
 mm/migrate.c                        |  36 +-
 mm/mm_init.c                        |  10 +
 mm/pghot.c                          | 501 ++++++++++++++++++++++++++++
 mm/vmscan.c                         | 176 +++++++---
 mm/vmstat.c                         |  26 ++
 20 files changed, 1365 insertions(+), 49 deletions(-)
 create mode 100644 arch/x86/include/asm/ibs.h
 create mode 100644 arch/x86/mm/ibs.c
 create mode 100644 include/linux/pghot.h
 create mode 100644 mm/klruscand.c
 create mode 100644 mm/pghot.c

-- 
2.34.1



^ permalink raw reply	[flat|nested] 16+ messages in thread

* [RFC PATCH v1 1/7] mm: migrate: Allow misplaced migration without VMA too
  2025-08-14 13:48 [RFC PATCH v1 0/7] A subsystem for hot page detection and promotion Bharata B Rao
@ 2025-08-14 13:48 ` Bharata B Rao
  2025-08-15  1:29   ` Huang, Ying
  2025-08-14 13:48 ` [RFC PATCH v1 2/7] migrate: implement migrate_misplaced_folios_batch Bharata B Rao
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 16+ messages in thread
From: Bharata B Rao @ 2025-08-14 13:48 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: Jonathan.Cameron, dave.hansen, gourry, hannes, mgorman, mingo,
	peterz, raghavendra.kt, riel, rientjes, sj, weixugc, willy,
	ying.huang, ziy, dave, nifan.cxl, xuezhengchu, yiannis, akpm,
	david, byungchul, kinseyho, joshua.hahnjy, yuanchu, balbirs,
	Bharata B Rao

We want isolation of misplaced folios to work in contexts where a
VMA isn't available. In order to prepare for that, allow
migrate_misplaced_folio_prepare() to be called with a NULL VMA.

Signed-off-by: Bharata B Rao <bharata@amd.com>
---
 mm/migrate.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 425401b2d4e1..7e356c0b1b5a 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2619,7 +2619,8 @@ static struct folio *alloc_misplaced_dst_folio(struct folio *src,
 
 /*
  * Prepare for calling migrate_misplaced_folio() by isolating the folio if
- * permitted. Must be called with the PTL still held.
+ * permitted. Must be called with the PTL still held if called with a non-NULL
+ * vma.
  */
 int migrate_misplaced_folio_prepare(struct folio *folio,
 		struct vm_area_struct *vma, int node)
@@ -2636,7 +2637,7 @@ int migrate_misplaced_folio_prepare(struct folio *folio,
 		 * See folio_maybe_mapped_shared() on possible imprecision
 		 * when we cannot easily detect if a folio is shared.
 		 */
-		if ((vma->vm_flags & VM_EXEC) && folio_maybe_mapped_shared(folio))
+		if (vma && (vma->vm_flags & VM_EXEC) && folio_maybe_mapped_shared(folio))
 			return -EACCES;
 
 		/*
-- 
2.34.1



^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [RFC PATCH v1 2/7] migrate: implement migrate_misplaced_folios_batch
  2025-08-14 13:48 [RFC PATCH v1 0/7] A subsystem for hot page detection and promotion Bharata B Rao
  2025-08-14 13:48 ` [RFC PATCH v1 1/7] mm: migrate: Allow misplaced migration without VMA too Bharata B Rao
@ 2025-08-14 13:48 ` Bharata B Rao
  2025-08-15  1:39   ` Huang, Ying
  2025-08-14 13:48 ` [RFC PATCH v1 3/7] mm: Hot page tracking and promotion Bharata B Rao
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 16+ messages in thread
From: Bharata B Rao @ 2025-08-14 13:48 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: Jonathan.Cameron, dave.hansen, gourry, hannes, mgorman, mingo,
	peterz, raghavendra.kt, riel, rientjes, sj, weixugc, willy,
	ying.huang, ziy, dave, nifan.cxl, xuezhengchu, yiannis, akpm,
	david, byungchul, kinseyho, joshua.hahnjy, yuanchu, balbirs,
	Bharata B Rao

From: Gregory Price <gourry@gourry.net>

A common operation in tiering is to migrate multiple pages at once.
The migrate_misplaced_folio function requires one call for each
individual folio.  Expose a batch-variant of the same call for use
when doing batch migrations.
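
For reference, the expected calling pattern (mirroring the comment
added below and the way kpromoted uses it in a later patch) is
roughly the following; nid and the folio iteration are placeholders:

	LIST_HEAD(migrate_list);

	/* For each candidate folio, isolate it (takes a reference) */
	if (!migrate_misplaced_folio_prepare(folio, NULL, nid))
		list_add(&folio->lru, &migrate_list);

	/* Then migrate the whole batch to the target node in one call */
	if (!list_empty(&migrate_list))
		migrate_misplaced_folios_batch(&migrate_list, nid);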

Signed-off-by: Gregory Price <gourry@gourry.net>
Signed-off-by: Bharata B Rao <bharata@amd.com>
---
 include/linux/migrate.h |  6 ++++++
 mm/migrate.c            | 31 +++++++++++++++++++++++++++++++
 2 files changed, 37 insertions(+)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index acadd41e0b5c..0593f5869be8 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -107,6 +107,7 @@ static inline int migrate_huge_page_move_mapping(struct address_space *mapping,
 int migrate_misplaced_folio_prepare(struct folio *folio,
 		struct vm_area_struct *vma, int node);
 int migrate_misplaced_folio(struct folio *folio, int node);
+int migrate_misplaced_folios_batch(struct list_head *foliolist, int node);
 #else
 static inline int migrate_misplaced_folio_prepare(struct folio *folio,
 		struct vm_area_struct *vma, int node)
@@ -117,6 +118,11 @@ static inline int migrate_misplaced_folio(struct folio *folio, int node)
 {
 	return -EAGAIN; /* can't migrate now */
 }
+static inline int migrate_misplaced_folios_batch(struct list_head *foliolist,
+						 int node)
+{
+	return -EAGAIN; /* can't migrate now */
+}
 #endif /* CONFIG_NUMA_BALANCING */
 
 #ifdef CONFIG_MIGRATION
diff --git a/mm/migrate.c b/mm/migrate.c
index 7e356c0b1b5a..1268a95eda0e 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2714,5 +2714,36 @@ int migrate_misplaced_folio(struct folio *folio, int node)
 	BUG_ON(!list_empty(&migratepages));
 	return nr_remaining ? -EAGAIN : 0;
 }
+
+/*
+ * Batch variant of migrate_misplaced_folio. Attempts to migrate
+ * a folio list to the specified destination.
+ *
+ * Caller is expected to have isolated the folios by calling
+ * migrate_misplaced_folio_prepare(), which will result in an
+ * elevated reference count on the folio.
+ *
+ * This function will un-isolate the folios, drop the reference taken
+ * on them, and remove them from the list before returning.
+ */
+int migrate_misplaced_folios_batch(struct list_head *folio_list, int node)
+{
+	pg_data_t *pgdat = NODE_DATA(node);
+	unsigned int nr_succeeded;
+	int nr_remaining;
+
+	nr_remaining = migrate_pages(folio_list, alloc_misplaced_dst_folio,
+				     NULL, node, MIGRATE_ASYNC,
+				     MR_NUMA_MISPLACED, &nr_succeeded);
+	if (nr_remaining)
+		putback_movable_pages(folio_list);
+
+	if (nr_succeeded) {
+		count_vm_numa_events(NUMA_PAGE_MIGRATE, nr_succeeded);
+		mod_node_page_state(pgdat, PGPROMOTE_SUCCESS, nr_succeeded);
+	}
+	BUG_ON(!list_empty(folio_list));
+	return nr_remaining ? -EAGAIN : 0;
+}
 #endif /* CONFIG_NUMA_BALANCING */
 #endif /* CONFIG_NUMA */
-- 
2.34.1



^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [RFC PATCH v1 3/7] mm: Hot page tracking and promotion
  2025-08-14 13:48 [RFC PATCH v1 0/7] A subsystem for hot page detection and promotion Bharata B Rao
  2025-08-14 13:48 ` [RFC PATCH v1 1/7] mm: migrate: Allow misplaced migration without VMA too Bharata B Rao
  2025-08-14 13:48 ` [RFC PATCH v1 2/7] migrate: implement migrate_misplaced_folios_batch Bharata B Rao
@ 2025-08-14 13:48 ` Bharata B Rao
  2025-08-15  1:56   ` Huang, Ying
       [not found]   ` <CGME20250821111729epcas5p4b57cdfb4a339e8ac7fc1ea803d6baa34@epcas5p4.samsung.com>
  2025-08-14 13:48 ` [RFC PATCH v1 4/7] x86: ibs: In-kernel IBS driver for memory access profiling Bharata B Rao
                   ` (4 subsequent siblings)
  7 siblings, 2 replies; 16+ messages in thread
From: Bharata B Rao @ 2025-08-14 13:48 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: Jonathan.Cameron, dave.hansen, gourry, hannes, mgorman, mingo,
	peterz, raghavendra.kt, riel, rientjes, sj, weixugc, willy,
	ying.huang, ziy, dave, nifan.cxl, xuezhengchu, yiannis, akpm,
	david, byungchul, kinseyho, joshua.hahnjy, yuanchu, balbirs,
	Bharata B Rao

This introduces a sub-system for collecting memory access
information from different sources. It maintains the hotness
information based on the access history and time of access.

Additionally, it provides per-toptier-node kernel threads
(named kpromoted) that periodically promote the pages that
are eligible for promotion.

Sub-systems that generate hot page access info can report that
using this API:

int pghot_record_access(u64 pfn, int nid, int src,
			unsigned long time)

@pfn: The PFN of the memory accessed
@nid: The accessing NUMA node ID
@src: The temperature source (sub-system) that generated the
      access info
@time: The access time in jiffies

Some temperature sources may not provide the nid from which
the page was accessed. This is true for sources that use
page table scanning of the PTE Accessed bit. For such sources,
the default toptier node to which such pages should be promoted
is hard coded.

Also, the access time provided by some sources may at best be
considered approximate. This is especially true for hot pages
detected by PTE A bit scanning.
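
For example, the two sources added later in this series end up
reporting accesses roughly as below (the klruscand call is a
simplified rendering of patch 7):

	/* IBS driver (patch 4): accessing node and time are known */
	pghot_record_access(pfn, numa_node_id(), PGHOT_HW_HINTS, jiffies);

	/* PTE-A-bit scanning (patch 7): accessing node is unknown */
	pghot_record_access(pfn, NUMA_NO_NODE, PGHOT_PGTABLE_SCAN, jiffies);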

The hot PFN records are stored in hash lists hashed by PFN value.
The PFN records that are categorized as hot enough to be promoted
are maintained in a per-toptier-node max heap from which
kpromoted extracts and promotes them.

Each record stores the following info:

struct pghot_info {
	unsigned long pfn;

	unsigned long last_update; /* Most recent access time */
	int frequency; /* Number of accesses within current window */
	int nid; /* Most recent access from this node */

	struct hlist_node hnode;
	size_t heap_idx; /* Position in max heap for quick retrieval */
};

The way in which a page is categorized as hot enough to be
promoted is pretty primitive now.

Signed-off-by: Bharata B Rao <bharata@amd.com>
---
 include/linux/mmzone.h        |  11 +
 include/linux/pghot.h         |  87 ++++++
 include/linux/vm_event_item.h |   9 +
 mm/Kconfig                    |  11 +
 mm/Makefile                   |   1 +
 mm/mm_init.c                  |  10 +
 mm/pghot.c                    | 501 ++++++++++++++++++++++++++++++++++
 mm/vmstat.c                   |   9 +
 8 files changed, 639 insertions(+)
 create mode 100644 include/linux/pghot.h
 create mode 100644 mm/pghot.c

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 0c5da9141983..f7094babed10 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1349,6 +1349,10 @@ struct memory_failure_stats {
 };
 #endif
 
+#ifdef CONFIG_PGHOT
+#include <linux/pghot.h>
+#endif
+
 /*
  * On NUMA machines, each NUMA node would have a pg_data_t to describe
  * it's memory layout. On UMA machines there is a single pglist_data which
@@ -1497,6 +1501,13 @@ typedef struct pglist_data {
 #ifdef CONFIG_MEMORY_FAILURE
 	struct memory_failure_stats mf_stats;
 #endif
+#ifdef CONFIG_PGHOT
+	struct task_struct *kpromoted;
+	wait_queue_head_t kpromoted_wait;
+	struct pghot_info **phi_buf;
+	struct max_heap heap;
+	spinlock_t heap_lock;
+#endif
 } pg_data_t;
 
 #define node_present_pages(nid)	(NODE_DATA(nid)->node_present_pages)
diff --git a/include/linux/pghot.h b/include/linux/pghot.h
new file mode 100644
index 000000000000..6b8496944e7f
--- /dev/null
+++ b/include/linux/pghot.h
@@ -0,0 +1,87 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _LINUX_KPROMOTED_H
+#define _LINUX_KPROMOTED_H
+
+#include <linux/types.h>
+#include <linux/init.h>
+#include <linux/workqueue_types.h>
+
+/* Page hotness temperature sources */
+enum pghot_src {
+	PGHOT_HW_HINTS,
+	PGHOT_PGTABLE_SCAN,
+};
+
+#ifdef CONFIG_PGHOT
+
+#define KPROMOTED_FREQ_WINDOW	(5 * MSEC_PER_SEC)
+
+/* 2 accesses within a window will make the page a promotion candidate */
+#define KPROMOTED_FREQ_THRESHOLD	2
+
+/*
+ * The following two defines control the number of hash lists
+ * that are maintained for tracking PFN accesses.
+ */
+#define PGHOT_HASH_PCT		50	/* % of lower tier memory pages to track */
+#define PGHOT_HASH_ENTRIES	1024	/* Number of entries per list, ideal case */
+
+/*
+ * Percentage of hash entries that can reside in heap as migrate-ready
+ * candidates
+ */
+#define PGHOT_HEAP_PCT		25
+
+#define KPROMOTED_MIGRATE_BATCH	1024
+
+/*
+ * If target NID isn't available, kpromoted promotes to node 0
+ * by default.
+ *
+ * TODO: Need checks to validate that default node is indeed
+ * present and is a toptier node.
+ */
+#define KPROMOTED_DEFAULT_NODE	0
+
+struct pghot_info {
+	unsigned long pfn;
+
+	/*
+	 * The following are the three fundamental parameters
+	 * required to track the hotness of page/PFN.
+	 *
+	 * TODO:
+	 * Check if these three can fit into a u32.
+	 * With 3 bits for frequency (8 most recent accesses),
+	 * 10 bits for nid (1024 nodes), the remaining 19 bits
+	 * are available for timestamp.
+	 */
+	unsigned long last_update; /* Most recent access time */
+	int frequency; /* Number of accesses within current window */
+	int nid; /* Most recent access from this node */
+
+	struct hlist_node hnode;
+	size_t heap_idx; /* Position in max heap for quick retrieval */
+};
+
+struct max_heap {
+	size_t nr;
+	size_t size;
+	struct pghot_info **data;
+	DECLARE_FLEX_ARRAY(struct pghot_info *, preallocated);
+};
+
+/*
+ * The wakeup interval of kpromoted threads
+ */
+#define KPROMOTE_DELAY	20	/* 20ms */
+
+int pghot_record_access(u64 pfn, int nid, int src, unsigned long now);
+#else
+static inline int pghot_record_access(u64 pfn, int nid, int src,
+				      unsigned long now)
+{
+	return 0;
+}
+#endif /* CONFIG_PGHOT */
+#endif /* _LINUX_KPROMOTED_H */
diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
index 9e15a088ba38..9085e5c2d4aa 100644
--- a/include/linux/vm_event_item.h
+++ b/include/linux/vm_event_item.h
@@ -186,6 +186,15 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
 		KSTACK_REST,
 #endif
 #endif /* CONFIG_DEBUG_STACK_USAGE */
+		PGHOT_RECORDED_ACCESSES,
+		PGHOT_RECORD_HWHINTS,
+		PGHOT_RECORD_PGTSCANS,
+		PGHOT_RECORDS_HASH,
+		PGHOT_RECORDS_HEAP,
+		KPROMOTED_RIGHT_NODE,
+		KPROMOTED_NON_LRU,
+		KPROMOTED_COLD_OLD,
+		KPROMOTED_DROPPED,
 		NR_VM_EVENT_ITEMS
 };
 
diff --git a/mm/Kconfig b/mm/Kconfig
index e443fe8cd6cf..8b236eb874cf 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -1381,6 +1381,17 @@ config PT_RECLAIM
 
 	  Note: now only empty user PTE page table pages will be reclaimed.
 
+config PGHOT
+	bool "Hot page tracking and promotion"
+	default y
+	depends on NUMA && MIGRATION && MMU
+	select MIN_HEAP
+	help
+	  A sub-system to track page accesses in lower tier memory and
+	  maintain hot page information. Promotes hot pages from lower
+	  tiers to top tier by using the memory access information provided
+	  by various sources. Asynchronous promotion is done by per-node
+	  kernel threads.
 
 source "mm/damon/Kconfig"
 
diff --git a/mm/Makefile b/mm/Makefile
index ef54aa615d9d..8799bd0c68ed 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -147,3 +147,4 @@ obj-$(CONFIG_SHRINKER_DEBUG) += shrinker_debug.o
 obj-$(CONFIG_EXECMEM) += execmem.o
 obj-$(CONFIG_TMPFS_QUOTA) += shmem_quota.o
 obj-$(CONFIG_PT_RECLAIM) += pt_reclaim.o
+obj-$(CONFIG_PGHOT) += pghot.o
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 5c21b3af216b..f7992be3ff7f 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -1402,6 +1402,15 @@ static void pgdat_init_kcompactd(struct pglist_data *pgdat)
 static void pgdat_init_kcompactd(struct pglist_data *pgdat) {}
 #endif
 
+#ifdef CONFIG_PGHOT
+static void pgdat_init_kpromoted(struct pglist_data *pgdat)
+{
+	init_waitqueue_head(&pgdat->kpromoted_wait);
+}
+#else
+static void pgdat_init_kpromoted(struct pglist_data *pgdat) {}
+#endif
+
 static void __meminit pgdat_init_internals(struct pglist_data *pgdat)
 {
 	int i;
@@ -1411,6 +1420,7 @@ static void __meminit pgdat_init_internals(struct pglist_data *pgdat)
 
 	pgdat_init_split_queue(pgdat);
 	pgdat_init_kcompactd(pgdat);
+	pgdat_init_kpromoted(pgdat);
 
 	init_waitqueue_head(&pgdat->kswapd_wait);
 	init_waitqueue_head(&pgdat->pfmemalloc_wait);
diff --git a/mm/pghot.c b/mm/pghot.c
new file mode 100644
index 000000000000..eadcf970c3ef
--- /dev/null
+++ b/mm/pghot.c
@@ -0,0 +1,501 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Maintains information about hot pages from slower tier nodes and
+ * promotes them.
+ *
+ * Info about accessed pages is stored in hash lists indexed by PFN.
+ * Info about pages that are hot enough to be promoted is stored in
+ * a per-toptier-node max_heap.
+ *
+ * kpromoted is a kernel thread that runs on each toptier node and
+ * promotes pages from max_heap.
+ *
+ * TODO:
+ * - Compact pghot_info so that nid, time and frequency can fit
+ * - Scalar hotness value as a function of frequency and recency
+ * - Possibility of moving migration rate limiting to kpromoted
+ */
+#include <linux/pghot.h>
+#include <linux/kthread.h>
+#include <linux/mmzone.h>
+#include <linux/migrate.h>
+#include <linux/memory-tiers.h>
+#include <linux/slab.h>
+#include <linux/sched.h>
+#include <linux/vmalloc.h>
+#include <linux/hashtable.h>
+#include <linux/min_heap.h>
+
+struct pghot_hash {
+	struct hlist_head hash;
+	spinlock_t lock;
+};
+
+static struct pghot_hash *phi_hash;
+static int phi_hash_order;
+static int phi_heap_entries;
+static struct kmem_cache *phi_cache __ro_after_init;
+static bool kpromoted_started __ro_after_init;
+
+static bool phi_heap_less(const void *lhs, const void *rhs, void *args)
+{
+	return (*(struct pghot_info **)lhs)->frequency >
+		(*(struct pghot_info **)rhs)->frequency;
+}
+
+static void phi_heap_swp(void *lhs, void *rhs, void *args)
+{
+	struct pghot_info **l = (struct pghot_info **)lhs;
+	struct pghot_info **r = (struct pghot_info **)rhs;
+	int lindex = l - (struct pghot_info **)args;
+	int rindex = r - (struct pghot_info **)args;
+	struct pghot_info *tmp = *l;
+
+	*l = *r;
+	*r = tmp;
+
+	(*l)->heap_idx = lindex;
+	(*r)->heap_idx = rindex;
+}
+
+static const struct min_heap_callbacks phi_heap_cb = {
+	.less = phi_heap_less,
+	.swp = phi_heap_swp,
+};
+
+static void phi_heap_update_entry(struct max_heap *phi_heap, struct pghot_info *phi)
+{
+	int orig_idx = phi->heap_idx;
+
+	min_heap_sift_up(phi_heap, phi->heap_idx, &phi_heap_cb,
+			 phi_heap->data);
+	if (phi_heap->data[phi->heap_idx]->heap_idx == orig_idx)
+		min_heap_sift_down(phi_heap, phi->heap_idx,
+				   &phi_heap_cb, phi_heap->data);
+}
+
+static bool phi_heap_insert(struct max_heap *phi_heap, struct pghot_info *phi)
+{
+	if (phi_heap->nr >= phi_heap_entries)
+		return false;
+
+	phi->heap_idx = phi_heap->nr;
+	min_heap_push(phi_heap, &phi, &phi_heap_cb, phi_heap->data);
+
+	return true;
+}
+
+static bool phi_is_pfn_hot(struct pghot_info *phi)
+{
+	struct page *page = pfn_to_online_page(phi->pfn);
+	unsigned long now = jiffies;
+	struct folio *folio;
+
+	if (!page || is_zone_device_page(page))
+		return false;
+
+	folio = page_folio(page);
+	if (!folio_test_lru(folio)) {
+		count_vm_event(KPROMOTED_NON_LRU);
+		return false;
+	}
+	if (folio_nid(folio) == phi->nid) {
+		count_vm_event(KPROMOTED_RIGHT_NODE);
+		return false;
+	}
+
+	/* If the page was hot a while ago, don't promote */
+	if ((now - phi->last_update) > 2 * msecs_to_jiffies(KPROMOTED_FREQ_WINDOW)) {
+		count_vm_event(KPROMOTED_COLD_OLD);
+		return false;
+	}
+	return true;
+}
+
+static struct folio *kpromoted_isolate_folio(struct pghot_info *phi)
+{
+	struct page *page = pfn_to_page(phi->pfn);
+	struct folio *folio;
+
+	if (!page)
+		return NULL;
+
+	folio = page_folio(page);
+	if (migrate_misplaced_folio_prepare(folio, NULL, phi->nid))
+		return NULL;
+	else
+		return folio;
+}
+
+static struct pghot_info *phi_alloc(unsigned long pfn)
+{
+	struct pghot_info *phi;
+
+	phi = kmem_cache_zalloc(phi_cache, GFP_NOWAIT);
+	if (!phi)
+		return NULL;
+
+	phi->pfn = pfn;
+	phi->heap_idx = -1;
+	return phi;
+}
+
+static inline void phi_free(struct pghot_info *phi)
+{
+	kmem_cache_free(phi_cache, phi);
+}
+
+static int phi_heap_extract(pg_data_t *pgdat, int batch_count, int freq_th,
+			    struct list_head *migrate_list, int *count)
+{
+	spinlock_t *phi_heap_lock = &pgdat->heap_lock;
+	struct max_heap *phi_heap = &pgdat->heap;
+	int max_retries = 10;
+	int bkt, i = 0;
+
+	if (batch_count < 0 || !migrate_list || !count || freq_th < 1 ||
+	    freq_th > KPROMOTED_FREQ_THRESHOLD)
+		return -EINVAL;
+
+	*count = 0;
+	for (i = 0; i < batch_count; i++) {
+		struct pghot_info *top = NULL;
+		bool should_continue = false;
+		struct folio *folio;
+		int retries = 0;
+
+		while (retries < max_retries) {
+			spin_lock(phi_heap_lock);
+			if (phi_heap->nr > 0 && phi_heap->data[0]->frequency >= freq_th) {
+				should_continue = true;
+				bkt = hash_min(phi_heap->data[0]->pfn, phi_hash_order);
+				top = phi_heap->data[0];
+			}
+			spin_unlock(phi_heap_lock);
+
+			if (!should_continue)
+				goto done;
+
+			spin_lock(&phi_hash[bkt].lock);
+			spin_lock(phi_heap_lock);
+			if (phi_heap->nr == 0 || phi_heap->data[0] != top ||
+			    phi_heap->data[0]->frequency < freq_th) {
+				spin_unlock(phi_heap_lock);
+				spin_unlock(&phi_hash[bkt].lock);
+				retries++;
+				continue;
+			}
+
+			top = phi_heap->data[0];
+			hlist_del_init(&top->hnode);
+
+			phi_heap->nr--;
+			if (phi_heap->nr > 0) {
+				phi_heap->data[0] = phi_heap->data[phi_heap->nr];
+				phi_heap->data[0]->heap_idx = 0;
+				min_heap_sift_down(phi_heap, 0, &phi_heap_cb,
+						   phi_heap->data);
+			}
+
+			spin_unlock(phi_heap_lock);
+			spin_unlock(&phi_hash[bkt].lock);
+
+			if (!phi_is_pfn_hot(top)) {
+				count_vm_event(KPROMOTED_DROPPED);
+				goto skip;
+			}
+
+			folio = kpromoted_isolate_folio(top);
+			if (folio) {
+				list_add(&folio->lru, migrate_list);
+				(*count)++;
+			}
+skip:
+			phi_free(top);
+			break;
+		}
+		if (retries >= max_retries) {
+			pr_warn("%s: Too many retries\n", __func__);
+			break;
+		}
+
+	}
+done:
+	return 0;
+}
+
+static void phi_heap_add_or_adjust(struct pghot_info *phi)
+{
+	pg_data_t *pgdat = NODE_DATA(phi->nid);
+	struct max_heap *phi_heap = &pgdat->heap;
+
+	spin_lock(&pgdat->heap_lock);
+	if (phi->heap_idx >= 0 && phi->heap_idx < phi_heap->nr &&
+	    phi_heap->data[phi->heap_idx] == phi) {
+		/* Entry exists in heap */
+		if (phi->frequency < KPROMOTED_FREQ_THRESHOLD) {
+			/* Below threshold, remove from the heap */
+			phi_heap->nr--;
+			if (phi->heap_idx < phi_heap->nr) {
+				phi_heap->data[phi->heap_idx] =
+					phi_heap->data[phi_heap->nr];
+				phi_heap->data[phi->heap_idx]->heap_idx =
+					phi->heap_idx;
+				min_heap_sift_down(phi_heap, phi->heap_idx,
+						   &phi_heap_cb, phi_heap->data);
+			}
+			phi->heap_idx = -1;
+
+		} else {
+			/* Update position in heap */
+			phi_heap_update_entry(phi_heap, phi);
+		}
+	} else if (phi->frequency >= KPROMOTED_FREQ_THRESHOLD) {
+		/* Add to the heap */
+		if (phi_heap_insert(phi_heap, phi))
+			count_vm_event(PGHOT_RECORDS_HEAP);
+	}
+	spin_unlock(&pgdat->heap_lock);
+}
+
+static struct pghot_info *phi_lookup(unsigned long pfn, int bkt)
+{
+	struct pghot_info *phi;
+
+	hlist_for_each_entry(phi, &phi_hash[bkt].hash, hnode) {
+		if (phi->pfn == pfn)
+			return phi;
+	}
+	return NULL;
+}
+
+/*
+ * Called by subsystems that generate page hotness/access information.
+ *
+ *  @pfn: The PFN of the memory accessed
+ *  @nid: The accessing NUMA node ID
+ *  @src: The temperature source (sub-system) that generated the
+ *        access info
+ *  @time: The access time in jiffies
+ *
+ * Maintains the access records per PFN, classifies them as
+ * hot based on subsequent accesses and finally hands over
+ * them to kpromoted for migration.
+ */
+int pghot_record_access(u64 pfn, int nid, int src, unsigned long now)
+{
+	struct pghot_info *phi;
+	struct page *page;
+	struct folio *folio;
+	int bkt;
+	bool new_entry = false, new_window = false;
+
+	if (!kpromoted_started)
+		return -EINVAL;
+
+	count_vm_event(PGHOT_RECORDED_ACCESSES);
+
+	switch (src) {
+	case PGHOT_HW_HINTS:
+		count_vm_event(PGHOT_RECORD_HWHINTS);
+		break;
+	case PGHOT_PGTABLE_SCAN:
+		count_vm_event(PGHOT_RECORD_PGTSCANS);
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	/*
+	 * Record only accesses from lower tiers.
+	 */
+	if (node_is_toptier(pfn_to_nid(pfn)))
+		return 0;
+
+	/*
+	 * Reject the non-migratable pages right away.
+	 */
+	page = pfn_to_online_page(pfn);
+	if (!page || is_zone_device_page(page))
+		return 0;
+
+	folio = page_folio(page);
+	if (!folio_test_lru(folio))
+		return 0;
+
+	bkt = hash_min(pfn, phi_hash_order);
+	spin_lock(&phi_hash[bkt].lock);
+	phi = phi_lookup(pfn, bkt);
+	if (!phi) {
+		phi = phi_alloc(pfn);
+		if (!phi)
+			goto out;
+		new_entry = true;
+	}
+
+	if (((now - phi->last_update) > msecs_to_jiffies(KPROMOTED_FREQ_WINDOW)) ||
+	    (nid != NUMA_NO_NODE && phi->nid != nid))
+		new_window = true;
+
+	if (new_entry || new_window) {
+		/* New window */
+		phi->frequency = 1; /* TODO: Factor in the history */
+	} else
+		phi->frequency++;
+	phi->last_update = now;
+	phi->nid = (nid == NUMA_NO_NODE) ? KPROMOTED_DEFAULT_NODE : nid;
+
+	if (new_entry) {
+		/* Insert the new entry into hash table */
+		hlist_add_head(&phi->hnode, &phi_hash[bkt].hash);
+		count_vm_event(PGHOT_RECORDS_HASH);
+	} else {
+		/* Add/update the position in heap */
+		phi_heap_add_or_adjust(phi);
+	}
+out:
+	spin_unlock(&phi_hash[bkt].lock);
+	return 0;
+}
+
+/*
+ * Extract the hot page records and batch-migrate the
+ * hot pages.
+ */
+static void kpromoted_migrate(pg_data_t *pgdat)
+{
+	int count, ret;
+	LIST_HEAD(migrate_list);
+
+	/*
+	 * Extract the top N elements from the heap that match
+	 * the requested hotness threshold.
+	 *
+	 * PFNs ineligible from migration standpoint are removed
+	 * from the heap and hash.
+	 *
+	 * Folios eligible for migration are isolated and returned
+	 * in @migrate_list.
+	 */
+	ret = phi_heap_extract(pgdat, KPROMOTED_MIGRATE_BATCH,
+			       KPROMOTED_FREQ_THRESHOLD, &migrate_list, &count);
+	if (ret)
+		return;
+
+	if (!list_empty(&migrate_list))
+		migrate_misplaced_folios_batch(&migrate_list, pgdat->node_id);
+}
+
+static int kpromoted(void *p)
+{
+	pg_data_t *pgdat = (pg_data_t *)p;
+
+	while (!kthread_should_stop()) {
+		wait_event_timeout(pgdat->kpromoted_wait, false,
+				   msecs_to_jiffies(KPROMOTE_DELAY));
+		kpromoted_migrate(pgdat);
+	}
+	return 0;
+}
+
+static int kpromoted_run(int nid)
+{
+	pg_data_t *pgdat = NODE_DATA(nid);
+	int ret = 0;
+
+	if (!node_is_toptier(nid))
+		return 0;
+
+	if (!pgdat->phi_buf) {
+		pgdat->phi_buf = vzalloc_node(phi_heap_entries * sizeof(struct pghot_info *),
+					      nid);
+		if (!pgdat->phi_buf)
+			return -ENOMEM;
+
+		min_heap_init(&pgdat->heap, pgdat->phi_buf, phi_heap_entries);
+		spin_lock_init(&pgdat->heap_lock);
+	}
+
+	if (!pgdat->kpromoted)
+		pgdat->kpromoted = kthread_create_on_node(kpromoted, pgdat, nid,
+							  "kpromoted%d", nid);
+	if (IS_ERR(pgdat->kpromoted)) {
+		ret = PTR_ERR(pgdat->kpromoted);
+		pgdat->kpromoted = NULL;
+		pr_info("Failed to start kpromoted%d, ret %d\n", nid, ret);
+	} else {
+		wake_up_process(pgdat->kpromoted);
+	}
+	return ret;
+}
+
+static int __init pghot_init(void)
+{
+	unsigned int hash_size;
+	size_t hash_entries;
+	size_t nr_pages = 0;
+	pg_data_t *pgdat;
+	int i, nid, ret;
+
+	/*
+	 * Arrive at the hash and heap sizes based on the
+	 * number of pages present in the lower tier nodes.
+	 */
+	for_each_node_state(nid, N_MEMORY) {
+		if (!node_is_toptier(nid))
+			nr_pages += NODE_DATA(nid)->node_present_pages;
+	}
+
+	if (!nr_pages)
+		return 0;
+
+	hash_entries = nr_pages * PGHOT_HASH_PCT / 100;
+	hash_size = hash_entries / PGHOT_HASH_ENTRIES;
+	phi_hash_order = ilog2(hash_size);
+
+	phi_hash = vmalloc(sizeof(struct pghot_hash) * hash_size);
+	if (!phi_hash) {
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	for (i = 0; i < hash_size; i++) {
+		INIT_HLIST_HEAD(&phi_hash[i].hash);
+		spin_lock_init(&phi_hash[i].lock);
+	}
+
+	phi_cache = KMEM_CACHE(pghot_info, 0);
+	if (unlikely(!phi_cache)) {
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	phi_heap_entries = hash_entries * PGHOT_HEAP_PCT / 100;
+	for_each_node_state(nid, N_CPU) {
+		ret = kpromoted_run(nid);
+		if (ret)
+			goto out_stop_kthread;
+	}
+
+	kpromoted_started = true;
+	pr_info("pghot: Started page hotness monitoring and promotion thread\n");
+	pr_info("pghot: nr_pages %zu hash_size %u hash_entries %zu hash_order %d heap_entries %d\n",
+	       nr_pages, hash_size, hash_entries, phi_hash_order, phi_heap_entries);
+	return 0;
+
+out_stop_kthread:
+	for_each_node_state(nid, N_CPU) {
+		pgdat = NODE_DATA(nid);
+		if (pgdat->kpromoted) {
+			kthread_stop(pgdat->kpromoted);
+			pgdat->kpromoted = NULL;
+			vfree(pgdat->phi_buf);
+		}
+	}
+out:
+	kmem_cache_destroy(phi_cache);
+	vfree(phi_hash);
+	return ret;
+}
+
+late_initcall(pghot_init);
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 71cd1ceba191..9edbdd71c6f7 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1496,6 +1496,15 @@ const char * const vmstat_text[] = {
 #endif
 #undef I
 #endif /* CONFIG_VM_EVENT_COUNTERS */
+	"pghot_recorded_accesses",
+	"pghot_recorded_hwhints",
+	"pghot_recorded_pgtscans",
+	"pghot_records_hash",
+	"pghot_records_heap",
+	"kpromoted_right_node",
+	"kpromoted_non_lru",
+	"kpromoted_cold_old",
+	"kpromoted_dropped",
 };
 #endif /* CONFIG_PROC_FS || CONFIG_SYSFS || CONFIG_NUMA || CONFIG_MEMCG */
 
-- 
2.34.1



^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [RFC PATCH v1 4/7] x86: ibs: In-kernel IBS driver for memory access profiling
  2025-08-14 13:48 [RFC PATCH v1 0/7] A subsystem for hot page detection and promotion Bharata B Rao
                   ` (2 preceding siblings ...)
  2025-08-14 13:48 ` [RFC PATCH v1 3/7] mm: Hot page tracking and promotion Bharata B Rao
@ 2025-08-14 13:48 ` Bharata B Rao
  2025-08-14 13:48 ` [RFC PATCH v1 5/7] x86: ibs: Enable IBS profiling for memory accesses Bharata B Rao
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 16+ messages in thread
From: Bharata B Rao @ 2025-08-14 13:48 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: Jonathan.Cameron, dave.hansen, gourry, hannes, mgorman, mingo,
	peterz, raghavendra.kt, riel, rientjes, sj, weixugc, willy,
	ying.huang, ziy, dave, nifan.cxl, xuezhengchu, yiannis, akpm,
	david, byungchul, kinseyho, joshua.hahnjy, yuanchu, balbirs,
	Bharata B Rao

Use the IBS (Instruction Based Sampling) feature present
in AMD processors for memory access tracking. The access
information obtained from IBS via NMI is fed to the kpromoted
daemon for further action.

In addition to a lot of other information related to the memory
access, IBS provides the physical (and virtual) address of the
access and indicates if the access came from a slower tier. Only
memory accesses originating from slower tiers are further acted
upon by this driver.

The samples are initially accumulated in percpu buffers which
are flushed to the pghot hot page tracking mechanism using irq_work.

TODO: Many counters are added to vmstat just as debugging aid
for now.

About IBS
---------
IBS can be programmed to provide data about instruction
execution periodically. This is done by programming a desired
sample count (number of ops) in a control register. When the
programmed number of ops are dispatched, a micro-op gets tagged,
various information about the tagged micro-op's execution is
populated in IBS execution MSRs and an interrupt is raised.
While IBS provides a lot of data for each sample, for the
purpose of memory access profiling, we are interested in the
linear and physical address of the memory access that reached
DRAM. Recent AMD processors provide further filtering where
it is possible to limit the sampling to those ops that had
an L3 miss, which greatly reduces the non-useful samples.
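
Concretely, (re)arming an IBS op sample with a chosen period boils
down to a single MSR write, roughly as done in the later patch that
enables the profiling (the IBS_OP_* masks are from asm/perf_event.h):

	u64 ctl;

	ctl  = (period >> 4) & IBS_OP_MAX_CNT;	/* MaxCnt counts in units of 16 ops */
	ctl |= period & IBS_OP_MAX_CNT_EXT_MASK;	/* extended count bits */
	ctl |= IBS_OP_CNT_CTL | IBS_OP_ENABLE;	/* count dispatched ops, enable */
	if (ibs_caps & IBS_CAPS_ZEN4)
		ctl |= IBS_OP_L3MISSONLY;	/* sample only ops that missed in L3 */
	wrmsrl(MSR_AMD64_IBSOPCTL, ctl);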

While IBS provides capability to sample instruction fetch
and execution, only IBS execution sampling is used here
to collect data about memory accesses that occur during
the instruction execution.

More information about IBS is available in Sec 13.3 of
AMD64 Architecture Programmer's Manual, Volume 2:System
Programming which is present at:
https://bugzilla.kernel.org/attachment.cgi?id=288923

Information about MSRs used for programming IBS can be
found in Sec 2.1.14.4 of PPR Vol 1 for AMD Family 19h
Model 11h B1 which is currently present at:
https://www.amd.com/system/files/TechDocs/55901_0.25.zip

Signed-off-by: Bharata B Rao <bharata@amd.com>
---
 arch/x86/events/amd/ibs.c        |  11 ++
 arch/x86/include/asm/ibs.h       |   7 +
 arch/x86/include/asm/msr-index.h |  16 ++
 arch/x86/mm/Makefile             |   3 +-
 arch/x86/mm/ibs.c                | 311 +++++++++++++++++++++++++++++++
 include/linux/vm_event_item.h    |  17 ++
 mm/vmstat.c                      |  17 ++
 7 files changed, 381 insertions(+), 1 deletion(-)
 create mode 100644 arch/x86/include/asm/ibs.h
 create mode 100644 arch/x86/mm/ibs.c

diff --git a/arch/x86/events/amd/ibs.c b/arch/x86/events/amd/ibs.c
index 112f43b23ebf..1498dc9caeb2 100644
--- a/arch/x86/events/amd/ibs.c
+++ b/arch/x86/events/amd/ibs.c
@@ -13,9 +13,11 @@
 #include <linux/ptrace.h>
 #include <linux/syscore_ops.h>
 #include <linux/sched/clock.h>
+#include <linux/pghot.h>
 
 #include <asm/apic.h>
 #include <asm/msr.h>
+#include <asm/ibs.h>
 
 #include "../perf_event.h"
 
@@ -1756,6 +1758,15 @@ static __init int amd_ibs_init(void)
 {
 	u32 caps;
 
+	/*
+	 * TODO: Find a clean way to disable perf IBS so that IBS
+	 * can be used for memory access profiling.
+	 */
+	if (arch_hw_access_profiling) {
+		pr_info("IBS isn't available for perf use\n");
+		return 0;
+	}
+
 	caps = __get_ibs_caps();
 	if (!caps)
 		return -ENODEV;	/* ibs not supported by the cpu */
diff --git a/arch/x86/include/asm/ibs.h b/arch/x86/include/asm/ibs.h
new file mode 100644
index 000000000000..b5a4f2ca6330
--- /dev/null
+++ b/arch/x86/include/asm/ibs.h
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_X86_IBS_H
+#define _ASM_X86_IBS_H
+
+extern bool arch_hw_access_profiling;
+
+#endif /* _ASM_X86_IBS_H */
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index b65c3ba5fa14..55d26380550c 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -742,6 +742,22 @@
 /* AMD Last Branch Record MSRs */
 #define MSR_AMD64_LBR_SELECT			0xc000010e
 
+/* AMD IBS MSR bits */
+#define MSR_AMD64_IBSOPDATA2_DATASRC			0x7
+#define MSR_AMD64_IBSOPDATA2_DATASRC_LCL_CACHE		0x1
+#define MSR_AMD64_IBSOPDATA2_DATASRC_PEER_CACHE_NEAR	0x2
+#define MSR_AMD64_IBSOPDATA2_DATASRC_DRAM		0x3
+#define MSR_AMD64_IBSOPDATA2_DATASRC_FAR_CCX_CACHE	0x5
+#define MSR_AMD64_IBSOPDATA2_DATASRC_EXT_MEM		0x8
+#define	MSR_AMD64_IBSOPDATA2_RMTNODE			0x10
+
+#define MSR_AMD64_IBSOPDATA3_LDOP		BIT_ULL(0)
+#define MSR_AMD64_IBSOPDATA3_STOP		BIT_ULL(1)
+#define MSR_AMD64_IBSOPDATA3_DCMISS		BIT_ULL(7)
+#define MSR_AMD64_IBSOPDATA3_LADDR_VALID	BIT_ULL(17)
+#define MSR_AMD64_IBSOPDATA3_PADDR_VALID	BIT_ULL(18)
+#define MSR_AMD64_IBSOPDATA3_L2MISS		BIT_ULL(20)
+
 /* Zen4 */
 #define MSR_ZEN4_BP_CFG                 0xc001102e
 #define MSR_ZEN4_BP_CFG_BP_SPEC_REDUCE_BIT 4
diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile
index 5b9908f13dcf..967e5af9eba9 100644
--- a/arch/x86/mm/Makefile
+++ b/arch/x86/mm/Makefile
@@ -22,7 +22,8 @@ CFLAGS_REMOVE_pgprot.o			= -pg
 endif
 
 obj-y				:=  init.o init_$(BITS).o fault.o ioremap.o extable.o mmap.o \
-				    pgtable.o physaddr.o tlb.o cpu_entry_area.o maccess.o pgprot.o
+				    pgtable.o physaddr.o tlb.o cpu_entry_area.o maccess.o pgprot.o \
+				    ibs.o
 
 obj-y				+= pat/
 
diff --git a/arch/x86/mm/ibs.c b/arch/x86/mm/ibs.c
new file mode 100644
index 000000000000..6669710dd35b
--- /dev/null
+++ b/arch/x86/mm/ibs.c
@@ -0,0 +1,311 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/init.h>
+#include <linux/pghot.h>
+#include <linux/percpu.h>
+#include <linux/workqueue.h>
+#include <linux/irq_work.h>
+
+#include <asm/nmi.h>
+#include <asm/perf_event.h> /* TODO: Move defns like IBS_OP_ENABLE into non-perf header */
+#include <asm/apic.h>
+#include <asm/ibs.h>
+
+bool arch_hw_access_profiling;
+static u64 ibs_config __read_mostly;
+static u32 ibs_caps;
+
+#define IBS_NR_SAMPLES	150
+
+/*
+ * Basic access info captured for each memory access.
+ */
+struct ibs_sample {
+	unsigned long pfn;
+	unsigned long time;	/* jiffies when accessed */
+	int nid;		/* Accessing node ID, if known */
+};
+
+/*
+ * Percpu buffer of access samples. Samples are accumulated here
+ * before pushing them to kpromoted for further action.
+ */
+struct ibs_sample_pcpu {
+	struct ibs_sample samples[IBS_NR_SAMPLES];
+	int head, tail;
+};
+
+static struct ibs_sample_pcpu __percpu *ibs_s;
+
+/*
+ * The workqueue for pushing the percpu access samples to kpromoted.
+ */
+static struct work_struct ibs_work;
+static struct irq_work ibs_irq_work;
+
+/*
+ * Record the IBS-reported access sample in percpu buffer.
+ * Called from IBS NMI handler.
+ */
+static int ibs_push_sample(unsigned long pfn, int nid, unsigned long time)
+{
+	struct ibs_sample_pcpu *ibs_pcpu = raw_cpu_ptr(ibs_s);
+	int next = ibs_pcpu->head + 1;
+
+	if (next >= IBS_NR_SAMPLES)
+		next = 0;
+
+	if (next == ibs_pcpu->tail)
+		return 0;
+
+	ibs_pcpu->samples[ibs_pcpu->head].pfn = pfn;
+	ibs_pcpu->samples[ibs_pcpu->head].time = time;
+	ibs_pcpu->samples[ibs_pcpu->head].nid = nid;
+	ibs_pcpu->head = next;
+	return 1;
+}
+
+static int ibs_pop_sample(struct ibs_sample *s)
+{
+	struct ibs_sample_pcpu *ibs_pcpu = raw_cpu_ptr(ibs_s);
+
+	int next = ibs_pcpu->tail + 1;
+
+	if (ibs_pcpu->head == ibs_pcpu->tail)
+		return 0;
+
+	if (next >= IBS_NR_SAMPLES)
+		next = 0;
+
+	*s = ibs_pcpu->samples[ibs_pcpu->tail];
+	ibs_pcpu->tail = next;
+	return 1;
+}
+
+/*
+ * Remove access samples from percpu buffer and send them
+ * to kpromoted for further action.
+ */
+static void ibs_work_handler(struct work_struct *work)
+{
+	struct ibs_sample s;
+
+	while (ibs_pop_sample(&s))
+		pghot_record_access(s.pfn, s.nid, PGHOT_HW_HINTS, s.time);
+}
+
+static void ibs_irq_handler(struct irq_work *i)
+{
+	schedule_work_on(smp_processor_id(), &ibs_work);
+}
+
+/*
+ * IBS NMI handler: Process the memory access info reported by IBS.
+ *
+ * Reads the MSRs to collect all the information about the reported
+ * memory access, validates the access, stores the valid sample and
+ * schedules the work on this CPU to further process the sample.
+ */
+static int ibs_overflow_handler(unsigned int cmd, struct pt_regs *regs)
+{
+	struct mm_struct *mm = current->mm;
+	u64 ops_ctl, ops_data3, ops_data2;
+	u64 laddr = -1, paddr = -1;
+	u64 data_src, rmt_node;
+	struct page *page;
+	unsigned long pfn;
+
+	rdmsrl(MSR_AMD64_IBSOPCTL, ops_ctl);
+
+	/*
+	 * When IBS sampling period is reprogrammed via read-modify-update
+	 * of MSR_AMD64_IBSOPCTL, overflow NMIs could be generated with
+	 * IBS_OP_ENABLE not set. For such cases, return as HANDLED.
+	 *
+	 * With this, the handler will say "handled" for all NMIs that
+	 * aren't related to this NMI.  This stems from the limitation of
+	 * having both status and control bits in one MSR.
+	 */
+	if (!(ops_ctl & IBS_OP_VAL))
+		goto handled;
+
+	wrmsrl(MSR_AMD64_IBSOPCTL, ops_ctl & ~IBS_OP_VAL);
+
+	count_vm_event(HWHINT_NR_EVENTS);
+
+	if (!user_mode(regs)) {
+		count_vm_event(HWHINT_KERNEL);
+		goto handled;
+	}
+
+	if (!mm) {
+		count_vm_event(HWHINT_KTHREAD);
+		goto handled;
+	}
+
+	rdmsrl(MSR_AMD64_IBSOPDATA3, ops_data3);
+
+	/* Load/Store ops only */
+	/* TODO: DataSrc isn't valid for stores, so filter out stores? */
+	if (!(ops_data3 & (MSR_AMD64_IBSOPDATA3_LDOP |
+			   MSR_AMD64_IBSOPDATA3_STOP))) {
+		count_vm_event(HWHINT_NON_LOAD_STORES);
+		goto handled;
+	}
+
+	/* Discard the sample if it was L1 or L2 hit */
+	if (!(ops_data3 & (MSR_AMD64_IBSOPDATA3_DCMISS |
+			   MSR_AMD64_IBSOPDATA3_L2MISS))) {
+		count_vm_event(HWHINT_DC_L2_HITS);
+		goto handled;
+	}
+
+	rdmsrl(MSR_AMD64_IBSOPDATA2, ops_data2);
+	data_src = ops_data2 & MSR_AMD64_IBSOPDATA2_DATASRC;
+	if (ibs_caps & IBS_CAPS_ZEN4)
+		data_src |= ((ops_data2 & 0xC0) >> 3);
+
+	switch (data_src) {
+	case MSR_AMD64_IBSOPDATA2_DATASRC_LCL_CACHE:
+		count_vm_event(HWHINT_LOCAL_L3L1L2);
+		break;
+	case MSR_AMD64_IBSOPDATA2_DATASRC_PEER_CACHE_NEAR:
+		count_vm_event(HWHINT_LOCAL_PEER_CACHE_NEAR);
+		break;
+	case MSR_AMD64_IBSOPDATA2_DATASRC_DRAM:
+		count_vm_event(HWHINT_DRAM_ACCESSES);
+		break;
+	case MSR_AMD64_IBSOPDATA2_DATASRC_EXT_MEM:
+		count_vm_event(HWHINT_CXL_ACCESSES);
+		break;
+	case MSR_AMD64_IBSOPDATA2_DATASRC_FAR_CCX_CACHE:
+		count_vm_event(HWHINT_FAR_CACHE_HITS);
+		break;
+	}
+
+	rmt_node = ops_data2 & MSR_AMD64_IBSOPDATA2_RMTNODE;
+	if (rmt_node)
+		count_vm_event(HWHINT_REMOTE_NODE);
+
+	/* Is linear addr valid? */
+	if (ops_data3 & MSR_AMD64_IBSOPDATA3_LADDR_VALID)
+		rdmsrl(MSR_AMD64_IBSDCLINAD, laddr);
+	else {
+		count_vm_event(HWHINT_LADDR_INVALID);
+		goto handled;
+	}
+
+	/* Discard kernel address accesses */
+	if (laddr & (1UL << 63)) {
+		count_vm_event(HWHINT_KERNEL_ADDR);
+		goto handled;
+	}
+
+	/* Is phys addr valid? */
+	if (ops_data3 & MSR_AMD64_IBSOPDATA3_PADDR_VALID)
+		rdmsrl(MSR_AMD64_IBSDCPHYSAD, paddr);
+	else {
+		count_vm_event(HWHINT_PADDR_INVALID);
+		goto handled;
+	}
+
+	pfn = PHYS_PFN(paddr);
+	page = pfn_to_online_page(pfn);
+	if (!page)
+		goto handled;
+
+	if (!PageLRU(page)) {
+		count_vm_event(HWHINT_NON_LRU);
+		goto handled;
+	}
+
+	if (!ibs_push_sample(pfn, numa_node_id(), jiffies)) {
+		count_vm_event(HWHINT_BUFFER_FULL);
+		goto handled;
+	}
+
+	irq_work_queue(&ibs_irq_work);
+	count_vm_event(HWHINT_USEFUL_SAMPLES);
+
+handled:
+	return NMI_HANDLED;
+}
+
+static inline int get_ibs_lvt_offset(void)
+{
+	u64 val;
+
+	rdmsrl(MSR_AMD64_IBSCTL, val);
+	if (!(val & IBSCTL_LVT_OFFSET_VALID))
+		return -EINVAL;
+
+	return val & IBSCTL_LVT_OFFSET_MASK;
+}
+
+static void setup_APIC_ibs(void)
+{
+	int offset;
+
+	offset = get_ibs_lvt_offset();
+	if (offset < 0)
+		goto failed;
+
+	if (!setup_APIC_eilvt(offset, 0, APIC_EILVT_MSG_NMI, 0))
+		return;
+failed:
+	pr_warn("IBS APIC setup failed on cpu #%d\n",
+		smp_processor_id());
+}
+
+static void clear_APIC_ibs(void)
+{
+	int offset;
+
+	offset = get_ibs_lvt_offset();
+	if (offset >= 0)
+		setup_APIC_eilvt(offset, 0, APIC_EILVT_MSG_FIX, 1);
+}
+
+static int x86_amd_ibs_access_profile_startup(unsigned int cpu)
+{
+	setup_APIC_ibs();
+	return 0;
+}
+
+static int x86_amd_ibs_access_profile_teardown(unsigned int cpu)
+{
+	clear_APIC_ibs();
+	return 0;
+}
+
+static int __init ibs_access_profiling_init(void)
+{
+	if (!boot_cpu_has(X86_FEATURE_IBS)) {
+		pr_info("IBS capability is unavailable for access profiling\n");
+		return 0;
+	}
+
+	ibs_s = alloc_percpu_gfp(struct ibs_sample_pcpu, GFP_KERNEL | __GFP_ZERO);
+	if (!ibs_s)
+		return 0;
+
+	INIT_WORK(&ibs_work, ibs_work_handler);
+	init_irq_work(&ibs_irq_work, ibs_irq_handler);
+
+	/* Uses IBS Op sampling */
+	ibs_config = IBS_OP_CNT_CTL | IBS_OP_ENABLE;
+	ibs_caps = cpuid_eax(IBS_CPUID_FEATURES);
+	if (ibs_caps & IBS_CAPS_ZEN4)
+		ibs_config |= IBS_OP_L3MISSONLY;
+
+	register_nmi_handler(NMI_LOCAL, ibs_overflow_handler, 0, "ibs");
+
+	cpuhp_setup_state(CPUHP_AP_PERF_X86_AMD_IBS_STARTING,
+			  "x86/amd/ibs_access_profile:starting",
+			  x86_amd_ibs_access_profile_startup,
+			  x86_amd_ibs_access_profile_teardown);
+
+	pr_info("IBS setup for memory access profiling\n");
+	return 0;
+}
+
+arch_initcall(ibs_access_profiling_init);
diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
index 9085e5c2d4aa..d01267649431 100644
--- a/include/linux/vm_event_item.h
+++ b/include/linux/vm_event_item.h
@@ -195,6 +195,23 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
 		KPROMOTED_NON_LRU,
 		KPROMOTED_COLD_OLD,
 		KPROMOTED_DROPPED,
+		HWHINT_NR_EVENTS,
+		HWHINT_KERNEL,
+		HWHINT_KTHREAD,
+		HWHINT_NON_LOAD_STORES,
+		HWHINT_DC_L2_HITS,
+		HWHINT_LOCAL_L3L1L2,
+		HWHINT_LOCAL_PEER_CACHE_NEAR,
+		HWHINT_FAR_CACHE_HITS,
+		HWHINT_DRAM_ACCESSES,
+		HWHINT_CXL_ACCESSES,
+		HWHINT_REMOTE_NODE,
+		HWHINT_LADDR_INVALID,
+		HWHINT_KERNEL_ADDR,
+		HWHINT_PADDR_INVALID,
+		HWHINT_NON_LRU,
+		HWHINT_BUFFER_FULL,
+		HWHINT_USEFUL_SAMPLES,
 		NR_VM_EVENT_ITEMS
 };
 
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 9edbdd71c6f7..5727e4b88258 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1505,6 +1505,23 @@ const char * const vmstat_text[] = {
 	"kpromoted_non_lru",
 	"kpromoted_cold_old",
 	"kpromoted_dropped",
+	"hwhint_nr_events",
+	"hwhint_kernel",
+	"hwhint_kthread",
+	"hwhint_non_load_stores",
+	"hwhint_dc_l2_hits",
+	"hwhint_local_l3l1l2",
+	"hwhint_local_peer_cache_near",
+	"hwhint_far_cache_hits",
+	"hwhint_dram_accesses",
+	"hwhint_cxl_accesses",
+	"hwhint_remote_node",
+	"hwhint_invalid_laddr",
+	"hwhint_kernel_addr",
+	"hwhint_invalid_paddr",
+	"hwhint_non_lru",
+	"hwhint_buffer_full",
+	"hwhint_useful_samples",
 };
 #endif /* CONFIG_PROC_FS || CONFIG_SYSFS || CONFIG_NUMA || CONFIG_MEMCG */
 
-- 
2.34.1



^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [RFC PATCH v1 5/7] x86: ibs: Enable IBS profiling for memory accesses
  2025-08-14 13:48 [RFC PATCH v1 0/7] A subsystem for hot page detection and promotion Bharata B Rao
                   ` (3 preceding siblings ...)
  2025-08-14 13:48 ` [RFC PATCH v1 4/7] x86: ibs: In-kernel IBS driver for memory access profiling Bharata B Rao
@ 2025-08-14 13:48 ` Bharata B Rao
  2025-08-14 13:48 ` [RFC PATCH v1 6/7] mm: mglru: generalize page table walk Bharata B Rao
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 16+ messages in thread
From: Bharata B Rao @ 2025-08-14 13:48 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: Jonathan.Cameron, dave.hansen, gourry, hannes, mgorman, mingo,
	peterz, raghavendra.kt, riel, rientjes, sj, weixugc, willy,
	ying.huang, ziy, dave, nifan.cxl, xuezhengchu, yiannis, akpm,
	david, byungchul, kinseyho, joshua.hahnjy, yuanchu, balbirs,
	Bharata B Rao

Enable IBS memory access data collection for user memory
accesses by programming the required MSRs. The profiling
is turned ON only for user mode execution and turned OFF
for kernel mode execution. Profiling is explicitly disabled
for the NMI handler too.

TODOs:

- IBS sampling rate is kept fixed for now.
- Arch/vendor separation/isolation of the code needs a relook.

Signed-off-by: Bharata B Rao <bharata@amd.com>
---
 arch/x86/include/asm/entry-common.h |  3 +++
 arch/x86/include/asm/hardirq.h      |  2 ++
 arch/x86/include/asm/ibs.h          |  2 ++
 arch/x86/mm/ibs.c                   | 32 +++++++++++++++++++++++++++++
 4 files changed, 39 insertions(+)

diff --git a/arch/x86/include/asm/entry-common.h b/arch/x86/include/asm/entry-common.h
index d535a97c7284..7144b57d209b 100644
--- a/arch/x86/include/asm/entry-common.h
+++ b/arch/x86/include/asm/entry-common.h
@@ -9,10 +9,12 @@
 #include <asm/io_bitmap.h>
 #include <asm/fpu/api.h>
 #include <asm/fred.h>
+#include <asm/ibs.h>
 
 /* Check that the stack and regs on entry from user mode are sane. */
 static __always_inline void arch_enter_from_user_mode(struct pt_regs *regs)
 {
+	hw_access_profiling_stop();
 	if (IS_ENABLED(CONFIG_DEBUG_ENTRY)) {
 		/*
 		 * Make sure that the entry code gave us a sensible EFLAGS
@@ -99,6 +101,7 @@ static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs,
 static __always_inline void arch_exit_to_user_mode(void)
 {
 	amd_clear_divider();
+	hw_access_profiling_start();
 }
 #define arch_exit_to_user_mode arch_exit_to_user_mode
 
diff --git a/arch/x86/include/asm/hardirq.h b/arch/x86/include/asm/hardirq.h
index f00c09ffe6a9..0752cb6ebd7a 100644
--- a/arch/x86/include/asm/hardirq.h
+++ b/arch/x86/include/asm/hardirq.h
@@ -91,4 +91,6 @@ static __always_inline bool kvm_get_cpu_l1tf_flush_l1d(void)
 static __always_inline void kvm_set_cpu_l1tf_flush_l1d(void) { }
 #endif /* IS_ENABLED(CONFIG_KVM_INTEL) */
 
+#define arch_nmi_enter()	hw_access_profiling_stop()
+#define arch_nmi_exit()		hw_access_profiling_start()
 #endif /* _ASM_X86_HARDIRQ_H */
diff --git a/arch/x86/include/asm/ibs.h b/arch/x86/include/asm/ibs.h
index b5a4f2ca6330..6b480958534e 100644
--- a/arch/x86/include/asm/ibs.h
+++ b/arch/x86/include/asm/ibs.h
@@ -2,6 +2,8 @@
 #ifndef _ASM_X86_IBS_H
 #define _ASM_X86_IBS_H
 
+void hw_access_profiling_start(void);
+void hw_access_profiling_stop(void);
 extern bool arch_hw_access_profiling;
 
 #endif /* _ASM_X86_IBS_H */
diff --git a/arch/x86/mm/ibs.c b/arch/x86/mm/ibs.c
index 6669710dd35b..3128e8fa5f39 100644
--- a/arch/x86/mm/ibs.c
+++ b/arch/x86/mm/ibs.c
@@ -16,6 +16,7 @@ static u64 ibs_config __read_mostly;
 static u32 ibs_caps;
 
 #define IBS_NR_SAMPLES	150
+#define IBS_SAMPLE_PERIOD      10000
 
 /*
  * Basic access info captured for each memory access.
@@ -98,6 +99,36 @@ static void ibs_irq_handler(struct irq_work *i)
 	schedule_work_on(smp_processor_id(), &ibs_work);
 }
 
+void hw_access_profiling_stop(void)
+{
+	u64 ops_ctl;
+
+	if (!arch_hw_access_profiling)
+		return;
+
+	rdmsrl(MSR_AMD64_IBSOPCTL, ops_ctl);
+	wrmsrl(MSR_AMD64_IBSOPCTL, ops_ctl & ~IBS_OP_ENABLE);
+}
+
+void hw_access_profiling_start(void)
+{
+	u64 config = 0;
+	unsigned int period = IBS_SAMPLE_PERIOD;
+
+	if (!arch_hw_access_profiling)
+		return;
+
+	/* Disable IBS for kernel thread */
+	if (!current->mm)
+		goto out;
+
+	config = (period >> 4) & IBS_OP_MAX_CNT;
+	config |= (period & IBS_OP_MAX_CNT_EXT_MASK);
+	config |= ibs_config;
+out:
+	wrmsrl(MSR_AMD64_IBSOPCTL, config);
+}
+
 /*
  * IBS NMI handler: Process the memory access info reported by IBS.
  *
@@ -304,6 +335,7 @@ static int __init ibs_access_profiling_init(void)
 			  x86_amd_ibs_access_profile_startup,
 			  x86_amd_ibs_access_profile_teardown);
 
+	arch_hw_access_profiling = true;
 	pr_info("IBS setup for memory access profiling\n");
 	return 0;
 }
-- 
2.34.1



^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [RFC PATCH v1 6/7] mm: mglru: generalize page table walk
  2025-08-14 13:48 [RFC PATCH v1 0/7] A subsystem for hot page detection and promotion Bharata B Rao
                   ` (4 preceding siblings ...)
  2025-08-14 13:48 ` [RFC PATCH v1 5/7] x86: ibs: Enable IBS profiling for memory accesses Bharata B Rao
@ 2025-08-14 13:48 ` Bharata B Rao
  2025-08-14 13:48 ` [RFC PATCH v1 7/7] mm: klruscand: use mglru scanning for page promotion Bharata B Rao
  2025-08-15 11:59 ` [RFC PATCH v1 0/7] A subsystem for hot page detection and promotion Balbir Singh
  7 siblings, 0 replies; 16+ messages in thread
From: Bharata B Rao @ 2025-08-14 13:48 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: Jonathan.Cameron, dave.hansen, gourry, hannes, mgorman, mingo,
	peterz, raghavendra.kt, riel, rientjes, sj, weixugc, willy,
	ying.huang, ziy, dave, nifan.cxl, xuezhengchu, yiannis, akpm,
	david, byungchul, kinseyho, joshua.hahnjy, yuanchu, balbirs,
	Bharata B Rao

From: Kinsey Ho <kinseyho@google.com>

Refactor the existing MGLRU page table walking logic to make it
resumable.

Additionally, introduce two hooks into the MGLRU page table walk:
an accessed callback and a flush callback. The accessed callback is
called for each accessed page detected via the scanned Accessed bit.
The flush callback is called when the accessed callback reports an
out-of-space error. This allows for processing pages in batches for
efficiency.

With a generalised page table walk, introduce a new scan function which
repeatedly scans on the same young generation and does not add a new
young generation.
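
A minimal sketch of how a consumer might use the new hooks (patch 7's
klruscand is the real user; the batch size, the -EAGAIN "out of space"
convention and the max_seq argument here are illustrative assumptions):

	static unsigned long pfn_batch[64];
	static unsigned int nr_pfns;

	static int accessed_cb(unsigned long pfn)
	{
		pfn_batch[nr_pfns++] = pfn;
		/* Ask the walker to pause once the batch is full */
		return nr_pfns == ARRAY_SIZE(pfn_batch) ? -EAGAIN : 0;
	}

	static void flush_cb(void)
	{
		unsigned int i;

		for (i = 0; i < nr_pfns; i++)
			pghot_record_access(pfn_batch[i], NUMA_NO_NODE,
					    PGHOT_PGTABLE_SCAN, jiffies);
		nr_pfns = 0;
	}

	/* Scan one lruvec's current young generation */
	lru_gen_scan_lruvec(lruvec, max_seq, accessed_cb, flush_cb);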

Signed-off-by: Kinsey Ho <kinseyho@google.com>
Signed-off-by: Yuanchu Xie <yuanchu@google.com>
Signed-off-by: Bharata B Rao <bharata@amd.com>
---
 include/linux/mmzone.h |   5 ++
 mm/internal.h          |   4 +
 mm/vmscan.c            | 176 ++++++++++++++++++++++++++++++-----------
 3 files changed, 139 insertions(+), 46 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index f7094babed10..4ad15490aff6 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -533,6 +533,8 @@ struct lru_gen_mm_walk {
 	unsigned long seq;
 	/* the next address within an mm to scan */
 	unsigned long next_addr;
+	/* called for each accessed pte/pmd */
+	int (*accessed_cb)(unsigned long pfn);
 	/* to batch promoted pages */
 	int nr_pages[MAX_NR_GENS][ANON_AND_FILE][MAX_NR_ZONES];
 	/* to batch the mm stats */
@@ -540,6 +542,9 @@ struct lru_gen_mm_walk {
 	/* total batched items */
 	int batched;
 	int swappiness;
+	/* for the pmd under scanning */
+	int nr_young_pte;
+	int nr_total_pte;
 	bool force_scan;
 };
 
diff --git a/mm/internal.h b/mm/internal.h
index 45b725c3dc03..6c2c86abfde2 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -548,6 +548,10 @@ static inline int user_proactive_reclaim(char *buf,
 	return 0;
 }
 #endif
+void set_task_reclaim_state(struct task_struct *task,
+				   struct reclaim_state *rs);
+void lru_gen_scan_lruvec(struct lruvec *lruvec, unsigned long seq,
+			 int (*accessed_cb)(unsigned long), void (*flush_cb)(void));
 
 /*
  * in mm/rmap.c:
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 7de11524a936..4146e17f90ae 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -289,7 +289,7 @@ static int sc_swappiness(struct scan_control *sc, struct mem_cgroup *memcg)
 			continue;				\
 		else
 
-static void set_task_reclaim_state(struct task_struct *task,
+void set_task_reclaim_state(struct task_struct *task,
 				   struct reclaim_state *rs)
 {
 	/* Check for an overwrite */
@@ -3092,7 +3092,7 @@ static bool iterate_mm_list(struct lru_gen_mm_walk *walk, struct mm_struct **ite
 
 	VM_WARN_ON_ONCE(mm_state->seq + 1 < walk->seq);
 
-	if (walk->seq <= mm_state->seq)
+	if (!walk->accessed_cb && walk->seq <= mm_state->seq)
 		goto done;
 
 	if (!mm_state->head)
@@ -3518,16 +3518,14 @@ static void walk_update_folio(struct lru_gen_mm_walk *walk, struct folio *folio,
 	}
 }
 
-static bool walk_pte_range(pmd_t *pmd, unsigned long start, unsigned long end,
-			   struct mm_walk *args)
+static int walk_pte_range(pmd_t *pmd, unsigned long start, unsigned long end,
+			   struct mm_walk *args, bool *suitable)
 {
-	int i;
+	int i, err = 0;
 	bool dirty;
 	pte_t *pte;
 	spinlock_t *ptl;
 	unsigned long addr;
-	int total = 0;
-	int young = 0;
 	struct folio *last = NULL;
 	struct lru_gen_mm_walk *walk = args->private;
 	struct mem_cgroup *memcg = lruvec_memcg(walk->lruvec);
@@ -3537,17 +3535,21 @@ static bool walk_pte_range(pmd_t *pmd, unsigned long start, unsigned long end,
 	pmd_t pmdval;
 
 	pte = pte_offset_map_rw_nolock(args->mm, pmd, start & PMD_MASK, &pmdval, &ptl);
-	if (!pte)
-		return false;
+	if (!pte) {
+		*suitable = false;
+		return 0;
+	}
 
 	if (!spin_trylock(ptl)) {
 		pte_unmap(pte);
-		return true;
+		*suitable = true;
+		return 0;
 	}
 
 	if (unlikely(!pmd_same(pmdval, pmdp_get_lockless(pmd)))) {
 		pte_unmap_unlock(pte, ptl);
-		return false;
+		*suitable = false;
+		return 0;
 	}
 
 	arch_enter_lazy_mmu_mode();
@@ -3557,7 +3559,7 @@ static bool walk_pte_range(pmd_t *pmd, unsigned long start, unsigned long end,
 		struct folio *folio;
 		pte_t ptent = ptep_get(pte + i);
 
-		total++;
+		walk->nr_total_pte++;
 		walk->mm_stats[MM_LEAF_TOTAL]++;
 
 		pfn = get_pte_pfn(ptent, args->vma, addr, pgdat);
@@ -3581,23 +3583,34 @@ static bool walk_pte_range(pmd_t *pmd, unsigned long start, unsigned long end,
 		if (pte_dirty(ptent))
 			dirty = true;
 
-		young++;
+		walk->nr_young_pte++;
 		walk->mm_stats[MM_LEAF_YOUNG]++;
+
+		if (!walk->accessed_cb)
+			continue;
+
+		err = walk->accessed_cb(pfn);
+		if (err) {
+			walk->next_addr = addr + PAGE_SIZE;
+			break;
+		}
 	}
 
 	walk_update_folio(walk, last, gen, dirty);
 	last = NULL;
 
-	if (i < PTRS_PER_PTE && get_next_vma(PMD_MASK, PAGE_SIZE, args, &start, &end))
+	if (!err && i < PTRS_PER_PTE &&
+	    get_next_vma(PMD_MASK, PAGE_SIZE, args, &start, &end))
 		goto restart;
 
 	arch_leave_lazy_mmu_mode();
 	pte_unmap_unlock(pte, ptl);
 
-	return suitable_to_scan(total, young);
+	*suitable = suitable_to_scan(walk->nr_total_pte, walk->nr_young_pte);
+	return err;
 }
 
-static void walk_pmd_range_locked(pud_t *pud, unsigned long addr, struct vm_area_struct *vma,
+static int walk_pmd_range_locked(pud_t *pud, unsigned long addr, struct vm_area_struct *vma,
 				  struct mm_walk *args, unsigned long *bitmap, unsigned long *first)
 {
 	int i;
@@ -3610,6 +3623,7 @@ static void walk_pmd_range_locked(pud_t *pud, unsigned long addr, struct vm_area
 	struct pglist_data *pgdat = lruvec_pgdat(walk->lruvec);
 	DEFINE_MAX_SEQ(walk->lruvec);
 	int gen = lru_gen_from_seq(max_seq);
+	int err = 0;
 
 	VM_WARN_ON_ONCE(pud_leaf(*pud));
 
@@ -3617,13 +3631,13 @@ static void walk_pmd_range_locked(pud_t *pud, unsigned long addr, struct vm_area
 	if (*first == -1) {
 		*first = addr;
 		bitmap_zero(bitmap, MIN_LRU_BATCH);
-		return;
+		return 0;
 	}
 
 	i = addr == -1 ? 0 : pmd_index(addr) - pmd_index(*first);
 	if (i && i <= MIN_LRU_BATCH) {
 		__set_bit(i - 1, bitmap);
-		return;
+		return 0;
 	}
 
 	pmd = pmd_offset(pud, *first);
@@ -3673,6 +3687,16 @@ static void walk_pmd_range_locked(pud_t *pud, unsigned long addr, struct vm_area
 			dirty = true;
 
 		walk->mm_stats[MM_LEAF_YOUNG]++;
+		if (!walk->accessed_cb)
+			goto next;
+
+		err = walk->accessed_cb(pfn);
+		if (err) {
+			i = find_next_bit(bitmap, MIN_LRU_BATCH, i) + 1;
+
+			walk->next_addr = (*first & PMD_MASK) + i * PMD_SIZE;
+			break;
+		}
 next:
 		i = i > MIN_LRU_BATCH ? 0 : find_next_bit(bitmap, MIN_LRU_BATCH, i) + 1;
 	} while (i <= MIN_LRU_BATCH);
@@ -3683,9 +3707,10 @@ static void walk_pmd_range_locked(pud_t *pud, unsigned long addr, struct vm_area
 	spin_unlock(ptl);
 done:
 	*first = -1;
+	return err;
 }
 
-static void walk_pmd_range(pud_t *pud, unsigned long start, unsigned long end,
+static int walk_pmd_range(pud_t *pud, unsigned long start, unsigned long end,
 			   struct mm_walk *args)
 {
 	int i;
@@ -3697,6 +3722,7 @@ static void walk_pmd_range(pud_t *pud, unsigned long start, unsigned long end,
 	unsigned long first = -1;
 	struct lru_gen_mm_walk *walk = args->private;
 	struct lru_gen_mm_state *mm_state = get_mm_state(walk->lruvec);
+	int err = 0;
 
 	VM_WARN_ON_ONCE(pud_leaf(*pud));
 
@@ -3710,6 +3736,7 @@ static void walk_pmd_range(pud_t *pud, unsigned long start, unsigned long end,
 	/* walk_pte_range() may call get_next_vma() */
 	vma = args->vma;
 	for (i = pmd_index(start), addr = start; addr != end; i++, addr = next) {
+		bool suitable;
 		pmd_t val = pmdp_get_lockless(pmd + i);
 
 		next = pmd_addr_end(addr, end);
@@ -3726,7 +3753,10 @@ static void walk_pmd_range(pud_t *pud, unsigned long start, unsigned long end,
 			walk->mm_stats[MM_LEAF_TOTAL]++;
 
 			if (pfn != -1)
-				walk_pmd_range_locked(pud, addr, vma, args, bitmap, &first);
+				err = walk_pmd_range_locked(pud, addr, vma, args,
+						bitmap, &first);
+			if (err)
+				return err;
 			continue;
 		}
 
@@ -3735,33 +3765,50 @@ static void walk_pmd_range(pud_t *pud, unsigned long start, unsigned long end,
 			if (!pmd_young(val))
 				continue;
 
-			walk_pmd_range_locked(pud, addr, vma, args, bitmap, &first);
+			err = walk_pmd_range_locked(pud, addr, vma, args,
+						bitmap, &first);
+			if (err)
+				return err;
 		}
 
 		if (!walk->force_scan && !test_bloom_filter(mm_state, walk->seq, pmd + i))
 			continue;
 
+		err = walk_pte_range(&val, addr, next, args, &suitable);
+		if (err && walk->next_addr < next && first == -1)
+			return err;
+
+		walk->nr_total_pte = 0;
+		walk->nr_young_pte = 0;
+
 		walk->mm_stats[MM_NONLEAF_FOUND]++;
 
-		if (!walk_pte_range(&val, addr, next, args))
-			continue;
+		if (!suitable)
+			goto next;
 
 		walk->mm_stats[MM_NONLEAF_ADDED]++;
 
 		/* carry over to the next generation */
 		update_bloom_filter(mm_state, walk->seq + 1, pmd + i);
+next:
+		if (err) {
+			walk->next_addr = first;
+			return err;
+		}
 	}
 
-	walk_pmd_range_locked(pud, -1, vma, args, bitmap, &first);
+	err = walk_pmd_range_locked(pud, -1, vma, args, bitmap, &first);
 
-	if (i < PTRS_PER_PMD && get_next_vma(PUD_MASK, PMD_SIZE, args, &start, &end))
+	if (!err && i < PTRS_PER_PMD && get_next_vma(PUD_MASK, PMD_SIZE, args, &start, &end))
 		goto restart;
+
+	return err;
 }
 
 static int walk_pud_range(p4d_t *p4d, unsigned long start, unsigned long end,
 			  struct mm_walk *args)
 {
-	int i;
+	int i, err;
 	pud_t *pud;
 	unsigned long addr;
 	unsigned long next;
@@ -3779,7 +3826,9 @@ static int walk_pud_range(p4d_t *p4d, unsigned long start, unsigned long end,
 		if (!pud_present(val) || WARN_ON_ONCE(pud_leaf(val)))
 			continue;
 
-		walk_pmd_range(&val, addr, next, args);
+		err = walk_pmd_range(&val, addr, next, args);
+		if (err)
+			return err;
 
 		if (need_resched() || walk->batched >= MAX_LRU_BATCH) {
 			end = (addr | ~PUD_MASK) + 1;
@@ -3800,40 +3849,48 @@ static int walk_pud_range(p4d_t *p4d, unsigned long start, unsigned long end,
 	return -EAGAIN;
 }
 
-static void walk_mm(struct mm_struct *mm, struct lru_gen_mm_walk *walk)
+static int try_walk_mm(struct mm_struct *mm, struct lru_gen_mm_walk *walk)
 {
+	int err;
 	static const struct mm_walk_ops mm_walk_ops = {
 		.test_walk = should_skip_vma,
 		.p4d_entry = walk_pud_range,
 		.walk_lock = PGWALK_RDLOCK,
 	};
-	int err;
 	struct lruvec *lruvec = walk->lruvec;
 
-	walk->next_addr = FIRST_USER_ADDRESS;
+	DEFINE_MAX_SEQ(lruvec);
 
-	do {
-		DEFINE_MAX_SEQ(lruvec);
+	err = -EBUSY;
 
-		err = -EBUSY;
+	/* another thread might have called inc_max_seq() */
+	if (walk->seq != max_seq)
+		return err;
 
-		/* another thread might have called inc_max_seq() */
-		if (walk->seq != max_seq)
-			break;
+	/* the caller might be holding the lock for write */
+	if (mmap_read_trylock(mm)) {
+		err = walk_page_range(mm, walk->next_addr, ULONG_MAX,
+				      &mm_walk_ops, walk);
 
-		/* the caller might be holding the lock for write */
-		if (mmap_read_trylock(mm)) {
-			err = walk_page_range(mm, walk->next_addr, ULONG_MAX, &mm_walk_ops, walk);
+		mmap_read_unlock(mm);
+	}
 
-			mmap_read_unlock(mm);
-		}
+	if (walk->batched) {
+		spin_lock_irq(&lruvec->lru_lock);
+		reset_batch_size(walk);
+		spin_unlock_irq(&lruvec->lru_lock);
+	}
 
-		if (walk->batched) {
-			spin_lock_irq(&lruvec->lru_lock);
-			reset_batch_size(walk);
-			spin_unlock_irq(&lruvec->lru_lock);
-		}
+	return err;
+}
 
+static void walk_mm(struct mm_struct *mm, struct lru_gen_mm_walk *walk)
+{
+	int err;
+
+	walk->next_addr = FIRST_USER_ADDRESS;
+	do {
+		err = try_walk_mm(mm, walk);
 		cond_resched();
 	} while (err == -EAGAIN);
 }
@@ -4045,6 +4102,33 @@ static bool inc_max_seq(struct lruvec *lruvec, unsigned long seq, int swappiness
 	return success;
 }
 
+void lru_gen_scan_lruvec(struct lruvec *lruvec, unsigned long seq,
+			 int (*accessed_cb)(unsigned long), void (*flush_cb)(void))
+{
+	struct lru_gen_mm_walk *walk = current->reclaim_state->mm_walk;
+	struct mm_struct *mm = NULL;
+
+	walk->lruvec = lruvec;
+	walk->seq = seq;
+	walk->accessed_cb = accessed_cb;
+	walk->swappiness = MAX_SWAPPINESS;
+
+	do {
+		int err = -EBUSY;
+
+		iterate_mm_list(walk, &mm);
+		if (!mm)
+			break;
+
+		walk->next_addr = FIRST_USER_ADDRESS;
+		do {
+			err = try_walk_mm(mm, walk);
+			cond_resched();
+			flush_cb();
+		} while (err == -EAGAIN);
+	} while (mm);
+}
+
 static bool try_to_inc_max_seq(struct lruvec *lruvec, unsigned long seq,
 			       int swappiness, bool force_scan)
 {
-- 
2.34.1



^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [RFC PATCH v1 7/7] mm: klruscand: use mglru scanning for page promotion
  2025-08-14 13:48 [RFC PATCH v1 0/7] A subsystem for hot page detection and promotion Bharata B Rao
                   ` (5 preceding siblings ...)
  2025-08-14 13:48 ` [RFC PATCH v1 6/7] mm: mglru: generalize page table walk Bharata B Rao
@ 2025-08-14 13:48 ` Bharata B Rao
  2025-08-15 11:59 ` [RFC PATCH v1 0/7] A subsystem for hot page detection and promotion Balbir Singh
  7 siblings, 0 replies; 16+ messages in thread
From: Bharata B Rao @ 2025-08-14 13:48 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: Jonathan.Cameron, dave.hansen, gourry, hannes, mgorman, mingo,
	peterz, raghavendra.kt, riel, rientjes, sj, weixugc, willy,
	ying.huang, ziy, dave, nifan.cxl, xuezhengchu, yiannis, akpm,
	david, byungchul, kinseyho, joshua.hahnjy, yuanchu, balbirs,
	Bharata B Rao

From: Kinsey Ho <kinseyho@google.com>

Introduce a new kernel daemon, klruscand, that periodically invokes the
MGLRU page table walk. It leverages the new callbacks to gather access
information and forwards it to the pghot hot page tracking sub-system
for promotion decisions.

This benefits from reusing the existing MGLRU page table walk
infrastructure, which is optimized with features such as hierarchical
scanning and bloom filters to reduce CPU overhead.

As an additional optimization to be added in the future, we can tune
the scan intervals for each memcg.

Signed-off-by: Kinsey Ho <kinseyho@google.com>
Signed-off-by: Yuanchu Xie <yuanchu@google.com>
Signed-off-by: Bharata B Rao <bharata@amd.com>
	[Reduced the scan interval to 100ms, pfn_t to unsigned long]
---
 mm/Kconfig     |   8 ++++
 mm/Makefile    |   1 +
 mm/klruscand.c | 118 +++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 127 insertions(+)
 create mode 100644 mm/klruscand.c

diff --git a/mm/Kconfig b/mm/Kconfig
index 8b236eb874cf..6d53c1208729 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -1393,6 +1393,14 @@ config PGHOT
 	  by various sources. Asynchronous promotion is done by per-node
 	  kernel threads.
 
+config KLRUSCAND
+	bool "Kernel lower tier access scan daemon"
+	default y
+	depends on PGHOT && LRU_GEN_WALKS_MMU
+	help
+	  Scan for accesses from lower tiers by invoking MGLRU to perform
+	  page table walks.
+
 source "mm/damon/Kconfig"
 
 endmenu
diff --git a/mm/Makefile b/mm/Makefile
index 8799bd0c68ed..1d39ef55f3e5 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -148,3 +148,4 @@ obj-$(CONFIG_EXECMEM) += execmem.o
 obj-$(CONFIG_TMPFS_QUOTA) += shmem_quota.o
 obj-$(CONFIG_PT_RECLAIM) += pt_reclaim.o
 obj-$(CONFIG_PGHOT) += kpromoted.o
+obj-$(CONFIG_KLRUSCAND) += klruscand.o
diff --git a/mm/klruscand.c b/mm/klruscand.c
new file mode 100644
index 000000000000..1a51aab29bd9
--- /dev/null
+++ b/mm/klruscand.c
@@ -0,0 +1,118 @@
+// SPDX-License-Identifier: GPL-2.0-only
+#include <linux/memcontrol.h>
+#include <linux/kthread.h>
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+#include <linux/random.h>
+#include <linux/migrate.h>
+#include <linux/mm_inline.h>
+#include <linux/slab.h>
+#include <linux/sched/clock.h>
+#include <linux/memory-tiers.h>
+#include <linux/sched/mm.h>
+#include <linux/sched.h>
+#include <linux/pghot.h>
+
+#include "internal.h"
+
+#define KLRUSCAND_INTERVAL_MS 100
+#define BATCH_SIZE (2 << 16)
+
+static struct task_struct *scan_thread;
+static unsigned long pfn_batch[BATCH_SIZE];
+static int batch_index;
+
+static void flush_cb(void)
+{
+	int i = 0;
+
+	for (; i < batch_index; i++) {
+		u64 pfn = pfn_batch[i];
+
+		pghot_record_access((unsigned long)pfn, NUMA_NO_NODE,
+					PGHOT_PGTABLE_SCAN, jiffies);
+
+		if (i % 16 == 0)
+			cond_resched();
+	}
+	batch_index = 0;
+}
+
+static int accessed_cb(unsigned long pfn)
+{
+	if (batch_index >= BATCH_SIZE)
+		return -EAGAIN;
+
+	pfn_batch[batch_index++] = pfn;
+	return 0;
+}
+
+static int klruscand_run(void *unused)
+{
+	struct lru_gen_mm_walk *walk;
+
+	walk = kzalloc(sizeof(*walk),
+		       __GFP_HIGH | __GFP_NOMEMALLOC | __GFP_NOWARN);
+	if (!walk)
+		return -ENOMEM;
+
+	while (!kthread_should_stop()) {
+		unsigned long next_wake_time;
+		long sleep_time;
+		struct mem_cgroup *memcg;
+		int flags;
+		int nid;
+
+		next_wake_time = jiffies + msecs_to_jiffies(KLRUSCAND_INTERVAL_MS);
+
+		for_each_node_state(nid, N_MEMORY) {
+			pg_data_t *pgdat = NODE_DATA(nid);
+			struct reclaim_state rs = { 0 };
+
+			if (node_is_toptier(nid))
+				continue;
+
+			rs.mm_walk = walk;
+			set_task_reclaim_state(current, &rs);
+			flags = memalloc_noreclaim_save();
+
+			memcg = mem_cgroup_iter(NULL, NULL, NULL);
+			do {
+				struct lruvec *lruvec =
+					mem_cgroup_lruvec(memcg, pgdat);
+				unsigned long max_seq =
+					READ_ONCE((lruvec)->lrugen.max_seq);
+
+				lru_gen_scan_lruvec(lruvec, max_seq,
+						    accessed_cb, flush_cb);
+				cond_resched();
+			} while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)));
+
+			memalloc_noreclaim_restore(flags);
+			set_task_reclaim_state(current, NULL);
+			memset(walk, 0, sizeof(*walk));
+		}
+
+		sleep_time = next_wake_time - jiffies;
+		if (sleep_time > 0 && sleep_time != MAX_SCHEDULE_TIMEOUT)
+			schedule_timeout_idle(sleep_time);
+	}
+	kfree(walk);
+	return 0;
+}
+
+static int __init klruscand_init(void)
+{
+	struct task_struct *task;
+
+	task = kthread_run(klruscand_run, NULL, "klruscand");
+
+	if (IS_ERR(task)) {
+		pr_err("Failed to create klruscand kthread\n");
+		return PTR_ERR(task);
+	}
+
+	scan_thread = task;
+	return 0;
+}
+module_init(klruscand_init);
-- 
2.34.1



^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH v1 1/7] mm: migrate: Allow misplaced migration without VMA too
  2025-08-14 13:48 ` [RFC PATCH v1 1/7] mm: migrate: Allow misplaced migration without VMA too Bharata B Rao
@ 2025-08-15  1:29   ` Huang, Ying
  0 siblings, 0 replies; 16+ messages in thread
From: Huang, Ying @ 2025-08-15  1:29 UTC (permalink / raw)
  To: Bharata B Rao
  Cc: linux-kernel, linux-mm, Jonathan.Cameron, dave.hansen, gourry,
	hannes, mgorman, mingo, peterz, raghavendra.kt, riel, rientjes,
	sj, weixugc, willy, ziy, dave, nifan.cxl, xuezhengchu, yiannis,
	akpm, david, byungchul, kinseyho, joshua.hahnjy, yuanchu, balbirs

Bharata B Rao <bharata@amd.com> writes:

> We want isolation of misplaced folios to work in contexts
> where VMA isn't available. In order to prepare for that
> allow migrate_misplaced_folio_prepare() to be called with
> a NULL VMA.
>
> Signed-off-by: Bharata B Rao <bharata@amd.com>
> ---
>  mm/migrate.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 425401b2d4e1..7e356c0b1b5a 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -2619,7 +2619,8 @@ static struct folio *alloc_misplaced_dst_folio(struct folio *src,
>  
>  /*
>   * Prepare for calling migrate_misplaced_folio() by isolating the folio if
> - * permitted. Must be called with the PTL still held.
> + * permitted. Must be called with the PTL still held if called with a non-NULL
> + * vma.

The locking rule is changed.  IMO, it deserves more explanation in the patch description.

>   */
>  int migrate_misplaced_folio_prepare(struct folio *folio,
>  		struct vm_area_struct *vma, int node)
> @@ -2636,7 +2637,7 @@ int migrate_misplaced_folio_prepare(struct folio *folio,
>  		 * See folio_maybe_mapped_shared() on possible imprecision
>  		 * when we cannot easily detect if a folio is shared.
>  		 */
> -		if ((vma->vm_flags & VM_EXEC) && folio_maybe_mapped_shared(folio))
> +		if (vma && (vma->vm_flags & VM_EXEC) && folio_maybe_mapped_shared(folio))
>  			return -EACCES;
>  
>  		/*

---
Best Regards,
Huang, Ying


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH v1 2/7] migrate: implement migrate_misplaced_folios_batch
  2025-08-14 13:48 ` [RFC PATCH v1 2/7] migrate: implement migrate_misplaced_folios_batch Bharata B Rao
@ 2025-08-15  1:39   ` Huang, Ying
  0 siblings, 0 replies; 16+ messages in thread
From: Huang, Ying @ 2025-08-15  1:39 UTC (permalink / raw)
  To: Bharata B Rao
  Cc: linux-kernel, linux-mm, Jonathan.Cameron, dave.hansen, gourry,
	hannes, mgorman, mingo, peterz, raghavendra.kt, riel, rientjes,
	sj, weixugc, willy, ziy, dave, nifan.cxl, xuezhengchu, yiannis,
	akpm, david, byungchul, kinseyho, joshua.hahnjy, yuanchu, balbirs

Bharata B Rao <bharata@amd.com> writes:

> From: Gregory Price <gourry@gourry.net>
>
> A common operation in tiering is to migrate multiple pages at once.

Is it common now?  If so, you can replace some callers of
migrate_misplaced_folio() with migrate_misplaced_folios_batch().

> The migrate_misplaced_folio function requires one call for each

IMHO, "migrate_misplaced_folio()" is more concise than
"the migrate_misplaced_folio function".

> individual folio.  Expose a batch-variant of the same call for use
> when doing batch migrations.
>
> Signed-off-by: Gregory Price <gourry@gourry.net>
> Signed-off-by: Bharata B Rao <bharata@amd.com>
> ---
>  include/linux/migrate.h |  6 ++++++
>  mm/migrate.c            | 31 +++++++++++++++++++++++++++++++
>  2 files changed, 37 insertions(+)
>
> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
> index acadd41e0b5c..0593f5869be8 100644
> --- a/include/linux/migrate.h
> +++ b/include/linux/migrate.h
> @@ -107,6 +107,7 @@ static inline int migrate_huge_page_move_mapping(struct address_space *mapping,
>  int migrate_misplaced_folio_prepare(struct folio *folio,
>  		struct vm_area_struct *vma, int node);
>  int migrate_misplaced_folio(struct folio *folio, int node);
> +int migrate_misplaced_folios_batch(struct list_head *foliolist, int node);
>  #else
>  static inline int migrate_misplaced_folio_prepare(struct folio *folio,
>  		struct vm_area_struct *vma, int node)
> @@ -117,6 +118,11 @@ static inline int migrate_misplaced_folio(struct folio *folio, int node)
>  {
>  	return -EAGAIN; /* can't migrate now */
>  }
> +static inline int migrate_misplaced_folios_batch(struct list_head *foliolist,
> +						 int node)
> +{
> +	return -EAGAIN; /* can't migrate now */
> +}
>  #endif /* CONFIG_NUMA_BALANCING */
>  
>  #ifdef CONFIG_MIGRATION
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 7e356c0b1b5a..1268a95eda0e 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -2714,5 +2714,36 @@ int migrate_misplaced_folio(struct folio *folio, int node)
>  	BUG_ON(!list_empty(&migratepages));
>  	return nr_remaining ? -EAGAIN : 0;
>  }
> +
> +/*
> + * Batch variant of migrate_misplaced_folio. Attempts to migrate
> + * a folio list to the specified destination.
> + *
> + * Caller is expected to have isolated the folios by calling
> + * migrate_misplaced_folio_prepare(), which will result in an
> + * elevated reference count on the folio.
> + *
> + * This function will un-isolate the folios, dereference them, and
> + * remove them from the list before returning.
> + */
> +int migrate_misplaced_folios_batch(struct list_head *folio_list, int node)

In addition to working on a list of folios instead of a single folio, I
found there are some differences in memcg accounting between
migrate_misplaced_folios_batch() and migrate_misplaced_folio().  Why?

And, can we merge the implementation of two functions to reduce code
duplication?

> +{
> +	pg_data_t *pgdat = NODE_DATA(node);
> +	unsigned int nr_succeeded;
> +	int nr_remaining;
> +
> +	nr_remaining = migrate_pages(folio_list, alloc_misplaced_dst_folio,
> +				     NULL, node, MIGRATE_ASYNC,
> +				     MR_NUMA_MISPLACED, &nr_succeeded);
> +	if (nr_remaining)
> +		putback_movable_pages(folio_list);
> +
> +	if (nr_succeeded) {
> +		count_vm_numa_events(NUMA_PAGE_MIGRATE, nr_succeeded);
> +		mod_node_page_state(pgdat, PGPROMOTE_SUCCESS, nr_succeeded);
> +	}
> +	BUG_ON(!list_empty(folio_list));
> +	return nr_remaining ? -EAGAIN : 0;
> +}
>  #endif /* CONFIG_NUMA_BALANCING */
>  #endif /* CONFIG_NUMA */
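
To make the merge suggestion above concrete, one possible sketch
(ignoring for now the memcg accounting difference noted above) is to
implement the single-folio variant on top of the batch one:

int migrate_misplaced_folio(struct folio *folio, int node)
{
	LIST_HEAD(migratepages);

	list_add(&folio->lru, &migratepages);
	return migrate_misplaced_folios_batch(&migratepages, node);
}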

---
Best Regards,
Huang, Ying


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH v1 3/7] mm: Hot page tracking and promotion
  2025-08-14 13:48 ` [RFC PATCH v1 3/7] mm: Hot page tracking and promotion Bharata B Rao
@ 2025-08-15  1:56   ` Huang, Ying
  2025-08-15 14:16     ` Bharata B Rao
       [not found]   ` <CGME20250821111729epcas5p4b57cdfb4a339e8ac7fc1ea803d6baa34@epcas5p4.samsung.com>
  1 sibling, 1 reply; 16+ messages in thread
From: Huang, Ying @ 2025-08-15  1:56 UTC (permalink / raw)
  To: Bharata B Rao
  Cc: linux-kernel, linux-mm, Jonathan.Cameron, dave.hansen, gourry,
	hannes, mgorman, mingo, peterz, raghavendra.kt, riel, rientjes,
	sj, weixugc, willy, ziy, dave, nifan.cxl, xuezhengchu, yiannis,
	akpm, david, byungchul, kinseyho, joshua.hahnjy, yuanchu, balbirs

Bharata B Rao <bharata@amd.com> writes:

> This introduces a sub-system for collecting memory access
> information from different sources. It maintains the hotness
> information based on the access history and time of access.
>
> Additionally, it provides per-lowertier-node kernel threads
> (named kpromoted) that periodically promote the pages that
> are eligible for promotion.
>
> Sub-systems that generate hot page access info can report that
> using this API:
>
> int pghot_record_access(u64 pfn, int nid, int src,
> 			unsigned long time)
>
> @pfn: The PFN of the memory accessed
> @nid: The accessing NUMA node ID
> @src: The temperature source (sub-system) that generated the
>       access info
> @time: The access time in jiffies

How will the page hotness information gathered via NUMA balancing hint
page faults be expressed with this interface?

> Some temperature sources may not provide the nid from which
> the page was accessed. This is true for sources that use
> page table scanning for PTE Accessed bit. For such sources,
> the default toptier node to which such pages should be promoted
> is hard coded.
>
> Also, the access time provided by some sources may at best be
> considered approximate. This is especially true for hot pages
> detected by PTE A bit scanning.
>
> The hot PFN records are stored in hash lists hashed by PFN value.
> The PFN records that are categorized as hot enough to be promoted
> are maintained in a per-lowertier-node max heap from which
> kpromoted extracts and promotes them.
>
> Each record stores the following info:
>
> struct pghot_info {
> 	unsigned long pfn;
>
> 	unsigned long last_update; /* Most recent access time */
> 	int frequency; /* Number of accesses within current window */
> 	int nid; /* Most recent access from this node */
>
> 	struct hlist_node hnode;
> 	size_t heap_idx; /* Position in max heap for quick retreival */
> };
>
> The way in which a page is categorized as hot enough to be
> promoted is pretty primitive now.
>

[snip]

---
Best Regards,
Huang, Ying


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH v1 0/7] A subsystem for hot page detection and promotion
  2025-08-14 13:48 [RFC PATCH v1 0/7] A subsystem for hot page detection and promotion Bharata B Rao
                   ` (6 preceding siblings ...)
  2025-08-14 13:48 ` [RFC PATCH v1 7/7] mm: klruscand: use mglru scanning for page promotion Bharata B Rao
@ 2025-08-15 11:59 ` Balbir Singh
  2025-08-15 15:35   ` Bharata B Rao
  7 siblings, 1 reply; 16+ messages in thread
From: Balbir Singh @ 2025-08-15 11:59 UTC (permalink / raw)
  To: Bharata B Rao, linux-kernel, linux-mm
  Cc: Jonathan.Cameron, dave.hansen, gourry, hannes, mgorman, mingo,
	peterz, raghavendra.kt, riel, rientjes, sj, weixugc, willy,
	ying.huang, ziy, dave, nifan.cxl, xuezhengchu, yiannis, akpm,
	david, byungchul, kinseyho, joshua.hahnjy, yuanchu

On 8/14/25 23:48, Bharata B Rao wrote:
> Hi,
> 
> This patchset is about adding a dedicated sub-system for maintaining
> hot pages information from the lower tiers and promoting the hot pages
> to the top tiers. It exposes an API that other sub-systems which detect
> accesses, can use to report the accesses for further processing. Further
> processing includes system-wide accumulation of memory access info at
> PFN granularity, classification the PFNs as hot and promotion of hot
> pages using per-node kernel threads. This is a continuation of the
> earlier kpromoted work [1] that I posted a while back.
> 
> Kernel thread based async batch migration [2] was an off-shoot of
> this effort that attempted to batch the migrations from NUMA
> balancing by creating a separate kernel thread for migration.
> Per-page hotness information was stored as part of extended page
> flags. The kernel thread then scanned the entire PFN space to pick
> the PFNs that are classified as hot.
> 
> The observed challenges from the previous approaches were these:
> 
> 1. Too many PFNs need to be scanned to identify the hot PFNs in
>    approach [2].
> 2. Hot page records stored in hash lists become unwieldy for
>    extracting the required hot pages in approach [1].
> 3. Dynamic allocation vs static availability of space to store
>    per-page hotness information.
> 
> This series tries to address challenges 1 and 2 by maintaining
> the hot page records in hash lists for quick lookup and maintaining
> a separate per-target-node max heap for storing ready-to-migrate
> hot page records. The records in heap are priority-ordered based
> on "hotness" of the page.
> 

Could you elaborate on when/how a page is considered hot? Is it based
on how often a page has been scanned?

> The API for reporting the page access remains unchanged from [1].
> When the page access gets recorded, the hotness data of the page
> is updated and if it crosses a threshold, it gets tracked in the
> heap as well. These heaps are per-target-node and corresponding
> migrate threads will periodically extract the top records from
> them and do batch migration. 
> 

I don't quite follow the heaps and the tracking in the heap; could
you please clarify?

> In the current series, two page temperature sources are included
> as examples.
> 
> 1. IBS based memory access profiler.
> 2. PTE-A bit based access profiler for MGLRU. (from Kinsey Ho)
> 

Thanks,
Balbir



^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH v1 3/7] mm: Hot page tracking and promotion
  2025-08-15  1:56   ` Huang, Ying
@ 2025-08-15 14:16     ` Bharata B Rao
  0 siblings, 0 replies; 16+ messages in thread
From: Bharata B Rao @ 2025-08-15 14:16 UTC (permalink / raw)
  To: Huang, Ying
  Cc: linux-kernel, linux-mm, Jonathan.Cameron, dave.hansen, gourry,
	hannes, mgorman, mingo, peterz, raghavendra.kt, riel, rientjes,
	sj, weixugc, willy, ziy, dave, nifan.cxl, xuezhengchu, yiannis,
	akpm, david, byungchul, kinseyho, joshua.hahnjy, yuanchu, balbirs

On 15-Aug-25 7:26 AM, Huang, Ying wrote:
> Bharata B Rao <bharata@amd.com> writes:
> 
>> This introduces a sub-system for collecting memory access
>> information from different sources. It maintains the hotness
>> information based on the access history and time of access.
>>
>> Additionally, it provides per-lowertier-node kernel threads
>> (named kpromoted) that periodically promote the pages that
>> are eligible for promotion.
>>
>> Sub-systems that generate hot page access info can report that
>> using this API:
>>
>> int pghot_record_access(u64 pfn, int nid, int src,
>> 			unsigned long time)
>>
>> @pfn: The PFN of the memory accessed
>> @nid: The accessing NUMA node ID
>> @src: The temperature source (sub-system) that generated the
>>       access info
>> @time: The access time in jiffies
> 
> How will the page hotness information gather with NUMA balancing hint
> page fault be expressed with this interface?

Something like this can be done for reporting accesses detected by
NUMA balancing -
https://lore.kernel.org/linux-mm/20250616133931.206626-5-bharata@amd.com/

However, we need to bypass the hot page threshold detection and rate
limiting, and have the same or similar functionality implemented within
this patchset.
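
As a rough sketch (PGHOT_HINT_FAULT below is hypothetical; only
PGHOT_HW_HINTS and PGHOT_PGTABLE_SCAN exist in this series), the hint
fault path would end up doing something like:

	/* in the NUMA hint fault path, once the folio is known */
	pghot_record_access(folio_pfn(folio), numa_node_id(),
			    PGHOT_HINT_FAULT, jiffies);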

Regards,
Bharata.


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH v1 0/7] A subsystem for hot page detection and promotion
  2025-08-15 11:59 ` [RFC PATCH v1 0/7] A subsystem for hot page detection and promotion Balbir Singh
@ 2025-08-15 15:35   ` Bharata B Rao
  0 siblings, 0 replies; 16+ messages in thread
From: Bharata B Rao @ 2025-08-15 15:35 UTC (permalink / raw)
  To: Balbir Singh, linux-kernel, linux-mm
  Cc: Jonathan.Cameron, dave.hansen, gourry, hannes, mgorman, mingo,
	peterz, raghavendra.kt, riel, rientjes, sj, weixugc, willy,
	ying.huang, ziy, dave, nifan.cxl, xuezhengchu, yiannis, akpm,
	david, byungchul, kinseyho, joshua.hahnjy, yuanchu

On 15-Aug-25 5:29 PM, Balbir Singh wrote:
> On 8/14/25 23:48, Bharata B Rao wrote:
>> Hi,
>>
>> This patchset is about adding a dedicated sub-system for maintaining
>> hot pages information from the lower tiers and promoting the hot pages
>> to the top tiers. It exposes an API that other sub-systems which detect
>> accesses, can use to report the accesses for further processing. Further
>> processing includes system-wide accumulation of memory access info at
>> PFN granularity, classification the PFNs as hot and promotion of hot
>> pages using per-node kernel threads. This is a continuation of the
>> earlier kpromoted work [1] that I posted a while back.
>>
>> Kernel thread based async batch migration [2] was an off-shoot of
>> this effort that attempted to batch the migrations from NUMA
>> balancing by creating a separate kernel thread for migration.
>> Per-page hotness information was stored as part of extended page
>> flags. The kernel thread then scanned the entire PFN space to pick
>> the PFNs that are classified as hot.
>>
>> The observed challenges from the previous approaches were these:
>>
>> 1. Too many PFNs need to be scanned to identify the hot PFNs in
>>    approach [2].
>> 2. Hot page records stored in hash lists become unwieldy for
>>    extracting the required hot pages in approach [1].
>> 3. Dynamic allocation vs static availability of space to store
>>    per-page hotness information.
>>
>> This series tries to address challenges 1 and 2 by maintaining
>> the hot page records in hash lists for quick lookup and maintaining
>> a separate per-target-node max heap for storing ready-to-migrate
>> hot page records. The records in heap are priority-ordered based
>> on "hotness" of the page.
>>
> 
> Could you elaborate on when/how a page is considered hot? Is it based
> on how often a page has been scanned?

There are multiple sub-systems within the kernel that detect and
act upon page accesses; NUMA balancing (via hint faults) and MGLRU
(via page table scanning for the PTE Accessed bit) are two examples.
The idea behind this patchset is to consolidate such access
information within a new dedicated sub-system for hot page promotion
that maintains hotness data for accessed pages and promotes them
when a threshold is reached.

Currently I am considering only the number of accesses as an
indicator of page hotness. We need to consider the time of access
too, and both should contribute to the eventual "hotness" indicator.
Maybe something analogous to how memory tiering derives an abstract
distance (adistance) value from bandwidth and latency could be tried
out.
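
Purely as an illustration (not part of this series), such a combined
score could be as simple as decaying the access frequency by the age
of the last access; the decay step here is made up:

static unsigned int pghot_score(unsigned int frequency, unsigned long last_update)
{
	unsigned long windows = (jiffies - last_update) /
				msecs_to_jiffies(KPROMOTED_FREQ_WINDOW);

	/* halve the contribution of past accesses for every elapsed window */
	return frequency >> min(windows, 8UL);
}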

> 
>> The API for reporting the page access remains unchanged from [1].
>> When the page access gets recorded, the hotness data of the page
>> is updated and if it crosses a threshold, it gets tracked in the
>> heap as well. These heaps are per-target-node and corresponding
>> migrate threads will periodically extract the top records from
>> them and do batch migration. 
>>
> 
> I don't quite follow the heaps and tracking in the heap, could
> you please clarify

When different sub-systems report page accesses via the API
introduced by this new sub-system, a record for each such page
is stored in hash lists (hashed by PFN value). In addition to
the PFN and target_nid, the hotness record includes parameters
like frequency and time of access from which the hotness is
derived. Repeated reporting of accesses to the same PFN results in
the hotness information being updated. When the hotness of a record
(as updated while recording an access) crosses a threshold, the
record is also tracked in a max heap data structure. Records in the
max heap are ordered by hotness, so the top elements of the heap
correspond to the hottest pages. There is one such heap for each
toptier node, so that the per-toptier-node kpromoted thread can
easily extract the top N records from its own heap and perform
batched migration.
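
Condensed into the actual names from patch 3/7 (this just restates
the code, no new behaviour):

  pghot_record_access(pfn, nid, src, now)
    -> phi_lookup()/phi_alloc() in phi_hash[hash_min(pfn, phi_hash_order)]
    -> update phi->frequency, phi->last_update, phi->nid
    -> once phi->frequency >= KPRMOTED_FREQ_THRESHOLD,
       phi_heap_add_or_adjust() tracks the record in NODE_DATA(nid)->heap

  kpromoted (one thread per toptier node, woken every KPROMOTE_DELAY ms)
    -> phi_heap_extract() pops the hottest records and isolates their folios
    -> migrate_misplaced_folios_batch(&migrate_list, pgdat->node_id)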

Hope this clarifies.

Regards,
Bharata.


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH v1 3/7] mm: Hot page tracking and promotion
       [not found]   ` <CGME20250821111729epcas5p4b57cdfb4a339e8ac7fc1ea803d6baa34@epcas5p4.samsung.com>
@ 2025-08-21 11:17     ` Alok Rathore
  2025-08-21 15:10       ` Bharata B Rao
  0 siblings, 1 reply; 16+ messages in thread
From: Alok Rathore @ 2025-08-21 11:17 UTC (permalink / raw)
  To: Bharata B Rao
  Cc: linux-kernel, linux-mm, Jonathan.Cameron, dave.hansen, gourry,
	hannes, mgorman, mingo, peterz, raghavendra.kt, riel, rientjes,
	sj, weixugc, willy, ying.huang, ziy, dave, nifan.cxl, xuezhengchu,
	yiannis, akpm, david, byungchul, kinseyho, joshua.hahnjy, yuanchu,
	balbirs, alokrathore20, gost.dev, cpgs

[-- Attachment #1: Type: text/plain, Size: 22219 bytes --]

On 14/08/25 07:18PM, Bharata B Rao wrote:
>This introduces a sub-system for collecting memory access
>information from different sources. It maintains the hotness
>information based on the access history and time of access.
>
>Additionally, it provides per-lowertier-node kernel threads
>(named kpromoted) that periodically promote the pages that
>are eligible for promotion.
>
>Sub-systems that generate hot page access info can report that
>using this API:
>
>int pghot_record_access(u64 pfn, int nid, int src,
>			unsigned long time)
>
>@pfn: The PFN of the memory accessed
>@nid: The accessing NUMA node ID
>@src: The temperature source (sub-system) that generated the
>      access info
>@time: The access time in jiffies
>
>Some temperature sources may not provide the nid from which
>the page was accessed. This is true for sources that use
>page table scanning for PTE Accessed bit. For such sources,
>the default toptier node to which such pages should be promoted
>is hard coded.
>
>Also, the access time provided by some sources may at best be
>considered approximate. This is especially true for hot pages
>detected by PTE A bit scanning.
>
>The hot PFN records are stored in hash lists hashed by PFN value.
>The PFN records that are categorized as hot enough to be promoted
>are maintained in a per-lowertier-node max heap from which
>kpromoted extracts and promotes them.
>
>Each record stores the following info:
>
>struct pghot_info {
>	unsigned long pfn;
>
>	unsigned long last_update; /* Most recent access time */
>	int frequency; /* Number of accesses within current window */
>	int nid; /* Most recent access from this node */
>
>	struct hlist_node hnode;
>	size_t heap_idx; /* Position in max heap for quick retreival */
>};
>
>The way in which a page is categorized as hot enough to be
>promoted is pretty primitive now.
>
>Signed-off-by: Bharata B Rao <bharata@amd.com>
>---
> include/linux/mmzone.h        |  11 +
> include/linux/pghot.h         |  87 ++++++
> include/linux/vm_event_item.h |   9 +
> mm/Kconfig                    |  11 +
> mm/Makefile                   |   1 +
> mm/mm_init.c                  |  10 +
> mm/pghot.c                    | 501 ++++++++++++++++++++++++++++++++++
> mm/vmstat.c                   |   9 +
> 8 files changed, 639 insertions(+)
> create mode 100644 include/linux/pghot.h
> create mode 100644 mm/pghot.c
>
>diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
>index 0c5da9141983..f7094babed10 100644
>--- a/include/linux/mmzone.h
>+++ b/include/linux/mmzone.h
>@@ -1349,6 +1349,10 @@ struct memory_failure_stats {
> };
> #endif
>
>+#ifdef CONFIG_PGHOT
>+#include <linux/pghot.h>
>+#endif
>+
> /*
>  * On NUMA machines, each NUMA node would have a pg_data_t to describe
>  * it's memory layout. On UMA machines there is a single pglist_data which
>@@ -1497,6 +1501,13 @@ typedef struct pglist_data {
> #ifdef CONFIG_MEMORY_FAILURE
> 	struct memory_failure_stats mf_stats;
> #endif
>+#ifdef CONFIG_PGHOT
>+	struct task_struct *kpromoted;
>+	wait_queue_head_t kpromoted_wait;
>+	struct pghot_info **phi_buf;
>+	struct max_heap heap;
>+	spinlock_t heap_lock;
>+#endif
> } pg_data_t;
>
> #define node_present_pages(nid)	(NODE_DATA(nid)->node_present_pages)
>diff --git a/include/linux/pghot.h b/include/linux/pghot.h
>new file mode 100644
>index 000000000000..6b8496944e7f
>--- /dev/null
>+++ b/include/linux/pghot.h
>@@ -0,0 +1,87 @@
>+/* SPDX-License-Identifier: GPL-2.0 */
>+#ifndef _LINUX_KPROMOTED_H
>+#define _LINUX_KPROMOTED_H
>+
>+#include <linux/types.h>
>+#include <linux/init.h>
>+#include <linux/workqueue_types.h>
>+
>+/* Page hotness temperature sources */
>+enum pghot_src {
>+	PGHOT_HW_HINTS,
>+	PGHOT_PGTABLE_SCAN,
>+};
>+
>+#ifdef CONFIG_PGHOT
>+
>+#define KPROMOTED_FREQ_WINDOW	(5 * MSEC_PER_SEC)
>+
>+/* 2 accesses within a window will make the page a promotion candidate */
>+#define KPRMOTED_FREQ_THRESHOLD	2
>+
>+/*
>+ * The following two defines control the number of hash lists
>+ * that are maintained for tracking PFN accesses.
>+ */
>+#define PGHOT_HASH_PCT		50	/* % of lower tier memory pages to track */
>+#define PGHOT_HASH_ENTRIES	1024	/* Number of entries per list, ideal case */
>+
>+/*
>+ * Percentage of hash entries that can reside in heap as migrate-ready
>+ * candidates
>+ */
>+#define PGHOT_HEAP_PCT		25
>+
>+#define KPRMOTED_MIGRATE_BATCH	1024
>+
>+/*
>+ * If target NID isn't available, kpromoted promotes to node 0
>+ * by default.
>+ *
>+ * TODO: Need checks to validate that default node is indeed
>+ * present and is a toptier node.
>+ */
>+#define KPROMOTED_DEFAULT_NODE	0
>+
>+struct pghot_info {
>+	unsigned long pfn;
>+
>+	/*
>+	 * The following are the three fundamental parameters
>+	 * required to track the hotness of page/PFN.
>+	 *
>+	 * TODO:
>+	 * Check if these three can fit into a u32.
>+	 * With 3 bits for frequency (8 most recent accesses),
>+	 * 10 bits for nid (1024 nodes), the remaining 19 bits
>+	 * are available for timestamp.
>+	 */
>+	unsigned long last_update; /* Most recent access time */
>+	int frequency; /* Number of accesses within current window */
>+	int nid; /* Most recent access from this node */
>+
>+	struct hlist_node hnode;
>+	size_t heap_idx; /* Position in max heap for quick retreival */
>+};
>+
>+struct max_heap {
>+	size_t nr;
>+	size_t size;
>+	struct pghot_info **data;
>+	DECLARE_FLEX_ARRAY(struct pghot_info *, preallocated);
>+};
>+
>+/*
>+ * The wakeup interval of kpromoted threads
>+ */
>+#define KPROMOTE_DELAY	20	/* 20ms */
>+
>+int pghot_record_access(u64 pfn, int nid, int src, unsigned long now);
>+#else
>+static inline int pghot_record_access(u64 pfn, int nid, int src,
>+				      unsigned long now)
>+{
>+	return 0;
>+}
>+#endif /* CONFIG_PGHOT */
>+#endif /* _LINUX_KPROMOTED_H */
>diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
>index 9e15a088ba38..9085e5c2d4aa 100644
>--- a/include/linux/vm_event_item.h
>+++ b/include/linux/vm_event_item.h
>@@ -186,6 +186,15 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
> 		KSTACK_REST,
> #endif
> #endif /* CONFIG_DEBUG_STACK_USAGE */
>+		PGHOT_RECORDED_ACCESSES,
>+		PGHOT_RECORD_HWHINTS,
>+		PGHOT_RECORD_PGTSCANS,
>+		PGHOT_RECORDS_HASH,
>+		PGHOT_RECORDS_HEAP,
>+		KPROMOTED_RIGHT_NODE,
>+		KPROMOTED_NON_LRU,
>+		KPROMOTED_COLD_OLD,
>+		KPROMOTED_DROPPED,
> 		NR_VM_EVENT_ITEMS
> };
>
>diff --git a/mm/Kconfig b/mm/Kconfig
>index e443fe8cd6cf..8b236eb874cf 100644
>--- a/mm/Kconfig
>+++ b/mm/Kconfig
>@@ -1381,6 +1381,17 @@ config PT_RECLAIM
>
> 	  Note: now only empty user PTE page table pages will be reclaimed.
>
>+config PGHOT
>+	bool "Hot page tracking and promotion"
>+	def_bool y
>+	depends on NUMA && MIGRATION && MMU
>+	select MIN_HEAP
>+	help
>+	  A sub-system to track page accesses in lower tier memory and
>+	  maintain hot page information. Promotes hot pages from lower
>+	  tiers to top tier by using the memory access information provided
>+	  by various sources. Asynchronous promotion is done by per-node
>+	  kernel threads.
>
> source "mm/damon/Kconfig"
>
>diff --git a/mm/Makefile b/mm/Makefile
>index ef54aa615d9d..8799bd0c68ed 100644
>--- a/mm/Makefile
>+++ b/mm/Makefile
>@@ -147,3 +147,4 @@ obj-$(CONFIG_SHRINKER_DEBUG) += shrinker_debug.o
> obj-$(CONFIG_EXECMEM) += execmem.o
> obj-$(CONFIG_TMPFS_QUOTA) += shmem_quota.o
> obj-$(CONFIG_PT_RECLAIM) += pt_reclaim.o
>+obj-$(CONFIG_PGHOT) += kpromoted.o

Looks like the older file name was used by mistake. It should be pghot.o.

Can you please provide the base commit? I am unable to apply the patch cleanly using the b4 utility.

Regards,
Alok Rathore


>diff --git a/mm/mm_init.c b/mm/mm_init.c
>index 5c21b3af216b..f7992be3ff7f 100644
>--- a/mm/mm_init.c
>+++ b/mm/mm_init.c
>@@ -1402,6 +1402,15 @@ static void pgdat_init_kcompactd(struct pglist_data *pgdat)
> static void pgdat_init_kcompactd(struct pglist_data *pgdat) {}
> #endif
>
>+#ifdef CONFIG_PGHOT
>+static void pgdat_init_kpromoted(struct pglist_data *pgdat)
>+{
>+	init_waitqueue_head(&pgdat->kpromoted_wait);
>+}
>+#else
>+static void pgdat_init_kpromoted(struct pglist_data *pgdat) {}
>+#endif
>+
> static void __meminit pgdat_init_internals(struct pglist_data *pgdat)
> {
> 	int i;
>@@ -1411,6 +1420,7 @@ static void __meminit pgdat_init_internals(struct pglist_data *pgdat)
>
> 	pgdat_init_split_queue(pgdat);
> 	pgdat_init_kcompactd(pgdat);
>+	pgdat_init_kpromoted(pgdat);
>
> 	init_waitqueue_head(&pgdat->kswapd_wait);
> 	init_waitqueue_head(&pgdat->pfmemalloc_wait);
>diff --git a/mm/pghot.c b/mm/pghot.c
>new file mode 100644
>index 000000000000..eadcf970c3ef
>--- /dev/null
>+++ b/mm/pghot.c
>@@ -0,0 +1,501 @@
>+// SPDX-License-Identifier: GPL-2.0
>+/*
>+ * Maintains information about hot pages from slower tier nodes and
>+ * promotes them.
>+ *
>+ * Info about accessed pages are stored in hash lists indexed by PFN.
>+ * Info about pages that are hot enough to be promoted are stored in
>+ * a per-toptier-node max_heap.
>+ *
>+ * kpromoted is a kernel thread that runs on each toptier node and
>+ * promotes pages from max_heap.
>+ *
>+ * TODO:
>+ * - Compact pghot_info so that nid, time and frequency can fit
>+ * - Scalar hotness value as a function frequency and recency
>+ * - Possibility of moving migration rate limiting to kpromoted
>+ */
>+#include <linux/pghot.h>
>+#include <linux/kthread.h>
>+#include <linux/mmzone.h>
>+#include <linux/migrate.h>
>+#include <linux/memory-tiers.h>
>+#include <linux/slab.h>
>+#include <linux/sched.h>
>+#include <linux/vmalloc.h>
>+#include <linux/hashtable.h>
>+#include <linux/min_heap.h>
>+
>+struct pghot_hash {
>+	struct hlist_head hash;
>+	spinlock_t lock;
>+};
>+
>+static struct pghot_hash *phi_hash;
>+static int phi_hash_order;
>+static int phi_heap_entries;
>+static struct kmem_cache *phi_cache __ro_after_init;
>+static bool kpromoted_started __ro_after_init;
>+
>+static bool phi_heap_less(const void *lhs, const void *rhs, void *args)
>+{
>+	return (*(struct pghot_info **)lhs)->frequency >
>+		(*(struct pghot_info **)rhs)->frequency;
>+}
>+
>+static void phi_heap_swp(void *lhs, void *rhs, void *args)
>+{
>+	struct pghot_info **l = (struct pghot_info **)lhs;
>+	struct pghot_info **r = (struct pghot_info **)rhs;
>+	int lindex = l - (struct pghot_info **)args;
>+	int rindex = r - (struct pghot_info **)args;
>+	struct pghot_info *tmp = *l;
>+
>+	*l = *r;
>+	*r = tmp;
>+
>+	(*l)->heap_idx = lindex;
>+	(*r)->heap_idx = rindex;
>+}
>+
>+static const struct min_heap_callbacks phi_heap_cb = {
>+	.less = phi_heap_less,
>+	.swp = phi_heap_swp,
>+};
>+
>+static void phi_heap_update_entry(struct max_heap *phi_heap, struct pghot_info *phi)
>+{
>+	int orig_idx = phi->heap_idx;
>+
>+	min_heap_sift_up(phi_heap, phi->heap_idx, &phi_heap_cb,
>+			 phi_heap->data);
>+	if (phi_heap->data[phi->heap_idx]->heap_idx == orig_idx)
>+		min_heap_sift_down(phi_heap, phi->heap_idx,
>+				   &phi_heap_cb, phi_heap->data);
>+}
>+
>+static bool phi_heap_insert(struct max_heap *phi_heap, struct pghot_info *phi)
>+{
>+	if (phi_heap->nr >= phi_heap_entries)
>+		return false;
>+
>+	phi->heap_idx = phi_heap->nr;
>+	min_heap_push(phi_heap, &phi, &phi_heap_cb, phi_heap->data);
>+
>+	return true;
>+}
>+
>+static bool phi_is_pfn_hot(struct pghot_info *phi)
>+{
>+	struct page *page = pfn_to_online_page(phi->pfn);
>+	unsigned long now = jiffies;
>+	struct folio *folio;
>+
>+	if (!page || is_zone_device_page(page))
>+		return false;
>+
>+	folio = page_folio(page);
>+	if (!folio_test_lru(folio)) {
>+		count_vm_event(KPROMOTED_NON_LRU);
>+		return false;
>+	}
>+	if (folio_nid(folio) == phi->nid) {
>+		count_vm_event(KPROMOTED_RIGHT_NODE);
>+		return false;
>+	}
>+
>+	/* If the page was hot a while ago, don't promote */
>+	if ((now - phi->last_update) > 2 * msecs_to_jiffies(KPROMOTED_FREQ_WINDOW)) {
>+		count_vm_event(KPROMOTED_COLD_OLD);
>+		return false;
>+	}
>+	return true;
>+}
>+
>+static struct folio *kpromoted_isolate_folio(struct pghot_info *phi)
>+{
>+	struct page *page = pfn_to_page(phi->pfn);
>+	struct folio *folio;
>+
>+	if (!page)
>+		return NULL;
>+
>+	folio = page_folio(page);
>+	if (migrate_misplaced_folio_prepare(folio, NULL, phi->nid))
>+		return NULL;
>+	else
>+		return folio;
>+}
>+
>+static struct pghot_info *phi_alloc(unsigned long pfn)
>+{
>+	struct pghot_info *phi;
>+
>+	phi = kmem_cache_zalloc(phi_cache, GFP_NOWAIT);
>+	if (!phi)
>+		return NULL;
>+
>+	phi->pfn = pfn;
>+	phi->heap_idx = -1;
>+	return phi;
>+}
>+
>+static inline void phi_free(struct pghot_info *phi)
>+{
>+	kmem_cache_free(phi_cache, phi);
>+}
>+
>+static int phi_heap_extract(pg_data_t *pgdat, int batch_count, int freq_th,
>+			    struct list_head *migrate_list, int *count)
>+{
>+	spinlock_t *phi_heap_lock = &pgdat->heap_lock;
>+	struct max_heap *phi_heap = &pgdat->heap;
>+	int max_retries = 10;
>+	int bkt, i = 0;
>+
>+	if (batch_count < 0 || !migrate_list || !count || freq_th < 1 ||
>+	    freq_th > KPRMOTED_FREQ_THRESHOLD)
>+		return -EINVAL;
>+
>+	*count = 0;
>+	for (i = 0; i < batch_count; i++) {
>+		struct pghot_info *top = NULL;
>+		bool should_continue = false;
>+		struct folio *folio;
>+		int retries = 0;
>+
>+		while (retries < max_retries) {
>+			spin_lock(phi_heap_lock);
>+			if (phi_heap->nr > 0 && phi_heap->data[0]->frequency >= freq_th) {
>+				should_continue = true;
>+				bkt = hash_min(phi_heap->data[0]->pfn, phi_hash_order);
>+				top = phi_heap->data[0];
>+			}
>+			spin_unlock(phi_heap_lock);
>+
>+			if (!should_continue)
>+				goto done;
>+
>+			spin_lock(&phi_hash[bkt].lock);
>+			spin_lock(phi_heap_lock);
>+			if (phi_heap->nr == 0 || phi_heap->data[0] != top ||
>+			    phi_heap->data[0]->frequency < freq_th) {
>+				spin_unlock(phi_heap_lock);
>+				spin_unlock(&phi_hash[bkt].lock);
>+				retries++;
>+				continue;
>+			}
>+
>+			top = phi_heap->data[0];
>+			hlist_del_init(&top->hnode);
>+
>+			phi_heap->nr--;
>+			if (phi_heap->nr > 0) {
>+				phi_heap->data[0] = phi_heap->data[phi_heap->nr];
>+				phi_heap->data[0]->heap_idx = 0;
>+				min_heap_sift_down(phi_heap, 0, &phi_heap_cb,
>+						   phi_heap->data);
>+			}
>+
>+			spin_unlock(phi_heap_lock);
>+			spin_unlock(&phi_hash[bkt].lock);
>+
>+			if (!phi_is_pfn_hot(top)) {
>+				count_vm_event(KPROMOTED_DROPPED);
>+				goto skip;
>+			}
>+
>+			folio = kpromoted_isolate_folio(top);
>+			if (folio) {
>+				list_add(&folio->lru, migrate_list);
>+				(*count)++;
>+			}
>+skip:
>+			phi_free(top);
>+			break;
>+		}
>+		if (retries >= max_retries) {
>+			pr_warn("%s: Too many retries\n", __func__);
>+			break;
>+		}
>+
>+	}
>+done:
>+	return 0;
>+}
>+
>+static void phi_heap_add_or_adjust(struct pghot_info *phi)
>+{
>+	pg_data_t *pgdat = NODE_DATA(phi->nid);
>+	struct max_heap *phi_heap = &pgdat->heap;
>+
>+	spin_lock(&pgdat->heap_lock);
>+	if (phi->heap_idx >= 0 && phi->heap_idx < phi_heap->nr &&
>+	    phi_heap->data[phi->heap_idx] == phi) {
>+		/* Entry exists in heap */
>+		if (phi->frequency < KPRMOTED_FREQ_THRESHOLD) {
>+			/* Below threshold, remove from the heap */
>+			phi_heap->nr--;
>+			if (phi->heap_idx < phi_heap->nr) {
>+				phi_heap->data[phi->heap_idx] =
>+					phi_heap->data[phi_heap->nr];
>+				phi_heap->data[phi->heap_idx]->heap_idx =
>+					phi->heap_idx;
>+				min_heap_sift_down(phi_heap, phi->heap_idx,
>+						   &phi_heap_cb, phi_heap->data);
>+			}
>+			phi->heap_idx = -1;
>+
>+		} else {
>+			/* Update position in heap */
>+			phi_heap_update_entry(phi_heap, phi);
>+		}
>+	} else if (phi->frequency >= KPRMOTED_FREQ_THRESHOLD) {
>+		/* Add to the heap */
>+		if (phi_heap_insert(phi_heap, phi))
>+			count_vm_event(PGHOT_RECORDS_HEAP);
>+	}
>+	spin_unlock(&pgdat->heap_lock);
>+}
>+
>+static struct pghot_info *phi_lookup(unsigned long pfn, int bkt)
>+{
>+	struct pghot_info *phi;
>+
>+	hlist_for_each_entry(phi, &phi_hash[bkt].hash, hnode) {
>+		if (phi->pfn == pfn)
>+			return phi;
>+	}
>+	return NULL;
>+}
>+
>+/*
>+ * Called by subsystems that generate page hotness/access information.
>+ *
>+ *  @pfn: The PFN of the memory accessed
>+ *  @nid: The accessing NUMA node ID
>+ *  @src: The temperature source (sub-system) that generated the
>+ *        access info
>+ *  @time: The access time in jiffies
>+ *
>+ * Maintains the access records per PFN, classifies them as
>+ * hot based on subsequent accesses and finally hands over
>+ * them to kpromoted for migration.
>+ */
>+int pghot_record_access(u64 pfn, int nid, int src, unsigned long now)
>+{
>+	struct pghot_info *phi;
>+	struct page *page;
>+	struct folio *folio;
>+	int bkt;
>+	bool new_entry = false, new_window = false;
>+
>+	if (!kpromoted_started)
>+		return -EINVAL;
>+
>+	count_vm_event(PGHOT_RECORDED_ACCESSES);
>+
>+	switch (src) {
>+	case PGHOT_HW_HINTS:
>+		count_vm_event(PGHOT_RECORD_HWHINTS);
>+		break;
>+	case PGHOT_PGTABLE_SCAN:
>+		count_vm_event(PGHOT_RECORD_PGTSCANS);
>+		break;
>+	default:
>+		return -EINVAL;
>+	}
>+
>+	/*
>+	 * Record only accesses from lower tiers.
>+	 */
>+	if (node_is_toptier(pfn_to_nid(pfn)))
>+		return 0;
>+
>+	/*
>+	 * Reject the non-migratable pages right away.
>+	 */
>+	page = pfn_to_online_page(pfn);
>+	if (!page || is_zone_device_page(page))
>+		return 0;
>+
>+	folio = page_folio(page);
>+	if (!folio_test_lru(folio))
>+		return 0;
>+
>+	bkt = hash_min(pfn, phi_hash_order);
>+	spin_lock(&phi_hash[bkt].lock);
>+	phi = phi_lookup(pfn, bkt);
>+	if (!phi) {
>+		phi = phi_alloc(pfn);
>+		if (!phi)
>+			goto out;
>+		new_entry = true;
>+	}
>+
>+	if (((now - phi->last_update) > msecs_to_jiffies(KPROMOTED_FREQ_WINDOW)) ||
>+	    (nid != NUMA_NO_NODE && phi->nid != nid))
>+		new_window = true;
>+
>+	if (new_entry || new_window) {
>+		/* New window */
>+		phi->frequency = 1; /* TODO: Factor in the history */
>+	} else
>+		phi->frequency++;
>+	phi->last_update = now;
>+	phi->nid = (nid == NUMA_NO_NODE) ? KPROMOTED_DEFAULT_NODE : nid;
>+
>+	if (new_entry) {
>+		/* Insert the new entry into hash table */
>+		hlist_add_head(&phi->hnode, &phi_hash[bkt].hash);
>+		count_vm_event(PGHOT_RECORDS_HASH);
>+	} else {
>+		/* Add/update the position in heap */
>+		phi_heap_add_or_adjust(phi);
>+	}
>+out:
>+	spin_unlock(&phi_hash[bkt].lock);
>+	return 0;
>+}
>+
>+/*
>+ * Extract the hot page records and batch-migrate the
>+ * hot pages.
>+ */
>+static void kpromoted_migrate(pg_data_t *pgdat)
>+{
>+	int count, ret;
>+	LIST_HEAD(migrate_list);
>+
>+	/*
>+	 * Extract the top N elements from the heap that match
>+	 * the requested hotness threshold.
>+	 *
>+	 * PFNs ineligible from migration standpoint are removed
>+	 * from the heap and hash.
>+	 *
>+	 * Folios eligible for migration are isolated and returned
>+	 * in @migrate_list.
>+	 */
>+	ret = phi_heap_extract(pgdat, KPRMOTED_MIGRATE_BATCH,
>+			       KPRMOTED_FREQ_THRESHOLD, &migrate_list, &count);
>+	if (ret)
>+		return;
>+
>+	if (!list_empty(&migrate_list))
>+		migrate_misplaced_folios_batch(&migrate_list, pgdat->node_id);
>+}
>+
>+static int kpromoted(void *p)
>+{
>+	pg_data_t *pgdat = (pg_data_t *)p;
>+
>+	while (!kthread_should_stop()) {
>+		wait_event_timeout(pgdat->kpromoted_wait, false,
>+				   msecs_to_jiffies(KPROMOTE_DELAY));
>+		kpromoted_migrate(pgdat);
>+	}
>+	return 0;
>+}
>+
>+static int kpromoted_run(int nid)
>+{
>+	pg_data_t *pgdat = NODE_DATA(nid);
>+	int ret = 0;
>+
>+	if (!node_is_toptier(nid))
>+		return 0;
>+
>+	if (!pgdat->phi_buf) {
>+		pgdat->phi_buf = vzalloc_node(phi_heap_entries * sizeof(struct pghot_info *),
>+					      nid);
>+		if (!pgdat->phi_buf)
>+			return -ENOMEM;
>+
>+		min_heap_init(&pgdat->heap, pgdat->phi_buf, phi_heap_entries);
>+		spin_lock_init(&pgdat->heap_lock);
>+	}
>+
>+	if (!pgdat->kpromoted)
>+		pgdat->kpromoted = kthread_create_on_node(kpromoted, pgdat, nid,
>+							  "kpromoted%d", nid);
>+	if (IS_ERR(pgdat->kpromoted)) {
>+		ret = PTR_ERR(pgdat->kpromoted);
>+		pgdat->kpromoted = NULL;
>+		pr_info("Failed to start kpromoted%d, ret %d\n", nid, ret);
>+	} else {
>+		wake_up_process(pgdat->kpromoted);
>+	}
>+	return ret;
>+}
>+
>+static int __init pghot_init(void)
>+{
>+	unsigned int hash_size;
>+	size_t hash_entries;
>+	size_t nr_pages = 0;
>+	pg_data_t *pgdat;
>+	int i, nid, ret;
>+
>+	/*
>+	 * Arrive at the hash and heap sizes based on the
>+	 * number of pages present in the lower tier nodes.
>+	 */
>+	for_each_node_state(nid, N_MEMORY) {
>+		if (!node_is_toptier(nid))
>+			nr_pages += NODE_DATA(nid)->node_present_pages;
>+	}
>+
>+	if (!nr_pages)
>+		return 0;
>+
>+	hash_entries = nr_pages * PGHOT_HASH_PCT / 100;
>+	hash_size = hash_entries / PGHOT_HASH_ENTRIES;
>+	phi_hash_order = ilog2(hash_size);
>+
>+	phi_hash = vmalloc(sizeof(struct pghot_hash) * hash_size);
>+	if (!phi_hash) {
>+		ret = -ENOMEM;
>+		goto out;
>+	}
>+
>+	for (i = 0; i < hash_size; i++) {
>+		INIT_HLIST_HEAD(&phi_hash[i].hash);
>+		spin_lock_init(&phi_hash[i].lock);
>+	}
>+
>+	phi_cache = KMEM_CACHE(pghot_info, 0);
>+	if (unlikely(!phi_cache)) {
>+		ret = -ENOMEM;
>+		goto out;
>+	}
>+
>+	phi_heap_entries = hash_entries * PGHOT_HEAP_PCT / 100;
>+	for_each_node_state(nid, N_CPU) {
>+		ret = kpromoted_run(nid);
>+		if (ret)
>+			goto out_stop_kthread;
>+	}
>+
>+	kpromoted_started = true;
>+	pr_info("pghot: Started page hotness monitoring and promotion threads\n");
>+	pr_info("pghot: nr_pages %zu hash_size %u hash_entries %zu hash_order %d heap_entries %d\n",
>+	       nr_pages, hash_size, hash_entries, phi_hash_order, phi_heap_entries);
>+	return 0;
>+
>+out_stop_kthread:
>+	for_each_node_state(nid, N_CPU) {
>+		pgdat = NODE_DATA(nid);
>+		if (pgdat->kpromoted) {
>+			kthread_stop(pgdat->kpromoted);
>+			pgdat->kpromoted = NULL;
>+			vfree(pgdat->phi_buf);
>+		}
>+	}
>+out:
>+	kmem_cache_destroy(phi_cache);
>+	vfree(phi_hash);
>+	return ret;
>+}
>+
>+late_initcall(pghot_init);
>diff --git a/mm/vmstat.c b/mm/vmstat.c
>index 71cd1ceba191..9edbdd71c6f7 100644
>--- a/mm/vmstat.c
>+++ b/mm/vmstat.c
>@@ -1496,6 +1496,15 @@ const char * const vmstat_text[] = {
> #endif
> #undef I
> #endif /* CONFIG_VM_EVENT_COUNTERS */
>+	"pghot_recorded_accesses",
>+	"pghot_recorded_hwhints",
>+	"pghot_recorded_pgtscans",
>+	"pghot_records_hash",
>+	"pghot_records_heap",
>+	"kpromoted_right_node",
>+	"kpromoted_non_lru",
>+	"kpromoted_cold_old",
>+	"kpromoted_dropped",
> };
> #endif /* CONFIG_PROC_FS || CONFIG_SYSFS || CONFIG_NUMA || CONFIG_MEMCG */
>
>-- 
>2.34.1
>


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH v1 3/7] mm: Hot page tracking and promotion
  2025-08-21 11:17     ` Alok Rathore
@ 2025-08-21 15:10       ` Bharata B Rao
  0 siblings, 0 replies; 16+ messages in thread
From: Bharata B Rao @ 2025-08-21 15:10 UTC (permalink / raw)
  To: Alok Rathore
  Cc: linux-kernel, linux-mm, Jonathan.Cameron, dave.hansen, gourry,
	hannes, mgorman, mingo, peterz, raghavendra.kt, riel, rientjes,
	sj, weixugc, willy, ying.huang, ziy, dave, nifan.cxl, xuezhengchu,
	yiannis, akpm, david, byungchul, kinseyho, joshua.hahnjy, yuanchu,
	balbirs, alokrathore20, gost.dev, cpgs

On 21-Aug-25 4:47 PM, Alok Rathore wrote:
> On 14/08/25 07:18PM, Bharata B Rao wrote:
>>
>> diff --git a/mm/Makefile b/mm/Makefile
>> index ef54aa615d9d..8799bd0c68ed 100644
>> --- a/mm/Makefile
>> +++ b/mm/Makefile
>> @@ -147,3 +147,4 @@ obj-$(CONFIG_SHRINKER_DEBUG) += shrinker_debug.o
>> obj-$(CONFIG_EXECMEM) += execmem.o
>> obj-$(CONFIG_TMPFS_QUOTA) += shmem_quota.o
>> obj-$(CONFIG_PT_RECLAIM) += pt_reclaim.o
>> +obj-$(CONFIG_PGHOT) += kpromoted.o
> 
> Looks like the older file name was used by mistake. It should be pghot.o.
> 
> Can you please provide the base commit? I am unable to apply the patch cleanly using the b4 utility.

Right, sorry about this. The base commit is 8742b2d8935f476449ef37e263bc4da3295c7b58.
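
For reference, the corrected Makefile line would simply be:

  obj-$(CONFIG_PGHOT) += pghot.o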

The updated patchset is available from
https://github.com/AMDESE/linux-mm/tree/bharata/kpromoted-rfcv1
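
In case it helps, the branch can be fetched directly with something like
the following (illustrative commands; the branch name is taken from the
URL above):

  git fetch https://github.com/AMDESE/linux-mm bharata/kpromoted-rfcv1
  git checkout -b kpromoted-rfcv1 FETCH_HEAD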

Regards,
Bharata.


^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads: [~2025-08-21 15:10 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
2025-08-14 13:48 [RFC PATCH v1 0/7] A subsystem for hot page detection and promotion Bharata B Rao
2025-08-14 13:48 ` [RFC PATCH v1 1/7] mm: migrate: Allow misplaced migration without VMA too Bharata B Rao
2025-08-15  1:29   ` Huang, Ying
2025-08-14 13:48 ` [RFC PATCH v1 2/7] migrate: implement migrate_misplaced_folios_batch Bharata B Rao
2025-08-15  1:39   ` Huang, Ying
2025-08-14 13:48 ` [RFC PATCH v1 3/7] mm: Hot page tracking and promotion Bharata B Rao
2025-08-15  1:56   ` Huang, Ying
2025-08-15 14:16     ` Bharata B Rao
     [not found]   ` <CGME20250821111729epcas5p4b57cdfb4a339e8ac7fc1ea803d6baa34@epcas5p4.samsung.com>
2025-08-21 11:17     ` Alok Rathore
2025-08-21 15:10       ` Bharata B Rao
2025-08-14 13:48 ` [RFC PATCH v1 4/7] x86: ibs: In-kernel IBS driver for memory access profiling Bharata B Rao
2025-08-14 13:48 ` [RFC PATCH v1 5/7] x86: ibs: Enable IBS profiling for memory accesses Bharata B Rao
2025-08-14 13:48 ` [RFC PATCH v1 6/7] mm: mglru: generalize page table walk Bharata B Rao
2025-08-14 13:48 ` [RFC PATCH v1 7/7] mm: klruscand: use mglru scanning for page promotion Bharata B Rao
2025-08-15 11:59 ` [RFC PATCH v1 0/7] A subsystem for hot page detection and promotion Balbir Singh
2025-08-15 15:35   ` Bharata B Rao
