linux-mm.kvack.org archive mirror
* [PATCH v2 0/1] mm: introduce MADV_DEMOTE/MADV_PROMOTE
@ 2024-08-01  7:56 BiscuitOS Broiler
  2024-08-01  7:56 ` [PATCH v2 1/1] " BiscuitOS Broiler
  2024-08-01  8:06 ` [PATCH v2 0/1] " David Hildenbrand
  0 siblings, 2 replies; 8+ messages in thread
From: BiscuitOS Broiler @ 2024-08-01  7:56 UTC (permalink / raw)
  To: linux-mm, linux-kernel, akpm
  Cc: arnd, linux-arch, chris, jcmvbkbc, James.Bottomley, deller,
	linux-parisc, tsbogend, rdunlap, bhelgaas, linux-mips,
	richard.henderson, ink, mattst88, linux-alpha, jiaoxupo,
	zhou.haofan, zhang.renze

This series introduces the Scalable Tiered Memory Control (STMC)
mechanism.

**Background**

In the era when artificial intelligence, big data analytics, and
machine learning have become mainstream research topics and
application scenarios, the demand for high-capacity and high-
bandwidth memory in computers has become increasingly important.
The emergence of CXL (Compute Express Link) provides the
possibility of high-capacity memory. Although CXL TYPE3 devices
can provide large memory capacities, their access speed is lower
than traditional DRAM due to hardware architecture limitations.

To enjoy the large capacity brought by CXL memory while minimizing
the impact of high latency, Linux has introduced the Tiered Memory
architecture. In the Tiered Memory architecture, CXL memory is
treated as an independent, slower NUMA NODE, while DRAM is
considered as a relatively faster NUMA NODE. Applications allocate
memory from the local node, and Tiered Memory, leveraging memory
reclamation and NUMA Balancing mechanisms, can transparently demote
physical pages not recently accessed by user processes to the slower
CXL NUMA NODE. However, when user processes re-access the demoted
memory, the Tiered Memory mechanism will, based on certain logic,
decide whether to promote the demoted physical pages back to the
fast NUMA NODE. If the promotion is successful, the memory accessed
by the user process will reside in DRAM; otherwise, it will reside in
the CXL NODE. Through the Tiered Memory mechanism, Linux balances
between large memory capacity and latency, striving to maintain an
equilibrium for applications.

**Problem**
Although Tiered Memory strives to balance between large capacity and
latency, specific scenarios can lead to the following issues:

  1. In scenarios requiring massive computations, if data is heavily
     stored in CXL slow memory and Tiered Memory cannot promptly
     promote this memory to fast DRAM, it will significantly impact
     program performance.
  2. Similar to the scenario described in point 1, if Tiered Memory
     decides to promote these physical pages to fast DRAM NODE, but
     due to limitations in the DRAM NODE promote ratio, these physical
     pages cannot be promoted. Consequently, the program will keep
     running in slow memory.
  3. After an application finishes computing on a large block of fast
     memory, it may not immediately re-access it. Hence, this memory
     can only wait for the memory reclamation mechanism to demote it.
  4. Similar to the scenario described in point 3, if the demotion
     speed is slow, these cold pages will occupy the promotion
     resources, preventing some eligible slow pages from being
     immediately promoted, severely affecting application efficiency.

**Solution**
We propose the **Scalable Tiered Memory Control (STMC)** mechanism,
which delegates the authority of promoting and demoting memory to the
application. The principle is simple, as follows:

  1. When an application is preparing for computation, it can promote
     the memory it needs to use or ensure the memory resides on a fast
     NODE.
  2. When an application will not use the memory shortly, it can
     immediately demote the memory to slow memory, freeing up valuable
     promotion resources.

The STMC mechanism is implemented through the madvise system call,
providing two new advice options: MADV_DEMOTE and MADV_PROMOTE.
MADV_DEMOTE advises demoting the physical memory to the node where
slow memory resides; this advice fails only if there is no free
physical memory on the slow memory node. MADV_PROMOTE advises
retaining the physical memory in fast memory; this advice fails only
if there are no promotion slots available on the fast memory node.
A usage sketch follows the list below. Benefits brought by STMC
include:

  1. The STMC mechanism is a variant of on-demand memory management
     designed to let applications enjoy fast memory as much as possible,
     while actively demoting to slow memory when not in use, thus
     freeing up promotion slots for the NODE and allowing it to run in
     an optimized Tiered Memory environment.
  2. The STMC mechanism better balances large capacity and latency.
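
As an illustration, a compute phase could use the new hints as in the
sketch below. The MADV_DEMOTE/MADV_PROMOTE values are the ones proposed
by this series and are not in any released uapi header yet, so they are
defined locally; both calls are advisory, and the program stays correct
even if they fail.

  #include <stddef.h>
  #include <sys/mman.h>

  #ifndef MADV_DEMOTE
  #define MADV_DEMOTE   26      /* proposed by this series */
  #define MADV_PROMOTE  27      /* proposed by this series */
  #endif

  int main(void)
  {
          size_t len = 64UL << 20;        /* 64 MiB working set */
          char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

          if (buf == MAP_FAILED)
                  return 1;

          /* Hint: pull the working set onto a fast (DRAM) node. */
          madvise(buf, len, MADV_PROMOTE);

          /* Compute phase: touch every page of the buffer. */
          for (size_t i = 0; i < len; i += 4096)
                  buf[i]++;

          /*
           * Hint: push the buffer to a slow (CXL) node once the
           * phase is done, freeing promotion slots for others.
           */
          madvise(buf, len, MADV_DEMOTE);

          munmap(buf, len);
          return 0;
  }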

**Shortcomings of STMC**
The STMC mechanism requires the caller to manage memory demotion and
promotion. If the memory is not promptly demoted after a promotion,
it may cause issues similar to memory leaks, leading to short-term
promotion bottlenecks.

BiscuitOS Broiler (1):
  mm: introduce MADV_DEMOTE/MADV_PROMOTE

 arch/alpha/include/uapi/asm/mman.h           |   3 +
 arch/mips/include/uapi/asm/mman.h            |   3 +
 arch/parisc/include/uapi/asm/mman.h          |   3 +
 arch/xtensa/include/uapi/asm/mman.h          |   3 +
 include/uapi/asm-generic/mman-common.h       |   3 +
 mm/internal.h                                |   1 +
 mm/madvise.c                                 | 251 +++++++++++++++++++
 mm/vmscan.c                                  |  57 +++++
 tools/include/uapi/asm-generic/mman-common.h |   3 +
 9 files changed, 327 insertions(+)

-- 
2.34.1




* [PATCH v2 1/1] mm: introduce MADV_DEMOTE/MADV_PROMOTE
  2024-08-01  7:56 [PATCH v2 0/1] mm: introduce MADV_DEMOTE/MADV_PROMOTE BiscuitOS Broiler
@ 2024-08-01  7:56 ` BiscuitOS Broiler
  2024-08-01 19:25   ` Andrew Morton
  2024-08-01  8:06 ` [PATCH v2 0/1] " David Hildenbrand
  1 sibling, 1 reply; 8+ messages in thread
From: BiscuitOS Broiler @ 2024-08-01  7:56 UTC (permalink / raw)
  To: linux-mm, linux-kernel, akpm
  Cc: arnd, linux-arch, chris, jcmvbkbc, James.Bottomley, deller,
	linux-parisc, tsbogend, rdunlap, bhelgaas, linux-mips,
	richard.henderson, ink, mattst88, linux-alpha, jiaoxupo,
	zhou.haofan, zhang.renze

In a tiered memory architecture, when a process does not access memory
in the fast nodes for a long time, the kernel will demote the memory
to slower memory through a reclamation mechanism. This frees up the
fast memory for other processes. When the process accesses the demoted
memory again, the tiered memory system will, following certain
policies, promote it back to fast memory. Since memory demotion and
promotion in a tiered memory system do not occur instantly but require
a gradual process, this can severely impact the performance of programs
in high-performance computing scenarios.

This patch introduces new MADV_DEMOTE and MADV_PROMOTE hints to the
madvise syscall. MADV_DEMOTE can mark a range of memory pages as cold
pages and immediately demote them to slow memory. MADV_PROMOTE can mark
a range of memory pages as hot pages and immediately promote them to
fast memory, allowing applications to better balance large memory
capacity with latency.

Signed-off-by: BiscuitOS Broiler <zhang.renze@h3c.com>
---
 arch/alpha/include/uapi/asm/mman.h           |   3 +
 arch/mips/include/uapi/asm/mman.h            |   3 +
 arch/parisc/include/uapi/asm/mman.h          |   3 +
 arch/xtensa/include/uapi/asm/mman.h          |   3 +
 include/uapi/asm-generic/mman-common.h       |   3 +
 mm/internal.h                                |   1 +
 mm/madvise.c                                 | 251 +++++++++++++++++++
 mm/vmscan.c                                  |  57 +++++
 tools/include/uapi/asm-generic/mman-common.h |   3 +
 9 files changed, 327 insertions(+)

diff --git a/arch/alpha/include/uapi/asm/mman.h b/arch/alpha/include/uapi/asm/mman.h
index 763929e814e9..98e7609d51ab 100644
--- a/arch/alpha/include/uapi/asm/mman.h
+++ b/arch/alpha/include/uapi/asm/mman.h
@@ -78,6 +78,9 @@
 
 #define MADV_COLLAPSE	25		/* Synchronous hugepage collapse */
 
+#define MADV_DEMOTE	26		/* Demote page into slow node */
+#define MADV_PROMOTE	27		/* Promote page into fast node */
+
 /* compatibility flags */
 #define MAP_FILE	0
 
diff --git a/arch/mips/include/uapi/asm/mman.h b/arch/mips/include/uapi/asm/mman.h
index 9c48d9a21aa0..aae4cd01c20d 100644
--- a/arch/mips/include/uapi/asm/mman.h
+++ b/arch/mips/include/uapi/asm/mman.h
@@ -105,6 +105,9 @@
 
 #define MADV_COLLAPSE	25		/* Synchronous hugepage collapse */
 
+#define MADV_DEMOTE	26		/* Demote page into slow node */
+#define MADV_PROMOTE	27		/* Promote page into fast node */
+
 /* compatibility flags */
 #define MAP_FILE	0
 
diff --git a/arch/parisc/include/uapi/asm/mman.h b/arch/parisc/include/uapi/asm/mman.h
index 68c44f99bc93..8b50ac91d0c9 100644
--- a/arch/parisc/include/uapi/asm/mman.h
+++ b/arch/parisc/include/uapi/asm/mman.h
@@ -72,6 +72,9 @@
 
 #define MADV_COLLAPSE	25		/* Synchronous hugepage collapse */
 
+#define MADV_DEMOTE	26		/* Demote page into slow node */
+#define MADV_PROMOTE	27		/* Promote page into fast node */
+
 #define MADV_HWPOISON     100		/* poison a page for testing */
 #define MADV_SOFT_OFFLINE 101		/* soft offline page for testing */
 
diff --git a/arch/xtensa/include/uapi/asm/mman.h b/arch/xtensa/include/uapi/asm/mman.h
index 1ff0c858544f..8f820d4f5901 100644
--- a/arch/xtensa/include/uapi/asm/mman.h
+++ b/arch/xtensa/include/uapi/asm/mman.h
@@ -113,6 +113,9 @@
 
 #define MADV_COLLAPSE	25		/* Synchronous hugepage collapse */
 
+#define MADV_DEMOTE	26		/* Demote page into slow node */
+#define MADV_PROMOTE	27		/* Promote page into fast node */
+
 /* compatibility flags */
 #define MAP_FILE	0
 
diff --git a/include/uapi/asm-generic/mman-common.h b/include/uapi/asm-generic/mman-common.h
index 6ce1f1ceb432..52222c2245a8 100644
--- a/include/uapi/asm-generic/mman-common.h
+++ b/include/uapi/asm-generic/mman-common.h
@@ -79,6 +79,9 @@
 
 #define MADV_COLLAPSE	25		/* Synchronous hugepage collapse */
 
+#define MADV_DEMOTE	26		/* Demote page into slow node */
+#define MADV_PROMOTE	27		/* Promote page into fast node */
+
 /* compatibility flags */
 #define MAP_FILE	0
 
diff --git a/mm/internal.h b/mm/internal.h
index 7a3bcc6d95e7..105c2621e335 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1096,6 +1096,7 @@ extern unsigned long  __must_check vm_mmap_pgoff(struct file *, unsigned long,
 extern void set_pageblock_order(void);
 struct folio *alloc_migrate_folio(struct folio *src, unsigned long private);
 unsigned long reclaim_pages(struct list_head *folio_list);
+unsigned long demotion_pages(struct list_head *folio_list);
 unsigned int reclaim_clean_pages_from_list(struct zone *zone,
 					    struct list_head *folio_list);
 /* The ALLOC_WMARK bits are used as an index to zone->watermark */
diff --git a/mm/madvise.c b/mm/madvise.c
index 89089d84f8df..9e41936a2dc5 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -31,6 +31,9 @@
 #include <linux/swapops.h>
 #include <linux/shmem_fs.h>
 #include <linux/mmu_notifier.h>
+#include <linux/memory-tiers.h>
+#include <linux/migrate.h>
+#include <linux/sched/numa_balancing.h>
 
 #include <asm/tlb.h>
 
@@ -56,6 +59,8 @@ static int madvise_need_mmap_write(int behavior)
 	case MADV_DONTNEED_LOCKED:
 	case MADV_COLD:
 	case MADV_PAGEOUT:
+	case MADV_DEMOTE:
+	case MADV_PROMOTE:
 	case MADV_FREE:
 	case MADV_POPULATE_READ:
 	case MADV_POPULATE_WRITE:
@@ -639,6 +644,242 @@ static long madvise_pageout(struct vm_area_struct *vma,
 	return 0;
 }
 
+static int madvise_demotion_pte_range(pmd_t *pmd,
+				unsigned long addr, unsigned long end,
+				struct mm_walk *walk)
+{
+	struct mmu_gather *tlb = walk->private;
+	struct vm_area_struct *vma = walk->vma;
+	struct mm_struct *mm = tlb->mm;
+	pte_t *start_pte, *pte, ptent;
+	struct folio *folio = NULL;
+	LIST_HEAD(folio_list);
+	spinlock_t *ptl;
+	int nid;
+
+	if (fatal_signal_pending(current))
+		return -EINTR;
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	if (pmd_trans_huge(*pmd))
+		return 0;
+#endif
+	tlb_change_page_size(tlb, PAGE_SIZE);
+	start_pte = pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
+	if (!start_pte)
+		return 0;
+	flush_tlb_batched_pending(mm);
+	arch_enter_lazy_mmu_mode();
+	for (; addr < end; pte++, addr += PAGE_SIZE) {
+		ptent = ptep_get(pte);
+
+		if (pte_none(ptent))
+			continue;
+
+		if (!pte_present(ptent))
+			continue;
+
+		folio = vm_normal_folio(vma, addr, ptent);
+		if (!folio || folio_is_zone_device(folio))
+			continue;
+
+		if (folio_test_large(folio))
+			continue;
+
+		if (!folio_test_anon(folio))
+			continue;
+
+		nid = folio_nid(folio);
+		if (!node_is_toptier(nid))
+			continue;
+
+		/* no tiered memory node */
+		if (next_demotion_node(nid) == NUMA_NO_NODE)
+			continue;
+
+		/*
+		 * Do not interfere with other mappings of this folio and
+		 * skip non-LRU folios. Large folios were skipped above, so
+		 * a mapcount equal to the folio's page count means the
+		 * folio is mapped exclusively by this process.
+		 */
+		if (!folio_test_lru(folio) ||
+		    folio_mapcount(folio) != folio_nr_pages(folio))
+			continue;
+
+		folio_clear_referenced(folio);
+		folio_test_clear_young(folio);
+		if (folio_test_active(folio))
+			folio_set_workingset(folio);
+		if (folio_isolate_lru(folio)) {
+			if (folio_test_unevictable(folio))
+				folio_putback_lru(folio);
+			else
+				list_add(&folio->lru, &folio_list);
+		}
+	}
+
+	if (start_pte) {
+		arch_leave_lazy_mmu_mode();
+		pte_unmap_unlock(start_pte, ptl);
+	}
+
+	demotion_pages(&folio_list);
+	cond_resched();
+
+	return 0;
+}
+
+static const struct mm_walk_ops demotion_walk_ops = {
+	.pmd_entry = madvise_demotion_pte_range,
+	.walk_lock = PGWALK_RDLOCK,
+};
+
+static void madvise_demotion_page_range(struct mmu_gather *tlb,
+			     struct vm_area_struct *vma,
+			     unsigned long addr, unsigned long end)
+{
+	tlb_start_vma(tlb, vma);
+	walk_page_range(vma->vm_mm, addr, end, &demotion_walk_ops, tlb);
+	tlb_end_vma(tlb, vma);
+}
+
+static long madvise_demotion(struct vm_area_struct *vma,
+			struct vm_area_struct **prev,
+			unsigned long start_addr, unsigned long end_addr)
+{
+	struct mm_struct *mm = vma->vm_mm;
+	struct mmu_gather tlb;
+
+	*prev = vma;
+	if (!can_madv_lru_vma(vma))
+		return -EINVAL;
+
+	if (!numa_demotion_enabled && !vma_is_anonymous(vma) &&
+				(vma->vm_flags & VM_MAYSHARE))
+		return 0;
+
+	lru_add_drain();
+	tlb_gather_mmu(&tlb, mm);
+	madvise_demotion_page_range(&tlb, vma, start_addr, end_addr);
+	tlb_finish_mmu(&tlb);
+
+	return 0;
+}
+
+static int madvise_promotion_pte_range(pmd_t *pmd,
+				unsigned long addr, unsigned long end,
+				struct mm_walk *walk)
+{
+	struct mmu_gather *tlb = walk->private;
+	struct vm_area_struct *vma = walk->vma;
+	struct mm_struct *mm = tlb->mm;
+	struct folio *folio = NULL;
+	LIST_HEAD(folio_list);
+	int nid, target_nid;
+	pte_t *pte, ptent;
+	spinlock_t *ptl;
+
+	if (fatal_signal_pending(current))
+		return -EINTR;
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	if (pmd_trans_huge(*pmd))
+		return 0;
+#endif
+	tlb_change_page_size(tlb, PAGE_SIZE);
+	pte = pte_offset_map_nolock(vma->vm_mm, pmd, addr, &ptl);
+	if (!pte)
+		return 0;
+	flush_tlb_batched_pending(mm);
+	arch_enter_lazy_mmu_mode();
+	for (; addr < end; pte++, addr += PAGE_SIZE) {
+		ptent = ptep_get(pte);
+
+		if (pte_none(ptent))
+			continue;
+
+		if (!pte_present(ptent))
+			continue;
+
+		folio = vm_normal_folio(vma, addr, ptent);
+		if (!folio || folio_is_zone_device(folio))
+			continue;
+
+		if (folio_test_large(folio))
+			continue;
+
+		if (!folio_test_anon(folio))
+			continue;
+
+		/* skip page on fast node */
+		nid = folio_nid(folio);
+		if (node_is_toptier(nid))
+			continue;
+
+		if (!folio_test_lru(folio) ||
+		    folio_mapcount(folio) != folio_nr_pages(folio))
+			continue;
+
+		/* force update folio last access time */
+		folio_xchg_access_time(folio, jiffies_to_msecs(jiffies));
+
+		target_nid = numa_node_id();
+		if (!should_numa_migrate_memory(current, folio, nid, target_nid))
+			continue;
+
+		/* prepare to promote */
+		if (!folio_isolate_lru(folio))
+			continue;
+
+		/* promote page directly */
+		migrate_misplaced_folio(folio, vma, target_nid);
+		tlb_remove_tlb_entry(tlb, pte, addr);
+	}
+
+	arch_leave_lazy_mmu_mode();
+	cond_resched();
+
+	return 0;
+}
+
+static const struct mm_walk_ops promotion_walk_ops = {
+	.pmd_entry = madvise_promotion_pte_range,
+	.walk_lock = PGWALK_RDLOCK,
+};
+
+static void madvise_promotion_page_range(struct mmu_gather *tlb,
+			     struct vm_area_struct *vma,
+			     unsigned long addr, unsigned long end)
+{
+	tlb_start_vma(tlb, vma);
+	walk_page_range(vma->vm_mm, addr, end, &promotion_walk_ops, tlb);
+	tlb_end_vma(tlb, vma);
+}
+
+static long madvise_promotion(struct vm_area_struct *vma,
+			struct vm_area_struct **prev,
+			unsigned long start_addr, unsigned long end_addr)
+{
+	struct mm_struct *mm = vma->vm_mm;
+	struct mmu_gather tlb;
+
+	*prev = vma;
+	if (!can_madv_lru_vma(vma))
+		return -EINVAL;
+
+	if (!numa_demotion_enabled && !vma_is_anonymous(vma) &&
+				(vma->vm_flags & VM_MAYSHARE))
+		return 0;
+
+	lru_add_drain();
+	tlb_gather_mmu(&tlb, mm);
+	madvise_promotion_page_range(&tlb, vma, start_addr, end_addr);
+	tlb_finish_mmu(&tlb);
+
+	return 0;
+}
+
 static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
 				unsigned long end, struct mm_walk *walk)
 
@@ -1040,6 +1281,10 @@ static int madvise_vma_behavior(struct vm_area_struct *vma,
 		return madvise_cold(vma, prev, start, end);
 	case MADV_PAGEOUT:
 		return madvise_pageout(vma, prev, start, end);
+	case MADV_DEMOTE:
+		return madvise_demotion(vma, prev, start, end);
+	case MADV_PROMOTE:
+		return madvise_promotion(vma, prev, start, end);
 	case MADV_FREE:
 	case MADV_DONTNEED:
 	case MADV_DONTNEED_LOCKED:
@@ -1179,6 +1424,8 @@ madvise_behavior_valid(int behavior)
 	case MADV_FREE:
 	case MADV_COLD:
 	case MADV_PAGEOUT:
+	case MADV_DEMOTE:
+	case MADV_PROMOTE:
 	case MADV_POPULATE_READ:
 	case MADV_POPULATE_WRITE:
 #ifdef CONFIG_KSM
@@ -1210,6 +1457,8 @@ static bool process_madvise_behavior_valid(int behavior)
 	switch (behavior) {
 	case MADV_COLD:
 	case MADV_PAGEOUT:
+	case MADV_DEMOTE:
+	case MADV_PROMOTE:
 	case MADV_WILLNEED:
 	case MADV_COLLAPSE:
 		return true;
@@ -1391,6 +1640,8 @@ int madvise_set_anon_name(struct mm_struct *mm, unsigned long start,
  *		triggering read faults if required
  *  MADV_POPULATE_WRITE - populate (prefault) page tables writable by
  *		triggering write faults if required
+ *  MADV_DEMOTE  - the application forces pages onto a slow node.
+ *  MADV_PROMOTE - the application forces pages onto a fast node.
  *
  * return values:
  *  zero    - success
diff --git a/mm/vmscan.c b/mm/vmscan.c
index c89d0551655e..88d7a1dd05a0 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2185,6 +2185,63 @@ unsigned long reclaim_pages(struct list_head *folio_list)
 	return nr_reclaimed;
 }
 
+static unsigned int demotion_folio_list(struct list_head *folio_list,
+				      struct pglist_data *pgdat)
+{
+	struct reclaim_stat dummy_stat;
+	unsigned int nr_demoted;
+	struct folio *folio;
+	struct scan_control sc = {
+		.gfp_mask = GFP_KERNEL,
+		.may_writepage = 1,
+		.may_unmap = 1,
+		.may_swap = 1,
+		.no_demotion = 0,
+	};
+
+	nr_demoted = shrink_folio_list(folio_list, pgdat, &sc, &dummy_stat, true);
+	while (!list_empty(folio_list)) {
+		folio = lru_to_folio(folio_list);
+		list_del(&folio->lru);
+		folio_putback_lru(folio);
+	}
+
+	return nr_demoted;
+}
+
+unsigned long demotion_pages(struct list_head *folio_list)
+{
+	unsigned int nr_demoted = 0;
+	LIST_HEAD(node_folio_list);
+	unsigned int noreclaim_flag;
+	int nid;
+
+	if (list_empty(folio_list))
+		return nr_demoted;
+
+	noreclaim_flag = memalloc_noreclaim_save();
+
+	nid = folio_nid(lru_to_folio(folio_list));
+	do {
+		struct folio *folio = lru_to_folio(folio_list);
+
+		if (nid == folio_nid(folio)) {
+			folio_clear_active(folio);
+			list_move(&folio->lru, &node_folio_list);
+			continue;
+		}
+
+		nr_demoted += demotion_folio_list(&node_folio_list, NODE_DATA(nid));
+		nid = folio_nid(lru_to_folio(folio_list));
+	} while (!list_empty(folio_list));
+
+	nr_demoted += demotion_folio_list(&node_folio_list, NODE_DATA(nid));
+
+	memalloc_noreclaim_restore(noreclaim_flag);
+
+	return nr_demoted;
+}
+
 static unsigned long shrink_list(enum lru_list lru, unsigned long nr_to_scan,
 				 struct lruvec *lruvec, struct scan_control *sc)
 {
diff --git a/tools/include/uapi/asm-generic/mman-common.h b/tools/include/uapi/asm-generic/mman-common.h
index 6ce1f1ceb432..52222c2245a8 100644
--- a/tools/include/uapi/asm-generic/mman-common.h
+++ b/tools/include/uapi/asm-generic/mman-common.h
@@ -79,6 +79,9 @@
 
 #define MADV_COLLAPSE	25		/* Synchronous hugepage collapse */
 
+#define MADV_DEMOTE	26		/* Demote page into slow node */
+#define MADV_PROMOTE	27		/* Promote page into fast node */
+
 /* compatibility flags */
 #define MAP_FILE	0
 
-- 
2.34.1




* Re: [PATCH v2 0/1] mm: introduce MADV_DEMOTE/MADV_PROMOTE
  2024-08-01  7:56 [PATCH v2 0/1] mm: introduce MADV_DEMOTE/MADV_PROMOTE BiscuitOS Broiler
  2024-08-01  7:56 ` [PATCH v2 1/1] " BiscuitOS Broiler
@ 2024-08-01  8:06 ` David Hildenbrand
  1 sibling, 0 replies; 8+ messages in thread
From: David Hildenbrand @ 2024-08-01  8:06 UTC (permalink / raw)
  To: BiscuitOS Broiler, linux-mm, linux-kernel, akpm
  Cc: arnd, linux-arch, chris, jcmvbkbc, James.Bottomley, deller,
	linux-parisc, tsbogend, rdunlap, bhelgaas, linux-mips,
	richard.henderson, ink, mattst88, linux-alpha, jiaoxupo,
	zhou.haofan

On 01.08.24 09:56, BiscuitOS Broiler wrote:
> This series introduces the Scalable Tiered Memory Control (STMC)
> mechanism.
> 
> **Background**
> 
> In the era when artificial intelligence, big data analytics, and
> machine learning have become mainstream research topics and
> application scenarios, the demand for high-capacity and high-
> bandwidth memory in computers has become increasingly important.
> The emergence of CXL (Compute Express Link) provides the
> possibility of high-capacity memory. Although CXL TYPE3 devices
> can provide large memory capacities, their access speed is lower
> than traditional DRAM due to hardware architecture limitations.
> 
> To enjoy the large capacity brought by CXL memory while minimizing
> the impact of high latency, Linux has introduced the Tiered Memory
> architecture. In the Tiered Memory architecture, CXL memory is
> treated as an independent, slower NUMA NODE, while DRAM is
> considered as a relatively faster NUMA NODE. Applications allocate
> memory from the local node, and Tiered Memory, leveraging memory
> reclamation and NUMA Balancing mechanisms, can transparently demote
> physical pages not recently accessed by user processes to the slower
> CXL NUMA NODE. However, when user processes re-access the demoted
> memory, the Tiered Memory mechanism will, based on certain logic,
> decide whether to promote the demoted physical pages back to the
> fast NUMA NODE. If the promotion is successful, the memory accessed
> by the user process will reside in DRAM; otherwise, it will reside in
> the CXL NODE. Through the Tiered Memory mechanism, Linux balances
> between large memory capacity and latency, striving to maintain an
> equilibrium for applications.
> 
> **Problem**
> Although Tiered Memory strives to balance between large capacity and
> latency, specific scenarios can lead to the following issues:
> 
>    1. In scenarios requiring massive computations, if data is heavily
>       stored in CXL slow memory and Tiered Memory cannot promptly
>       promote this memory to fast DRAM, it will significantly impact
>       program performance.
>    2. Similar to the scenario described in point 1, if Tiered Memory
>       decides to promote these physical pages to fast DRAM NODE, but
>       due to limitations in the DRAM NODE promote ratio, these physical
>       pages cannot be promoted. Consequently, the program will keep
>       running in slow memory.
>    3. After an application finishes computing on a large block of fast
>       memory, it may not immediately re-access it. Hence, this memory
>       can only wait for the memory reclamation mechanism to demote it.
>    4. Similar to the scenario described in point 3, if the demotion
>       speed is slow, these cold pages will occupy the promotion
>       resources, preventing some eligible slow pages from being
>       immediately promoted, severely affecting application efficiency.
> 
> **Solution**
> We propose the **Scalable Tiered Memory Control (STMC)** mechanism,
> which delegates the authority of promoting and demoting memory to the
> application. The principle is simple, as follows:
> 
>    1. When an application is preparing for computation, it can promote
>       the memory it needs to use or ensure the memory resides on a fast
>       NODE.
>    2. When an application will not use the memory shortly, it can
>       immediately demote the memory to slow memory, freeing up valuable
>       promotion resources.
> 
> The STMC mechanism is implemented through the madvise system call, providing
> two new advice options: MADV_DEMOTE and MADV_PROMOTE. MADV_DEMOTE
> advises demoting the physical memory to the node where slow memory
> resides; this advice only fails if there is no free physical memory on
> the slow memory node. MADV_PROMOTE advises retaining the physical memory
> in the fast memory; this advice only fails if there are no promotion
> slots available on the fast memory node. Benefits brought by STMC
> include:
> 
>    1. The STMC mechanism is a variant of on-demand memory management
>       designed to let applications enjoy fast memory as much as possible,
>       while actively demoting to slow memory when not in use, thus
>       freeing up promotion slots for the NODE and allowing it to run in
>       an optimized Tiered Memory environment.
>    2. The STMC mechanism better balances large capacity and latency.
> 
> **Shortcomings of STMC**
> The STMC mechanism requires the caller to manage memory demotion and
> promotion. If the memory is not promptly demoted after a promotion,
> it may cause issues similar to memory leaks
Ehm, that sounds scary. Can you elaborate what's happening here and why 
it is "similar to memory leaks"?


Can you also point out why migrate_pages() is not suitable? I would 
assume demote/promote is in essence simply migrating memory between nodes.

-- 
Cheers,

David / dhildenb




* Re: [PATCH v2 0/1] mm: introduce MADV_DEMOTE/MADV_PROMOTE
@ 2024-08-01  9:57 Zhangrenze
  2024-08-01 12:53 ` David Hildenbrand
  0 siblings, 1 reply; 8+ messages in thread
From: Zhangrenze @ 2024-08-01  9:57 UTC (permalink / raw)
  To: David Hildenbrand, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, akpm@linux-foundation.org
  Cc: arnd@arndb.de, linux-arch@vger.kernel.org, chris@zankel.net,
	jcmvbkbc@gmail.com, James.Bottomley@HansenPartnership.com,
	deller@gmx.de, linux-parisc@vger.kernel.org,
	tsbogend@alpha.franken.de, rdunlap@infradead.org,
	bhelgaas@google.com, linux-mips@vger.kernel.org,
	richard.henderson@linaro.org, ink@jurassic.park.msu.ru,
	mattst88@gmail.com, linux-alpha@vger.kernel.org, Jiaoxupo,
	Zhouhaofan

> > This series introduces the Scalable Tiered Memory Control (STMC)
> > mechanism.
> > 
> > **Background**
> > 
> > In the era when artificial intelligence, big data analytics, and
> > machine learning have become mainstream research topics and
> > application scenarios, the demand for high-capacity and high-
> > bandwidth memory in computers has become increasingly important.
> > The emergence of CXL (Compute Express Link) provides the
> > possibility of high-capacity memory. Although CXL TYPE3 devices
> > can provide large memory capacities, their access speed is lower
> > than traditional DRAM due to hardware architecture limitations.
> > 
> > To enjoy the large capacity brought by CXL memory while minimizing
> > the impact of high latency, Linux has introduced the Tiered Memory
> > architecture. In the Tiered Memory architecture, CXL memory is
> > treated as an independent, slower NUMA NODE, while DRAM is
> > considered as a relatively faster NUMA NODE. Applications allocate
> > memory from the local node, and Tiered Memory, leveraging memory
> > reclamation and NUMA Balancing mechanisms, can transparently demote
> > physical pages not recently accessed by user processes to the slower
> > CXL NUMA NODE. However, when user processes re-access the demoted
> > memory, the Tiered Memory mechanism will, based on certain logic,
> > decide whether to promote the demoted physical pages back to the
> > fast NUMA NODE. If the promotion is successful, the memory accessed
> > by the user process will reside in DRAM; otherwise, it will reside in
> > the CXL NODE. Through the Tiered Memory mechanism, Linux balances
> > between large memory capacity and latency, striving to maintain an
> > equilibrium for applications.
> > 
> > **Problem**
> > Although Tiered Memory strives to balance between large capacity and
> > latency, specific scenarios can lead to the following issues:
> > 
> >    1. In scenarios requiring massive computations, if data is heavily
> >       stored in CXL slow memory and Tiered Memory cannot promptly
> >       promote this memory to fast DRAM, it will significantly impact
> >       program performance.
> >    2. Similar to the scenario described in point 1, if Tiered Memory
> >       decides to promote these physical pages to fast DRAM NODE, but
> >       due to limitations in the DRAM NODE promote ratio, these physical
> >       pages cannot be promoted. Consequently, the program will keep
> >       running in slow memory.
> >    3. After an application finishes computing on a large block of fast
> >       memory, it may not immediately re-access it. Hence, this memory
> >       can only wait for the memory reclamation mechanism to demote it.
> >    4. Similar to the scenario described in point 3, if the demotion
> >       speed is slow, these cold pages will occupy the promotion
> >       resources, preventing some eligible slow pages from being
> >       immediately promoted, severely affecting application efficiency.
> > 
> > **Solution**
> > We propose the **Scalable Tiered Memory Control (STMC)** mechanism,
> > which delegates the authority of promoting and demoting memory to the
> > application. The principle is simple, as follows:
> > 
> >    1. When an application is preparing for computation, it can promote
> >       the memory it needs to use or ensure the memory resides on a fast
> >       NODE.
> >    2. When an application will not use the memory shortly, it can
> >       immediately demote the memory to slow memory, freeing up valuable
> >       promotion resources.
> > 
> > The STMC mechanism is implemented through the madvise system call, providing
> > two new advice options: MADV_DEMOTE and MADV_PROMOTE. MADV_DEMOTE
> > advises demoting the physical memory to the node where slow memory
> > resides; this advice only fails if there is no free physical memory on
> > the slow memory node. MADV_PROMOTE advises retaining the physical memory
> > in the fast memory; this advice only fails if there are no promotion
> > slots available on the fast memory node. Benefits brought by STMC
> > include:
> > 
> >    1. The STMC mechanism is a variant of on-demand memory management
> >       designed to let applications enjoy fast memory as much as possible,
> >       while actively demoting to slow memory when not in use, thus
> >       freeing up promotion slots for the NODE and allowing it to run in
> >       an optimized Tiered Memory environment.
> >    2. The STMC mechanism better balances large capacity and latency.
> > 
> > **Shortcomings of STMC**
> > The STMC mechanism requires the caller to manage memory demotion and
> > promotion. If the memory is not promptly demoted after a promotion,
> > it may cause issues similar to memory leaks
> Ehm, that sounds scary. Can you elaborate what's happening here and why 
> it is "similar to memory leaks"?
> 
> 
> Can you also point out why migrate_pages() is not suitable? I would 
> assume demote/promote is in essence simply migrating memory between nodes.
> 
> -- 
> Cheers,
> 
> David / dhildenb
> 

Thank you for the response. Below are my points of view. If there are any
mistakes, I appreciate your understanding:

1. In a tiered memory system, fast nodes and slow nodes act as two common
   memory pools. The system has a certain ratio limit for promotion. For
   example, a NODE may stipulate that when the available memory is less
   than 1GB or 1/4 of the node's memory, promotion is prohibited (see the
   illustrative sketch after this list). If we use migrate_pages at this
   point, it will promote slow pages to fast memory without restriction,
   which may prevent other processes’ pages that should have been
   promoted from being promoted. This is what I mean by occupying
   promotion resources.
2. As described in point 1, if we use MADV_PROMOTE to temporarily promote
   a batch of pages and do not demote them immediately after usage, it
   will occupy many promotion resources. Other hot pages that need
   promotion will not be able to be promoted, which will impact the
   performance of certain processes.
3. MADV_DEMOTE and MADV_PROMOTE only rely on madvise, while migrate_pages
   depends on libnuma.
4. MADV_DEMOTE and MADV_PROMOTE provide a better balance between capacity
   and latency. They allow hot pages that need promoting to be promoted
   smoothly and pages that need demoting to be demoted immediately. This
   helps tiered memory systems to operate more rationally.
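
For illustration only -- this is not the actual kernel logic, and the
function and parameter names below are made up -- the kind of promotion
gate described in point 1 could be sketched as:

  /*
   * Illustrative sketch: a fast node refuses promotions once its
   * free memory falls below max(1 GiB, 1/4 of the node's memory).
   */
  static int promotion_allowed(unsigned long free_bytes,
                               unsigned long total_bytes)
  {
          unsigned long floor = 1UL << 30;        /* 1 GiB */

          if (total_bytes / 4 > floor)
                  floor = total_bytes / 4;

          return free_bytes >= floor;
  }

Under such a gate, an unrestricted migrate_pages()/move_pages() caller
can push free memory below the floor and block promotions for every
other process.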

:) BiscuitOS Broiler


* Re: [PATCH v2 0/1] mm: introduce MADV_DEMOTE/MADV_PROMOTE
  2024-08-01  9:57 Zhangrenze
@ 2024-08-01 12:53 ` David Hildenbrand
  2024-08-01 13:05   ` David Hildenbrand
  0 siblings, 1 reply; 8+ messages in thread
From: David Hildenbrand @ 2024-08-01 12:53 UTC (permalink / raw)
  To: Zhangrenze, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	akpm@linux-foundation.org
  Cc: arnd@arndb.de, linux-arch@vger.kernel.org, chris@zankel.net,
	jcmvbkbc@gmail.com, James.Bottomley@HansenPartnership.com,
	deller@gmx.de, linux-parisc@vger.kernel.org,
	tsbogend@alpha.franken.de, rdunlap@infradead.org,
	bhelgaas@google.com, linux-mips@vger.kernel.org,
	richard.henderson@linaro.org, ink@jurassic.park.msu.ru,
	mattst88@gmail.com, linux-alpha@vger.kernel.org, Jiaoxupo,
	Zhouhaofan

On 01.08.24 11:57, Zhangrenze wrote:
>>> This series introduces the Scalable Tiered Memory Control (STMC)
>>> mechanism.
>>>
>>> **Background**
>>>
>>> In the era when artificial intelligence, big data analytics, and
>>> machine learning have become mainstream research topics and
>>> application scenarios, the demand for high-capacity and high-
>>> bandwidth memory in computers has become increasingly important.
>>> The emergence of CXL (Compute Express Link) provides the
>>> possibility of high-capacity memory. Although CXL TYPE3 devices
>>> can provide large memory capacities, their access speed is lower
>>> than traditional DRAM due to hardware architecture limitations.
>>>
>>> To enjoy the large capacity brought by CXL memory while minimizing
>>> the impact of high latency, Linux has introduced the Tiered Memory
>>> architecture. In the Tiered Memory architecture, CXL memory is
>>> treated as an independent, slower NUMA NODE, while DRAM is
>>> considered as a relatively faster NUMA NODE. Applications allocate
>>> memory from the local node, and Tiered Memory, leveraging memory
>>> reclamation and NUMA Balancing mechanisms, can transparently demote
>>> physical pages not recently accessed by user processes to the slower
>>> CXL NUMA NODE. However, when user processes re-access the demoted
>>> memory, the Tiered Memory mechanism will, based on certain logic,
>>> decide whether to promote the demoted physical pages back to the
>>> fast NUMA NODE. If the promotion is successful, the memory accessed
>>> by the user process will reside in DRAM; otherwise, it will reside in
>>> the CXL NODE. Through the Tiered Memory mechanism, Linux balances
>>> between large memory capacity and latency, striving to maintain an
>>> equilibrium for applications.
>>>
>>> **Problem**
>>> Although Tiered Memory strives to balance between large capacity and
>>> latency, specific scenarios can lead to the following issues:
>>>
>>>     1. In scenarios requiring massive computations, if data is heavily
>>>        stored in CXL slow memory and Tiered Memory cannot promptly
>>>        promote this memory to fast DRAM, it will significantly impact
>>>        program performance.
>>>     2. Similar to the scenario described in point 1, if Tiered Memory
>>>        decides to promote these physical pages to fast DRAM NODE, but
>>>        due to limitations in the DRAM NODE promote ratio, these physical
>>>        pages cannot be promoted. Consequently, the program will keep
>>>        running in slow memory.
>>>     3. After an application finishes computing on a large block of fast
>>>        memory, it may not immediately re-access it. Hence, this memory
>>>        can only wait for the memory reclamation mechanism to demote it.
>>>     4. Similar to the scenario described in point 3, if the demotion
>>>        speed is slow, these cold pages will occupy the promotion
>>>        resources, preventing some eligible slow pages from being
>>>        immediately promoted, severely affecting application efficiency.
>>>
>>> **Solution**
>>> We propose the **Scalable Tiered Memory Control (STMC)** mechanism,
>>> which delegates the authority of promoting and demoting memory to the
>>> application. The principle is simple, as follows:
>>>
>>>     1. When an application is preparing for computation, it can promote
>>>        the memory it needs to use or ensure the memory resides on a fast
>>>        NODE.
>>>     2. When an application will not use the memory shortly, it can
>>>        immediately demote the memory to slow memory, freeing up valuable
>>>        promotion resources.
>>>
>>> The STMC mechanism is implemented through the madvise system call, providing
>>> two new advice options: MADV_DEMOTE and MADV_PROMOTE. MADV_DEMOTE
>>> advises demoting the physical memory to the node where slow memory
>>> resides; this advice only fails if there is no free physical memory on
>>> the slow memory node. MADV_PROMOTE advises retaining the physical memory
>>> in the fast memory; this advice only fails if there are no promotion
>>> slots available on the fast memory node. Benefits brought by STMC
>>> include:
>>>
>>>     1. The STMC mechanism is a variant of on-demand memory management
>>>        designed to let applications enjoy fast memory as much as possible,
>>>        while actively demoting to slow memory when not in use, thus
>>>        freeing up promotion slots for the NODE and allowing it to run in
>>>        an optimized Tiered Memory environment.
>>>     2. The STMC mechanism better balances large capacity and latency.
>>>
>>> **Shortcomings of STMC**
>>> The STMC mechanism requires the caller to manage memory demotion and
>>> promotion. If the memory is not promptly demoted after a promotion,
>>> it may cause issues similar to memory leaks
>> Ehm, that sounds scary. Can you elaborate what's happening here and why
>> it is "similar to memory leaks"?
>>
>>
>> Can you also point out why migrate_pages() is not suitable? I would
>> assume demote/promote is in essence simply migrating memory between nodes.
>>
>> -- 
>> Cheers,
>>
>> David / dhildenb
>>
> 
> Thank you for the response. Below are my points of view. If there are any
> mistakes, I appreciate your understanding:
> 
> 1. In a tiered memory system, fast nodes and slow nodes act as two common
>     memory pools. The system has a certain ratio limit for promotion. For
>     example, a NODE may stipulate that when the available memory is less
>     than 1GB or 1/4 of the node's memory, promotion is prohibited. If we
>     use migrate_pages at this point, it will unrestrictedly promote slow
>     pages to fast memory, which may prevent other processes’ pages that
>     should have been promoted from being promoted. This is what I mean by
>     occupying promotion resources.
> 2. As described in point 1, if we use MADV_PROMOTE to temporarily promote
>     a batch of pages and do not demote them immediately after usage, it
>     will occupy many promotion resources. Other hot pages that need
>     promotion will not be able to be promoted, which will impact the
>     performance of
>     certain processes.

So, you mean, applications can actively consume "fast memory" and 
"steal" it from other applications? I assume that's what you meant with 
"memory leak".

I would really suggest to *not* call this "similar to memory leaks", in 
your own favor ;)

> 3. MADV_DEMOTE and MADV_PROMOTE only rely on madvise, while migrate_pages
>     depends on libnuma.

Well, you can trivially call that system call also without libnuma ;) So 
that shouldn't really make a difference and is rather something that can 
be solved in user space.

> 4. MADV_DEMOTE and MADV_PROMOTE provide a better balance between capacity
>     and latency. They allow hot pages that need promoting to be promoted
>     smoothly and pages that need demoting to be demoted immediately. This
>     helps tiered memory systems to operate more rationally.

Can you summarize why something similar could not be provided by a 
library that builds on existing functionality, such as migrate_pages? 
It could easily take a look at memory stats to reason whether a 
promotion/demotion makes sense (your example above with the memory 
distribution).
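
For instance, an untested sketch using only the existing move_pages()
syscall (no libnuma involved; batching, error handling and the
free-memory check are left out):

  #include <sys/syscall.h>
  #include <unistd.h>
  #include <linux/mempolicy.h>    /* MPOL_MF_MOVE */

  /* Migrate every page of [addr, addr + len) to target_node. */
  static long migrate_range(void *addr, size_t len, int target_node)
  {
          long psz = sysconf(_SC_PAGESIZE);
          unsigned long count = (len + psz - 1) / psz;
          void *pages[count];
          int nodes[count], status[count];

          for (unsigned long i = 0; i < count; i++) {
                  pages[i] = (char *)addr + i * psz;
                  nodes[i] = target_node;
          }

          /* pid 0 == calling process; MPOL_MF_MOVE only moves pages
           * mapped exclusively by this process. */
          return syscall(__NR_move_pages, 0, count, pages, nodes,
                         status, MPOL_MF_MOVE);
  }

Such a wrapper could read /sys/devices/system/node/*/meminfo first and
apply exactly the kind of ratio policy you describe before migrating.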

From the patch itself I read

"MADV_DEMOTE can mark a range of memory pages as cold
pages and immediately demote them to slow memory. MADV_PROMOTE can mark
a range of memory pages as hot pages and immediately promote them to
fast memory"

which sounds to me like migrate_pages / MADV_COLD might be able to 
achieve something similar.
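
(For the demote side, the existing hint

  madvise(addr, len, MADV_COLD);

already deactivates the range, so reclaim will demote those pages first
once there is memory pressure -- just not immediately.)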

What's the biggest difference that MADV_DEMOTE|MADV_PROMOTE can do better?

-- 
Cheers,

David / dhildenb




* Re: [PATCH v2 0/1] mm: introduce MADV_DEMOTE/MADV_PROMOTE
  2024-08-01 12:53 ` David Hildenbrand
@ 2024-08-01 13:05   ` David Hildenbrand
  0 siblings, 0 replies; 8+ messages in thread
From: David Hildenbrand @ 2024-08-01 13:05 UTC (permalink / raw)
  To: Zhangrenze, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	akpm@linux-foundation.org
  Cc: arnd@arndb.de, linux-arch@vger.kernel.org, chris@zankel.net,
	jcmvbkbc@gmail.com, James.Bottomley@HansenPartnership.com,
	deller@gmx.de, linux-parisc@vger.kernel.org,
	tsbogend@alpha.franken.de, rdunlap@infradead.org,
	bhelgaas@google.com, linux-mips@vger.kernel.org,
	richard.henderson@linaro.org, ink@jurassic.park.msu.ru,
	mattst88@gmail.com, linux-alpha@vger.kernel.org, Jiaoxupo,
	Zhouhaofan

>> 4. MADV_DEMOTE and MADV_PROMOTE provide a better balance between capacity
>>      and latency. They allow hot pages that need promoting to be promoted
>>      smoothly and pages that need demoting to be demoted immediately. This
>>      helps tiered memory systems to operate more rationally.
> 
> Can you summarize why something similar could not be provided by a
> library that builds on existing functionality, such as migrate_pages?

Sorry, I actually wanted to refer to "move_pages", not "migrate_pages".

-- 
Cheers,

David / dhildenb




* Re: [PATCH v2 1/1] mm: introduce MADV_DEMOTE/MADV_PROMOTE
  2024-08-01  7:56 ` [PATCH v2 1/1] " BiscuitOS Broiler
@ 2024-08-01 19:25   ` Andrew Morton
  2024-08-01 20:36     ` David Hildenbrand
  0 siblings, 1 reply; 8+ messages in thread
From: Andrew Morton @ 2024-08-01 19:25 UTC (permalink / raw)
  To: BiscuitOS Broiler
  Cc: linux-mm, linux-kernel, arnd, linux-arch, chris, jcmvbkbc,
	James.Bottomley, deller, linux-parisc, tsbogend, rdunlap,
	bhelgaas, linux-mips, richard.henderson, ink, mattst88,
	linux-alpha, jiaoxupo, zhou.haofan

On Thu, 1 Aug 2024 15:56:10 +0800 BiscuitOS Broiler <zhang.renze@h3c.com> wrote:

> From: BiscuitOS Broiler <zhang.renze@h3c.com>

Please use a real name.

From Documentation/process/submitting-patches.rst:

: then you just add a line saying::
: 
: 	Signed-off-by: Random J Developer <random@developer.example.org>
: 
: using a known identity (sorry, no anonymous contributions.)




* Re: [PATCH v2 1/1] mm: introduce MADV_DEMOTE/MADV_PROMOTE
  2024-08-01 19:25   ` Andrew Morton
@ 2024-08-01 20:36     ` David Hildenbrand
  0 siblings, 0 replies; 8+ messages in thread
From: David Hildenbrand @ 2024-08-01 20:36 UTC (permalink / raw)
  To: Andrew Morton, BiscuitOS Broiler
  Cc: linux-mm, linux-kernel, arnd, linux-arch, chris, jcmvbkbc,
	James.Bottomley, deller, linux-parisc, tsbogend, rdunlap,
	bhelgaas, linux-mips, richard.henderson, ink, mattst88,
	linux-alpha, jiaoxupo, zhou.haofan

On 01.08.24 21:25, Andrew Morton wrote:
> On Thu, 1 Aug 2024 15:56:10 +0800 BiscuitOS Broiler <zhang.renze@h3c.com> wrote:
> 
>> From: BiscuitOS Broiler <zhang.renze@h3c.com>
> 
> Please use a real name.
> 
>  From Documentation/process/submitting-patches.rst:
> 
> : then you just add a line saying::
> :
> : 	Signed-off-by: Random J Developer <random@developer.example.org>
> :
> : using a known identity (sorry, no anonymous contributions.)
> 
> 

I'm curious, reading d4563201f33a022fc0353033d9dfeb1606a88330, 
pseudonyms are allowed now as long as we are dealing with a "known 
identity".

"Real name" was replaced by "known identity" in that commit.

I'm pretty much in favor of people just using their real name here as 
well. But apparently, "known identity" is sufficient. Not that I could 
tell when someone is a "known identity". Likely "BiscuitOS Broiler" 
would be a known identity and not "anonymous"?

-- 
Cheers,

David / dhildenb



