linux-mm.kvack.org archive mirror
* [RFC PATCH 0/4] Make MIGRATE_ISOLATE a standalone bit
@ 2024-08-28 20:22 Zi Yan
  2024-08-28 20:22 ` [RFC PATCH 1/4] mm/page_isolation: make page isolation " Zi Yan
                   ` (3 more replies)
  0 siblings, 4 replies; 15+ messages in thread
From: Zi Yan @ 2024-08-28 20:22 UTC (permalink / raw)
  To: linux-mm, David Hildenbrand
  Cc: Oscar Salvador, Vlastimil Babka, Johannes Weiner, Baolin Wang,
	Kirill A. Shutemov, Mel Gorman, linux-kernel, Zi Yan

Hi all,

This patchset moves MIGRATE_ISOLATE to a standalone bit so that a
pageblock's original migratetype is not overwritten while the block is
isolated. Currently, MIGRATE_ISOLATE is part of enum migratetype (in
include/linux/mmzone.h), so setting a pageblock to MIGRATE_ISOLATE
overwrites its original migratetype. As a result, the pageblock
migratetype is lost during alloc_contig_range() and memory offlining,
especially when the operation fails partway (e.g., one pageblock cannot
be isolated) and the code has to undo the isolations that already
succeeded.

I am also trying to collect feedback on how to handle MIGRATE_ISOLATE in
existing code. Now that MIGRATE_ISOLATE becomes a standalone bit like
PB_migrate_skip (pageblock skipped by compaction), in theory existing code
could be converted to use {get,clear,set}_pageblock_isolate() like
{get,clear,set}_pageblock_skip(), and MIGRATE_ISOLATE could be removed
from enum migratetype. But the free list keeps a separate MIGRATE_ISOLATE
list, and the memory-info code used by the OOM report also shows free
memory broken down by migratetype. I wonder if we want such more extensive
changes to existing code to remove MIGRATE_ISOLATE. If not, what can I do
to improve the MIGRATE_ISOLATE-specific code in Patch 1? For example,
set_pageblock_migratetype() in Patch 1 becomes:


void set_pageblock_migratetype(struct page *page, int migratetype)
{
	if (unlikely(page_group_by_mobility_disabled &&
		     migratetype < MIGRATE_PCPTYPES))
		migratetype = MIGRATE_UNMOVABLE;

#ifdef CONFIG_MEMORY_ISOLATION
	if (migratetype == MIGRATE_ISOLATE)
		set_pageblock_isolate(page);
	else
#endif
	{
		if (get_pageblock_isolate(page))
			clear_pageblock_isolate(page);
		set_pfnblock_flags_mask(page, (unsigned long)migratetype,
				page_to_pfn(page), MIGRATETYPE_MASK);
	}
}
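
One direction I have been considering (just a sketch for discussion; the
helper name is made up) is to push the #ifdef into a small helper so that
set_pageblock_migratetype() reads linearly:


#ifdef CONFIG_MEMORY_ISOLATION
/* Return true if the pageblock was marked isolated. */
static bool pageblock_update_isolate(struct page *page, int migratetype)
{
	if (migratetype == MIGRATE_ISOLATE) {
		set_pageblock_isolate(page);
		return true;
	}
	if (get_pageblock_isolate(page))
		clear_pageblock_isolate(page);
	return false;
}
#else
static inline bool pageblock_update_isolate(struct page *page, int migratetype)
{
	return false;
}
#endif

void set_pageblock_migratetype(struct page *page, int migratetype)
{
	if (unlikely(page_group_by_mobility_disabled &&
		     migratetype < MIGRATE_PCPTYPES))
		migratetype = MIGRATE_UNMOVABLE;

	/* Isolation is tracked in its own bit, not in MIGRATETYPE_MASK. */
	if (pageblock_update_isolate(page, migratetype))
		return;

	set_pfnblock_flags_mask(page, (unsigned long)migratetype,
			page_to_pfn(page), MIGRATETYPE_MASK);
}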


Design
===

Pageblock flags are read and written a word at a time for performance, and
the existing pageblock flags take 4 bits per pageblock. To avoid a
substantial change to the pageblock flag code, the per-pageblock flag bits
are expanded from 4 to 8, and MIGRATE_ISOLATE is moved to the last bit
(bit 7).
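
The resulting per-pageblock bit layout (as implemented in Patch 1) is:

	bits 0-2: migratetype (PB_migrate .. PB_migrate_end)
	bit  3  : PB_migrate_skip (compaction skip hint)
	bits 4-6: unused padding for word alignment
	bit  7  : PB_migrate_isolate (with CONFIG_MEMORY_ISOLATION)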

It might look like the pageblock flags have doubled in overhead, but in
reality the overhead is only 1 byte per pageblock, i.e. 1 byte per 2MB or
4MB depending on the pageblock config, or about 0.0000476% (1 / 2,097,152
for 2MB pageblocks).

As a simple performance check of pageblock type changes, I offlined and
onlined all memory of a 16GB VM 10 times and did not see a noticeable
runtime difference with this patchset applied.


TODOs
===

1. Improve the pageblock migratetype handling code to be less hacky (see
   the example above).
2. Run more performance tests on pageblock migratetype changes.


Any comments and/or suggestions are welcome. Thanks.


Zi Yan (4):
  mm/page_isolation: make page isolation a standalone bit.
  mm/page_isolation: remove migratetype from
    move_freepages_block_isolate()
  mm/page_isolation: remove migratetype from undo_isolate_page_range()
  mm/page_isolation: remove migratetype parameter from more functions.

 include/linux/mmzone.h          | 24 +++++++++++---
 include/linux/page-isolation.h  | 11 +++----
 include/linux/pageblock-flags.h | 29 ++++++++++++++++-
 mm/memory_hotplug.c             |  5 ++-
 mm/page_alloc.c                 | 55 +++++++++++++++++++++++++++------
 mm/page_isolation.c             | 55 ++++++++++++++-------------------
 6 files changed, 123 insertions(+), 56 deletions(-)

-- 
2.45.2



^ permalink raw reply	[flat|nested] 15+ messages in thread

* [RFC PATCH 1/4] mm/page_isolation: make page isolation a standalone bit.
  2024-08-28 20:22 [RFC PATCH 0/4] Make MIGRATE_ISOLATE a standalone bit Zi Yan
@ 2024-08-28 20:22 ` Zi Yan
  2024-08-28 20:22 ` [RFC PATCH 2/4] mm/page_isolation: remove migratetype from move_freepages_block_isolate() Zi Yan
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 15+ messages in thread
From: Zi Yan @ 2024-08-28 20:22 UTC (permalink / raw)
  To: linux-mm, David Hildenbrand
  Cc: Oscar Salvador, Vlastimil Babka, Johannes Weiner, Baolin Wang,
	Kirill A. Shutemov, Mel Gorman, linux-kernel, Zi Yan

During page isolation, the original migratetype is overwritten, because
MIGRATE_ISOLATE is just another enum migratetype value stored in the same
pageblock bits. Change MIGRATE_ISOLATE to a standalone bit like
PB_migrate_skip. The pageblock bits need to stay word-aligned, so expand
the number of pageblock bits from 4 to 8 and place the migrate-isolate bit
at bit 7.

Signed-off-by: Zi Yan <ziy@nvidia.com>
---
 include/linux/mmzone.h          | 24 ++++++++++++++++++++----
 include/linux/page-isolation.h  |  2 +-
 include/linux/pageblock-flags.h | 29 ++++++++++++++++++++++++++++-
 mm/page_alloc.c                 | 22 ++++++++++++++++++++--
 4 files changed, 69 insertions(+), 8 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 17506e4a2835..5191a90b94f9 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -108,12 +108,28 @@ extern int page_group_by_mobility_disabled;
 
 #define MIGRATETYPE_MASK ((1UL << PB_migratetype_bits) - 1)
 
-#define get_pageblock_migratetype(page)					\
-	get_pfnblock_flags_mask(page, page_to_pfn(page), MIGRATETYPE_MASK)
+#ifdef CONFIG_MEMORY_ISOLATION
+#define get_pageblock_migratetype(page) \
+		(get_pageblock_isolate(page) ? \
+			MIGRATE_ISOLATE : \
+			get_pfnblock_flags_mask(page, page_to_pfn(page), \
+				MIGRATETYPE_MASK))
+
+#define folio_migratetype(folio) \
+		(get_pageblock_isolate(&folio->page) ? \
+			MIGRATE_ISOLATE : \
+			get_pfnblock_flags_mask(&folio->page, folio_pfn(folio), \
+				MIGRATETYPE_MASK))
+#else
+#define get_pageblock_migratetype(page) \
+		get_pfnblock_flags_mask(page, page_to_pfn(page), \
+			MIGRATETYPE_MASK)
 
-#define folio_migratetype(folio)				\
-	get_pfnblock_flags_mask(&folio->page, folio_pfn(folio),		\
+#define folio_migratetype(folio) \
+		get_pfnblock_flags_mask(&folio->page, folio_pfn(folio), \
 			MIGRATETYPE_MASK)
+#endif
+
 struct free_area {
 	struct list_head	free_list[MIGRATE_TYPES];
 	unsigned long		nr_free;
diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
index c16db0067090..11b8695115ea 100644
--- a/include/linux/page-isolation.h
+++ b/include/linux/page-isolation.h
@@ -9,7 +9,7 @@ static inline bool has_isolate_pageblock(struct zone *zone)
 }
 static inline bool is_migrate_isolate_page(struct page *page)
 {
-	return get_pageblock_migratetype(page) == MIGRATE_ISOLATE;
+	return get_pageblock_isolate(page);
 }
 static inline bool is_migrate_isolate(int migratetype)
 {
diff --git a/include/linux/pageblock-flags.h b/include/linux/pageblock-flags.h
index fc6b9c87cb0a..a8121cab4b4f 100644
--- a/include/linux/pageblock-flags.h
+++ b/include/linux/pageblock-flags.h
@@ -20,7 +20,10 @@ enum pageblock_bits {
 	PB_migrate_end = PB_migrate + PB_migratetype_bits - 1,
 			/* 3 bits required for migrate types */
 	PB_migrate_skip,/* If set the block is skipped by compaction */
-
+#ifdef CONFIG_MEMORY_ISOLATION
+	PB_migrate_isolate = 7, /* If set the block is isolated */
+			/* set it to 7 to make pageblock bit word aligned */
+#endif
 	/*
 	 * Assume the bits will always align on a word. If this assumption
 	 * changes then get/set pageblock needs updating.
@@ -99,4 +102,28 @@ static inline void set_pageblock_skip(struct page *page)
 }
 #endif /* CONFIG_COMPACTION */
 
+#ifdef CONFIG_MEMORY_ISOLATION
+#define get_pageblock_isolate(page) \
+	get_pfnblock_flags_mask(page, page_to_pfn(page),	\
+			(1 << (PB_migrate_isolate)))
+#define clear_pageblock_isolate(page) \
+	set_pfnblock_flags_mask(page, 0, page_to_pfn(page),	\
+			(1 << PB_migrate_isolate))
+#define set_pageblock_isolate(page) \
+	set_pfnblock_flags_mask(page, (1 << PB_migrate_isolate),	\
+			page_to_pfn(page),			\
+			(1 << PB_migrate_isolate))
+#else
+static inline bool get_pageblock_isolate(struct page *page)
+{
+	return false;
+}
+static inline void clear_pageblock_isolate(struct page *page)
+{
+}
+static inline void set_pageblock_isolate(struct page *page)
+{
+}
+#endif /* CONFIG_MEMORY_ISOLATION */
+
 #endif	/* PAGEBLOCK_FLAGS_H */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c2ffccf9d213..4ea5cd1a07e2 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -380,7 +380,12 @@ unsigned long get_pfnblock_flags_mask(const struct page *page,
 static __always_inline int get_pfnblock_migratetype(const struct page *page,
 					unsigned long pfn)
 {
-	return get_pfnblock_flags_mask(page, pfn, MIGRATETYPE_MASK);
+	return
+#ifdef CONFIG_MEMORY_ISOLATION
+		get_pageblock_isolate(page) ?
+		MIGRATE_ISOLATE :
+#endif
+		get_pfnblock_flags_mask(page, pfn, MIGRATETYPE_MASK);
 }
 
 /**
@@ -398,7 +403,11 @@ void set_pfnblock_flags_mask(struct page *page, unsigned long flags,
 	unsigned long bitidx, word_bitidx;
 	unsigned long word;
 
+#ifdef CONFIG_MEMORY_ISOLATION
+	BUILD_BUG_ON(NR_PAGEBLOCK_BITS != 8);
+#else
 	BUILD_BUG_ON(NR_PAGEBLOCK_BITS != 4);
+#endif
 	BUILD_BUG_ON(MIGRATE_TYPES > (1 << PB_migratetype_bits));
 
 	bitmap = get_pageblock_bitmap(page, pfn);
@@ -422,8 +431,17 @@ void set_pageblock_migratetype(struct page *page, int migratetype)
 		     migratetype < MIGRATE_PCPTYPES))
 		migratetype = MIGRATE_UNMOVABLE;
 
-	set_pfnblock_flags_mask(page, (unsigned long)migratetype,
+#ifdef CONFIG_MEMORY_ISOLATION
+	if (migratetype == MIGRATE_ISOLATE)
+		set_pageblock_isolate(page);
+	else
+#endif
+	{
+		if (get_pageblock_isolate(page))
+			clear_pageblock_isolate(page);
+		set_pfnblock_flags_mask(page, (unsigned long)migratetype,
 				page_to_pfn(page), MIGRATETYPE_MASK);
+	}
 }
 
 #ifdef CONFIG_DEBUG_VM
-- 
2.45.2



^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [RFC PATCH 2/4] mm/page_isolation: remove migratetype from move_freepages_block_isolate()
  2024-08-28 20:22 [RFC PATCH 0/4] Make MIGRATE_ISOLATE a standalone bit Zi Yan
  2024-08-28 20:22 ` [RFC PATCH 1/4] mm/page_isolation: make page isolation " Zi Yan
@ 2024-08-28 20:22 ` Zi Yan
  2024-09-02 14:42   ` David Hildenbrand
  2024-08-28 20:22 ` [RFC PATCH 3/4] mm/page_isolation: remove migratetype from undo_isolate_page_range() Zi Yan
  2024-08-28 20:22 ` [RFC PATCH 4/4] mm/page_isolation: remove migratetype parameter from more functions Zi Yan
  3 siblings, 1 reply; 15+ messages in thread
From: Zi Yan @ 2024-08-28 20:22 UTC (permalink / raw)
  To: linux-mm, David Hildenbrand
  Cc: Oscar Salvador, Vlastimil Babka, Johannes Weiner, Baolin Wang,
	Kirill A. Shutemov, Mel Gorman, linux-kernel, Zi Yan

Since migratetype is no longer overwritten during pageblock isolation,
moving pageblocks to and from MIGRATE_ISOLATE no longer needs a
migratetype argument.

Signed-off-by: Zi Yan <ziy@nvidia.com>
---
 include/linux/page-isolation.h |  3 +--
 mm/page_alloc.c                | 27 +++++++++++++++++++++------
 mm/page_isolation.c            | 19 +++++++++----------
 3 files changed, 31 insertions(+), 18 deletions(-)

diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
index 11b8695115ea..6a62401410c3 100644
--- a/include/linux/page-isolation.h
+++ b/include/linux/page-isolation.h
@@ -35,8 +35,7 @@ static inline bool is_migrate_isolate(int migratetype)
 
 void set_pageblock_migratetype(struct page *page, int migratetype);
 
-bool move_freepages_block_isolate(struct zone *zone, struct page *page,
-				  int migratetype);
+bool move_freepages_block_isolate(struct zone *zone, struct page *page);
 
 int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
 			     int migratetype, int flags, gfp_t gfp_flags);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 4ea5cd1a07e2..dc7c36461953 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1764,10 +1764,12 @@ static unsigned long find_large_buddy(unsigned long start_pfn)
  *
  * Returns %true if pages could be moved, %false otherwise.
  */
-bool move_freepages_block_isolate(struct zone *zone, struct page *page,
-				  int migratetype)
+bool move_freepages_block_isolate(struct zone *zone, struct page *page)
 {
 	unsigned long start_pfn, pfn;
+	bool is_block_isolated = get_pageblock_isolate(page);
+	int from_mt;
+	int to_mt;
 
 	if (!prep_move_freepages_block(zone, page, &start_pfn, NULL, NULL))
 		return false;
@@ -1784,7 +1786,10 @@ bool move_freepages_block_isolate(struct zone *zone, struct page *page,
 
 		del_page_from_free_list(buddy, zone, order,
 					get_pfnblock_migratetype(buddy, pfn));
-		set_pageblock_migratetype(page, migratetype);
+		if (is_block_isolated)
+			clear_pageblock_isolate(page);
+		else
+			set_pageblock_isolate(page);
 		split_large_buddy(zone, buddy, pfn, order, FPI_NONE);
 		return true;
 	}
@@ -1795,14 +1800,24 @@ bool move_freepages_block_isolate(struct zone *zone, struct page *page,
 
 		del_page_from_free_list(page, zone, order,
 					get_pfnblock_migratetype(page, pfn));
-		set_pageblock_migratetype(page, migratetype);
+		if (is_block_isolated)
+			clear_pageblock_isolate(page);
+		else
+			set_pageblock_isolate(page);
 		split_large_buddy(zone, page, pfn, order, FPI_NONE);
 		return true;
 	}
 move:
+	if (is_block_isolated)
+		clear_pageblock_isolate(page);
+
+	from_mt = is_block_isolated ? MIGRATE_ISOLATE : get_pageblock_migratetype(page);
+	to_mt = is_block_isolated ? get_pageblock_migratetype(page) : MIGRATE_ISOLATE;
+
+	if (!is_block_isolated)
+		set_pageblock_isolate(page);
 	__move_freepages_block(zone, start_pfn,
-			       get_pfnblock_migratetype(page, start_pfn),
-			       migratetype);
+			       from_mt, to_mt);
 	return true;
 }
 #endif /* CONFIG_MEMORY_ISOLATION */
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 7e04047977cf..3ffdfddbdd50 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -181,7 +181,7 @@ static int set_migratetype_isolate(struct page *page, int migratetype, int isol_
 	unmovable = has_unmovable_pages(check_unmovable_start, check_unmovable_end,
 			migratetype, isol_flags);
 	if (!unmovable) {
-		if (!move_freepages_block_isolate(zone, page, MIGRATE_ISOLATE)) {
+		if (!move_freepages_block_isolate(zone, page)) {
 			spin_unlock_irqrestore(&zone->lock, flags);
 			return -EBUSY;
 		}
@@ -202,7 +202,7 @@ static int set_migratetype_isolate(struct page *page, int migratetype, int isol_
 	return -EBUSY;
 }
 
-static void unset_migratetype_isolate(struct page *page, int migratetype)
+static void unset_migratetype_isolate(struct page *page)
 {
 	struct zone *zone;
 	unsigned long flags;
@@ -255,10 +255,10 @@ static void unset_migratetype_isolate(struct page *page, int migratetype)
 		 * Isolating this block already succeeded, so this
 		 * should not fail on zone boundaries.
 		 */
-		WARN_ON_ONCE(!move_freepages_block_isolate(zone, page, migratetype));
+		WARN_ON_ONCE(!move_freepages_block_isolate(zone, page));
 	} else {
-		set_pageblock_migratetype(page, migratetype);
-		__putback_isolated_page(page, order, migratetype);
+		clear_pageblock_isolate(page);
+		__putback_isolated_page(page, order, get_pageblock_migratetype(page));
 	}
 	zone->nr_isolate_pageblock--;
 out:
@@ -428,7 +428,7 @@ static int isolate_single_pageblock(unsigned long boundary_pfn, int flags,
 failed:
 	/* restore the original migratetype */
 	if (!skip_isolation)
-		unset_migratetype_isolate(pfn_to_page(isolate_pageblock), migratetype);
+		unset_migratetype_isolate(pfn_to_page(isolate_pageblock));
 	return -EBUSY;
 }
 
@@ -501,7 +501,7 @@ int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
 	ret = isolate_single_pageblock(isolate_end, flags, gfp_flags, true,
 			skip_isolation, migratetype);
 	if (ret) {
-		unset_migratetype_isolate(pfn_to_page(isolate_start), migratetype);
+		unset_migratetype_isolate(pfn_to_page(isolate_start));
 		return ret;
 	}
 
@@ -514,8 +514,7 @@ int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
 					start_pfn, end_pfn)) {
 			undo_isolate_page_range(isolate_start, pfn, migratetype);
 			unset_migratetype_isolate(
-				pfn_to_page(isolate_end - pageblock_nr_pages),
-				migratetype);
+				pfn_to_page(isolate_end - pageblock_nr_pages));
 			return -EBUSY;
 		}
 	}
@@ -545,7 +544,7 @@ void undo_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
 		page = __first_valid_page(pfn, pageblock_nr_pages);
 		if (!page || !is_migrate_isolate_page(page))
 			continue;
-		unset_migratetype_isolate(page, migratetype);
+		unset_migratetype_isolate(page);
 	}
 }
 /*
-- 
2.45.2



^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [RFC PATCH 3/4] mm/page_isolation: remove migratetype from undo_isolate_page_range()
  2024-08-28 20:22 [RFC PATCH 0/4] Make MIGRATE_ISOLATE a standalone bit Zi Yan
  2024-08-28 20:22 ` [RFC PATCH 1/4] mm/page_isolation: make page isolation " Zi Yan
  2024-08-28 20:22 ` [RFC PATCH 2/4] mm/page_isolation: remove migratetype from move_freepages_block_isolate() Zi Yan
@ 2024-08-28 20:22 ` Zi Yan
  2024-09-02  9:00   ` David Hildenbrand
  2024-08-28 20:22 ` [RFC PATCH 4/4] mm/page_isolation: remove migratetype parameter from more functions Zi Yan
  3 siblings, 1 reply; 15+ messages in thread
From: Zi Yan @ 2024-08-28 20:22 UTC (permalink / raw)
  To: linux-mm, David Hildenbrand
  Cc: Oscar Salvador, Vlastimil Babka, Johannes Weiner, Baolin Wang,
	Kirill A. Shutemov, Mel Gorman, linux-kernel, Zi Yan

Since migratetype is no longer overwritten during pageblock isolation,
undoing pageblock isolation no longer needs to know which migratetype to
restore.

Signed-off-by: Zi Yan <ziy@nvidia.com>
---
 include/linux/page-isolation.h | 3 +--
 mm/memory_hotplug.c            | 4 ++--
 mm/page_alloc.c                | 2 +-
 mm/page_isolation.c            | 9 +++------
 4 files changed, 7 insertions(+), 11 deletions(-)

diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
index 6a62401410c3..c2a1bd621561 100644
--- a/include/linux/page-isolation.h
+++ b/include/linux/page-isolation.h
@@ -40,8 +40,7 @@ bool move_freepages_block_isolate(struct zone *zone, struct page *page);
 int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
 			     int migratetype, int flags, gfp_t gfp_flags);
 
-void undo_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
-			     int migratetype);
+void undo_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn);
 
 int test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn,
 			int isol_flags);
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 721392906dcb..4265272faf4c 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1202,7 +1202,7 @@ int online_pages(unsigned long pfn, unsigned long nr_pages,
 		build_all_zonelists(NULL);
 
 	/* Basic onlining is complete, allow allocation of onlined pages. */
-	undo_isolate_page_range(pfn, pfn + nr_pages, MIGRATE_MOVABLE);
+	undo_isolate_page_range(pfn, pfn + nr_pages);
 
 	/*
 	 * Freshly onlined pages aren't shuffled (e.g., all pages are placed to
@@ -2104,7 +2104,7 @@ int offline_pages(unsigned long start_pfn, unsigned long nr_pages,
 
 failed_removal_isolated:
 	/* pushback to free area */
-	undo_isolate_page_range(start_pfn, end_pfn, MIGRATE_MOVABLE);
+	undo_isolate_page_range(start_pfn, end_pfn);
 	memory_notify(MEM_CANCEL_OFFLINE, &arg);
 failed_removal_pcplists_disabled:
 	lru_cache_enable();
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index dc7c36461953..4d06932ba69a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6679,7 +6679,7 @@ int alloc_contig_range_noprof(unsigned long start, unsigned long end,
 		     start, end, outer_start, outer_end);
 	}
 done:
-	undo_isolate_page_range(start, end, migratetype);
+	undo_isolate_page_range(start, end);
 	return ret;
 }
 EXPORT_SYMBOL(alloc_contig_range_noprof);
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 3ffdfddbdd50..4c65157d78ef 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -512,7 +512,7 @@ int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
 		page = __first_valid_page(pfn, pageblock_nr_pages);
 		if (page && set_migratetype_isolate(page, migratetype, flags,
 					start_pfn, end_pfn)) {
-			undo_isolate_page_range(isolate_start, pfn, migratetype);
+			undo_isolate_page_range(isolate_start, pfn);
 			unset_migratetype_isolate(
 				pfn_to_page(isolate_end - pageblock_nr_pages));
 			return -EBUSY;
@@ -525,13 +525,10 @@ int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
  * undo_isolate_page_range - undo effects of start_isolate_page_range()
  * @start_pfn:		The first PFN of the isolated range
  * @end_pfn:		The last PFN of the isolated range
- * @migratetype:	New migrate type to set on the range
  *
- * This finds every MIGRATE_ISOLATE page block in the given range
- * and switches it to @migratetype.
+ * This finds and unsets every MIGRATE_ISOLATE page block in the given range
  */
-void undo_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
-			    int migratetype)
+void undo_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn)
 {
 	unsigned long pfn;
 	struct page *page;
-- 
2.45.2



^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [RFC PATCH 4/4] mm/page_isolation: remove migratetype parameter from more functions.
  2024-08-28 20:22 [RFC PATCH 0/4] Make MIGRATE_ISOLATE a standalone bit Zi Yan
                   ` (2 preceding siblings ...)
  2024-08-28 20:22 ` [RFC PATCH 3/4] mm/page_isolation: remove migratetype from undo_isolate_page_range() Zi Yan
@ 2024-08-28 20:22 ` Zi Yan
  2024-09-02  9:06   ` David Hildenbrand
  3 siblings, 1 reply; 15+ messages in thread
From: Zi Yan @ 2024-08-28 20:22 UTC (permalink / raw)
  To: linux-mm, David Hildenbrand
  Cc: Oscar Salvador, Vlastimil Babka, Johannes Weiner, Baolin Wang,
	Kirill A. Shutemov, Mel Gorman, linux-kernel, Zi Yan

Since migratetype is no longer overwritten during pageblock isolation,
start_isolate_page_range(), has_unmovable_pages(), and
set_migratetype_isolate() no longer need to know which migratetype to
restore on isolation failure. has_unmovable_pages() still needs to know
whether the isolation is for a CMA allocation, so add a CMA_ALLOCATION
isolation flag to convey that information.

Signed-off-by: Zi Yan <ziy@nvidia.com>
---
 include/linux/page-isolation.h |  3 ++-
 mm/memory_hotplug.c            |  1 -
 mm/page_alloc.c                |  4 +++-
 mm/page_isolation.c            | 27 +++++++++++----------------
 4 files changed, 16 insertions(+), 19 deletions(-)

diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
index c2a1bd621561..e94117101b6c 100644
--- a/include/linux/page-isolation.h
+++ b/include/linux/page-isolation.h
@@ -32,13 +32,14 @@ static inline bool is_migrate_isolate(int migratetype)
 
 #define MEMORY_OFFLINE	0x1
 #define REPORT_FAILURE	0x2
+#define CMA_ALLOCATION	0x4
 
 void set_pageblock_migratetype(struct page *page, int migratetype);
 
 bool move_freepages_block_isolate(struct zone *zone, struct page *page);
 
 int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
-			     int migratetype, int flags, gfp_t gfp_flags);
+			     int flags, gfp_t gfp_flags);
 
 void undo_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn);
 
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 4265272faf4c..fe0b71e0f307 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1993,7 +1993,6 @@ int offline_pages(unsigned long start_pfn, unsigned long nr_pages,
 
 	/* set above range as isolated */
 	ret = start_isolate_page_range(start_pfn, end_pfn,
-				       MIGRATE_MOVABLE,
 				       MEMORY_OFFLINE | REPORT_FAILURE,
 				       GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL);
 	if (ret) {
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 4d06932ba69a..c60bb95d7e65 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6607,7 +6607,9 @@ int alloc_contig_range_noprof(unsigned long start, unsigned long end,
 	 * put back to page allocator so that buddy can use them.
 	 */
 
-	ret = start_isolate_page_range(start, end, migratetype, 0, gfp_mask);
+	ret = start_isolate_page_range(start, end,
+			migratetype == MIGRATE_CMA ? CMA_ALLOCATION : 0,
+			gfp_mask);
 	if (ret)
 		goto done;
 
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 4c65157d78ef..07c58b82db76 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -31,7 +31,7 @@
  *
  */
 static struct page *has_unmovable_pages(unsigned long start_pfn, unsigned long end_pfn,
-				int migratetype, int flags)
+				int flags)
 {
 	struct page *page = pfn_to_page(start_pfn);
 	struct zone *zone = page_zone(page);
@@ -46,7 +46,7 @@ static struct page *has_unmovable_pages(unsigned long start_pfn, unsigned long e
 		 * isolate CMA pageblocks even when they are not movable in fact
 		 * so consider them movable here.
 		 */
-		if (is_migrate_cma(migratetype))
+		if (flags & CMA_ALLOCATION)
 			return NULL;
 
 		return page;
@@ -144,7 +144,7 @@ static struct page *has_unmovable_pages(unsigned long start_pfn, unsigned long e
  * present in [start_pfn, end_pfn). The pageblock must intersect with
  * [start_pfn, end_pfn).
  */
-static int set_migratetype_isolate(struct page *page, int migratetype, int isol_flags,
+static int set_migratetype_isolate(struct page *page, int isol_flags,
 			unsigned long start_pfn, unsigned long end_pfn)
 {
 	struct zone *zone = page_zone(page);
@@ -179,7 +179,7 @@ static int set_migratetype_isolate(struct page *page, int migratetype, int isol_
 				  end_pfn);
 
 	unmovable = has_unmovable_pages(check_unmovable_start, check_unmovable_end,
-			migratetype, isol_flags);
+			isol_flags);
 	if (!unmovable) {
 		if (!move_freepages_block_isolate(zone, page)) {
 			spin_unlock_irqrestore(&zone->lock, flags);
@@ -290,7 +290,6 @@ __first_valid_page(unsigned long pfn, unsigned long nr_pages)
  * @isolate_before:	isolate the pageblock before the boundary_pfn
  * @skip_isolation:	the flag to skip the pageblock isolation in second
  *			isolate_single_pageblock()
- * @migratetype:	migrate type to set in error recovery.
  *
  * Free and in-use pages can be as big as MAX_PAGE_ORDER and contain more than one
  * pageblock. When not all pageblocks within a page are isolated at the same
@@ -306,8 +305,7 @@ __first_valid_page(unsigned long pfn, unsigned long nr_pages)
  * the in-use page then splitting the free page.
  */
 static int isolate_single_pageblock(unsigned long boundary_pfn, int flags,
-			gfp_t gfp_flags, bool isolate_before, bool skip_isolation,
-			int migratetype)
+			gfp_t gfp_flags, bool isolate_before, bool skip_isolation)
 {
 	unsigned long start_pfn;
 	unsigned long isolate_pageblock;
@@ -333,11 +331,9 @@ static int isolate_single_pageblock(unsigned long boundary_pfn, int flags,
 				      zone->zone_start_pfn);
 
 	if (skip_isolation) {
-		int mt __maybe_unused = get_pageblock_migratetype(pfn_to_page(isolate_pageblock));
-
-		VM_BUG_ON(!is_migrate_isolate(mt));
+		VM_BUG_ON(!get_pageblock_isolate(pfn_to_page(isolate_pageblock)));
 	} else {
-		ret = set_migratetype_isolate(pfn_to_page(isolate_pageblock), migratetype,
+		ret = set_migratetype_isolate(pfn_to_page(isolate_pageblock),
 				flags, isolate_pageblock, isolate_pageblock + pageblock_nr_pages);
 
 		if (ret)
@@ -436,7 +432,6 @@ static int isolate_single_pageblock(unsigned long boundary_pfn, int flags,
  * start_isolate_page_range() - mark page range MIGRATE_ISOLATE
  * @start_pfn:		The first PFN of the range to be isolated.
  * @end_pfn:		The last PFN of the range to be isolated.
- * @migratetype:	Migrate type to set in error recovery.
  * @flags:		The following flags are allowed (they can be combined in
  *			a bit mask)
  *			MEMORY_OFFLINE - isolate to offline (!allocate) memory
@@ -478,7 +473,7 @@ static int isolate_single_pageblock(unsigned long boundary_pfn, int flags,
  * Return: 0 on success and -EBUSY if any part of range cannot be isolated.
  */
 int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
-			     int migratetype, int flags, gfp_t gfp_flags)
+			     int flags, gfp_t gfp_flags)
 {
 	unsigned long pfn;
 	struct page *page;
@@ -490,7 +485,7 @@ int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
 
 	/* isolate [isolate_start, isolate_start + pageblock_nr_pages) pageblock */
 	ret = isolate_single_pageblock(isolate_start, flags, gfp_flags, false,
-			skip_isolation, migratetype);
+			skip_isolation);
 	if (ret)
 		return ret;
 
@@ -499,7 +494,7 @@ int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
 
 	/* isolate [isolate_end - pageblock_nr_pages, isolate_end) pageblock */
 	ret = isolate_single_pageblock(isolate_end, flags, gfp_flags, true,
-			skip_isolation, migratetype);
+			skip_isolation);
 	if (ret) {
 		unset_migratetype_isolate(pfn_to_page(isolate_start));
 		return ret;
@@ -510,7 +505,7 @@ int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
 	     pfn < isolate_end - pageblock_nr_pages;
 	     pfn += pageblock_nr_pages) {
 		page = __first_valid_page(pfn, pageblock_nr_pages);
-		if (page && set_migratetype_isolate(page, migratetype, flags,
+		if (page && set_migratetype_isolate(page, flags,
 					start_pfn, end_pfn)) {
 			undo_isolate_page_range(isolate_start, pfn);
 			unset_migratetype_isolate(
-- 
2.45.2



^ permalink raw reply related	[flat|nested] 15+ messages in thread

* Re: [RFC PATCH 3/4] mm/page_isolation: remove migratetype from undo_isolate_page_range()
  2024-08-28 20:22 ` [RFC PATCH 3/4] mm/page_isolation: remove migratetype from undo_isolate_page_range() Zi Yan
@ 2024-09-02  9:00   ` David Hildenbrand
  2024-09-02 15:34     ` Zi Yan
  0 siblings, 1 reply; 15+ messages in thread
From: David Hildenbrand @ 2024-09-02  9:00 UTC (permalink / raw)
  To: Zi Yan, linux-mm
  Cc: Oscar Salvador, Vlastimil Babka, Johannes Weiner, Baolin Wang,
	Kirill A. Shutemov, Mel Gorman, linux-kernel

On 28.08.24 22:22, Zi Yan wrote:
> Since migratetype is no longer overwritten during pageblock isolation,
> undoing pageblock isolation no longer needs which migratetype to restore.
> 
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> ---

Yes, that's the right way of doing it.

Acked-by: David Hildenbrand <david@redhat.com>

-- 
Cheers,

David / dhildenb



^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [RFC PATCH 4/4] mm/page_isolation: remove migratetype parameter from more functions.
  2024-08-28 20:22 ` [RFC PATCH 4/4] mm/page_isolation: remove migratetype parameter from more functions Zi Yan
@ 2024-09-02  9:06   ` David Hildenbrand
  2024-09-02 15:34     ` Zi Yan
  0 siblings, 1 reply; 15+ messages in thread
From: David Hildenbrand @ 2024-09-02  9:06 UTC (permalink / raw)
  To: Zi Yan, linux-mm
  Cc: Oscar Salvador, Vlastimil Babka, Johannes Weiner, Baolin Wang,
	Kirill A. Shutemov, Mel Gorman, linux-kernel

On 28.08.24 22:22, Zi Yan wrote:
> migratetype is no longer overwritten during pageblock isolation,
> start_isolate_page_range(), has_unmovable_pages(), and
> set_migratetype_isolate() no longer need which migratetype to restore
> during isolation failure. For has_unmoable_pages(), it needs to know if
> the isolation is for CMA allocation, so adding CMA_ALLOCATION to isolation
> flags to provide the information.
> 
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> ---
>   include/linux/page-isolation.h |  3 ++-
>   mm/memory_hotplug.c            |  1 -
>   mm/page_alloc.c                |  4 +++-
>   mm/page_isolation.c            | 27 +++++++++++----------------
>   4 files changed, 16 insertions(+), 19 deletions(-)
> 
> diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
> index c2a1bd621561..e94117101b6c 100644
> --- a/include/linux/page-isolation.h
> +++ b/include/linux/page-isolation.h
> @@ -32,13 +32,14 @@ static inline bool is_migrate_isolate(int migratetype)
>   
>   #define MEMORY_OFFLINE	0x1
>   #define REPORT_FAILURE	0x2
> +#define CMA_ALLOCATION	0x4
>   
>   void set_pageblock_migratetype(struct page *page, int migratetype);
>   
>   bool move_freepages_block_isolate(struct zone *zone, struct page *page);
>   
>   int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
> -			     int migratetype, int flags, gfp_t gfp_flags);
> +			     int flags, gfp_t gfp_flags);
>   
>   void undo_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn);
>   
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 4265272faf4c..fe0b71e0f307 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -1993,7 +1993,6 @@ int offline_pages(unsigned long start_pfn, unsigned long nr_pages,
>   
>   	/* set above range as isolated */
>   	ret = start_isolate_page_range(start_pfn, end_pfn,
> -				       MIGRATE_MOVABLE,
>   				       MEMORY_OFFLINE | REPORT_FAILURE,
>   				       GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL);
>   	if (ret) {
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 4d06932ba69a..c60bb95d7e65 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -6607,7 +6607,9 @@ int alloc_contig_range_noprof(unsigned long start, unsigned long end,
>   	 * put back to page allocator so that buddy can use them.
>   	 */
>   
> -	ret = start_isolate_page_range(start, end, migratetype, 0, gfp_mask);
> +	ret = start_isolate_page_range(start, end,
> +			migratetype == MIGRATE_CMA ? CMA_ALLOCATION : 0,

Can we have flags for alloc_contig_range() instead of passing in a 
(weird) migratetype?

Then, we should make sure that we warn if we try a CMA allocation on any 
pageblock that is not of type CMA.

I'm trying to remember what happens if we try offlining a memory region 
that is of type MIGRATE_CMA right now ... I remember that we fail it. We 
should make sure that is still the case, otherwise we could seriously 
break something.

-- 
Cheers,

David / dhildenb



^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [RFC PATCH 2/4] mm/page_isolation: remove migratetype from move_freepages_block_isolate()
  2024-08-28 20:22 ` [RFC PATCH 2/4] mm/page_isolation: remove migratetype from move_freepages_block_isolate() Zi Yan
@ 2024-09-02 14:42   ` David Hildenbrand
  2024-09-02 15:30     ` Zi Yan
  0 siblings, 1 reply; 15+ messages in thread
From: David Hildenbrand @ 2024-09-02 14:42 UTC (permalink / raw)
  To: Zi Yan, linux-mm
  Cc: Oscar Salvador, Vlastimil Babka, Johannes Weiner, Baolin Wang,
	Kirill A. Shutemov, Mel Gorman, linux-kernel

On 28.08.24 22:22, Zi Yan wrote:
> Since migratetype is no longer overwritten during pageblock isolation,
> moving pageblocks to and from MIGRATE_ISOLATE do not need migratetype.
> 
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> ---
>   include/linux/page-isolation.h |  3 +--
>   mm/page_alloc.c                | 27 +++++++++++++++++++++------
>   mm/page_isolation.c            | 19 +++++++++----------
>   3 files changed, 31 insertions(+), 18 deletions(-)
> 
> diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
> index 11b8695115ea..6a62401410c3 100644
> --- a/include/linux/page-isolation.h
> +++ b/include/linux/page-isolation.h
> @@ -35,8 +35,7 @@ static inline bool is_migrate_isolate(int migratetype)
>   
>   void set_pageblock_migratetype(struct page *page, int migratetype);
>   
> -bool move_freepages_block_isolate(struct zone *zone, struct page *page,
> -				  int migratetype);
> +bool move_freepages_block_isolate(struct zone *zone, struct page *page);
>   
>   int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
>   			     int migratetype, int flags, gfp_t gfp_flags);
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 4ea5cd1a07e2..dc7c36461953 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1764,10 +1764,12 @@ static unsigned long find_large_buddy(unsigned long start_pfn)
>    *
>    * Returns %true if pages could be moved, %false otherwise.
>    */
> -bool move_freepages_block_isolate(struct zone *zone, struct page *page,
> -				  int migratetype)
> +bool move_freepages_block_isolate(struct zone *zone, struct page *page)
>   {
>   	unsigned long start_pfn, pfn;
> +	bool is_block_isolated = get_pageblock_isolate(page);
> +	int from_mt;

I think we should have two functions, one that isolates, another one 
that un-isolates. Or at least make the semantics not depend on the 
current state of the pageblock.

bool pageblock_set_isolated_and_move_free_pages(struct zone *zone, 
struct page *page, bool isolated);

vs.

pageblock_isolate_and_move_free_pages()
pageblock_unisolate_and_move_free_pages()

-- 
Cheers,

David / dhildenb



^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [RFC PATCH 2/4] mm/page_isolation: remove migratetype from move_freepages_block_isolate()
  2024-09-02 14:42   ` David Hildenbrand
@ 2024-09-02 15:30     ` Zi Yan
  0 siblings, 0 replies; 15+ messages in thread
From: Zi Yan @ 2024-09-02 15:30 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-mm, Oscar Salvador, Vlastimil Babka, Johannes Weiner,
	Baolin Wang, Kirill A. Shutemov, Mel Gorman, linux-kernel


On 2 Sep 2024, at 10:42, David Hildenbrand wrote:

> On 28.08.24 22:22, Zi Yan wrote:
>> Since migratetype is no longer overwritten during pageblock isolation,
>> moving pageblocks to and from MIGRATE_ISOLATE do not need migratetype.
>>
>> Signed-off-by: Zi Yan <ziy@nvidia.com>
>> ---
>>   include/linux/page-isolation.h |  3 +--
>>   mm/page_alloc.c                | 27 +++++++++++++++++++++------
>>   mm/page_isolation.c            | 19 +++++++++----------
>>   3 files changed, 31 insertions(+), 18 deletions(-)
>>
>> diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
>> index 11b8695115ea..6a62401410c3 100644
>> --- a/include/linux/page-isolation.h
>> +++ b/include/linux/page-isolation.h
>> @@ -35,8 +35,7 @@ static inline bool is_migrate_isolate(int migratetype)
>>    void set_pageblock_migratetype(struct page *page, int migratetype);
>>  -bool move_freepages_block_isolate(struct zone *zone, struct page *page,
>> -				  int migratetype);
>> +bool move_freepages_block_isolate(struct zone *zone, struct page *page);
>>    int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
>>   			     int migratetype, int flags, gfp_t gfp_flags);
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index 4ea5cd1a07e2..dc7c36461953 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -1764,10 +1764,12 @@ static unsigned long find_large_buddy(unsigned long start_pfn)
>>    *
>>    * Returns %true if pages could be moved, %false otherwise.
>>    */
>> -bool move_freepages_block_isolate(struct zone *zone, struct page *page,
>> -				  int migratetype)
>> +bool move_freepages_block_isolate(struct zone *zone, struct page *page)
>>   {
>>   	unsigned long start_pfn, pfn;
>> +	bool is_block_isolated = get_pageblock_isolate(page);
>> +	int from_mt;
>
> I think we should have two functions, one that isolates, another one that un-isolates. Or at least make the semantics not depend on the current state of the pageblock.
>
> bool pageblock_set_isolated_and_move_free_pages(struct zone *zone, struct page *page, bool isolated);
>
> vs.
>
> pageblock_isolate_and_move_free_pages()
> pageblock_unisolate_and_move_free_pages()

Sure. Will do.
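
Something along these lines maybe (rough sketch, untested): a common helper,
say __move_freepages_block_isolate(zone, page, isolate), keeps today's
function body but is driven by the explicit isolate argument instead of the
pageblock's current isolate bit, and the two entry points become thin
wrappers:

bool pageblock_isolate_and_move_free_pages(struct zone *zone, struct page *page)
{
	return __move_freepages_block_isolate(zone, page, true);
}

bool pageblock_unisolate_and_move_free_pages(struct zone *zone, struct page *page)
{
	return __move_freepages_block_isolate(zone, page, false);
}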

--
Best Regards,
Yan, Zi


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [RFC PATCH 4/4] mm/page_isolation: remove migratetype parameter from more functions.
  2024-09-02  9:06   ` David Hildenbrand
@ 2024-09-02 15:34     ` Zi Yan
  2024-09-02 16:42       ` David Hildenbrand
  0 siblings, 1 reply; 15+ messages in thread
From: Zi Yan @ 2024-09-02 15:34 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-mm, Oscar Salvador, Vlastimil Babka, Johannes Weiner,
	Baolin Wang, Kirill A. Shutemov, Mel Gorman, linux-kernel


On 2 Sep 2024, at 5:06, David Hildenbrand wrote:

> On 28.08.24 22:22, Zi Yan wrote:
>> migratetype is no longer overwritten during pageblock isolation,
>> start_isolate_page_range(), has_unmovable_pages(), and
>> set_migratetype_isolate() no longer need which migratetype to restore
>> during isolation failure. For has_unmoable_pages(), it needs to know if
>> the isolation is for CMA allocation, so adding CMA_ALLOCATION to isolation
>> flags to provide the information.
>>
>> Signed-off-by: Zi Yan <ziy@nvidia.com>
>> ---
>>   include/linux/page-isolation.h |  3 ++-
>>   mm/memory_hotplug.c            |  1 -
>>   mm/page_alloc.c                |  4 +++-
>>   mm/page_isolation.c            | 27 +++++++++++----------------
>>   4 files changed, 16 insertions(+), 19 deletions(-)
>>
>> diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
>> index c2a1bd621561..e94117101b6c 100644
>> --- a/include/linux/page-isolation.h
>> +++ b/include/linux/page-isolation.h
>> @@ -32,13 +32,14 @@ static inline bool is_migrate_isolate(int migratetype)
>>    #define MEMORY_OFFLINE	0x1
>>   #define REPORT_FAILURE	0x2
>> +#define CMA_ALLOCATION	0x4
>>    void set_pageblock_migratetype(struct page *page, int migratetype);
>>    bool move_freepages_block_isolate(struct zone *zone, struct page *page);
>>    int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
>> -			     int migratetype, int flags, gfp_t gfp_flags);
>> +			     int flags, gfp_t gfp_flags);
>>    void undo_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn);
>>  diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
>> index 4265272faf4c..fe0b71e0f307 100644
>> --- a/mm/memory_hotplug.c
>> +++ b/mm/memory_hotplug.c
>> @@ -1993,7 +1993,6 @@ int offline_pages(unsigned long start_pfn, unsigned long nr_pages,
>>    	/* set above range as isolated */
>>   	ret = start_isolate_page_range(start_pfn, end_pfn,
>> -				       MIGRATE_MOVABLE,
>>   				       MEMORY_OFFLINE | REPORT_FAILURE,
>>   				       GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL);
>>   	if (ret) {
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index 4d06932ba69a..c60bb95d7e65 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -6607,7 +6607,9 @@ int alloc_contig_range_noprof(unsigned long start, unsigned long end,
>>   	 * put back to page allocator so that buddy can use them.
>>   	 */
>>  -	ret = start_isolate_page_range(start, end, migratetype, 0, gfp_mask);
>> +	ret = start_isolate_page_range(start, end,
>> +			migratetype == MIGRATE_CMA ? CMA_ALLOCATION : 0,
>
> Can we have flags for alloc_contig_range() instead of passing in a (weird) migratetype?
>
> Then, we should make sure that we warn if we try a CMA allocation on any pageblock that is not of type CMA.

Sure. I will expose the existing isolation flags (MEMORY_OFFLINE, REPORT_FAILURE,
and CMA_ALLOCATION) as an alloc_contig_range() parameter to replace the
migratetype one.
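
For the warning on non-CMA pageblocks, maybe something like this in
set_migratetype_isolate() (rough sketch, untested):

	if (isol_flags & CMA_ALLOCATION)
		VM_WARN_ON_ONCE(!is_migrate_cma_page(page));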

>
> I'm trying to remember what happens if we try offlining a memory region that is of type MIGRATE_CMA right now ... I remember that we fail it. We should make sure that is still the case, otherwise we could seriously break something.

Yes, that fails. That is why I added CMA_ALLOCATION, which is used in
has_unmovable_pages() for this situation.


--
Best Regards,
Yan, Zi


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [RFC PATCH 3/4] mm/page_isolation: remove migratetype from undo_isolate_page_range()
  2024-09-02  9:00   ` David Hildenbrand
@ 2024-09-02 15:34     ` Zi Yan
  0 siblings, 0 replies; 15+ messages in thread
From: Zi Yan @ 2024-09-02 15:34 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-mm, Oscar Salvador, Vlastimil Babka, Johannes Weiner,
	Baolin Wang, Kirill A. Shutemov, Mel Gorman, linux-kernel


On 2 Sep 2024, at 5:00, David Hildenbrand wrote:

> On 28.08.24 22:22, Zi Yan wrote:
>> Since migratetype is no longer overwritten during pageblock isolation,
>> undoing pageblock isolation no longer needs which migratetype to restore.
>>
>> Signed-off-by: Zi Yan <ziy@nvidia.com>
>> ---
>
> Yes, that's the right way of doing it.
>
> Acked-by: David Hildenbrand <david@redhat.com>

Thanks. :)

--
Best Regards,
Yan, Zi


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [RFC PATCH 4/4] mm/page_isolation: remove migratetype parameter from more functions.
  2024-09-02 15:34     ` Zi Yan
@ 2024-09-02 16:42       ` David Hildenbrand
  2024-09-04  2:02         ` Zi Yan
  0 siblings, 1 reply; 15+ messages in thread
From: David Hildenbrand @ 2024-09-02 16:42 UTC (permalink / raw)
  To: Zi Yan
  Cc: linux-mm, Oscar Salvador, Vlastimil Babka, Johannes Weiner,
	Baolin Wang, Kirill A. Shutemov, Mel Gorman, linux-kernel

On 02.09.24 17:34, Zi Yan wrote:
> On 2 Sep 2024, at 5:06, David Hildenbrand wrote:
> 
>> On 28.08.24 22:22, Zi Yan wrote:
>>> migratetype is no longer overwritten during pageblock isolation,
>>> start_isolate_page_range(), has_unmovable_pages(), and
>>> set_migratetype_isolate() no longer need which migratetype to restore
>>> during isolation failure. For has_unmoable_pages(), it needs to know if
>>> the isolation is for CMA allocation, so adding CMA_ALLOCATION to isolation
>>> flags to provide the information.
>>>
>>> Signed-off-by: Zi Yan <ziy@nvidia.com>
>>> ---
>>>    include/linux/page-isolation.h |  3 ++-
>>>    mm/memory_hotplug.c            |  1 -
>>>    mm/page_alloc.c                |  4 +++-
>>>    mm/page_isolation.c            | 27 +++++++++++----------------
>>>    4 files changed, 16 insertions(+), 19 deletions(-)
>>>
>>> diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
>>> index c2a1bd621561..e94117101b6c 100644
>>> --- a/include/linux/page-isolation.h
>>> +++ b/include/linux/page-isolation.h
>>> @@ -32,13 +32,14 @@ static inline bool is_migrate_isolate(int migratetype)
>>>     #define MEMORY_OFFLINE	0x1
>>>    #define REPORT_FAILURE	0x2
>>> +#define CMA_ALLOCATION	0x4
>>>     void set_pageblock_migratetype(struct page *page, int migratetype);
>>>     bool move_freepages_block_isolate(struct zone *zone, struct page *page);
>>>     int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
>>> -			     int migratetype, int flags, gfp_t gfp_flags);
>>> +			     int flags, gfp_t gfp_flags);
>>>     void undo_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn);
>>>   diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
>>> index 4265272faf4c..fe0b71e0f307 100644
>>> --- a/mm/memory_hotplug.c
>>> +++ b/mm/memory_hotplug.c
>>> @@ -1993,7 +1993,6 @@ int offline_pages(unsigned long start_pfn, unsigned long nr_pages,
>>>     	/* set above range as isolated */
>>>    	ret = start_isolate_page_range(start_pfn, end_pfn,
>>> -				       MIGRATE_MOVABLE,
>>>    				       MEMORY_OFFLINE | REPORT_FAILURE,
>>>    				       GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL);
>>>    	if (ret) {
>>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>>> index 4d06932ba69a..c60bb95d7e65 100644
>>> --- a/mm/page_alloc.c
>>> +++ b/mm/page_alloc.c
>>> @@ -6607,7 +6607,9 @@ int alloc_contig_range_noprof(unsigned long start, unsigned long end,
>>>    	 * put back to page allocator so that buddy can use them.
>>>    	 */
>>>   -	ret = start_isolate_page_range(start, end, migratetype, 0, gfp_mask);
>>> +	ret = start_isolate_page_range(start, end,
>>> +			migratetype == MIGRATE_CMA ? CMA_ALLOCATION : 0,
>>
>> Can we have flags for alloc_contig_range() instead of passing in a (weird) migratetype?
>>
>> Then, we should make sure that we warn if we try a CMA allocation on any pageblock that is not of type CMA.
> 
> Sure. I will expose the existing isolation flags (MEMORY_OFFLINE, REPORT_FAILURE,
> and CMA_ALLOCATION) as alloc_contig_range() parameter to replace migratetype one.
> 

Maybe we want some proper, distinct alloc_contig_range() flags 
"acr_flags_t". Might be cleanest to express anything that doesn't fall 
into the gfp_t flag category.

Exposing MEMORY_OFFLINE feels wrong, for example.

>>
>> I'm trying to remember what happens if we try offlining a memory region that is of type MIGRATE_CMA right now ... I remember that we fail it. We should make sure that is still the case, otherwise we could seriously break something.
> 
> Yes, that fails. That is why I added CMA_ALLOCATION, which is used in
> has_unmovable_pages() for this situation.


Ah, okay I stumbled over that but wasn't sure if it gets the job done.

thanks!

-- 
Cheers,

David / dhildenb



^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [RFC PATCH 4/4] mm/page_isolation: remove migratetype parameter from more functions.
  2024-09-02 16:42       ` David Hildenbrand
@ 2024-09-04  2:02         ` Zi Yan
  2024-09-04  8:50           ` David Hildenbrand
  0 siblings, 1 reply; 15+ messages in thread
From: Zi Yan @ 2024-09-04  2:02 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-mm, Oscar Salvador, Vlastimil Babka, Johannes Weiner,
	Baolin Wang, Kirill A. Shutemov, Mel Gorman, linux-kernel


On 2 Sep 2024, at 12:42, David Hildenbrand wrote:

> On 02.09.24 17:34, Zi Yan wrote:
>> On 2 Sep 2024, at 5:06, David Hildenbrand wrote:
>>
>>> On 28.08.24 22:22, Zi Yan wrote:
>>>> migratetype is no longer overwritten during pageblock isolation,
>>>> start_isolate_page_range(), has_unmovable_pages(), and
>>>> set_migratetype_isolate() no longer need which migratetype to restore
>>>> during isolation failure. For has_unmoable_pages(), it needs to know if
>>>> the isolation is for CMA allocation, so adding CMA_ALLOCATION to isolation
>>>> flags to provide the information.
>>>>
>>>> Signed-off-by: Zi Yan <ziy@nvidia.com>
>>>> ---
>>>>    include/linux/page-isolation.h |  3 ++-
>>>>    mm/memory_hotplug.c            |  1 -
>>>>    mm/page_alloc.c                |  4 +++-
>>>>    mm/page_isolation.c            | 27 +++++++++++----------------
>>>>    4 files changed, 16 insertions(+), 19 deletions(-)
>>>>
>>>> diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
>>>> index c2a1bd621561..e94117101b6c 100644
>>>> --- a/include/linux/page-isolation.h
>>>> +++ b/include/linux/page-isolation.h
>>>> @@ -32,13 +32,14 @@ static inline bool is_migrate_isolate(int migratetype)
>>>>     #define MEMORY_OFFLINE	0x1
>>>>    #define REPORT_FAILURE	0x2
>>>> +#define CMA_ALLOCATION	0x4
>>>>     void set_pageblock_migratetype(struct page *page, int migratetype);
>>>>     bool move_freepages_block_isolate(struct zone *zone, struct page *page);
>>>>     int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
>>>> -			     int migratetype, int flags, gfp_t gfp_flags);
>>>> +			     int flags, gfp_t gfp_flags);
>>>>     void undo_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn);
>>>>   diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
>>>> index 4265272faf4c..fe0b71e0f307 100644
>>>> --- a/mm/memory_hotplug.c
>>>> +++ b/mm/memory_hotplug.c
>>>> @@ -1993,7 +1993,6 @@ int offline_pages(unsigned long start_pfn, unsigned long nr_pages,
>>>>     	/* set above range as isolated */
>>>>    	ret = start_isolate_page_range(start_pfn, end_pfn,
>>>> -				       MIGRATE_MOVABLE,
>>>>    				       MEMORY_OFFLINE | REPORT_FAILURE,
>>>>    				       GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL);
>>>>    	if (ret) {
>>>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>>>> index 4d06932ba69a..c60bb95d7e65 100644
>>>> --- a/mm/page_alloc.c
>>>> +++ b/mm/page_alloc.c
>>>> @@ -6607,7 +6607,9 @@ int alloc_contig_range_noprof(unsigned long start, unsigned long end,
>>>>    	 * put back to page allocator so that buddy can use them.
>>>>    	 */
>>>>   -	ret = start_isolate_page_range(start, end, migratetype, 0, gfp_mask);
>>>> +	ret = start_isolate_page_range(start, end,
>>>> +			migratetype == MIGRATE_CMA ? CMA_ALLOCATION : 0,
>>>
>>> Can we have flags for alloc_contig_range() instead of passing in a (weird) migratetype?
>>>
>>> Then, we should make sure that we warn if we try a CMA allocation on any pageblock that is not of type CMA.
>>
>> Sure. I will expose the existing isolation flags (MEMORY_OFFLINE, REPORT_FAILURE,
>> and CMA_ALLOCATION) as alloc_contig_range() parameter to replace migratetype one.
>>
>
> Maybe we want some proper, distinct alloc_contig_range() falgs "acr_flags_t". Might be cleanest, to express anything that doesn't fall into the gfp_t flag category.
>
> Exposing MEMORY_OFFLINE feels wrong, for example.

OK, it seems that I mixed up the start_isolate_page_range() flags with the
alloc_contig_range() flags. Let me clarify them below.

For start_isolate_page_range(), memory offline calls it separately and
needs MEMORY_OFFLINE and REPORT_FAILURE; CMA allocation uses it via
alloc_contig_range() and needs a flag (like CMA_ALLOCATION) for its
own checks.

BTW, it seems to me that MEMORY_OFFLINE and REPORT_FAILURE can be merged,
since they are always used together. Let me know if you disagree.

For alloc_contig_range(), the migratetype parameter is what you are talking
about above. There are two callers: cma_alloc() and alloc_contig_pages().
The acr_flags_t would basically be a caller id. Something like this?
enum acr_flags_t {
	ACR_CMA_ALLOC,
	ACR_CONTIG_PAGES,
};

And ACR_CMA_ALLOC needs to be translated to CMA_ALLOCATION when
start_isolate_page_range() is called.
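
E.g. (rough sketch; "alloc_flags" is just a placeholder name for the new
acr_flags_t parameter), in alloc_contig_range_noprof():

	ret = start_isolate_page_range(start, end,
			alloc_flags == ACR_CMA_ALLOC ? CMA_ALLOCATION : 0,
			gfp_mask);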

BTW, after removing the migratetype parameter from alloc_contig_range(),
the tracepoint in __alloc_contig_migrate_range() needs to be changed to
use acr_flags_t, since I do not think we want to convert acr_flags_t
back to a migratetype.


Best Regards,
Yan, Zi


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [RFC PATCH 4/4] mm/page_isolation: remove migratetype parameter from more functions.
  2024-09-04  2:02         ` Zi Yan
@ 2024-09-04  8:50           ` David Hildenbrand
  2024-09-04 13:53             ` Zi Yan
  0 siblings, 1 reply; 15+ messages in thread
From: David Hildenbrand @ 2024-09-04  8:50 UTC (permalink / raw)
  To: Zi Yan
  Cc: linux-mm, Oscar Salvador, Vlastimil Babka, Johannes Weiner,
	Baolin Wang, Kirill A. Shutemov, Mel Gorman, linux-kernel

On 04.09.24 04:02, Zi Yan wrote:
> On 2 Sep 2024, at 12:42, David Hildenbrand wrote:
> 
>> On 02.09.24 17:34, Zi Yan wrote:
>>> On 2 Sep 2024, at 5:06, David Hildenbrand wrote:
>>>
>>>> On 28.08.24 22:22, Zi Yan wrote:
>>>>> migratetype is no longer overwritten during pageblock isolation,
>>>>> start_isolate_page_range(), has_unmovable_pages(), and
>>>>> set_migratetype_isolate() no longer need which migratetype to restore
>>>>> during isolation failure. For has_unmoable_pages(), it needs to know if
>>>>> the isolation is for CMA allocation, so adding CMA_ALLOCATION to isolation
>>>>> flags to provide the information.
>>>>>
>>>>> Signed-off-by: Zi Yan <ziy@nvidia.com>
>>>>> ---
>>>>>     include/linux/page-isolation.h |  3 ++-
>>>>>     mm/memory_hotplug.c            |  1 -
>>>>>     mm/page_alloc.c                |  4 +++-
>>>>>     mm/page_isolation.c            | 27 +++++++++++----------------
>>>>>     4 files changed, 16 insertions(+), 19 deletions(-)
>>>>>
>>>>> diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
>>>>> index c2a1bd621561..e94117101b6c 100644
>>>>> --- a/include/linux/page-isolation.h
>>>>> +++ b/include/linux/page-isolation.h
>>>>> @@ -32,13 +32,14 @@ static inline bool is_migrate_isolate(int migratetype)
>>>>>      #define MEMORY_OFFLINE	0x1
>>>>>     #define REPORT_FAILURE	0x2
>>>>> +#define CMA_ALLOCATION	0x4
>>>>>      void set_pageblock_migratetype(struct page *page, int migratetype);
>>>>>      bool move_freepages_block_isolate(struct zone *zone, struct page *page);
>>>>>      int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
>>>>> -			     int migratetype, int flags, gfp_t gfp_flags);
>>>>> +			     int flags, gfp_t gfp_flags);
>>>>>      void undo_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn);
>>>>>    diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
>>>>> index 4265272faf4c..fe0b71e0f307 100644
>>>>> --- a/mm/memory_hotplug.c
>>>>> +++ b/mm/memory_hotplug.c
>>>>> @@ -1993,7 +1993,6 @@ int offline_pages(unsigned long start_pfn, unsigned long nr_pages,
>>>>>      	/* set above range as isolated */
>>>>>     	ret = start_isolate_page_range(start_pfn, end_pfn,
>>>>> -				       MIGRATE_MOVABLE,
>>>>>     				       MEMORY_OFFLINE | REPORT_FAILURE,
>>>>>     				       GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL);
>>>>>     	if (ret) {
>>>>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>>>>> index 4d06932ba69a..c60bb95d7e65 100644
>>>>> --- a/mm/page_alloc.c
>>>>> +++ b/mm/page_alloc.c
>>>>> @@ -6607,7 +6607,9 @@ int alloc_contig_range_noprof(unsigned long start, unsigned long end,
>>>>>     	 * put back to page allocator so that buddy can use them.
>>>>>     	 */
>>>>>    -	ret = start_isolate_page_range(start, end, migratetype, 0, gfp_mask);
>>>>> +	ret = start_isolate_page_range(start, end,
>>>>> +			migratetype == MIGRATE_CMA ? CMA_ALLOCATION : 0,
>>>>
>>>> Can we have flags for alloc_contig_range() instead of passing in a (weird) migratetype?
>>>>
>>>> Then, we should make sure that we warn if we try a CMA allocation on any pageblock that is not of type CMA.
>>>
>>> Sure. I will expose the existing isolation flags (MEMORY_OFFLINE,
>>> REPORT_FAILURE, and CMA_ALLOCATION) as an alloc_contig_range() parameter
>>> to replace the migratetype one.
>>>
>>
>> Maybe we want some proper, distinct alloc_contig_range() flags "acr_flags_t". Might be cleanest to express anything that doesn't fall into the gfp_t flag category.
>>
>> Exposing MEMORY_OFFLINE feels wrong, for example.
> 
> OK, it seems that I mixed up the start_isolate_page_range() flags with
> the alloc_contig_range() flags. Let me clarify them below.
> 
> For start_isolate_page_range(), memory offline calls it separately and
> needs MEMORY_OFFLINE and REPORT_FAILURE; CMA allocation uses it via
> alloc_contig_range() and needs a flag (like CMA_ALLOCATION) for its
> own checks.
> 
> BTW, it seems to me that MEMORY_OFFLINE and REPORT_FAILURE can be merged,
> since they are always used together. Let me know if you disagree.

I think there was a discussion about possibly using REPORT_FAILURE in 
other cases, but I agree that we might just merge them at this point.

> 
> For alloc_contig_range(), the migratetype parameter is what you are talking
> about above. There are two callers: cma_alloc() and alloc_contig_pages().
> The acr_flags_t is basically a caller id. Something like this?
> enum acr_flags_t {
> 	ACR_CMA_ALLOC,
> 	ACR_CONTIG_PAGES,
> };

I'd do something like:

typedef unsigned int __bitwise acr_flags_t;

#define ACR_CMA		((__force acr_flags_t)BIT(0))

No need for "ACR_CONTIG_PAGES", it's implicit if the CMA flag is not set.


> 
> And ACR_CMA_ALLOC needs to be translated to CMA_ALLOCATION when
> start_isolate_page_range() is called.

Yes.

> 
> BTW, after removing migratetype parameter from alloc_contig_range(),
> the tracepoint in __alloc_contig_migrate_range() needs to be changed to
> use acr_flags_t, since I do not think we want to convert acr_flags_t
> back to migratetype.

Sure, feel free to modify these tracepoints as it suits you.

-- 
Cheers,

David / dhildenb



^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [RFC PATCH 4/4] mm/page_isolation: remove migratetype parameter from more functions.
  2024-09-04  8:50           ` David Hildenbrand
@ 2024-09-04 13:53             ` Zi Yan
  0 siblings, 0 replies; 15+ messages in thread
From: Zi Yan @ 2024-09-04 13:53 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-mm, Oscar Salvador, Vlastimil Babka, Johannes Weiner,
	Baolin Wang, Kirill A. Shutemov, Mel Gorman, linux-kernel

[-- Attachment #1: Type: text/plain, Size: 5450 bytes --]

On 4 Sep 2024, at 4:50, David Hildenbrand wrote:

> On 04.09.24 04:02, Zi Yan wrote:
>> On 2 Sep 2024, at 12:42, David Hildenbrand wrote:
>>
>>> On 02.09.24 17:34, Zi Yan wrote:
>>>> On 2 Sep 2024, at 5:06, David Hildenbrand wrote:
>>>>
>>>>> On 28.08.24 22:22, Zi Yan wrote:
>>>>>> Since migratetype is no longer overwritten during pageblock isolation,
>>>>>> start_isolate_page_range(), has_unmovable_pages(), and
>>>>>> set_migratetype_isolate() no longer need to know which migratetype to
>>>>>> restore on isolation failure. has_unmovable_pages() still needs to know
>>>>>> whether the isolation is for a CMA allocation, so CMA_ALLOCATION is
>>>>>> added to the isolation flags to provide that information.
>>>>>>
>>>>>> Signed-off-by: Zi Yan <ziy@nvidia.com>
>>>>>> ---
>>>>>>     include/linux/page-isolation.h |  3 ++-
>>>>>>     mm/memory_hotplug.c            |  1 -
>>>>>>     mm/page_alloc.c                |  4 +++-
>>>>>>     mm/page_isolation.c            | 27 +++++++++++----------------
>>>>>>     4 files changed, 16 insertions(+), 19 deletions(-)
>>>>>>
>>>>>> diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
>>>>>> index c2a1bd621561..e94117101b6c 100644
>>>>>> --- a/include/linux/page-isolation.h
>>>>>> +++ b/include/linux/page-isolation.h
>>>>>> @@ -32,13 +32,14 @@ static inline bool is_migrate_isolate(int migratetype)
>>>>>>      #define MEMORY_OFFLINE	0x1
>>>>>>     #define REPORT_FAILURE	0x2
>>>>>> +#define CMA_ALLOCATION	0x4
>>>>>>      void set_pageblock_migratetype(struct page *page, int migratetype);
>>>>>>      bool move_freepages_block_isolate(struct zone *zone, struct page *page);
>>>>>>      int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
>>>>>> -			     int migratetype, int flags, gfp_t gfp_flags);
>>>>>> +			     int flags, gfp_t gfp_flags);
>>>>>>      void undo_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn);
>>>>>>    diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
>>>>>> index 4265272faf4c..fe0b71e0f307 100644
>>>>>> --- a/mm/memory_hotplug.c
>>>>>> +++ b/mm/memory_hotplug.c
>>>>>> @@ -1993,7 +1993,6 @@ int offline_pages(unsigned long start_pfn, unsigned long nr_pages,
>>>>>>      	/* set above range as isolated */
>>>>>>     	ret = start_isolate_page_range(start_pfn, end_pfn,
>>>>>> -				       MIGRATE_MOVABLE,
>>>>>>     				       MEMORY_OFFLINE | REPORT_FAILURE,
>>>>>>     				       GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL);
>>>>>>     	if (ret) {
>>>>>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>>>>>> index 4d06932ba69a..c60bb95d7e65 100644
>>>>>> --- a/mm/page_alloc.c
>>>>>> +++ b/mm/page_alloc.c
>>>>>> @@ -6607,7 +6607,9 @@ int alloc_contig_range_noprof(unsigned long start, unsigned long end,
>>>>>>     	 * put back to page allocator so that buddy can use them.
>>>>>>     	 */
>>>>>>    -	ret = start_isolate_page_range(start, end, migratetype, 0, gfp_mask);
>>>>>> +	ret = start_isolate_page_range(start, end,
>>>>>> +			migratetype == MIGRATE_CMA ? CMA_ALLOCATION : 0,
>>>>>
>>>>> Can we have flags for alloc_contig_range() instead of passing in a (weird) migratetype?
>>>>>
>>>>> Then, we should make sure that we warn if we try a CMA allocation on any pageblock that is not of type CMA.
>>>>
>>>> Sure. I will expose the existing isolation flags (MEMORY_OFFLINE,
>>>> REPORT_FAILURE, and CMA_ALLOCATION) as an alloc_contig_range() parameter
>>>> to replace the migratetype one.
>>>>
>>>
>>> Maybe we want some proper, distinct alloc_contig_range() flags "acr_flags_t". Might be cleanest to express anything that doesn't fall into the gfp_t flag category.
>>>
>>> Exposing MEMORY_OFFLINE feels wrong, for example.
>>
>> OK, it seems that I mixed up the start_isolate_page_range() flags with
>> the alloc_contig_range() flags. Let me clarify them below.
>>
>> For start_isolate_page_range(), memory offline calls it separately and
>> needs MEMORY_OFFLINE and REPORT_FAILURE; CMA allocation uses it via
>> alloc_contig_range() and needs a flag (like CMA_ALLOCATION) for its
>> own checks.
>>
>> BTW, it seems to me that MEMORY_OFFLINE and REPORT_FAILURE can be merged,
>> since they are always used together. Let me know if you disagree.
>
> I think there was a discussion about possibly using REPORT_FAILURE in other cases, but I agree that we might just merge them at this point.
>
>>
>> For alloc_contig_range(), the migratetype parameter is what you are talking
>> about above. There are two callers: cma_alloc() and alloc_contig_pages().
>> The acr_flags_t is basically a caller id. Something like this?
>> enum acr_flags_t {
>> 	ACR_CMA_ALLOC,
>> 	ACR_CONTIG_PAGES,
>> };
>
> I'd do something like:
>
> typedef unsigned int __bitwise acr_flags_t;
>
> #define ACR_CMA		((__force acr_flags_t)BIT(0))
>
> No need for "ACR_CONTIG_PAGES", it's implicit if the CMA flag is not set.

Got it. Will use this in the next version.
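
The callers would then roughly look like this (a sketch; the surrounding
argument names are only illustrative):

	/* cma_alloc(): pageblocks are expected to be MIGRATE_CMA already */
	ret = alloc_contig_range(pfn, pfn + count, ACR_CMA, gfp_mask);

	/* alloc_contig_pages() and other users: no special flag needed */
	ret = alloc_contig_range(start, end, 0, gfp_mask);
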
>
>
>>
>> And ACR_CMA_ALLOC needs to be translated to CMA_ALLOCATION when
>> start_isolate_page_range() is called.
>
> Yes.
>
>>
>> BTW, after removing migratetype parameter from alloc_contig_range(),
>> the tracepoint in __alloc_contig_migrate_range() needs to be changed to
>> use acr_flags_t, since I do not think we want to convert acr_flags_t
>> back to migratetype.
>
> Sure, feel free to modify these tracepoints as it suits you.

Thanks. :)

--
Best Regards,
Yan, Zi

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 854 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2024-09-04 13:53 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-08-28 20:22 [RFC PATCH 0/4] Make MIGRATE_ISOLATE a standalone bit Zi Yan
2024-08-28 20:22 ` [RFC PATCH 1/4] mm/page_isolation: make page isolation " Zi Yan
2024-08-28 20:22 ` [RFC PATCH 2/4] mm/page_isolation: remove migratetype from move_freepages_block_isolate() Zi Yan
2024-09-02 14:42   ` David Hildenbrand
2024-09-02 15:30     ` Zi Yan
2024-08-28 20:22 ` [RFC PATCH 3/4] mm/page_isolation: remove migratetype from undo_isolate_page_range() Zi Yan
2024-09-02  9:00   ` David Hildenbrand
2024-09-02 15:34     ` Zi Yan
2024-08-28 20:22 ` [RFC PATCH 4/4] mm/page_isolation: remove migratetype parameter from more functions Zi Yan
2024-09-02  9:06   ` David Hildenbrand
2024-09-02 15:34     ` Zi Yan
2024-09-02 16:42       ` David Hildenbrand
2024-09-04  2:02         ` Zi Yan
2024-09-04  8:50           ` David Hildenbrand
2024-09-04 13:53             ` Zi Yan

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).