linux-mm.kvack.org archive mirror
* [PATCH 00/11] Add and use memdesc_flags_t
@ 2025-08-05 17:22 Matthew Wilcox (Oracle)
  2025-08-05 17:22 ` [PATCH 01/11] mm: Introduce memdesc_flags_t Matthew Wilcox (Oracle)
                   ` (11 more replies)
  0 siblings, 12 replies; 29+ messages in thread
From: Matthew Wilcox (Oracle) @ 2025-08-05 17:22 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), linux-mm

At some point struct page will be separated from struct slab and struct
folio.  This is a step towards that by introducing a type for the 'flags'
word of all three structures.  This gives us a certain amount of type
safety by establishing that some of these unsigned longs are different
from other unsigned longs in that they contain things like node ID,
section number and zone number in the upper bits.  That lets us have
functions that can be easily called by anyone who has a slab, folio or
page (but not easily by anyone else) to get the node or zone.
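
As a sketch: the wrapper itself is trivial, and the accessors just
mask bits out of the one word.  This is lifted from patches 1 and 3
(the real memdesc_nid() also handles NODE_NOT_IN_PAGE_FLAGS):

	typedef struct {
		unsigned long f;
	} memdesc_flags_t;

	static inline int memdesc_nid(memdesc_flags_t mdf)
	{
		return (mdf.f >> NODES_PGSHIFT) & NODES_MASK;
	}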

There are going to be some unusual merge problems with this, as odd
bits of the kernel decide they want to print out the flags value or
something similar by writing page->flags, and they'll now need to write
page->flags.f instead.  That's most of the churn here.  Maybe we should
be removing these things from the debug output?
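
The typical hunk is purely mechanical, e.g. (from the ubifs conversion
in patch 1):

	 	dbg_gen("ino %lu, pg %lu, pg flags %#lx",
	-		inode->i_ino, folio->index, folio->flags);
	+		inode->i_ino, folio->index, folio->flags.f);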

The build bots have had since Friday to chew on this as I pushed it
to git://git.infradead.org/users/willy/pagecache.git folio-page-split

Matthew Wilcox (Oracle) (11):
  mm: Introduce memdesc_flags_t
  mm: Convert page_to_section() to memdesc_section()
  mm: Introduce memdesc_nid()
  mm: Introduce memdesc_zonenum()
  slab: Use memdesc_flags_t
  slab: Use memdesc_nid()
  mm: Introduce memdesc_is_zone_device()
  mm: Reimplement folio_is_device_private()
  mm: Reimplement folio_is_device_coherent()
  mm: Reimplement folio_is_fsdax()
  mm: Add folio_is_pci_p2pdma()

 arch/x86/mm/pat/memtype.c          |  6 ++--
 fs/fuse/dev.c                      |  2 +-
 fs/gfs2/glops.c                    |  2 +-
 fs/jffs2/file.c                    |  4 +--
 fs/nilfs2/page.c                   |  2 +-
 fs/proc/page.c                     |  4 +--
 fs/ubifs/file.c                    |  6 ++--
 include/asm-generic/memory_model.h |  2 +-
 include/linux/memremap.h           | 39 +++++++++++++---------
 include/linux/mm.h                 | 53 ++++++++++++++++--------------
 include/linux/mm_inline.h          | 12 +++----
 include/linux/mm_types.h           |  8 +++--
 include/linux/mmzone.h             | 30 +++++++++++------
 include/linux/page-flags.h         | 40 +++++++++++-----------
 include/linux/pgalloc_tag.h        |  7 ++--
 include/trace/events/page_ref.h    |  4 +--
 mm/filemap.c                       |  8 ++---
 mm/gup.c                           |  2 +-
 mm/huge_memory.c                   |  4 +--
 mm/memory-failure.c                | 12 +++----
 mm/mmzone.c                        |  4 +--
 mm/page_alloc.c                    | 12 +++----
 mm/slab.h                          |  6 ++--
 mm/slub.c                          | 18 +++++-----
 mm/sparse.c                        |  6 ++--
 mm/swap.c                          |  8 ++---
 mm/vmscan.c                        | 18 +++++-----
 mm/workingset.c                    |  2 +-
 28 files changed, 174 insertions(+), 147 deletions(-)

-- 
2.47.2




* [PATCH 01/11] mm: Introduce memdesc_flags_t
  2025-08-05 17:22 [PATCH 00/11] Add and use memdesc_flags_t Matthew Wilcox (Oracle)
@ 2025-08-05 17:22 ` Matthew Wilcox (Oracle)
  2025-08-06 18:24   ` Zi Yan
  2025-08-19 17:49   ` Kairui Song
  2025-08-05 17:22 ` [PATCH 02/11] mm: Convert page_to_section() to memdesc_section() Matthew Wilcox (Oracle)
                   ` (10 subsequent siblings)
  11 siblings, 2 replies; 29+ messages in thread
From: Matthew Wilcox (Oracle) @ 2025-08-05 17:22 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), linux-mm

Wrap the unsigned long flags in a typedef.  In upcoming patches, this
will provide a strong hint that you can't just pass a random unsigned
long to functions which take this as an argument.
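
For example, with memdesc_nid() from later in this series, passing the
raw word no longer compiles while the wrapped type does:

	nid = memdesc_nid(page->flags);		/* ok */
	nid = memdesc_nid(page->flags.f);	/* compile error */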

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 arch/x86/mm/pat/memtype.c       |  6 ++---
 fs/fuse/dev.c                   |  2 +-
 fs/gfs2/glops.c                 |  2 +-
 fs/jffs2/file.c                 |  4 ++--
 fs/nilfs2/page.c                |  2 +-
 fs/proc/page.c                  |  4 ++--
 fs/ubifs/file.c                 |  6 ++---
 include/linux/mm.h              | 32 +++++++++++++-------------
 include/linux/mm_inline.h       | 12 +++++-----
 include/linux/mm_types.h        |  8 +++++--
 include/linux/mmzone.h          |  2 +-
 include/linux/page-flags.h      | 40 ++++++++++++++++-----------------
 include/linux/pgalloc_tag.h     |  7 +++---
 include/trace/events/page_ref.h |  4 ++--
 mm/filemap.c                    |  8 +++----
 mm/huge_memory.c                |  4 ++--
 mm/memory-failure.c             | 12 +++++-----
 mm/mmzone.c                     |  4 ++--
 mm/page_alloc.c                 | 12 +++++-----
 mm/swap.c                       |  8 +++----
 mm/vmscan.c                     | 18 +++++++--------
 mm/workingset.c                 |  2 +-
 22 files changed, 102 insertions(+), 97 deletions(-)

diff --git a/arch/x86/mm/pat/memtype.c b/arch/x86/mm/pat/memtype.c
index c09284302dd3..b68200a0e0c6 100644
--- a/arch/x86/mm/pat/memtype.c
+++ b/arch/x86/mm/pat/memtype.c
@@ -126,7 +126,7 @@ __setup("debugpat", pat_debug_setup);
 
 static inline enum page_cache_mode get_page_memtype(struct page *pg)
 {
-	unsigned long pg_flags = pg->flags & _PGMT_MASK;
+	unsigned long pg_flags = pg->flags.f & _PGMT_MASK;
 
 	if (pg_flags == _PGMT_WB)
 		return _PAGE_CACHE_MODE_WB;
@@ -161,10 +161,10 @@ static inline void set_page_memtype(struct page *pg,
 		break;
 	}
 
-	old_flags = READ_ONCE(pg->flags);
+	old_flags = READ_ONCE(pg->flags.f);
 	do {
 		new_flags = (old_flags & _PGMT_CLEAR_MASK) | memtype_flags;
-	} while (!try_cmpxchg(&pg->flags, &old_flags, new_flags));
+	} while (!try_cmpxchg(&pg->flags.f, &old_flags, new_flags));
 }
 #else
 static inline enum page_cache_mode get_page_memtype(struct page *pg)
diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
index e80cd8f2c049..8a89f0aa1d4d 100644
--- a/fs/fuse/dev.c
+++ b/fs/fuse/dev.c
@@ -935,7 +935,7 @@ static int fuse_check_folio(struct folio *folio)
 {
 	if (folio_mapped(folio) ||
 	    folio->mapping != NULL ||
-	    (folio->flags & PAGE_FLAGS_CHECK_AT_PREP &
+	    (folio->flags.f & PAGE_FLAGS_CHECK_AT_PREP &
 	     ~(1 << PG_locked |
 	       1 << PG_referenced |
 	       1 << PG_lru |
diff --git a/fs/gfs2/glops.c b/fs/gfs2/glops.c
index fe0faad4892f..0c0a80b3baca 100644
--- a/fs/gfs2/glops.c
+++ b/fs/gfs2/glops.c
@@ -40,7 +40,7 @@ static void gfs2_ail_error(struct gfs2_glock *gl, const struct buffer_head *bh)
 	       "AIL buffer %p: blocknr %llu state 0x%08lx mapping %p page "
 	       "state 0x%lx\n",
 	       bh, (unsigned long long)bh->b_blocknr, bh->b_state,
-	       bh->b_folio->mapping, bh->b_folio->flags);
+	       bh->b_folio->mapping, bh->b_folio->flags.f);
 	fs_err(sdp, "AIL glock %u:%llu mapping %p\n",
 	       gl->gl_name.ln_type, gl->gl_name.ln_number,
 	       gfs2_glock2aspace(gl));
diff --git a/fs/jffs2/file.c b/fs/jffs2/file.c
index dd3dff95cb24..b697f3c259ef 100644
--- a/fs/jffs2/file.c
+++ b/fs/jffs2/file.c
@@ -230,7 +230,7 @@ static int jffs2_write_begin(const struct kiocb *iocb,
 			goto release_sem;
 		}
 	}
-	jffs2_dbg(1, "end write_begin(). folio->flags %lx\n", folio->flags);
+	jffs2_dbg(1, "end write_begin(). folio->flags %lx\n", folio->flags.f);
 
 release_sem:
 	mutex_unlock(&c->alloc_sem);
@@ -259,7 +259,7 @@ static int jffs2_write_end(const struct kiocb *iocb,
 
 	jffs2_dbg(1, "%s(): ino #%lu, page at 0x%llx, range %d-%d, flags %lx\n",
 		  __func__, inode->i_ino, folio_pos(folio),
-		  start, end, folio->flags);
+		  start, end, folio->flags.f);
 
 	/* We need to avoid deadlock with page_cache_read() in
 	   jffs2_garbage_collect_pass(). So the folio must be
diff --git a/fs/nilfs2/page.c b/fs/nilfs2/page.c
index 806b056d2260..56c4da417b6a 100644
--- a/fs/nilfs2/page.c
+++ b/fs/nilfs2/page.c
@@ -167,7 +167,7 @@ void nilfs_folio_bug(struct folio *folio)
 	printk(KERN_CRIT "NILFS_FOLIO_BUG(%p): cnt=%d index#=%llu flags=0x%lx "
 	       "mapping=%p ino=%lu\n",
 	       folio, folio_ref_count(folio),
-	       (unsigned long long)folio->index, folio->flags, m, ino);
+	       (unsigned long long)folio->index, folio->flags.f, m, ino);
 
 	head = folio_buffers(folio);
 	if (head) {
diff --git a/fs/proc/page.c b/fs/proc/page.c
index ba3568e97fd1..771e0b6bc630 100644
--- a/fs/proc/page.c
+++ b/fs/proc/page.c
@@ -163,7 +163,7 @@ u64 stable_page_flags(const struct page *page)
 	snapshot_page(&ps, page);
 	folio = &ps.folio_snapshot;
 
-	k = folio->flags;
+	k = folio->flags.f;
 	mapping = (unsigned long)folio->mapping;
 	is_anon = mapping & FOLIO_MAPPING_ANON;
 
@@ -238,7 +238,7 @@ u64 stable_page_flags(const struct page *page)
 	if (u & (1 << KPF_HUGE))
 		u |= kpf_copy_bit(k, KPF_HWPOISON,	PG_hwpoison);
 	else
-		u |= kpf_copy_bit(ps.page_snapshot.flags, KPF_HWPOISON, PG_hwpoison);
+		u |= kpf_copy_bit(ps.page_snapshot.flags.f, KPF_HWPOISON, PG_hwpoison);
 #endif
 
 	u |= kpf_copy_bit(k, KPF_RESERVED,	PG_reserved);
diff --git a/fs/ubifs/file.c b/fs/ubifs/file.c
index e75a6cec67be..ca41ce8208c4 100644
--- a/fs/ubifs/file.c
+++ b/fs/ubifs/file.c
@@ -107,7 +107,7 @@ static int do_readpage(struct folio *folio)
 	size_t offset = 0;
 
 	dbg_gen("ino %lu, pg %lu, i_size %lld, flags %#lx",
-		inode->i_ino, folio->index, i_size, folio->flags);
+		inode->i_ino, folio->index, i_size, folio->flags.f);
 	ubifs_assert(c, !folio_test_checked(folio));
 	ubifs_assert(c, !folio->private);
 
@@ -600,7 +600,7 @@ static int populate_page(struct ubifs_info *c, struct folio *folio,
 	pgoff_t end_index;
 
 	dbg_gen("ino %lu, pg %lu, i_size %lld, flags %#lx",
-		inode->i_ino, folio->index, i_size, folio->flags);
+		inode->i_ino, folio->index, i_size, folio->flags.f);
 
 	end_index = (i_size - 1) >> PAGE_SHIFT;
 	if (!i_size || folio->index > end_index) {
@@ -988,7 +988,7 @@ static int ubifs_writepage(struct folio *folio, struct writeback_control *wbc)
 	int err, len = folio_size(folio);
 
 	dbg_gen("ino %lu, pg %lu, pg flags %#lx",
-		inode->i_ino, folio->index, folio->flags);
+		inode->i_ino, folio->index, folio->flags.f);
 	ubifs_assert(c, folio->private != NULL);
 
 	/* Is the folio fully outside @i_size? (truncate in progress) */
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 349f0d9aad22..779822a829a9 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -973,7 +973,7 @@ static inline unsigned int compound_order(struct page *page)
 {
 	struct folio *folio = (struct folio *)page;
 
-	if (!test_bit(PG_head, &folio->flags))
+	if (!test_bit(PG_head, &folio->flags.f))
 		return 0;
 	return folio_large_order(folio);
 }
@@ -1503,7 +1503,7 @@ static inline bool is_nommu_shared_mapping(vm_flags_t flags)
  */
 static inline int page_zone_id(struct page *page)
 {
-	return (page->flags >> ZONEID_PGSHIFT) & ZONEID_MASK;
+	return (page->flags.f >> ZONEID_PGSHIFT) & ZONEID_MASK;
 }
 
 #ifdef NODE_NOT_IN_PAGE_FLAGS
@@ -1511,7 +1511,7 @@ int page_to_nid(const struct page *page);
 #else
 static inline int page_to_nid(const struct page *page)
 {
-	return (PF_POISONED_CHECK(page)->flags >> NODES_PGSHIFT) & NODES_MASK;
+	return (PF_POISONED_CHECK(page)->flags.f >> NODES_PGSHIFT) & NODES_MASK;
 }
 #endif
 
@@ -1586,14 +1586,14 @@ static inline void page_cpupid_reset_last(struct page *page)
 #else
 static inline int folio_last_cpupid(struct folio *folio)
 {
-	return (folio->flags >> LAST_CPUPID_PGSHIFT) & LAST_CPUPID_MASK;
+	return (folio->flags.f >> LAST_CPUPID_PGSHIFT) & LAST_CPUPID_MASK;
 }
 
 int folio_xchg_last_cpupid(struct folio *folio, int cpupid);
 
 static inline void page_cpupid_reset_last(struct page *page)
 {
-	page->flags |= LAST_CPUPID_MASK << LAST_CPUPID_PGSHIFT;
+	page->flags.f |= LAST_CPUPID_MASK << LAST_CPUPID_PGSHIFT;
 }
 #endif /* LAST_CPUPID_NOT_IN_PAGE_FLAGS */
 
@@ -1689,7 +1689,7 @@ static inline u8 page_kasan_tag(const struct page *page)
 	u8 tag = KASAN_TAG_KERNEL;
 
 	if (kasan_enabled()) {
-		tag = (page->flags >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK;
+		tag = (page->flags.f >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK;
 		tag ^= 0xff;
 	}
 
@@ -1704,12 +1704,12 @@ static inline void page_kasan_tag_set(struct page *page, u8 tag)
 		return;
 
 	tag ^= 0xff;
-	old_flags = READ_ONCE(page->flags);
+	old_flags = READ_ONCE(page->flags.f);
 	do {
 		flags = old_flags;
 		flags &= ~(KASAN_TAG_MASK << KASAN_TAG_PGSHIFT);
 		flags |= (tag & KASAN_TAG_MASK) << KASAN_TAG_PGSHIFT;
-	} while (unlikely(!try_cmpxchg(&page->flags, &old_flags, flags)));
+	} while (unlikely(!try_cmpxchg(&page->flags.f, &old_flags, flags)));
 }
 
 static inline void page_kasan_tag_reset(struct page *page)
@@ -1753,13 +1753,13 @@ static inline pg_data_t *folio_pgdat(const struct folio *folio)
 #ifdef SECTION_IN_PAGE_FLAGS
 static inline void set_page_section(struct page *page, unsigned long section)
 {
-	page->flags &= ~(SECTIONS_MASK << SECTIONS_PGSHIFT);
-	page->flags |= (section & SECTIONS_MASK) << SECTIONS_PGSHIFT;
+	page->flags.f &= ~(SECTIONS_MASK << SECTIONS_PGSHIFT);
+	page->flags.f |= (section & SECTIONS_MASK) << SECTIONS_PGSHIFT;
 }
 
 static inline unsigned long page_to_section(const struct page *page)
 {
-	return (page->flags >> SECTIONS_PGSHIFT) & SECTIONS_MASK;
+	return (page->flags.f >> SECTIONS_PGSHIFT) & SECTIONS_MASK;
 }
 #endif
 
@@ -1964,14 +1964,14 @@ static inline bool folio_is_longterm_pinnable(struct folio *folio)
 
 static inline void set_page_zone(struct page *page, enum zone_type zone)
 {
-	page->flags &= ~(ZONES_MASK << ZONES_PGSHIFT);
-	page->flags |= (zone & ZONES_MASK) << ZONES_PGSHIFT;
+	page->flags.f &= ~(ZONES_MASK << ZONES_PGSHIFT);
+	page->flags.f |= (zone & ZONES_MASK) << ZONES_PGSHIFT;
 }
 
 static inline void set_page_node(struct page *page, unsigned long node)
 {
-	page->flags &= ~(NODES_MASK << NODES_PGSHIFT);
-	page->flags |= (node & NODES_MASK) << NODES_PGSHIFT;
+	page->flags.f &= ~(NODES_MASK << NODES_PGSHIFT);
+	page->flags.f |= (node & NODES_MASK) << NODES_PGSHIFT;
 }
 
 static inline void set_page_links(struct page *page, enum zone_type zone,
@@ -2013,7 +2013,7 @@ static inline long compound_nr(struct page *page)
 {
 	struct folio *folio = (struct folio *)page;
 
-	if (!test_bit(PG_head, &folio->flags))
+	if (!test_bit(PG_head, &folio->flags.f))
 		return 1;
 	return folio_large_nr_pages(folio);
 }
diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index 89b518ff097e..150302b4a905 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -143,7 +143,7 @@ static inline int lru_tier_from_refs(int refs, bool workingset)
 
 static inline int folio_lru_refs(struct folio *folio)
 {
-	unsigned long flags = READ_ONCE(folio->flags);
+	unsigned long flags = READ_ONCE(folio->flags.f);
 
 	if (!(flags & BIT(PG_referenced)))
 		return 0;
@@ -156,7 +156,7 @@ static inline int folio_lru_refs(struct folio *folio)
 
 static inline int folio_lru_gen(struct folio *folio)
 {
-	unsigned long flags = READ_ONCE(folio->flags);
+	unsigned long flags = READ_ONCE(folio->flags.f);
 
 	return ((flags & LRU_GEN_MASK) >> LRU_GEN_PGOFF) - 1;
 }
@@ -268,7 +268,7 @@ static inline bool lru_gen_add_folio(struct lruvec *lruvec, struct folio *folio,
 	gen = lru_gen_from_seq(seq);
 	flags = (gen + 1UL) << LRU_GEN_PGOFF;
 	/* see the comment on MIN_NR_GENS about PG_active */
-	set_mask_bits(&folio->flags, LRU_GEN_MASK | BIT(PG_active), flags);
+	set_mask_bits(&folio->flags.f, LRU_GEN_MASK | BIT(PG_active), flags);
 
 	lru_gen_update_size(lruvec, folio, -1, gen);
 	/* for folio_rotate_reclaimable() */
@@ -293,7 +293,7 @@ static inline bool lru_gen_del_folio(struct lruvec *lruvec, struct folio *folio,
 
 	/* for folio_migrate_flags() */
 	flags = !reclaiming && lru_gen_is_active(lruvec, gen) ? BIT(PG_active) : 0;
-	flags = set_mask_bits(&folio->flags, LRU_GEN_MASK, flags);
+	flags = set_mask_bits(&folio->flags.f, LRU_GEN_MASK, flags);
 	gen = ((flags & LRU_GEN_MASK) >> LRU_GEN_PGOFF) - 1;
 
 	lru_gen_update_size(lruvec, folio, gen, -1);
@@ -304,9 +304,9 @@ static inline bool lru_gen_del_folio(struct lruvec *lruvec, struct folio *folio,
 
 static inline void folio_migrate_refs(struct folio *new, struct folio *old)
 {
-	unsigned long refs = READ_ONCE(old->flags) & LRU_REFS_MASK;
+	unsigned long refs = READ_ONCE(old->flags.f) & LRU_REFS_MASK;
 
-	set_mask_bits(&new->flags, LRU_REFS_MASK, refs);
+	set_mask_bits(&new->flags.f, LRU_REFS_MASK, refs);
 }
 #else /* !CONFIG_LRU_GEN */
 
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 08bc2442db93..15bb1c3738c0 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -33,6 +33,10 @@ struct address_space;
 struct futex_private_hash;
 struct mem_cgroup;
 
+typedef struct {
+	unsigned long f;
+} memdesc_flags_t;
+
 /*
  * Each physical page in the system has a struct page associated with
  * it to keep track of whatever it is we are using the page for at the
@@ -71,7 +75,7 @@ struct mem_cgroup;
 #endif
 
 struct page {
-	unsigned long flags;		/* Atomic flags, some possibly
+	memdesc_flags_t flags;		/* Atomic flags, some possibly
 					 * updated asynchronously */
 	/*
 	 * Five words (20/40 bytes) are available in this union.
@@ -382,7 +386,7 @@ struct folio {
 	union {
 		struct {
 	/* public: */
-			unsigned long flags;
+			memdesc_flags_t flags;
 			union {
 				struct list_head lru;
 	/* private: avoid cluttering the output */
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 0c5da9141983..b4852269da0e 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1172,7 +1172,7 @@ static inline bool zone_is_empty(struct zone *zone)
 static inline enum zone_type page_zonenum(const struct page *page)
 {
 	ASSERT_EXCLUSIVE_BITS(page->flags, ZONES_MASK << ZONES_PGSHIFT);
-	return (page->flags >> ZONES_PGSHIFT) & ZONES_MASK;
+	return (page->flags.f >> ZONES_PGSHIFT) & ZONES_MASK;
 }
 
 static inline enum zone_type folio_zonenum(const struct folio *folio)
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 8e4d6eda8a8d..822b3ba48163 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -217,7 +217,7 @@ static __always_inline const struct page *page_fixed_fake_head(const struct page
 	 * cold cacheline in some cases.
 	 */
 	if (IS_ALIGNED((unsigned long)page, PAGE_SIZE) &&
-	    test_bit(PG_head, &page->flags)) {
+	    test_bit(PG_head, &page->flags.f)) {
 		/*
 		 * We can safely access the field of the @page[1] with PG_head
 		 * because the @page is a compound page composed with at least
@@ -325,14 +325,14 @@ static __always_inline int PageTail(const struct page *page)
 
 static __always_inline int PageCompound(const struct page *page)
 {
-	return test_bit(PG_head, &page->flags) ||
+	return test_bit(PG_head, &page->flags.f) ||
 	       READ_ONCE(page->compound_head) & 1;
 }
 
 #define	PAGE_POISON_PATTERN	-1l
 static inline int PagePoisoned(const struct page *page)
 {
-	return READ_ONCE(page->flags) == PAGE_POISON_PATTERN;
+	return READ_ONCE(page->flags.f) == PAGE_POISON_PATTERN;
 }
 
 #ifdef CONFIG_DEBUG_VM
@@ -349,8 +349,8 @@ static const unsigned long *const_folio_flags(const struct folio *folio,
 	const struct page *page = &folio->page;
 
 	VM_BUG_ON_PGFLAGS(page->compound_head & 1, page);
-	VM_BUG_ON_PGFLAGS(n > 0 && !test_bit(PG_head, &page->flags), page);
-	return &page[n].flags;
+	VM_BUG_ON_PGFLAGS(n > 0 && !test_bit(PG_head, &page->flags.f), page);
+	return &page[n].flags.f;
 }
 
 static unsigned long *folio_flags(struct folio *folio, unsigned n)
@@ -358,8 +358,8 @@ static unsigned long *folio_flags(struct folio *folio, unsigned n)
 	struct page *page = &folio->page;
 
 	VM_BUG_ON_PGFLAGS(page->compound_head & 1, page);
-	VM_BUG_ON_PGFLAGS(n > 0 && !test_bit(PG_head, &page->flags), page);
-	return &page[n].flags;
+	VM_BUG_ON_PGFLAGS(n > 0 && !test_bit(PG_head, &page->flags.f), page);
+	return &page[n].flags.f;
 }
 
 /*
@@ -449,37 +449,37 @@ FOLIO_CLEAR_FLAG(name, page)
 #define TESTPAGEFLAG(uname, lname, policy)				\
 FOLIO_TEST_FLAG(lname, FOLIO_##policy)					\
 static __always_inline int Page##uname(const struct page *page)		\
-{ return test_bit(PG_##lname, &policy(page, 0)->flags); }
+{ return test_bit(PG_##lname, &policy(page, 0)->flags.f); }
 
 #define SETPAGEFLAG(uname, lname, policy)				\
 FOLIO_SET_FLAG(lname, FOLIO_##policy)					\
 static __always_inline void SetPage##uname(struct page *page)		\
-{ set_bit(PG_##lname, &policy(page, 1)->flags); }
+{ set_bit(PG_##lname, &policy(page, 1)->flags.f); }
 
 #define CLEARPAGEFLAG(uname, lname, policy)				\
 FOLIO_CLEAR_FLAG(lname, FOLIO_##policy)					\
 static __always_inline void ClearPage##uname(struct page *page)		\
-{ clear_bit(PG_##lname, &policy(page, 1)->flags); }
+{ clear_bit(PG_##lname, &policy(page, 1)->flags.f); }
 
 #define __SETPAGEFLAG(uname, lname, policy)				\
 __FOLIO_SET_FLAG(lname, FOLIO_##policy)					\
 static __always_inline void __SetPage##uname(struct page *page)		\
-{ __set_bit(PG_##lname, &policy(page, 1)->flags); }
+{ __set_bit(PG_##lname, &policy(page, 1)->flags.f); }
 
 #define __CLEARPAGEFLAG(uname, lname, policy)				\
 __FOLIO_CLEAR_FLAG(lname, FOLIO_##policy)				\
 static __always_inline void __ClearPage##uname(struct page *page)	\
-{ __clear_bit(PG_##lname, &policy(page, 1)->flags); }
+{ __clear_bit(PG_##lname, &policy(page, 1)->flags.f); }
 
 #define TESTSETFLAG(uname, lname, policy)				\
 FOLIO_TEST_SET_FLAG(lname, FOLIO_##policy)				\
 static __always_inline int TestSetPage##uname(struct page *page)	\
-{ return test_and_set_bit(PG_##lname, &policy(page, 1)->flags); }
+{ return test_and_set_bit(PG_##lname, &policy(page, 1)->flags.f); }
 
 #define TESTCLEARFLAG(uname, lname, policy)				\
 FOLIO_TEST_CLEAR_FLAG(lname, FOLIO_##policy)				\
 static __always_inline int TestClearPage##uname(struct page *page)	\
-{ return test_and_clear_bit(PG_##lname, &policy(page, 1)->flags); }
+{ return test_and_clear_bit(PG_##lname, &policy(page, 1)->flags.f); }
 
 #define PAGEFLAG(uname, lname, policy)					\
 	TESTPAGEFLAG(uname, lname, policy)				\
@@ -848,7 +848,7 @@ static __always_inline bool folio_test_head(const struct folio *folio)
 static __always_inline int PageHead(const struct page *page)
 {
 	PF_POISONED_CHECK(page);
-	return test_bit(PG_head, &page->flags) && !page_is_fake_head(page);
+	return test_bit(PG_head, &page->flags.f) && !page_is_fake_head(page);
 }
 
 __SETPAGEFLAG(Head, head, PF_ANY)
@@ -1172,28 +1172,28 @@ static __always_inline int PageAnonExclusive(const struct page *page)
 	 */
 	if (PageHuge(page))
 		page = compound_head(page);
-	return test_bit(PG_anon_exclusive, &PF_ANY(page, 1)->flags);
+	return test_bit(PG_anon_exclusive, &PF_ANY(page, 1)->flags.f);
 }
 
 static __always_inline void SetPageAnonExclusive(struct page *page)
 {
 	VM_BUG_ON_PGFLAGS(!PageAnonNotKsm(page), page);
 	VM_BUG_ON_PGFLAGS(PageHuge(page) && !PageHead(page), page);
-	set_bit(PG_anon_exclusive, &PF_ANY(page, 1)->flags);
+	set_bit(PG_anon_exclusive, &PF_ANY(page, 1)->flags.f);
 }
 
 static __always_inline void ClearPageAnonExclusive(struct page *page)
 {
 	VM_BUG_ON_PGFLAGS(!PageAnonNotKsm(page), page);
 	VM_BUG_ON_PGFLAGS(PageHuge(page) && !PageHead(page), page);
-	clear_bit(PG_anon_exclusive, &PF_ANY(page, 1)->flags);
+	clear_bit(PG_anon_exclusive, &PF_ANY(page, 1)->flags.f);
 }
 
 static __always_inline void __ClearPageAnonExclusive(struct page *page)
 {
 	VM_BUG_ON_PGFLAGS(!PageAnon(page), page);
 	VM_BUG_ON_PGFLAGS(PageHuge(page) && !PageHead(page), page);
-	__clear_bit(PG_anon_exclusive, &PF_ANY(page, 1)->flags);
+	__clear_bit(PG_anon_exclusive, &PF_ANY(page, 1)->flags.f);
 }
 
 #ifdef CONFIG_MMU
@@ -1243,7 +1243,7 @@ static __always_inline void __ClearPageAnonExclusive(struct page *page)
  */
 static inline int folio_has_private(const struct folio *folio)
 {
-	return !!(folio->flags & PAGE_FLAGS_PRIVATE);
+	return !!(folio->flags.f & PAGE_FLAGS_PRIVATE);
 }
 
 #undef PF_ANY
diff --git a/include/linux/pgalloc_tag.h b/include/linux/pgalloc_tag.h
index 8a7f4f802c57..38a82d65e58e 100644
--- a/include/linux/pgalloc_tag.h
+++ b/include/linux/pgalloc_tag.h
@@ -107,7 +107,8 @@ static inline bool get_page_tag_ref(struct page *page, union codetag_ref *ref,
 	if (static_key_enabled(&mem_profiling_compressed)) {
 		pgalloc_tag_idx idx;
 
-		idx = (page->flags >> alloc_tag_ref_offs) & alloc_tag_ref_mask;
+		idx = (page->flags.f >> alloc_tag_ref_offs) &
+			alloc_tag_ref_mask;
 		idx_to_ref(idx, ref);
 		handle->page = page;
 	} else {
@@ -149,11 +150,11 @@ static inline void update_page_tag_ref(union pgtag_ref_handle handle, union code
 		idx = (unsigned long)ref_to_idx(ref);
 		idx = (idx & alloc_tag_ref_mask) << alloc_tag_ref_offs;
 		do {
-			old_flags = READ_ONCE(page->flags);
+			old_flags = READ_ONCE(page->flags.f);
 			flags = old_flags;
 			flags &= ~(alloc_tag_ref_mask << alloc_tag_ref_offs);
 			flags |= idx;
-		} while (unlikely(!try_cmpxchg(&page->flags, &old_flags, flags)));
+		} while (unlikely(!try_cmpxchg(&page->flags.f, &old_flags, flags)));
 	} else {
 		if (WARN_ON(!handle.ref || !ref))
 			return;
diff --git a/include/trace/events/page_ref.h b/include/trace/events/page_ref.h
index fe33a255b7d0..ea6b5c4baf3d 100644
--- a/include/trace/events/page_ref.h
+++ b/include/trace/events/page_ref.h
@@ -28,7 +28,7 @@ DECLARE_EVENT_CLASS(page_ref_mod_template,
 
 	TP_fast_assign(
 		__entry->pfn = page_to_pfn(page);
-		__entry->flags = page->flags;
+		__entry->flags = page->flags.f;
 		__entry->count = page_ref_count(page);
 		__entry->mapcount = atomic_read(&page->_mapcount);
 		__entry->mapping = page->mapping;
@@ -77,7 +77,7 @@ DECLARE_EVENT_CLASS(page_ref_mod_and_test_template,
 
 	TP_fast_assign(
 		__entry->pfn = page_to_pfn(page);
-		__entry->flags = page->flags;
+		__entry->flags = page->flags.f;
 		__entry->count = page_ref_count(page);
 		__entry->mapcount = atomic_read(&page->_mapcount);
 		__entry->mapping = page->mapping;
diff --git a/mm/filemap.c b/mm/filemap.c
index 751838ef05e5..2e63f98c9520 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1140,10 +1140,10 @@ static int wake_page_function(wait_queue_entry_t *wait, unsigned mode, int sync,
 	 */
 	flags = wait->flags;
 	if (flags & WQ_FLAG_EXCLUSIVE) {
-		if (test_bit(key->bit_nr, &key->folio->flags))
+		if (test_bit(key->bit_nr, &key->folio->flags.f))
 			return -1;
 		if (flags & WQ_FLAG_CUSTOM) {
-			if (test_and_set_bit(key->bit_nr, &key->folio->flags))
+			if (test_and_set_bit(key->bit_nr, &key->folio->flags.f))
 				return -1;
 			flags |= WQ_FLAG_DONE;
 		}
@@ -1226,9 +1226,9 @@ static inline bool folio_trylock_flag(struct folio *folio, int bit_nr,
 					struct wait_queue_entry *wait)
 {
 	if (wait->flags & WQ_FLAG_EXCLUSIVE) {
-		if (test_and_set_bit(bit_nr, &folio->flags))
+		if (test_and_set_bit(bit_nr, &folio->flags.f))
 			return false;
-	} else if (test_bit(bit_nr, &folio->flags))
+	} else if (test_bit(bit_nr, &folio->flags.f))
 		return false;
 
 	wait->flags |= WQ_FLAG_WOKEN | WQ_FLAG_DONE;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 9c38a95e9f09..6b5f8b0db6c4 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3310,8 +3310,8 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
 		 * unreferenced sub-pages of an anonymous THP: we can simply drop
 		 * PG_anon_exclusive (-> PG_mappedtodisk) for these here.
 		 */
-		new_folio->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
-		new_folio->flags |= (folio->flags &
+		new_folio->flags.f &= ~PAGE_FLAGS_CHECK_AT_PREP;
+		new_folio->flags.f |= (folio->flags.f &
 				((1L << PG_referenced) |
 				 (1L << PG_swapbacked) |
 				 (1L << PG_swapcache) |
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 3047b9ac667e..718eb37bd077 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1693,10 +1693,10 @@ static int identify_page_state(unsigned long pfn, struct page *p,
 	 * carried out only if the first check can't determine the page status.
 	 */
 	for (ps = error_states;; ps++)
-		if ((p->flags & ps->mask) == ps->res)
+		if ((p->flags.f & ps->mask) == ps->res)
 			break;
 
-	page_flags |= (p->flags & (1UL << PG_dirty));
+	page_flags |= (p->flags.f & (1UL << PG_dirty));
 
 	if (!ps->mask)
 		for (ps = error_states;; ps++)
@@ -2123,7 +2123,7 @@ static int try_memory_failure_hugetlb(unsigned long pfn, int flags, int *hugetlb
 		return action_result(pfn, MF_MSG_FREE_HUGE, res);
 	}
 
-	page_flags = folio->flags;
+	page_flags = folio->flags.f;
 
 	if (!hwpoison_user_mappings(folio, p, pfn, flags)) {
 		folio_unlock(folio);
@@ -2384,7 +2384,7 @@ int memory_failure(unsigned long pfn, int flags)
 	 * folio_remove_rmap_*() in try_to_unmap_one(). So to determine page
 	 * status correctly, we save a copy of the page flags at this time.
 	 */
-	page_flags = folio->flags;
+	page_flags = folio->flags.f;
 
 	/*
 	 * __munlock_folio() may clear a writeback folio's LRU flag without
@@ -2730,13 +2730,13 @@ static int soft_offline_in_use_page(struct page *page)
 				putback_movable_pages(&pagelist);
 
 			pr_info("%#lx: %s migration failed %ld, type %pGp\n",
-				pfn, msg_page[huge], ret, &page->flags);
+				pfn, msg_page[huge], ret, &page->flags.f);
 			if (ret > 0)
 				ret = -EBUSY;
 		}
 	} else {
 		pr_info("%#lx: %s isolation failed, page count %d, type %pGp\n",
-			pfn, msg_page[huge], page_count(page), &page->flags);
+			pfn, msg_page[huge], page_count(page), &page->flags.f);
 		ret = -EBUSY;
 	}
 	return ret;
diff --git a/mm/mmzone.c b/mm/mmzone.c
index f9baa8882fbf..0c8f181d9d50 100644
--- a/mm/mmzone.c
+++ b/mm/mmzone.c
@@ -99,14 +99,14 @@ int folio_xchg_last_cpupid(struct folio *folio, int cpupid)
 	unsigned long old_flags, flags;
 	int last_cpupid;
 
-	old_flags = READ_ONCE(folio->flags);
+	old_flags = READ_ONCE(folio->flags.f);
 	do {
 		flags = old_flags;
 		last_cpupid = (flags >> LAST_CPUPID_PGSHIFT) & LAST_CPUPID_MASK;
 
 		flags &= ~(LAST_CPUPID_MASK << LAST_CPUPID_PGSHIFT);
 		flags |= (cpupid & LAST_CPUPID_MASK) << LAST_CPUPID_PGSHIFT;
-	} while (unlikely(!try_cmpxchg(&folio->flags, &old_flags, flags)));
+	} while (unlikely(!try_cmpxchg(&folio->flags.f, &old_flags, flags)));
 
 	return last_cpupid;
 }
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d1d037f97c5f..b6c040f7be85 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -950,7 +950,7 @@ static inline void __free_one_page(struct page *page,
 	bool to_tail;
 
 	VM_BUG_ON(!zone_is_initialized(zone));
-	VM_BUG_ON_PAGE(page->flags & PAGE_FLAGS_CHECK_AT_PREP, page);
+	VM_BUG_ON_PAGE(page->flags.f & PAGE_FLAGS_CHECK_AT_PREP, page);
 
 	VM_BUG_ON(migratetype == -1);
 	VM_BUG_ON_PAGE(pfn & ((1 << order) - 1), page);
@@ -1043,7 +1043,7 @@ static inline bool page_expected_state(struct page *page,
 			page->memcg_data |
 #endif
 			page_pool_page_is_pp(page) |
-			(page->flags & check_flags)))
+			(page->flags.f & check_flags)))
 		return false;
 
 	return true;
@@ -1059,7 +1059,7 @@ static const char *page_bad_reason(struct page *page, unsigned long flags)
 		bad_reason = "non-NULL mapping";
 	if (unlikely(page_ref_count(page) != 0))
 		bad_reason = "nonzero _refcount";
-	if (unlikely(page->flags & flags)) {
+	if (unlikely(page->flags.f & flags)) {
 		if (flags == PAGE_FLAGS_CHECK_AT_PREP)
 			bad_reason = "PAGE_FLAGS_CHECK_AT_PREP flag(s) set";
 		else
@@ -1358,7 +1358,7 @@ __always_inline bool free_pages_prepare(struct page *page,
 		int i;
 
 		if (compound) {
-			page[1].flags &= ~PAGE_FLAGS_SECOND;
+			page[1].flags.f &= ~PAGE_FLAGS_SECOND;
 #ifdef NR_PAGES_IN_LARGE_FOLIO
 			folio->_nr_pages = 0;
 #endif
@@ -1372,7 +1372,7 @@ __always_inline bool free_pages_prepare(struct page *page,
 					continue;
 				}
 			}
-			(page + i)->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
+			(page + i)->flags.f &= ~PAGE_FLAGS_CHECK_AT_PREP;
 		}
 	}
 	if (folio_test_anon(folio)) {
@@ -1391,7 +1391,7 @@ __always_inline bool free_pages_prepare(struct page *page,
 	}
 
 	page_cpupid_reset_last(page);
-	page->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
+	page->flags.f &= ~PAGE_FLAGS_CHECK_AT_PREP;
 	reset_page_owner(page, order);
 	page_table_check_free(page, order);
 	pgalloc_tag_sub(page, 1 << order);
diff --git a/mm/swap.c b/mm/swap.c
index 3632dd061beb..d2a23aa8d5ac 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -387,14 +387,14 @@ static void __lru_cache_activate_folio(struct folio *folio)
 
 static void lru_gen_inc_refs(struct folio *folio)
 {
-	unsigned long new_flags, old_flags = READ_ONCE(folio->flags);
+	unsigned long new_flags, old_flags = READ_ONCE(folio->flags.f);
 
 	if (folio_test_unevictable(folio))
 		return;
 
 	/* see the comment on LRU_REFS_FLAGS */
 	if (!folio_test_referenced(folio)) {
-		set_mask_bits(&folio->flags, LRU_REFS_MASK, BIT(PG_referenced));
+		set_mask_bits(&folio->flags.f, LRU_REFS_MASK, BIT(PG_referenced));
 		return;
 	}
 
@@ -406,7 +406,7 @@ static void lru_gen_inc_refs(struct folio *folio)
 		}
 
 		new_flags = old_flags + BIT(LRU_REFS_PGOFF);
-	} while (!try_cmpxchg(&folio->flags, &old_flags, new_flags));
+	} while (!try_cmpxchg(&folio->flags.f, &old_flags, new_flags));
 }
 
 static bool lru_gen_clear_refs(struct folio *folio)
@@ -418,7 +418,7 @@ static bool lru_gen_clear_refs(struct folio *folio)
 	if (gen < 0)
 		return true;
 
-	set_mask_bits(&folio->flags, LRU_REFS_FLAGS | BIT(PG_workingset), 0);
+	set_mask_bits(&folio->flags.f, LRU_REFS_FLAGS | BIT(PG_workingset), 0);
 
 	lrugen = &folio_lruvec(folio)->lrugen;
 	/* whether can do without shuffling under the LRU lock */
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 7de11524a936..edb3c992b117 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -888,11 +888,11 @@ static bool lru_gen_set_refs(struct folio *folio)
 {
 	/* see the comment on LRU_REFS_FLAGS */
 	if (!folio_test_referenced(folio) && !folio_test_workingset(folio)) {
-		set_mask_bits(&folio->flags, LRU_REFS_MASK, BIT(PG_referenced));
+		set_mask_bits(&folio->flags.f, LRU_REFS_MASK, BIT(PG_referenced));
 		return false;
 	}
 
-	set_mask_bits(&folio->flags, LRU_REFS_FLAGS, BIT(PG_workingset));
+	set_mask_bits(&folio->flags.f, LRU_REFS_FLAGS, BIT(PG_workingset));
 	return true;
 }
 #else
@@ -3257,13 +3257,13 @@ static bool positive_ctrl_err(struct ctrl_pos *sp, struct ctrl_pos *pv)
 /* promote pages accessed through page tables */
 static int folio_update_gen(struct folio *folio, int gen)
 {
-	unsigned long new_flags, old_flags = READ_ONCE(folio->flags);
+	unsigned long new_flags, old_flags = READ_ONCE(folio->flags.f);
 
 	VM_WARN_ON_ONCE(gen >= MAX_NR_GENS);
 
 	/* see the comment on LRU_REFS_FLAGS */
 	if (!folio_test_referenced(folio) && !folio_test_workingset(folio)) {
-		set_mask_bits(&folio->flags, LRU_REFS_MASK, BIT(PG_referenced));
+		set_mask_bits(&folio->flags.f, LRU_REFS_MASK, BIT(PG_referenced));
 		return -1;
 	}
 
@@ -3274,7 +3274,7 @@ static int folio_update_gen(struct folio *folio, int gen)
 
 		new_flags = old_flags & ~(LRU_GEN_MASK | LRU_REFS_FLAGS);
 		new_flags |= ((gen + 1UL) << LRU_GEN_PGOFF) | BIT(PG_workingset);
-	} while (!try_cmpxchg(&folio->flags, &old_flags, new_flags));
+	} while (!try_cmpxchg(&folio->flags.f, &old_flags, new_flags));
 
 	return ((old_flags & LRU_GEN_MASK) >> LRU_GEN_PGOFF) - 1;
 }
@@ -3285,7 +3285,7 @@ static int folio_inc_gen(struct lruvec *lruvec, struct folio *folio, bool reclai
 	int type = folio_is_file_lru(folio);
 	struct lru_gen_folio *lrugen = &lruvec->lrugen;
 	int new_gen, old_gen = lru_gen_from_seq(lrugen->min_seq[type]);
-	unsigned long new_flags, old_flags = READ_ONCE(folio->flags);
+	unsigned long new_flags, old_flags = READ_ONCE(folio->flags.f);
 
 	VM_WARN_ON_ONCE_FOLIO(!(old_flags & LRU_GEN_MASK), folio);
 
@@ -3302,7 +3302,7 @@ static int folio_inc_gen(struct lruvec *lruvec, struct folio *folio, bool reclai
 		/* for folio_end_writeback() */
 		if (reclaiming)
 			new_flags |= BIT(PG_reclaim);
-	} while (!try_cmpxchg(&folio->flags, &old_flags, new_flags));
+	} while (!try_cmpxchg(&folio->flags.f, &old_flags, new_flags));
 
 	lru_gen_update_size(lruvec, folio, old_gen, new_gen);
 
@@ -4553,7 +4553,7 @@ static bool isolate_folio(struct lruvec *lruvec, struct folio *folio, struct sca
 
 	/* see the comment on LRU_REFS_FLAGS */
 	if (!folio_test_referenced(folio))
-		set_mask_bits(&folio->flags, LRU_REFS_MASK, 0);
+		set_mask_bits(&folio->flags.f, LRU_REFS_MASK, 0);
 
 	/* for shrink_folio_list() */
 	folio_clear_reclaim(folio);
@@ -4766,7 +4766,7 @@ static int evict_folios(unsigned long nr_to_scan, struct lruvec *lruvec,
 
 		/* don't add rejected folios to the oldest generation */
 		if (lru_gen_folio_seq(lruvec, folio, false) == min_seq[type])
-			set_mask_bits(&folio->flags, LRU_REFS_FLAGS, BIT(PG_active));
+			set_mask_bits(&folio->flags.f, LRU_REFS_FLAGS, BIT(PG_active));
 	}
 
 	spin_lock_irq(&lruvec->lru_lock);
diff --git a/mm/workingset.c b/mm/workingset.c
index 6e7f4cb1b9a7..68a76a91111f 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -318,7 +318,7 @@ static void lru_gen_refault(struct folio *folio, void *shadow)
 		folio_set_workingset(folio);
 		mod_lruvec_state(lruvec, WORKINGSET_RESTORE_BASE + type, delta);
 	} else
-		set_mask_bits(&folio->flags, LRU_REFS_MASK, (refs - 1UL) << LRU_REFS_PGOFF);
+		set_mask_bits(&folio->flags.f, LRU_REFS_MASK, (refs - 1UL) << LRU_REFS_PGOFF);
 unlock:
 	rcu_read_unlock();
 }
-- 
2.47.2




* [PATCH 02/11] mm: Convert page_to_section() to memdesc_section()
  2025-08-05 17:22 [PATCH 00/11] Add and use memdesc_flags_t Matthew Wilcox (Oracle)
  2025-08-05 17:22 ` [PATCH 01/11] mm: Introduce memdesc_flags_t Matthew Wilcox (Oracle)
@ 2025-08-05 17:22 ` Matthew Wilcox (Oracle)
  2025-08-06 18:31   ` Zi Yan
  2025-08-05 17:22 ` [PATCH 03/11] mm: Introduce memdesc_nid() Matthew Wilcox (Oracle)
                   ` (9 subsequent siblings)
  11 siblings, 1 reply; 29+ messages in thread
From: Matthew Wilcox (Oracle) @ 2025-08-05 17:22 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), linux-mm

Pass in the memdesc_flags_t instead of a pointer to the page.  This will
allow us to remove a few conversions to struct page in upcoming patches.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/asm-generic/memory_model.h | 2 +-
 include/linux/mm.h                 | 4 ++--
 mm/sparse.c                        | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/asm-generic/memory_model.h b/include/asm-generic/memory_model.h
index 74d0077cc5fa..efa6610acbc7 100644
--- a/include/asm-generic/memory_model.h
+++ b/include/asm-generic/memory_model.h
@@ -53,7 +53,7 @@ static inline int pfn_valid(unsigned long pfn)
  */
 #define __page_to_pfn(pg)					\
 ({	const struct page *__pg = (pg);				\
-	int __sec = page_to_section(__pg);			\
+	int __sec = memdesc_section(__pg->flags);		\
 	(unsigned long)(__pg - __section_mem_map_addr(__nr_to_section(__sec)));	\
 })
 
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 779822a829a9..bfdec5ad3afb 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1757,9 +1757,9 @@ static inline void set_page_section(struct page *page, unsigned long section)
 	page->flags.f |= (section & SECTIONS_MASK) << SECTIONS_PGSHIFT;
 }
 
-static inline unsigned long page_to_section(const struct page *page)
+static inline unsigned long memdesc_section(memdesc_flags_t mdf)
 {
-	return (page->flags.f >> SECTIONS_PGSHIFT) & SECTIONS_MASK;
+	return (mdf.f >> SECTIONS_PGSHIFT) & SECTIONS_MASK;
 }
 #endif
 
diff --git a/mm/sparse.c b/mm/sparse.c
index 3c012cf83cc2..6c1d400f8962 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -45,7 +45,7 @@ static u16 section_to_node_table[NR_MEM_SECTIONS] __cacheline_aligned;
 
 int page_to_nid(const struct page *page)
 {
-	return section_to_node_table[page_to_section(page)];
+	return section_to_node_table[memdesc_section(page->flags)];
 }
 EXPORT_SYMBOL(page_to_nid);
 
-- 
2.47.2




* [PATCH 03/11] mm: Introduce memdesc_nid()
  2025-08-05 17:22 [PATCH 00/11] Add and use memdesc_flags_t Matthew Wilcox (Oracle)
  2025-08-05 17:22 ` [PATCH 01/11] mm: Introduce memdesc_flags_t Matthew Wilcox (Oracle)
  2025-08-05 17:22 ` [PATCH 02/11] mm: Convert page_to_section() to memdesc_section() Matthew Wilcox (Oracle)
@ 2025-08-05 17:22 ` Matthew Wilcox (Oracle)
  2025-08-06 18:47   ` Zi Yan
  2025-08-05 17:22 ` [PATCH 04/11] mm: Introduce memdesc_zonenum() Matthew Wilcox (Oracle)
                   ` (8 subsequent siblings)
  11 siblings, 1 reply; 29+ messages in thread
From: Matthew Wilcox (Oracle) @ 2025-08-05 17:22 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), linux-mm

Remove a conversion from folio to page by passing the folio->flags
(which occupy the same word as the page->flags) to the new
memdesc_nid() function.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/mm.h | 21 +++++++++++++--------
 mm/sparse.c        |  6 +++---
 2 files changed, 16 insertions(+), 11 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index bfdec5ad3afb..c64423869b30 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1507,17 +1507,22 @@ static inline int page_zone_id(struct page *page)
 }
 
 #ifdef NODE_NOT_IN_PAGE_FLAGS
-int page_to_nid(const struct page *page);
+int memdesc_nid(memdesc_flags_t mdf);
 #else
-static inline int page_to_nid(const struct page *page)
+static inline int memdesc_nid(memdesc_flags_t mdf)
 {
-	return (PF_POISONED_CHECK(page)->flags.f >> NODES_PGSHIFT) & NODES_MASK;
+	return (mdf.f >> NODES_PGSHIFT) & NODES_MASK;
 }
 #endif
 
+static inline int page_to_nid(const struct page *page)
+{
+	return memdesc_nid(PF_POISONED_CHECK(page)->flags);
+}
+
 static inline int folio_nid(const struct folio *folio)
 {
-	return page_to_nid(&folio->page);
+	return memdesc_nid(folio->flags);
 }
 
 #ifdef CONFIG_NUMA_BALANCING
@@ -1740,14 +1745,14 @@ static inline pg_data_t *page_pgdat(const struct page *page)
 	return NODE_DATA(page_to_nid(page));
 }
 
-static inline struct zone *folio_zone(const struct folio *folio)
+static inline pg_data_t *folio_pgdat(const struct folio *folio)
 {
-	return page_zone(&folio->page);
+	return NODE_DATA(folio_nid(folio));
 }
 
-static inline pg_data_t *folio_pgdat(const struct folio *folio)
+static inline struct zone *folio_zone(const struct folio *folio)
 {
-	return page_pgdat(&folio->page);
+	return &folio_pgdat(folio)->node_zones[folio_zonenum(folio)];
 }
 
 #ifdef SECTION_IN_PAGE_FLAGS
diff --git a/mm/sparse.c b/mm/sparse.c
index 6c1d400f8962..549d0501be47 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -43,11 +43,11 @@ static u8 section_to_node_table[NR_MEM_SECTIONS] __cacheline_aligned;
 static u16 section_to_node_table[NR_MEM_SECTIONS] __cacheline_aligned;
 #endif
 
-int page_to_nid(const struct page *page)
+int memdesc_nid(memdesc_flags_t mdf)
 {
-	return section_to_node_table[memdesc_section(page->flags)];
+	return section_to_node_table[memdesc_section(mdf)];
 }
-EXPORT_SYMBOL(page_to_nid);
+EXPORT_SYMBOL(memdesc_nid);
 
 static void set_section_nid(unsigned long section_nr, int nid)
 {
-- 
2.47.2




* [PATCH 04/11] mm: Introduce memdesc_zonenum()
  2025-08-05 17:22 [PATCH 00/11] Add and use memdesc_flags_t Matthew Wilcox (Oracle)
                   ` (2 preceding siblings ...)
  2025-08-05 17:22 ` [PATCH 03/11] mm: Introduce memdesc_nid() Matthew Wilcox (Oracle)
@ 2025-08-05 17:22 ` Matthew Wilcox (Oracle)
  2025-08-06 18:57   ` Zi Yan
  2025-08-05 17:22 ` [PATCH 05/11] slab: Use memdesc_flags_t Matthew Wilcox (Oracle)
                   ` (7 subsequent siblings)
  11 siblings, 1 reply; 29+ messages in thread
From: Matthew Wilcox (Oracle) @ 2025-08-05 17:22 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), linux-mm

Remove a conversion from folio to page by passing the folio->flags
(which occupy the same word as the page->flags) to the new
memdesc_zonenum() function.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/mmzone.h | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index b4852269da0e..001a696756df 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1169,15 +1169,20 @@ static inline bool zone_is_empty(struct zone *zone)
 #define KASAN_TAG_MASK		((1UL << KASAN_TAG_WIDTH) - 1)
 #define ZONEID_MASK		((1UL << ZONEID_SHIFT) - 1)
 
+static inline enum zone_type memdesc_zonenum(memdesc_flags_t flags)
+{
+	ASSERT_EXCLUSIVE_BITS(flags.f, ZONES_MASK << ZONES_PGSHIFT);
+	return (flags.f >> ZONES_PGSHIFT) & ZONES_MASK;
+}
+
 static inline enum zone_type page_zonenum(const struct page *page)
 {
-	ASSERT_EXCLUSIVE_BITS(page->flags, ZONES_MASK << ZONES_PGSHIFT);
-	return (page->flags.f >> ZONES_PGSHIFT) & ZONES_MASK;
+	return memdesc_zonenum(page->flags);
 }
 
 static inline enum zone_type folio_zonenum(const struct folio *folio)
 {
-	return page_zonenum(&folio->page);
+	return memdesc_zonenum(folio->flags);
 }
 
 #ifdef CONFIG_ZONE_DEVICE
-- 
2.47.2




* [PATCH 05/11] slab: Use memdesc_flags_t
  2025-08-05 17:22 [PATCH 00/11] Add and use memdesc_flags_t Matthew Wilcox (Oracle)
                   ` (3 preceding siblings ...)
  2025-08-05 17:22 ` [PATCH 04/11] mm: Introduce memdesc_zonenum() Matthew Wilcox (Oracle)
@ 2025-08-05 17:22 ` Matthew Wilcox (Oracle)
  2025-08-06 19:16   ` Zi Yan
  2025-08-05 17:22 ` [PATCH 06/11] slab: Use memdesc_nid() Matthew Wilcox (Oracle)
                   ` (6 subsequent siblings)
  11 siblings, 1 reply; 29+ messages in thread
From: Matthew Wilcox (Oracle) @ 2025-08-05 17:22 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), linux-mm

The slab flags are memdesc flags and contain the same information in the
upper bits as the other memdescs (like node ID).

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/slab.h |  2 +-
 mm/slub.c | 18 +++++++++---------
 2 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/mm/slab.h b/mm/slab.h
index 248b34c839b7..7757331e7c80 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -50,7 +50,7 @@ typedef union {
 
 /* Reuses the bits in struct page */
 struct slab {
-	unsigned long flags;
+	memdesc_flags_t flags;
 
 	struct kmem_cache *slab_cache;
 	union {
diff --git a/mm/slub.c b/mm/slub.c
index cf7c6032d5fd..0160af3b3943 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -657,17 +657,17 @@ static inline unsigned int slub_get_cpu_partial(struct kmem_cache *s)
  */
 static inline bool slab_test_pfmemalloc(const struct slab *slab)
 {
-	return test_bit(SL_pfmemalloc, &slab->flags);
+	return test_bit(SL_pfmemalloc, &slab->flags.f);
 }
 
 static inline void slab_set_pfmemalloc(struct slab *slab)
 {
-	set_bit(SL_pfmemalloc, &slab->flags);
+	set_bit(SL_pfmemalloc, &slab->flags.f);
 }
 
 static inline void __slab_clear_pfmemalloc(struct slab *slab)
 {
-	__clear_bit(SL_pfmemalloc, &slab->flags);
+	__clear_bit(SL_pfmemalloc, &slab->flags.f);
 }
 
 /*
@@ -675,12 +675,12 @@ static inline void __slab_clear_pfmemalloc(struct slab *slab)
  */
 static __always_inline void slab_lock(struct slab *slab)
 {
-	bit_spin_lock(SL_locked, &slab->flags);
+	bit_spin_lock(SL_locked, &slab->flags.f);
 }
 
 static __always_inline void slab_unlock(struct slab *slab)
 {
-	bit_spin_unlock(SL_locked, &slab->flags);
+	bit_spin_unlock(SL_locked, &slab->flags.f);
 }
 
 static inline bool
@@ -1046,7 +1046,7 @@ static void print_slab_info(const struct slab *slab)
 {
 	pr_err("Slab 0x%p objects=%u used=%u fp=0x%p flags=%pGp\n",
 	       slab, slab->objects, slab->inuse, slab->freelist,
-	       &slab->flags);
+	       &slab->flags.f);
 }
 
 void skip_orig_size_check(struct kmem_cache *s, const void *object)
@@ -2755,17 +2755,17 @@ static void discard_slab(struct kmem_cache *s, struct slab *slab)
 
 static inline bool slab_test_node_partial(const struct slab *slab)
 {
-	return test_bit(SL_partial, &slab->flags);
+	return test_bit(SL_partial, &slab->flags.f);
 }
 
 static inline void slab_set_node_partial(struct slab *slab)
 {
-	set_bit(SL_partial, &slab->flags);
+	set_bit(SL_partial, &slab->flags.f);
 }
 
 static inline void slab_clear_node_partial(struct slab *slab)
 {
-	clear_bit(SL_partial, &slab->flags);
+	clear_bit(SL_partial, &slab->flags.f);
 }
 
 /*
-- 
2.47.2




* [PATCH 06/11] slab: Use memdesc_nid()
  2025-08-05 17:22 [PATCH 00/11] Add and use memdesc_flags_t Matthew Wilcox (Oracle)
                   ` (4 preceding siblings ...)
  2025-08-05 17:22 ` [PATCH 05/11] slab: Use memdesc_flags_t Matthew Wilcox (Oracle)
@ 2025-08-05 17:22 ` Matthew Wilcox (Oracle)
  2025-08-06 19:17   ` Zi Yan
  2025-08-05 17:22 ` [PATCH 07/11] mm: Introduce memdesc_is_zone_device() Matthew Wilcox (Oracle)
                   ` (5 subsequent siblings)
  11 siblings, 1 reply; 29+ messages in thread
From: Matthew Wilcox (Oracle) @ 2025-08-05 17:22 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), linux-mm

We no longer need to convert from slab to folio to get the nid; we can
ask memdesc_nid() for it directly.
---
 mm/slab.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/slab.h b/mm/slab.h
index 7757331e7c80..c41a512dd07c 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -174,12 +174,12 @@ static inline void *slab_address(const struct slab *slab)
 
 static inline int slab_nid(const struct slab *slab)
 {
-	return folio_nid(slab_folio(slab));
+	return memdesc_nid(slab->flags);
 }
 
 static inline pg_data_t *slab_pgdat(const struct slab *slab)
 {
-	return folio_pgdat(slab_folio(slab));
+	return NODE_DATA(slab_nid(slab));
 }
 
 static inline struct slab *virt_to_slab(const void *addr)
-- 
2.47.2




* [PATCH 07/11] mm: Introduce memdesc_is_zone_device()
  2025-08-05 17:22 [PATCH 00/11] Add and use memdesc_flags_t Matthew Wilcox (Oracle)
                   ` (5 preceding siblings ...)
  2025-08-05 17:22 ` [PATCH 06/11] slab: Use memdesc_nid() Matthew Wilcox (Oracle)
@ 2025-08-05 17:22 ` Matthew Wilcox (Oracle)
  2025-08-06 19:22   ` Zi Yan
  2025-08-05 17:22 ` [PATCH 08/11] mm: Reimplement folio_is_device_private() Matthew Wilcox (Oracle)
                   ` (4 subsequent siblings)
  11 siblings, 1 reply; 29+ messages in thread
From: Matthew Wilcox (Oracle) @ 2025-08-05 17:22 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), linux-mm

Remove the conversion from folio to page in folio_is_zone_device()
by introducing memdesc_is_zone_device(), which takes a memdesc_flags_t
from either a page or a folio.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/mmzone.h | 19 ++++++++++++-------
 1 file changed, 12 insertions(+), 7 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 001a696756df..0210f7eea825 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1186,14 +1186,14 @@ static inline enum zone_type folio_zonenum(const struct folio *folio)
 }
 
 #ifdef CONFIG_ZONE_DEVICE
-static inline bool is_zone_device_page(const struct page *page)
+static inline bool memdesc_is_zone_device(memdesc_flags_t mdf)
 {
-	return page_zonenum(page) == ZONE_DEVICE;
+	return memdesc_zonenum(mdf) == ZONE_DEVICE;
 }
 
 static inline struct dev_pagemap *page_pgmap(const struct page *page)
 {
-	VM_WARN_ON_ONCE_PAGE(!is_zone_device_page(page), page);
+	VM_WARN_ON_ONCE_PAGE(!memdesc_is_zone_device(page->flags), page);
 	return page_folio(page)->pgmap;
 }
 
@@ -1208,9 +1208,9 @@ static inline struct dev_pagemap *page_pgmap(const struct page *page)
 static inline bool zone_device_pages_have_same_pgmap(const struct page *a,
 						     const struct page *b)
 {
-	if (is_zone_device_page(a) != is_zone_device_page(b))
+	if (memdesc_is_zone_device(a->flags) != memdesc_is_zone_device(b->flags))
 		return false;
-	if (!is_zone_device_page(a))
+	if (!memdesc_is_zone_device(a->flags))
 		return true;
 	return page_pgmap(a) == page_pgmap(b);
 }
@@ -1218,7 +1218,7 @@ static inline bool zone_device_pages_have_same_pgmap(const struct page *a,
 extern void memmap_init_zone_device(struct zone *, unsigned long,
 				    unsigned long, struct dev_pagemap *);
 #else
-static inline bool is_zone_device_page(const struct page *page)
+static inline bool memdesc_is_zone_device(memdesc_flags_t mdf)
 {
 	return false;
 }
@@ -1233,9 +1233,14 @@ static inline struct dev_pagemap *page_pgmap(const struct page *page)
 }
 #endif
 
+static inline bool is_zone_device_page(const struct page *page)
+{
+	return memdesc_is_zone_device(page->flags);
+}
+
 static inline bool folio_is_zone_device(const struct folio *folio)
 {
-	return is_zone_device_page(&folio->page);
+	return memdesc_is_zone_device(folio->flags);
 }
 
 static inline bool is_zone_movable_page(const struct page *page)
-- 
2.47.2




* [PATCH 08/11] mm: Reimplement folio_is_device_private()
  2025-08-05 17:22 [PATCH 00/11] Add and use memdesc_flags_t Matthew Wilcox (Oracle)
                   ` (6 preceding siblings ...)
  2025-08-05 17:22 ` [PATCH 07/11] mm: Introduce memdesc_is_zone_device() Matthew Wilcox (Oracle)
@ 2025-08-05 17:22 ` Matthew Wilcox (Oracle)
  2025-08-06 19:25   ` Zi Yan
  2025-08-05 17:22 ` [PATCH 09/11] mm: Reimplement folio_is_device_coherent() Matthew Wilcox (Oracle)
                   ` (3 subsequent siblings)
  11 siblings, 1 reply; 29+ messages in thread
From: Matthew Wilcox (Oracle) @ 2025-08-05 17:22 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), linux-mm

For callers of folio_is_device_private(), this saves a folio->page->folio
conversion.  For is_device_private_page(), the page->folio conversion
simply moves from the implementation of page_pgmap() into
is_device_private_page() itself.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/memremap.h | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/include/linux/memremap.h b/include/linux/memremap.h
index 4aa151914eab..5d18cb7a70e5 100644
--- a/include/linux/memremap.h
+++ b/include/linux/memremap.h
@@ -157,16 +157,17 @@ static inline unsigned long pgmap_vmemmap_nr(struct dev_pagemap *pgmap)
 	return 1 << pgmap->vmemmap_shift;
 }
 
-static inline bool is_device_private_page(const struct page *page)
+static inline bool folio_is_device_private(const struct folio *folio)
 {
 	return IS_ENABLED(CONFIG_DEVICE_PRIVATE) &&
-		is_zone_device_page(page) &&
-		page_pgmap(page)->type == MEMORY_DEVICE_PRIVATE;
+		folio_is_zone_device(folio) &&
+		folio->pgmap->type == MEMORY_DEVICE_PRIVATE;
 }
 
-static inline bool folio_is_device_private(const struct folio *folio)
+static inline bool is_device_private_page(const struct page *page)
 {
-	return is_device_private_page(&folio->page);
+	return IS_ENABLED(CONFIG_DEVICE_PRIVATE) &&
+		folio_is_device_private(page_folio(page));
 }
 
 static inline bool is_pci_p2pdma_page(const struct page *page)
-- 
2.47.2




* [PATCH 09/11] mm: Reimplement folio_is_device_coherent()
  2025-08-05 17:22 [PATCH 00/11] Add and use memdesc_flags_t Matthew Wilcox (Oracle)
                   ` (7 preceding siblings ...)
  2025-08-05 17:22 ` [PATCH 08/11] mm: Reimplement folio_is_device_private() Matthew Wilcox (Oracle)
@ 2025-08-05 17:22 ` Matthew Wilcox (Oracle)
  2025-08-06 19:27   ` Zi Yan
  2025-08-05 17:23 ` [PATCH 10/11] mm: Reimplement folio_is_fsdax() Matthew Wilcox (Oracle)
                   ` (2 subsequent siblings)
  11 siblings, 1 reply; 29+ messages in thread
From: Matthew Wilcox (Oracle) @ 2025-08-05 17:22 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), linux-mm

For callers of folio_is_device_coherent(), this saves a folio->page->folio
conversion.  For is_device_coherent_page(), the page->folio conversion
simply moves from the implementation of page_pgmap() into
is_device_coherent_page() itself.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/memremap.h | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/include/linux/memremap.h b/include/linux/memremap.h
index 5d18cb7a70e5..06d29794abe6 100644
--- a/include/linux/memremap.h
+++ b/include/linux/memremap.h
@@ -177,15 +177,15 @@ static inline bool is_pci_p2pdma_page(const struct page *page)
 		page_pgmap(page)->type == MEMORY_DEVICE_PCI_P2PDMA;
 }
 
-static inline bool is_device_coherent_page(const struct page *page)
+static inline bool folio_is_device_coherent(const struct folio *folio)
 {
-	return is_zone_device_page(page) &&
-		page_pgmap(page)->type == MEMORY_DEVICE_COHERENT;
+	return folio_is_zone_device(folio) &&
+		folio->pgmap->type == MEMORY_DEVICE_COHERENT;
 }
 
-static inline bool folio_is_device_coherent(const struct folio *folio)
+static inline bool is_device_coherent_page(const struct page *page)
 {
-	return is_device_coherent_page(&folio->page);
+	return folio_is_device_coherent(page_folio(page));
 }
 
 static inline bool is_fsdax_page(const struct page *page)
-- 
2.47.2



^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH 10/11] mm: Reimplement folio_is_fsdax()
  2025-08-05 17:22 [PATCH 00/11] Add and use memdesc_flags_t Matthew Wilcox (Oracle)
                   ` (8 preceding siblings ...)
  2025-08-05 17:22 ` [PATCH 09/11] mm: Reimplement folio_is_device_coherent() Matthew Wilcox (Oracle)
@ 2025-08-05 17:23 ` Matthew Wilcox (Oracle)
  2025-08-06 19:27   ` Zi Yan
  2025-08-05 17:23 ` [PATCH 11/11] mm: Add folio_is_pci_p2pdma() Matthew Wilcox (Oracle)
  2025-08-05 21:40 ` [PATCH 00/11] Add and use memdesc_flags_t Shakeel Butt
  11 siblings, 1 reply; 29+ messages in thread
From: Matthew Wilcox (Oracle) @ 2025-08-05 17:23 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), linux-mm

For callers of folio_is_fsdax(), we save a folio->page->folio conversion.
For callers of is_fsdax_page(), the page->folio conversion simply moves
from the implementation of page_pgmap() into is_fsdax_page() itself.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/memremap.h | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/include/linux/memremap.h b/include/linux/memremap.h
index 06d29794abe6..450d4bb6835c 100644
--- a/include/linux/memremap.h
+++ b/include/linux/memremap.h
@@ -188,15 +188,15 @@ static inline bool is_device_coherent_page(const struct page *page)
 	return folio_is_device_coherent(page_folio(page));
 }
 
-static inline bool is_fsdax_page(const struct page *page)
+static inline bool folio_is_fsdax(const struct folio *folio)
 {
-	return is_zone_device_page(page) &&
-		page_pgmap(page)->type == MEMORY_DEVICE_FS_DAX;
+	return folio_is_zone_device(folio) &&
+		folio->pgmap->type == MEMORY_DEVICE_FS_DAX;
 }
 
-static inline bool folio_is_fsdax(const struct folio *folio)
+static inline bool is_fsdax_page(const struct page *page)
 {
-	return is_fsdax_page(&folio->page);
+	return folio_is_fsdax(page_folio(page));
 }
 
 #ifdef CONFIG_ZONE_DEVICE
-- 
2.47.2



^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH 11/11] mm: Add folio_is_pci_p2pdma()
  2025-08-05 17:22 [PATCH 00/11] Add and use memdesc_flags_t Matthew Wilcox (Oracle)
                   ` (9 preceding siblings ...)
  2025-08-05 17:23 ` [PATCH 10/11] mm: Reimplement folio_is_fsdax() Matthew Wilcox (Oracle)
@ 2025-08-05 17:23 ` Matthew Wilcox (Oracle)
  2025-08-05 21:40 ` [PATCH 00/11] Add and use memdesc_flags_t Shakeel Butt
  11 siblings, 0 replies; 29+ messages in thread
From: Matthew Wilcox (Oracle) @ 2025-08-05 17:23 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), linux-mm

Reimplement is_pci_p2pdma_page() in terms of folio_is_pci_p2pdma().  This
moves the page_folio() call from inside page_pgmap() to
is_pci_p2pdma_page().  It also removes a page_folio() call from
try_grab_folio(), which already has a folio and can pass it in.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/memremap.h | 10 ++++++++--
 mm/gup.c                 |  2 +-
 2 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/include/linux/memremap.h b/include/linux/memremap.h
index 450d4bb6835c..aa1b6aa877a0 100644
--- a/include/linux/memremap.h
+++ b/include/linux/memremap.h
@@ -170,11 +170,17 @@ static inline bool is_device_private_page(const struct page *page)
 		folio_is_device_private(page_folio(page));
 }
 
+static inline bool folio_is_pci_p2pdma(const struct folio *folio)
+{
+	return IS_ENABLED(CONFIG_PCI_P2PDMA) &&
+		folio_is_zone_device(folio) &&
+		folio->pgmap->type == MEMORY_DEVICE_PCI_P2PDMA;
+}
+
 static inline bool is_pci_p2pdma_page(const struct page *page)
 {
 	return IS_ENABLED(CONFIG_PCI_P2PDMA) &&
-		is_zone_device_page(page) &&
-		page_pgmap(page)->type == MEMORY_DEVICE_PCI_P2PDMA;
+		folio_is_pci_p2pdma(page_folio(page));
 }
 
 static inline bool folio_is_device_coherent(const struct folio *folio)
diff --git a/mm/gup.c b/mm/gup.c
index adffe663594d..e02f8dc641df 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -148,7 +148,7 @@ int __must_check try_grab_folio(struct folio *folio, int refs,
 	if (WARN_ON_ONCE(folio_ref_count(folio) <= 0))
 		return -ENOMEM;
 
-	if (unlikely(!(flags & FOLL_PCI_P2PDMA) && is_pci_p2pdma_page(&folio->page)))
+	if (unlikely(!(flags & FOLL_PCI_P2PDMA) && folio_is_pci_p2pdma(folio)))
 		return -EREMOTEIO;
 
 	if (flags & FOLL_GET)
-- 
2.47.2



^ permalink raw reply related	[flat|nested] 29+ messages in thread

* Re: [PATCH 00/11] Add and use memdesc_flags_t
  2025-08-05 17:22 [PATCH 00/11] Add and use memdesc_flags_t Matthew Wilcox (Oracle)
                   ` (10 preceding siblings ...)
  2025-08-05 17:23 ` [PATCH 11/11] mm: Add folio_is_pci_p2pdma() Matthew Wilcox (Oracle)
@ 2025-08-05 21:40 ` Shakeel Butt
  2025-08-06 12:55   ` Matthew Wilcox
  11 siblings, 1 reply; 29+ messages in thread
From: Shakeel Butt @ 2025-08-05 21:40 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle); +Cc: Andrew Morton, linux-mm

On Tue, Aug 05, 2025 at 06:22:50PM +0100, Matthew Wilcox (Oracle) wrote:
>   mm: Introduce memdesc_nid()
...
>   slab: Use memdesc_flags_t
>   slab: Use memdesc_nid()

The above three patches are missing from the email chain, though I do
see them in your pagecache.git repo.



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 00/11] Add and use memdesc_flags_t
  2025-08-05 21:40 ` [PATCH 00/11] Add and use memdesc_flags_t Shakeel Butt
@ 2025-08-06 12:55   ` Matthew Wilcox
  0 siblings, 0 replies; 29+ messages in thread
From: Matthew Wilcox @ 2025-08-06 12:55 UTC (permalink / raw)
  To: Shakeel Butt; +Cc: Andrew Morton, linux-mm

On Tue, Aug 05, 2025 at 02:40:11PM -0700, Shakeel Butt wrote:
> On Tue, Aug 05, 2025 at 06:22:50PM +0100, Matthew Wilcox (Oracle) wrote:
> >   mm: Introduce memdesc_nid()
> ...
> >   slab: Use memdesc_flags_t
> >   slab: Use memdesc_nid()
> 
> The above three patches are missing from the email chain, though I do
> see them in your pagecache.git repo.

Thanks; I resent them and they made it through this time.


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 01/11] mm: Introduce memdesc_flags_t
  2025-08-05 17:22 ` [PATCH 01/11] mm: Introduce memdesc_flags_t Matthew Wilcox (Oracle)
@ 2025-08-06 18:24   ` Zi Yan
  2025-08-19 17:49   ` Kairui Song
  1 sibling, 0 replies; 29+ messages in thread
From: Zi Yan @ 2025-08-06 18:24 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle); +Cc: Andrew Morton, linux-mm

On 5 Aug 2025, at 13:22, Matthew Wilcox (Oracle) wrote:

> Wrap the unsigned long flags in a typedef.  In upcoming patches, this
> will provide a strong hint that you can't just pass a random unsigned
> long to functions which take this as an argument.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  arch/x86/mm/pat/memtype.c       |  6 ++---
>  fs/fuse/dev.c                   |  2 +-
>  fs/gfs2/glops.c                 |  2 +-
>  fs/jffs2/file.c                 |  4 ++--
>  fs/nilfs2/page.c                |  2 +-
>  fs/proc/page.c                  |  4 ++--
>  fs/ubifs/file.c                 |  6 ++---
>  include/linux/mm.h              | 32 +++++++++++++-------------
>  include/linux/mm_inline.h       | 12 +++++-----
>  include/linux/mm_types.h        |  8 +++++--
>  include/linux/mmzone.h          |  2 +-
>  include/linux/page-flags.h      | 40 ++++++++++++++++-----------------
>  include/linux/pgalloc_tag.h     |  7 +++---
>  include/trace/events/page_ref.h |  4 ++--
>  mm/filemap.c                    |  8 +++----
>  mm/huge_memory.c                |  4 ++--
>  mm/memory-failure.c             | 12 +++++-----
>  mm/mmzone.c                     |  4 ++--
>  mm/page_alloc.c                 | 12 +++++-----
>  mm/swap.c                       |  8 +++----
>  mm/vmscan.c                     | 18 +++++++--------
>  mm/workingset.c                 |  2 +-
>  22 files changed, 102 insertions(+), 97 deletions(-)

LGTM. The change also compiles.

Acked-by: Zi Yan <ziy@nvidia.com>


--
Best Regards,
Yan, Zi


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 02/11] mm: Convert page_to_section() to memdesc_section()
  2025-08-05 17:22 ` [PATCH 02/11] mm: Convert page_to_section() to memdesc_section() Matthew Wilcox (Oracle)
@ 2025-08-06 18:31   ` Zi Yan
  0 siblings, 0 replies; 29+ messages in thread
From: Zi Yan @ 2025-08-06 18:31 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle); +Cc: Andrew Morton, linux-mm

On 5 Aug 2025, at 13:22, Matthew Wilcox (Oracle) wrote:

> Pass in the memdesc_flags_t instead of a pointer to the page.  This will
> allow us to remove a few conversions to struct page in upcoming patches.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  include/asm-generic/memory_model.h | 2 +-
>  include/linux/mm.h                 | 4 ++--
>  mm/sparse.c                        | 2 +-
>  3 files changed, 4 insertions(+), 4 deletions(-)
>

LGTM. Reviewed-by: Zi Yan <ziy@nvidia.com>

--
Best Regards,
Yan, Zi


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 03/11] mm: Introduce memdesc_nid()
  2025-08-05 17:22 ` [PATCH 03/11] mm: Introduce memdesc_nid() Matthew Wilcox (Oracle)
@ 2025-08-06 18:47   ` Zi Yan
  2025-08-06 19:04     ` Matthew Wilcox
  0 siblings, 1 reply; 29+ messages in thread
From: Zi Yan @ 2025-08-06 18:47 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle); +Cc: Andrew Morton, linux-mm

On 5 Aug 2025, at 13:22, Matthew Wilcox (Oracle) wrote:

> Remove a conversion from folio to page by passing the folio->flags
> (which are a copy of the page->flags) to the new memdesc_nid() function.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  include/linux/mm.h | 21 +++++++++++++--------
>  mm/sparse.c        |  6 +++---
>  2 files changed, 16 insertions(+), 11 deletions(-)
>
>  }
> -EXPORT_SYMBOL(page_to_nid);
> +EXPORT_SYMBOL(memdesc_nid);

page_to_nid() no longer needs to be exported, since it is now a wrapper
around memdesc_nid() defined in mm.h.
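
A minimal sketch of the resulting arrangement, assuming the
NODE_NOT_IN_PAGE_FLAGS case where the lookup is out of line (the
function names are from the patch; the body and parameter name are my
guess from the old page_to_nid()):

	/* mm/sparse.c */
	int memdesc_nid(memdesc_flags_t mdf)
	{
		return section_to_node_table[memdesc_section(mdf)];
	}
	EXPORT_SYMBOL(memdesc_nid);

	/* include/linux/mm.h */
	static inline int page_to_nid(const struct page *page)
	{
		return memdesc_nid(page->flags);
	}

so modular users of page_to_nid() end up calling the memdesc_nid()
export.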

LGTM. Reviewed-by: Zi Yan <ziy@nvidia.com>


--
Best Regards,
Yan, Zi


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 04/11] mm: Introduce memdesc_zonenum()
  2025-08-05 17:22 ` [PATCH 04/11] mm: Introduce memdesc_zonenum() Matthew Wilcox (Oracle)
@ 2025-08-06 18:57   ` Zi Yan
  0 siblings, 0 replies; 29+ messages in thread
From: Zi Yan @ 2025-08-06 18:57 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle); +Cc: Andrew Morton, linux-mm

On 5 Aug 2025, at 13:22, Matthew Wilcox (Oracle) wrote:

> Remove a conversion from folio to page by passing the folio->flags
> (which are a copy of the page->flags) to the new memdesc_zonenum() function.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  include/linux/mmzone.h | 11 ++++++++---
>  1 file changed, 8 insertions(+), 3 deletions(-)
>

Reviewed-by: Zi Yan <ziy@nvidia.com>

--
Best Regards,
Yan, Zi


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 03/11] mm: Introduce memdesc_nid()
  2025-08-06 18:47   ` Zi Yan
@ 2025-08-06 19:04     ` Matthew Wilcox
  2025-08-06 19:07       ` Zi Yan
  0 siblings, 1 reply; 29+ messages in thread
From: Matthew Wilcox @ 2025-08-06 19:04 UTC (permalink / raw)
  To: Zi Yan; +Cc: Andrew Morton, linux-mm

On Wed, Aug 06, 2025 at 02:47:19PM -0400, Zi Yan wrote:
> On 5 Aug 2025, at 13:22, Matthew Wilcox (Oracle) wrote:
> 
> > Remove a conversion from folio to page by passing the folio->flags
> > (which are a copy of the page->flags) to the new memdesc_nid() function.
> >
> > Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> > ---
> >  include/linux/mm.h | 21 +++++++++++++--------
> >  mm/sparse.c        |  6 +++---
> >  2 files changed, 16 insertions(+), 11 deletions(-)
> >
> >  }
> > -EXPORT_SYMBOL(page_to_nid);
> > +EXPORT_SYMBOL(memdesc_nid);
> 
> page_to_nid() no longer needs to be exported, since it is a wrapper of
> memdesc_nid() and in mm.h.

That's right.  Did you want some verbiage about that in the commit
message?

> LGTM. Reviewed-by: Zi Yan <ziy@nvidia.com>

Thanks!


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 03/11] mm: Introduce memdesc_nid()
  2025-08-06 19:04     ` Matthew Wilcox
@ 2025-08-06 19:07       ` Zi Yan
  0 siblings, 0 replies; 29+ messages in thread
From: Zi Yan @ 2025-08-06 19:07 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: Andrew Morton, linux-mm

On 6 Aug 2025, at 15:04, Matthew Wilcox wrote:

> On Wed, Aug 06, 2025 at 02:47:19PM -0400, Zi Yan wrote:
>> On 5 Aug 2025, at 13:22, Matthew Wilcox (Oracle) wrote:
>>
>>> Remove a conversion from folio to page by passing the folio->flags
>>> (which are a copy of the page->flags) to the new memdesc_nid() function.
>>>
>>> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
>>> ---
>>>  include/linux/mm.h | 21 +++++++++++++--------
>>>  mm/sparse.c        |  6 +++---
>>>  2 files changed, 16 insertions(+), 11 deletions(-)
>>>
>>>  }
>>> -EXPORT_SYMBOL(page_to_nid);
>>> +EXPORT_SYMBOL(memdesc_nid);
>>
>> page_to_nid() no longer needs to be exported, since it is a wrapper of
>> memdesc_nid() and in mm.h.
>
> That's right.  Did you want some verbiage about that in the commit
> message?

The comment is mainly for my own understanding: initially I was
wondering what current users would do if the page_to_nid() export were
removed.

It does not hurt to add it to the commit message. :)

>
>> LGTM. Reviewed-by: Zi Yan <ziy@nvidia.com>
>
> Thanks!


--
Best Regards,
Yan, Zi


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 05/11] slab: Use memdesc_flags_t
  2025-08-05 17:22 ` [PATCH 05/11] slab: Use memdesc_flags_t Matthew Wilcox (Oracle)
@ 2025-08-06 19:16   ` Zi Yan
  0 siblings, 0 replies; 29+ messages in thread
From: Zi Yan @ 2025-08-06 19:16 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle); +Cc: Andrew Morton, linux-mm

On 5 Aug 2025, at 13:22, Matthew Wilcox (Oracle) wrote:

> The slab flags are memdesc flags and contain the same information in the
> upper bits as the other memdescs (like node ID).

Yeah, SLAB_MATCH(flags, flags) checks for that.  Ideally, we might want
to use the same type for the shared part and different types for the
struct-specific fields, but that would mean fiddling with bit-field
widths.
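
(For reference, SLAB_MATCH is roughly the following static assertion
from mm/slab.h -- quoted from memory, so treat the exact form as
approximate:

	#define SLAB_MATCH(pg, sl)					\
		static_assert(offsetof(struct page, pg) ==		\
			      offsetof(struct slab, sl))
	SLAB_MATCH(flags, flags);

i.e. the two flags words are pinned to the same offset.)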

For that typed split, something like:

typedef struct {
	unsigned long loc : SECTIONS_WIDTH + NODES_WIDTH + ZONES_WIDTH + ...;
	union {
		folio_flags_t folio_flags : FOLIO_FLAGS_WIDTH;
		slab_flags_t slab_flags : SLAB_FLAGS_WIDTH;
		...
	};
} memdesc_flags_t;

Hmm, seems very complicated. Never mind. ;)

>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  mm/slab.h |  2 +-
>  mm/slub.c | 18 +++++++++---------
>  2 files changed, 10 insertions(+), 10 deletions(-)
>

Reviewed-by: Zi Yan <ziy@nvidia.com>


--
Best Regards,
Yan, Zi


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 06/11] slab: Use memdesc_nid()
  2025-08-05 17:22 ` [PATCH 06/11] slab: Use memdesc_nid() Matthew Wilcox (Oracle)
@ 2025-08-06 19:17   ` Zi Yan
  0 siblings, 0 replies; 29+ messages in thread
From: Zi Yan @ 2025-08-06 19:17 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle); +Cc: Andrew Morton, linux-mm

On 5 Aug 2025, at 13:22, Matthew Wilcox (Oracle) wrote:

> We no longer need to convert from slab to folio to get the nid; we can
> ask memdesc_nid() for it directly.
> ---
>  mm/slab.h | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>

Great cleanup. Reviewed-by: Zi Yan <ziy@nvidia.com>

--
Best Regards,
Yan, Zi


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 07/11] mm: Introduce memdesc_is_zone_device()
  2025-08-05 17:22 ` [PATCH 07/11] mm: Introduce memdesc_is_zone_device() Matthew Wilcox (Oracle)
@ 2025-08-06 19:22   ` Zi Yan
  0 siblings, 0 replies; 29+ messages in thread
From: Zi Yan @ 2025-08-06 19:22 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle); +Cc: Andrew Morton, linux-mm

On 5 Aug 2025, at 13:22, Matthew Wilcox (Oracle) wrote:

> Remove the conversion from folio to page in folio_is_zone_device()
> by introducing memdesc_is_zone_device() which takes a memdesc_flags_t
> from either a page or a folio.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  include/linux/mmzone.h | 19 ++++++++++++-------
>  1 file changed, 12 insertions(+), 7 deletions(-)
>

LGTM. Reviewed-by: Zi Yan <ziy@nvidia.com>

--
Best Regards,
Yan, Zi


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 08/11] mm: Reimplement folio_is_device_private()
  2025-08-05 17:22 ` [PATCH 08/11] mm: Reimplement folio_is_device_private() Matthew Wilcox (Oracle)
@ 2025-08-06 19:25   ` Zi Yan
  0 siblings, 0 replies; 29+ messages in thread
From: Zi Yan @ 2025-08-06 19:25 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle); +Cc: Andrew Morton, linux-mm

On 5 Aug 2025, at 13:22, Matthew Wilcox (Oracle) wrote:

> For callers of folio_is_device_private(), we save a folio->page->folio
> conversion.  Callers of is_device_private_page() simply move the
> conversion of page->folio from the implementation of page_pgmap()
> to is_device_private_page().
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  include/linux/memremap.h | 11 ++++++-----
>  1 file changed, 6 insertions(+), 5 deletions(-)
>

LGTM. Reviewed-by: Zi Yan <ziy@nvidia.com>

--
Best Regards,
Yan, Zi


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 09/11] mm: Reimplement folio_is_device_coherent()
  2025-08-05 17:22 ` [PATCH 09/11] mm: Reimplement folio_is_device_coherent() Matthew Wilcox (Oracle)
@ 2025-08-06 19:27   ` Zi Yan
  0 siblings, 0 replies; 29+ messages in thread
From: Zi Yan @ 2025-08-06 19:27 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle); +Cc: Andrew Morton, linux-mm

On 5 Aug 2025, at 13:22, Matthew Wilcox (Oracle) wrote:

> For callers of folio_is_device_coherent(), we save a folio->page->folio
> conversion.  Callers of is_device_coherent_page() simply move the
> conversion of page->folio from the implementation of page_pgmap()
> to is_device_coherent_page().
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  include/linux/memremap.h | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)
>

Similar change to the folio_is_device_private() one.

Reviewed-by: Zi Yan <ziy@nvidia.com>

--
Best Regards,
Yan, Zi


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 10/11] mm: Reimplement folio_is_fsdax()
  2025-08-05 17:23 ` [PATCH 10/11] mm: Reimplement folio_is_fsdax() Matthew Wilcox (Oracle)
@ 2025-08-06 19:27   ` Zi Yan
  0 siblings, 0 replies; 29+ messages in thread
From: Zi Yan @ 2025-08-06 19:27 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle); +Cc: Andrew Morton, linux-mm

On 5 Aug 2025, at 13:23, Matthew Wilcox (Oracle) wrote:

> For callers of folio_is_fsdax(), we save a folio->page->folio conversion.
> Callers of is_fsdax_page() simply move the conversion of page->folio
> from the implementation of page_pgmap() to is_fsdax_page().
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  include/linux/memremap.h | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)
>

Another similar change. Reviewed-by: Zi Yan <ziy@nvidia.com>

--
Best Regards,
Yan, Zi


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 01/11] mm: Introduce memdesc_flags_t
  2025-08-05 17:22 ` [PATCH 01/11] mm: Introduce memdesc_flags_t Matthew Wilcox (Oracle)
  2025-08-06 18:24   ` Zi Yan
@ 2025-08-19 17:49   ` Kairui Song
  2025-08-19 17:58     ` Matthew Wilcox
  1 sibling, 1 reply; 29+ messages in thread
From: Kairui Song @ 2025-08-19 17:49 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle); +Cc: Andrew Morton, linux-mm

On Wed, Aug 6, 2025 at 2:20 AM Matthew Wilcox (Oracle)
<willy@infradead.org> wrote:
>
> Wrap the unsigned long flags in a typedef.  In upcoming patches, this
> will provide a strong hint that you can't just pass a random unsigned
> long to functions which take this as an argument.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  arch/x86/mm/pat/memtype.c       |  6 ++---
>  fs/fuse/dev.c                   |  2 +-
>  fs/gfs2/glops.c                 |  2 +-
>  fs/jffs2/file.c                 |  4 ++--
>  fs/nilfs2/page.c                |  2 +-
>  fs/proc/page.c                  |  4 ++--
>  fs/ubifs/file.c                 |  6 ++---
>  include/linux/mm.h              | 32 +++++++++++++-------------
>  include/linux/mm_inline.h       | 12 +++++-----
>  include/linux/mm_types.h        |  8 +++++--
>  include/linux/mmzone.h          |  2 +-
>  include/linux/page-flags.h      | 40 ++++++++++++++++-----------------
>  include/linux/pgalloc_tag.h     |  7 +++---
>  include/trace/events/page_ref.h |  4 ++--
>  mm/filemap.c                    |  8 +++----
>  mm/huge_memory.c                |  4 ++--
>  mm/memory-failure.c             | 12 +++++-----
>  mm/mmzone.c                     |  4 ++--
>  mm/page_alloc.c                 | 12 +++++-----
>  mm/swap.c                       |  8 +++----
>  mm/vmscan.c                     | 18 +++++++--------
>  mm/workingset.c                 |  2 +-
>  22 files changed, 102 insertions(+), 97 deletions(-)
>
> diff --git a/arch/x86/mm/pat/memtype.c b/arch/x86/mm/pat/memtype.c
> index c09284302dd3..b68200a0e0c6 100644
> --- a/arch/x86/mm/pat/memtype.c
> +++ b/arch/x86/mm/pat/memtype.c
> @@ -126,7 +126,7 @@ __setup("debugpat", pat_debug_setup);
>
>  static inline enum page_cache_mode get_page_memtype(struct page *pg)
>  {
> -       unsigned long pg_flags = pg->flags & _PGMT_MASK;
> +       unsigned long pg_flags = pg->flags.f & _PGMT_MASK;
>
>         if (pg_flags == _PGMT_WB)
>                 return _PAGE_CACHE_MODE_WB;
> @@ -161,10 +161,10 @@ static inline void set_page_memtype(struct page *pg,
>                 break;
>         }
>
> -       old_flags = READ_ONCE(pg->flags);
> +       old_flags = READ_ONCE(pg->flags.f);
>         do {
>                 new_flags = (old_flags & _PGMT_CLEAR_MASK) | memtype_flags;
> -       } while (!try_cmpxchg(&pg->flags, &old_flags, new_flags));
> +       } while (!try_cmpxchg(&pg->flags.f, &old_flags, new_flags));
>  }
>  #else
>  static inline enum page_cache_mode get_page_memtype(struct page *pg)
> diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
> index e80cd8f2c049..8a89f0aa1d4d 100644
> --- a/fs/fuse/dev.c
> +++ b/fs/fuse/dev.c
> @@ -935,7 +935,7 @@ static int fuse_check_folio(struct folio *folio)
>  {
>         if (folio_mapped(folio) ||
>             folio->mapping != NULL ||
> -           (folio->flags & PAGE_FLAGS_CHECK_AT_PREP &
> +           (folio->flags.f & PAGE_FLAGS_CHECK_AT_PREP &
>              ~(1 << PG_locked |
>                1 << PG_referenced |
>                1 << PG_lru |
> diff --git a/fs/gfs2/glops.c b/fs/gfs2/glops.c
> index fe0faad4892f..0c0a80b3baca 100644
> --- a/fs/gfs2/glops.c
> +++ b/fs/gfs2/glops.c
> @@ -40,7 +40,7 @@ static void gfs2_ail_error(struct gfs2_glock *gl, const struct buffer_head *bh)
>                "AIL buffer %p: blocknr %llu state 0x%08lx mapping %p page "
>                "state 0x%lx\n",
>                bh, (unsigned long long)bh->b_blocknr, bh->b_state,
> -              bh->b_folio->mapping, bh->b_folio->flags);
> +              bh->b_folio->mapping, bh->b_folio->flags.f);
>         fs_err(sdp, "AIL glock %u:%llu mapping %p\n",
>                gl->gl_name.ln_type, gl->gl_name.ln_number,
>                gfs2_glock2aspace(gl));
> diff --git a/fs/jffs2/file.c b/fs/jffs2/file.c
> index dd3dff95cb24..b697f3c259ef 100644
> --- a/fs/jffs2/file.c
> +++ b/fs/jffs2/file.c
> @@ -230,7 +230,7 @@ static int jffs2_write_begin(const struct kiocb *iocb,
>                         goto release_sem;
>                 }
>         }
> -       jffs2_dbg(1, "end write_begin(). folio->flags %lx\n", folio->flags);
> +       jffs2_dbg(1, "end write_begin(). folio->flags %lx\n", folio->flags.f);
>
>  release_sem:
>         mutex_unlock(&c->alloc_sem);
> @@ -259,7 +259,7 @@ static int jffs2_write_end(const struct kiocb *iocb,
>
>         jffs2_dbg(1, "%s(): ino #%lu, page at 0x%llx, range %d-%d, flags %lx\n",
>                   __func__, inode->i_ino, folio_pos(folio),
> -                 start, end, folio->flags);
> +                 start, end, folio->flags.f);
>
>         /* We need to avoid deadlock with page_cache_read() in
>            jffs2_garbage_collect_pass(). So the folio must be
> diff --git a/fs/nilfs2/page.c b/fs/nilfs2/page.c
> index 806b056d2260..56c4da417b6a 100644
> --- a/fs/nilfs2/page.c
> +++ b/fs/nilfs2/page.c
> @@ -167,7 +167,7 @@ void nilfs_folio_bug(struct folio *folio)
>         printk(KERN_CRIT "NILFS_FOLIO_BUG(%p): cnt=%d index#=%llu flags=0x%lx "
>                "mapping=%p ino=%lu\n",
>                folio, folio_ref_count(folio),
> -              (unsigned long long)folio->index, folio->flags, m, ino);
> +              (unsigned long long)folio->index, folio->flags.f, m, ino);
>
>         head = folio_buffers(folio);
>         if (head) {
> diff --git a/fs/proc/page.c b/fs/proc/page.c
> index ba3568e97fd1..771e0b6bc630 100644
> --- a/fs/proc/page.c
> +++ b/fs/proc/page.c
> @@ -163,7 +163,7 @@ u64 stable_page_flags(const struct page *page)
>         snapshot_page(&ps, page);
>         folio = &ps.folio_snapshot;
>
> -       k = folio->flags;
> +       k = folio->flags.f;
>         mapping = (unsigned long)folio->mapping;
>         is_anon = mapping & FOLIO_MAPPING_ANON;
>
> @@ -238,7 +238,7 @@ u64 stable_page_flags(const struct page *page)
>         if (u & (1 << KPF_HUGE))
>                 u |= kpf_copy_bit(k, KPF_HWPOISON,      PG_hwpoison);
>         else
> -               u |= kpf_copy_bit(ps.page_snapshot.flags, KPF_HWPOISON, PG_hwpoison);
> +               u |= kpf_copy_bit(ps.page_snapshot.flags.f, KPF_HWPOISON, PG_hwpoison);
>  #endif
>
>         u |= kpf_copy_bit(k, KPF_RESERVED,      PG_reserved);
> diff --git a/fs/ubifs/file.c b/fs/ubifs/file.c
> index e75a6cec67be..ca41ce8208c4 100644
> --- a/fs/ubifs/file.c
> +++ b/fs/ubifs/file.c
> @@ -107,7 +107,7 @@ static int do_readpage(struct folio *folio)
>         size_t offset = 0;
>
>         dbg_gen("ino %lu, pg %lu, i_size %lld, flags %#lx",
> -               inode->i_ino, folio->index, i_size, folio->flags);
> +               inode->i_ino, folio->index, i_size, folio->flags.f);
>         ubifs_assert(c, !folio_test_checked(folio));
>         ubifs_assert(c, !folio->private);
>
> @@ -600,7 +600,7 @@ static int populate_page(struct ubifs_info *c, struct folio *folio,
>         pgoff_t end_index;
>
>         dbg_gen("ino %lu, pg %lu, i_size %lld, flags %#lx",
> -               inode->i_ino, folio->index, i_size, folio->flags);
> +               inode->i_ino, folio->index, i_size, folio->flags.f);
>
>         end_index = (i_size - 1) >> PAGE_SHIFT;
>         if (!i_size || folio->index > end_index) {
> @@ -988,7 +988,7 @@ static int ubifs_writepage(struct folio *folio, struct writeback_control *wbc)
>         int err, len = folio_size(folio);
>
>         dbg_gen("ino %lu, pg %lu, pg flags %#lx",
> -               inode->i_ino, folio->index, folio->flags);
> +               inode->i_ino, folio->index, folio->flags.f);
>         ubifs_assert(c, folio->private != NULL);
>
>         /* Is the folio fully outside @i_size? (truncate in progress) */
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 349f0d9aad22..779822a829a9 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -973,7 +973,7 @@ static inline unsigned int compound_order(struct page *page)
>  {
>         struct folio *folio = (struct folio *)page;
>
> -       if (!test_bit(PG_head, &folio->flags))
> +       if (!test_bit(PG_head, &folio->flags.f))
>                 return 0;
>         return folio_large_order(folio);
>  }
> @@ -1503,7 +1503,7 @@ static inline bool is_nommu_shared_mapping(vm_flags_t flags)
>   */
>  static inline int page_zone_id(struct page *page)
>  {
> -       return (page->flags >> ZONEID_PGSHIFT) & ZONEID_MASK;
> +       return (page->flags.f >> ZONEID_PGSHIFT) & ZONEID_MASK;
>  }
>
>  #ifdef NODE_NOT_IN_PAGE_FLAGS
> @@ -1511,7 +1511,7 @@ int page_to_nid(const struct page *page);
>  #else
>  static inline int page_to_nid(const struct page *page)
>  {
> -       return (PF_POISONED_CHECK(page)->flags >> NODES_PGSHIFT) & NODES_MASK;
> +       return (PF_POISONED_CHECK(page)->flags.f >> NODES_PGSHIFT) & NODES_MASK;
>  }
>  #endif
>
> @@ -1586,14 +1586,14 @@ static inline void page_cpupid_reset_last(struct page *page)
>  #else
>  static inline int folio_last_cpupid(struct folio *folio)
>  {
> -       return (folio->flags >> LAST_CPUPID_PGSHIFT) & LAST_CPUPID_MASK;
> +       return (folio->flags.f >> LAST_CPUPID_PGSHIFT) & LAST_CPUPID_MASK;
>  }
>
>  int folio_xchg_last_cpupid(struct folio *folio, int cpupid);
>
>  static inline void page_cpupid_reset_last(struct page *page)
>  {
> -       page->flags |= LAST_CPUPID_MASK << LAST_CPUPID_PGSHIFT;
> +       page->flags.f |= LAST_CPUPID_MASK << LAST_CPUPID_PGSHIFT;
>  }
>  #endif /* LAST_CPUPID_NOT_IN_PAGE_FLAGS */
>
> @@ -1689,7 +1689,7 @@ static inline u8 page_kasan_tag(const struct page *page)
>         u8 tag = KASAN_TAG_KERNEL;
>
>         if (kasan_enabled()) {
> -               tag = (page->flags >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK;
> +               tag = (page->flags.f >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK;
>                 tag ^= 0xff;
>         }
>
> @@ -1704,12 +1704,12 @@ static inline void page_kasan_tag_set(struct page *page, u8 tag)
>                 return;
>
>         tag ^= 0xff;
> -       old_flags = READ_ONCE(page->flags);
> +       old_flags = READ_ONCE(page->flags.f);
>         do {
>                 flags = old_flags;
>                 flags &= ~(KASAN_TAG_MASK << KASAN_TAG_PGSHIFT);
>                 flags |= (tag & KASAN_TAG_MASK) << KASAN_TAG_PGSHIFT;
> -       } while (unlikely(!try_cmpxchg(&page->flags, &old_flags, flags)));
> +       } while (unlikely(!try_cmpxchg(&page->flags.f, &old_flags, flags)));
>  }
>
>  static inline void page_kasan_tag_reset(struct page *page)
> @@ -1753,13 +1753,13 @@ static inline pg_data_t *folio_pgdat(const struct folio *folio)
>  #ifdef SECTION_IN_PAGE_FLAGS
>  static inline void set_page_section(struct page *page, unsigned long section)
>  {
> -       page->flags &= ~(SECTIONS_MASK << SECTIONS_PGSHIFT);
> -       page->flags |= (section & SECTIONS_MASK) << SECTIONS_PGSHIFT;
> +       page->flags.f &= ~(SECTIONS_MASK << SECTIONS_PGSHIFT);
> +       page->flags.f |= (section & SECTIONS_MASK) << SECTIONS_PGSHIFT;
>  }
>
>  static inline unsigned long page_to_section(const struct page *page)
>  {
> -       return (page->flags >> SECTIONS_PGSHIFT) & SECTIONS_MASK;
> +       return (page->flags.f >> SECTIONS_PGSHIFT) & SECTIONS_MASK;
>  }
>  #endif
>
> @@ -1964,14 +1964,14 @@ static inline bool folio_is_longterm_pinnable(struct folio *folio)
>
>  static inline void set_page_zone(struct page *page, enum zone_type zone)
>  {
> -       page->flags &= ~(ZONES_MASK << ZONES_PGSHIFT);
> -       page->flags |= (zone & ZONES_MASK) << ZONES_PGSHIFT;
> +       page->flags.f &= ~(ZONES_MASK << ZONES_PGSHIFT);
> +       page->flags.f |= (zone & ZONES_MASK) << ZONES_PGSHIFT;
>  }
>
>  static inline void set_page_node(struct page *page, unsigned long node)
>  {
> -       page->flags &= ~(NODES_MASK << NODES_PGSHIFT);
> -       page->flags |= (node & NODES_MASK) << NODES_PGSHIFT;
> +       page->flags.f &= ~(NODES_MASK << NODES_PGSHIFT);
> +       page->flags.f |= (node & NODES_MASK) << NODES_PGSHIFT;
>  }
>
>  static inline void set_page_links(struct page *page, enum zone_type zone,
> @@ -2013,7 +2013,7 @@ static inline long compound_nr(struct page *page)
>  {
>         struct folio *folio = (struct folio *)page;
>
> -       if (!test_bit(PG_head, &folio->flags))
> +       if (!test_bit(PG_head, &folio->flags.f))
>                 return 1;
>         return folio_large_nr_pages(folio);
>  }
> diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
> index 89b518ff097e..150302b4a905 100644
> --- a/include/linux/mm_inline.h
> +++ b/include/linux/mm_inline.h
> @@ -143,7 +143,7 @@ static inline int lru_tier_from_refs(int refs, bool workingset)
>
>  static inline int folio_lru_refs(struct folio *folio)
>  {
> -       unsigned long flags = READ_ONCE(folio->flags);
> +       unsigned long flags = READ_ONCE(folio->flags.f);
>
>         if (!(flags & BIT(PG_referenced)))
>                 return 0;
> @@ -156,7 +156,7 @@ static inline int folio_lru_refs(struct folio *folio)
>
>  static inline int folio_lru_gen(struct folio *folio)
>  {
> -       unsigned long flags = READ_ONCE(folio->flags);
> +       unsigned long flags = READ_ONCE(folio->flags.f);
>
>         return ((flags & LRU_GEN_MASK) >> LRU_GEN_PGOFF) - 1;
>  }
> @@ -268,7 +268,7 @@ static inline bool lru_gen_add_folio(struct lruvec *lruvec, struct folio *folio,
>         gen = lru_gen_from_seq(seq);
>         flags = (gen + 1UL) << LRU_GEN_PGOFF;
>         /* see the comment on MIN_NR_GENS about PG_active */
> -       set_mask_bits(&folio->flags, LRU_GEN_MASK | BIT(PG_active), flags);
> +       set_mask_bits(&folio->flags.f, LRU_GEN_MASK | BIT(PG_active), flags);
>
>         lru_gen_update_size(lruvec, folio, -1, gen);
>         /* for folio_rotate_reclaimable() */
> @@ -293,7 +293,7 @@ static inline bool lru_gen_del_folio(struct lruvec *lruvec, struct folio *folio,
>
>         /* for folio_migrate_flags() */
>         flags = !reclaiming && lru_gen_is_active(lruvec, gen) ? BIT(PG_active) : 0;
> -       flags = set_mask_bits(&folio->flags, LRU_GEN_MASK, flags);
> +       flags = set_mask_bits(&folio->flags.f, LRU_GEN_MASK, flags);
>         gen = ((flags & LRU_GEN_MASK) >> LRU_GEN_PGOFF) - 1;
>
>         lru_gen_update_size(lruvec, folio, gen, -1);
> @@ -304,9 +304,9 @@ static inline bool lru_gen_del_folio(struct lruvec *lruvec, struct folio *folio,
>
>  static inline void folio_migrate_refs(struct folio *new, struct folio *old)
>  {
> -       unsigned long refs = READ_ONCE(old->flags) & LRU_REFS_MASK;
> +       unsigned long refs = READ_ONCE(old->flags.f) & LRU_REFS_MASK;
>
> -       set_mask_bits(&new->flags, LRU_REFS_MASK, refs);
> +       set_mask_bits(&new->flags.f, LRU_REFS_MASK, refs);
>  }
>  #else /* !CONFIG_LRU_GEN */
>
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 08bc2442db93..15bb1c3738c0 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -33,6 +33,10 @@ struct address_space;
>  struct futex_private_hash;
>  struct mem_cgroup;
>
> +typedef struct {
> +       unsigned long f;
> +} memdesc_flags_t;
> +
>  /*
>   * Each physical page in the system has a struct page associated with
>   * it to keep track of whatever it is we are using the page for at the
> @@ -71,7 +75,7 @@ struct mem_cgroup;
>  #endif
>
>  struct page {
> -       unsigned long flags;            /* Atomic flags, some possibly
> +       memdesc_flags_t flags;          /* Atomic flags, some possibly
>                                          * updated asynchronously */
>         /*
>          * Five words (20/40 bytes) are available in this union.
> @@ -382,7 +386,7 @@ struct folio {
>         union {
>                 struct {
>         /* public: */
> -                       unsigned long flags;
> +                       memdesc_flags_t flags;
>                         union {
>                                 struct list_head lru;
>         /* private: avoid cluttering the output */
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 0c5da9141983..b4852269da0e 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -1172,7 +1172,7 @@ static inline bool zone_is_empty(struct zone *zone)
>  static inline enum zone_type page_zonenum(const struct page *page)
>  {
>         ASSERT_EXCLUSIVE_BITS(page->flags, ZONES_MASK << ZONES_PGSHIFT);
> -       return (page->flags >> ZONES_PGSHIFT) & ZONES_MASK;
> +       return (page->flags.f >> ZONES_PGSHIFT) & ZONES_MASK;
>  }
>
>  static inline enum zone_type folio_zonenum(const struct folio *folio)
> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> index 8e4d6eda8a8d..822b3ba48163 100644
> --- a/include/linux/page-flags.h
> +++ b/include/linux/page-flags.h
> @@ -217,7 +217,7 @@ static __always_inline const struct page *page_fixed_fake_head(const struct page
>          * cold cacheline in some cases.
>          */
>         if (IS_ALIGNED((unsigned long)page, PAGE_SIZE) &&
> -           test_bit(PG_head, &page->flags)) {
> +           test_bit(PG_head, &page->flags.f)) {
>                 /*
>                  * We can safely access the field of the @page[1] with PG_head
>                  * because the @page is a compound page composed with at least
> @@ -325,14 +325,14 @@ static __always_inline int PageTail(const struct page *page)
>
>  static __always_inline int PageCompound(const struct page *page)
>  {
> -       return test_bit(PG_head, &page->flags) ||
> +       return test_bit(PG_head, &page->flags.f) ||
>                READ_ONCE(page->compound_head) & 1;
>  }
>
>  #define        PAGE_POISON_PATTERN     -1l
>  static inline int PagePoisoned(const struct page *page)
>  {
> -       return READ_ONCE(page->flags) == PAGE_POISON_PATTERN;
> +       return READ_ONCE(page->flags.f) == PAGE_POISON_PATTERN;
>  }
>
>  #ifdef CONFIG_DEBUG_VM
> @@ -349,8 +349,8 @@ static const unsigned long *const_folio_flags(const struct folio *folio,
>         const struct page *page = &folio->page;
>
>         VM_BUG_ON_PGFLAGS(page->compound_head & 1, page);
> -       VM_BUG_ON_PGFLAGS(n > 0 && !test_bit(PG_head, &page->flags), page);
> -       return &page[n].flags;
> +       VM_BUG_ON_PGFLAGS(n > 0 && !test_bit(PG_head, &page->flags.f), page);
> +       return &page[n].flags.f;
>  }
>
>  static unsigned long *folio_flags(struct folio *folio, unsigned n)
> @@ -358,8 +358,8 @@ static unsigned long *folio_flags(struct folio *folio, unsigned n)
>         struct page *page = &folio->page;
>
>         VM_BUG_ON_PGFLAGS(page->compound_head & 1, page);
> -       VM_BUG_ON_PGFLAGS(n > 0 && !test_bit(PG_head, &page->flags), page);
> -       return &page[n].flags;
> +       VM_BUG_ON_PGFLAGS(n > 0 && !test_bit(PG_head, &page->flags.f), page);
> +       return &page[n].flags.f;
>  }
>
>  /*
> @@ -449,37 +449,37 @@ FOLIO_CLEAR_FLAG(name, page)
>  #define TESTPAGEFLAG(uname, lname, policy)                             \
>  FOLIO_TEST_FLAG(lname, FOLIO_##policy)                                 \
>  static __always_inline int Page##uname(const struct page *page)                \
> -{ return test_bit(PG_##lname, &policy(page, 0)->flags); }
> +{ return test_bit(PG_##lname, &policy(page, 0)->flags.f); }
>
>  #define SETPAGEFLAG(uname, lname, policy)                              \
>  FOLIO_SET_FLAG(lname, FOLIO_##policy)                                  \
>  static __always_inline void SetPage##uname(struct page *page)          \
> -{ set_bit(PG_##lname, &policy(page, 1)->flags); }
> +{ set_bit(PG_##lname, &policy(page, 1)->flags.f); }
>
>  #define CLEARPAGEFLAG(uname, lname, policy)                            \
>  FOLIO_CLEAR_FLAG(lname, FOLIO_##policy)                                        \
>  static __always_inline void ClearPage##uname(struct page *page)                \
> -{ clear_bit(PG_##lname, &policy(page, 1)->flags); }
> +{ clear_bit(PG_##lname, &policy(page, 1)->flags.f); }
>
>  #define __SETPAGEFLAG(uname, lname, policy)                            \
>  __FOLIO_SET_FLAG(lname, FOLIO_##policy)                                        \
>  static __always_inline void __SetPage##uname(struct page *page)                \
> -{ __set_bit(PG_##lname, &policy(page, 1)->flags); }
> +{ __set_bit(PG_##lname, &policy(page, 1)->flags.f); }
>
>  #define __CLEARPAGEFLAG(uname, lname, policy)                          \
>  __FOLIO_CLEAR_FLAG(lname, FOLIO_##policy)                              \
>  static __always_inline void __ClearPage##uname(struct page *page)      \
> -{ __clear_bit(PG_##lname, &policy(page, 1)->flags); }
> +{ __clear_bit(PG_##lname, &policy(page, 1)->flags.f); }
>
>  #define TESTSETFLAG(uname, lname, policy)                              \
>  FOLIO_TEST_SET_FLAG(lname, FOLIO_##policy)                             \
>  static __always_inline int TestSetPage##uname(struct page *page)       \
> -{ return test_and_set_bit(PG_##lname, &policy(page, 1)->flags); }
> +{ return test_and_set_bit(PG_##lname, &policy(page, 1)->flags.f); }
>
>  #define TESTCLEARFLAG(uname, lname, policy)                            \
>  FOLIO_TEST_CLEAR_FLAG(lname, FOLIO_##policy)                           \
>  static __always_inline int TestClearPage##uname(struct page *page)     \
> -{ return test_and_clear_bit(PG_##lname, &policy(page, 1)->flags); }
> +{ return test_and_clear_bit(PG_##lname, &policy(page, 1)->flags.f); }
>
>  #define PAGEFLAG(uname, lname, policy)                                 \
>         TESTPAGEFLAG(uname, lname, policy)                              \
> @@ -848,7 +848,7 @@ static __always_inline bool folio_test_head(const struct folio *folio)
>  static __always_inline int PageHead(const struct page *page)
>  {
>         PF_POISONED_CHECK(page);
> -       return test_bit(PG_head, &page->flags) && !page_is_fake_head(page);
> +       return test_bit(PG_head, &page->flags.f) && !page_is_fake_head(page);
>  }
>
>  __SETPAGEFLAG(Head, head, PF_ANY)
> @@ -1172,28 +1172,28 @@ static __always_inline int PageAnonExclusive(const struct page *page)
>          */
>         if (PageHuge(page))
>                 page = compound_head(page);
> -       return test_bit(PG_anon_exclusive, &PF_ANY(page, 1)->flags);
> +       return test_bit(PG_anon_exclusive, &PF_ANY(page, 1)->flags.f);
>  }
>
>  static __always_inline void SetPageAnonExclusive(struct page *page)
>  {
>         VM_BUG_ON_PGFLAGS(!PageAnonNotKsm(page), page);
>         VM_BUG_ON_PGFLAGS(PageHuge(page) && !PageHead(page), page);
> -       set_bit(PG_anon_exclusive, &PF_ANY(page, 1)->flags);
> +       set_bit(PG_anon_exclusive, &PF_ANY(page, 1)->flags.f);
>  }
>
>  static __always_inline void ClearPageAnonExclusive(struct page *page)
>  {
>         VM_BUG_ON_PGFLAGS(!PageAnonNotKsm(page), page);
>         VM_BUG_ON_PGFLAGS(PageHuge(page) && !PageHead(page), page);
> -       clear_bit(PG_anon_exclusive, &PF_ANY(page, 1)->flags);
> +       clear_bit(PG_anon_exclusive, &PF_ANY(page, 1)->flags.f);
>  }
>
>  static __always_inline void __ClearPageAnonExclusive(struct page *page)
>  {
>         VM_BUG_ON_PGFLAGS(!PageAnon(page), page);
>         VM_BUG_ON_PGFLAGS(PageHuge(page) && !PageHead(page), page);
> -       __clear_bit(PG_anon_exclusive, &PF_ANY(page, 1)->flags);
> +       __clear_bit(PG_anon_exclusive, &PF_ANY(page, 1)->flags.f);
>  }
>
>  #ifdef CONFIG_MMU
> @@ -1243,7 +1243,7 @@ static __always_inline void __ClearPageAnonExclusive(struct page *page)
>   */
>  static inline int folio_has_private(const struct folio *folio)
>  {
> -       return !!(folio->flags & PAGE_FLAGS_PRIVATE);
> +       return !!(folio->flags.f & PAGE_FLAGS_PRIVATE);
>  }
>
>  #undef PF_ANY
> diff --git a/include/linux/pgalloc_tag.h b/include/linux/pgalloc_tag.h
> index 8a7f4f802c57..38a82d65e58e 100644
> --- a/include/linux/pgalloc_tag.h
> +++ b/include/linux/pgalloc_tag.h
> @@ -107,7 +107,8 @@ static inline bool get_page_tag_ref(struct page *page, union codetag_ref *ref,
>         if (static_key_enabled(&mem_profiling_compressed)) {
>                 pgalloc_tag_idx idx;
>
> -               idx = (page->flags >> alloc_tag_ref_offs) & alloc_tag_ref_mask;
> +               idx = (page->flags.f >> alloc_tag_ref_offs) &
> +                       alloc_tag_ref_mask;
>                 idx_to_ref(idx, ref);
>                 handle->page = page;
>         } else {
> @@ -149,11 +150,11 @@ static inline void update_page_tag_ref(union pgtag_ref_handle handle, union code
>                 idx = (unsigned long)ref_to_idx(ref);
>                 idx = (idx & alloc_tag_ref_mask) << alloc_tag_ref_offs;
>                 do {
> -                       old_flags = READ_ONCE(page->flags);
> +                       old_flags = READ_ONCE(page->flags.f);
>                         flags = old_flags;
>                         flags &= ~(alloc_tag_ref_mask << alloc_tag_ref_offs);
>                         flags |= idx;
> -               } while (unlikely(!try_cmpxchg(&page->flags, &old_flags, flags)));
> +               } while (unlikely(!try_cmpxchg(&page->flags.f, &old_flags, flags)));
>         } else {
>                 if (WARN_ON(!handle.ref || !ref))
>                         return;
> diff --git a/include/trace/events/page_ref.h b/include/trace/events/page_ref.h
> index fe33a255b7d0..ea6b5c4baf3d 100644
> --- a/include/trace/events/page_ref.h
> +++ b/include/trace/events/page_ref.h
> @@ -28,7 +28,7 @@ DECLARE_EVENT_CLASS(page_ref_mod_template,
>
>         TP_fast_assign(
>                 __entry->pfn = page_to_pfn(page);
> -               __entry->flags = page->flags;
> +               __entry->flags = page->flags.f;
>                 __entry->count = page_ref_count(page);
>                 __entry->mapcount = atomic_read(&page->_mapcount);
>                 __entry->mapping = page->mapping;
> @@ -77,7 +77,7 @@ DECLARE_EVENT_CLASS(page_ref_mod_and_test_template,
>
>         TP_fast_assign(
>                 __entry->pfn = page_to_pfn(page);
> -               __entry->flags = page->flags;
> +               __entry->flags = page->flags.f;
>                 __entry->count = page_ref_count(page);
>                 __entry->mapcount = atomic_read(&page->_mapcount);
>                 __entry->mapping = page->mapping;
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 751838ef05e5..2e63f98c9520 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -1140,10 +1140,10 @@ static int wake_page_function(wait_queue_entry_t *wait, unsigned mode, int sync,
>          */
>         flags = wait->flags;
>         if (flags & WQ_FLAG_EXCLUSIVE) {
> -               if (test_bit(key->bit_nr, &key->folio->flags))
> +               if (test_bit(key->bit_nr, &key->folio->flags.f))
>                         return -1;
>                 if (flags & WQ_FLAG_CUSTOM) {
> -                       if (test_and_set_bit(key->bit_nr, &key->folio->flags))
> +                       if (test_and_set_bit(key->bit_nr, &key->folio->flags.f))
>                                 return -1;
>                         flags |= WQ_FLAG_DONE;
>                 }
> @@ -1226,9 +1226,9 @@ static inline bool folio_trylock_flag(struct folio *folio, int bit_nr,
>                                         struct wait_queue_entry *wait)
>  {
>         if (wait->flags & WQ_FLAG_EXCLUSIVE) {
> -               if (test_and_set_bit(bit_nr, &folio->flags))
> +               if (test_and_set_bit(bit_nr, &folio->flags.f))
>                         return false;
> -       } else if (test_bit(bit_nr, &folio->flags))
> +       } else if (test_bit(bit_nr, &folio->flags.f))
>                 return false;
>
>         wait->flags |= WQ_FLAG_WOKEN | WQ_FLAG_DONE;
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 9c38a95e9f09..6b5f8b0db6c4 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3310,8 +3310,8 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
>                  * unreferenced sub-pages of an anonymous THP: we can simply drop
>                  * PG_anon_exclusive (-> PG_mappedtodisk) for these here.
>                  */
> -               new_folio->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
> -               new_folio->flags |= (folio->flags &
> +               new_folio->flags.f &= ~PAGE_FLAGS_CHECK_AT_PREP;
> +               new_folio->flags.f |= (folio->flags.f &
>                                 ((1L << PG_referenced) |
>                                  (1L << PG_swapbacked) |
>                                  (1L << PG_swapcache) |
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index 3047b9ac667e..718eb37bd077 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -1693,10 +1693,10 @@ static int identify_page_state(unsigned long pfn, struct page *p,
>          * carried out only if the first check can't determine the page status.
>          */
>         for (ps = error_states;; ps++)
> -               if ((p->flags & ps->mask) == ps->res)
> +               if ((p->flags.f & ps->mask) == ps->res)
>                         break;
>
> -       page_flags |= (p->flags & (1UL << PG_dirty));
> +       page_flags |= (p->flags.f & (1UL << PG_dirty));
>
>         if (!ps->mask)
>                 for (ps = error_states;; ps++)
> @@ -2123,7 +2123,7 @@ static int try_memory_failure_hugetlb(unsigned long pfn, int flags, int *hugetlb
>                 return action_result(pfn, MF_MSG_FREE_HUGE, res);
>         }
>
> -       page_flags = folio->flags;
> +       page_flags = folio->flags.f;
>
>         if (!hwpoison_user_mappings(folio, p, pfn, flags)) {
>                 folio_unlock(folio);
> @@ -2384,7 +2384,7 @@ int memory_failure(unsigned long pfn, int flags)
>          * folio_remove_rmap_*() in try_to_unmap_one(). So to determine page
>          * status correctly, we save a copy of the page flags at this time.
>          */
> -       page_flags = folio->flags;
> +       page_flags = folio->flags.f;
>
>         /*
>          * __munlock_folio() may clear a writeback folio's LRU flag without
> @@ -2730,13 +2730,13 @@ static int soft_offline_in_use_page(struct page *page)
>                                 putback_movable_pages(&pagelist);
>
>                         pr_info("%#lx: %s migration failed %ld, type %pGp\n",
> -                               pfn, msg_page[huge], ret, &page->flags);
> +                               pfn, msg_page[huge], ret, &page->flags.f);
>                         if (ret > 0)
>                                 ret = -EBUSY;
>                 }
>         } else {
>                 pr_info("%#lx: %s isolation failed, page count %d, type %pGp\n",
> -                       pfn, msg_page[huge], page_count(page), &page->flags);
> +                       pfn, msg_page[huge], page_count(page), &page->flags.f);
>                 ret = -EBUSY;
>         }
>         return ret;
> diff --git a/mm/mmzone.c b/mm/mmzone.c
> index f9baa8882fbf..0c8f181d9d50 100644
> --- a/mm/mmzone.c
> +++ b/mm/mmzone.c
> @@ -99,14 +99,14 @@ int folio_xchg_last_cpupid(struct folio *folio, int cpupid)
>         unsigned long old_flags, flags;
>         int last_cpupid;
>
> -       old_flags = READ_ONCE(folio->flags);
> +       old_flags = READ_ONCE(folio->flags.f);
>         do {
>                 flags = old_flags;
>                 last_cpupid = (flags >> LAST_CPUPID_PGSHIFT) & LAST_CPUPID_MASK;
>
>                 flags &= ~(LAST_CPUPID_MASK << LAST_CPUPID_PGSHIFT);
>                 flags |= (cpupid & LAST_CPUPID_MASK) << LAST_CPUPID_PGSHIFT;
> -       } while (unlikely(!try_cmpxchg(&folio->flags, &old_flags, flags)));
> +       } while (unlikely(!try_cmpxchg(&folio->flags.f, &old_flags, flags)));
>
>         return last_cpupid;
>  }
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index d1d037f97c5f..b6c040f7be85 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -950,7 +950,7 @@ static inline void __free_one_page(struct page *page,
>         bool to_tail;
>
>         VM_BUG_ON(!zone_is_initialized(zone));
> -       VM_BUG_ON_PAGE(page->flags & PAGE_FLAGS_CHECK_AT_PREP, page);
> +       VM_BUG_ON_PAGE(page->flags.f & PAGE_FLAGS_CHECK_AT_PREP, page);
>
>         VM_BUG_ON(migratetype == -1);
>         VM_BUG_ON_PAGE(pfn & ((1 << order) - 1), page);
> @@ -1043,7 +1043,7 @@ static inline bool page_expected_state(struct page *page,
>                         page->memcg_data |
>  #endif
>                         page_pool_page_is_pp(page) |
> -                       (page->flags & check_flags)))
> +                       (page->flags.f & check_flags)))
>                 return false;
>
>         return true;
> @@ -1059,7 +1059,7 @@ static const char *page_bad_reason(struct page *page, unsigned long flags)
>                 bad_reason = "non-NULL mapping";
>         if (unlikely(page_ref_count(page) != 0))
>                 bad_reason = "nonzero _refcount";
> -       if (unlikely(page->flags & flags)) {
> +       if (unlikely(page->flags.f & flags)) {
>                 if (flags == PAGE_FLAGS_CHECK_AT_PREP)
>                         bad_reason = "PAGE_FLAGS_CHECK_AT_PREP flag(s) set";
>                 else
> @@ -1358,7 +1358,7 @@ __always_inline bool free_pages_prepare(struct page *page,
>                 int i;
>
>                 if (compound) {
> -                       page[1].flags &= ~PAGE_FLAGS_SECOND;
> +                       page[1].flags.f &= ~PAGE_FLAGS_SECOND;
>  #ifdef NR_PAGES_IN_LARGE_FOLIO
>                         folio->_nr_pages = 0;
>  #endif
> @@ -1372,7 +1372,7 @@ __always_inline bool free_pages_prepare(struct page *page,
>                                         continue;
>                                 }
>                         }
> -                       (page + i)->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
> +                       (page + i)->flags.f &= ~PAGE_FLAGS_CHECK_AT_PREP;
>                 }
>         }
>         if (folio_test_anon(folio)) {
> @@ -1391,7 +1391,7 @@ __always_inline bool free_pages_prepare(struct page *page,
>         }
>
>         page_cpupid_reset_last(page);
> -       page->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
> +       page->flags.f &= ~PAGE_FLAGS_CHECK_AT_PREP;
>         reset_page_owner(page, order);
>         page_table_check_free(page, order);
>         pgalloc_tag_sub(page, 1 << order);
> diff --git a/mm/swap.c b/mm/swap.c
> index 3632dd061beb..d2a23aa8d5ac 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -387,14 +387,14 @@ static void __lru_cache_activate_folio(struct folio *folio)
>
>  static void lru_gen_inc_refs(struct folio *folio)
>  {
> -       unsigned long new_flags, old_flags = READ_ONCE(folio->flags);
> +       unsigned long new_flags, old_flags = READ_ONCE(folio->flags.f);
>
>         if (folio_test_unevictable(folio))
>                 return;
>
>         /* see the comment on LRU_REFS_FLAGS */
>         if (!folio_test_referenced(folio)) {
> -               set_mask_bits(&folio->flags, LRU_REFS_MASK, BIT(PG_referenced));
> +               set_mask_bits(&folio->flags.f, LRU_REFS_MASK, BIT(PG_referenced));
>                 return;
>         }
>
> @@ -406,7 +406,7 @@ static void lru_gen_inc_refs(struct folio *folio)
>                 }
>
>                 new_flags = old_flags + BIT(LRU_REFS_PGOFF);
> -       } while (!try_cmpxchg(&folio->flags, &old_flags, new_flags));
> +       } while (!try_cmpxchg(&folio->flags.f, &old_flags, new_flags));
>  }
>
>  static bool lru_gen_clear_refs(struct folio *folio)
> @@ -418,7 +418,7 @@ static bool lru_gen_clear_refs(struct folio *folio)
>         if (gen < 0)
>                 return true;
>
> -       set_mask_bits(&folio->flags, LRU_REFS_FLAGS | BIT(PG_workingset), 0);
> +       set_mask_bits(&folio->flags.f, LRU_REFS_FLAGS | BIT(PG_workingset), 0);
>
>         lrugen = &folio_lruvec(folio)->lrugen;
>         /* whether can do without shuffling under the LRU lock */
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 7de11524a936..edb3c992b117 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -888,11 +888,11 @@ static bool lru_gen_set_refs(struct folio *folio)
>  {
>         /* see the comment on LRU_REFS_FLAGS */
>         if (!folio_test_referenced(folio) && !folio_test_workingset(folio)) {
> -               set_mask_bits(&folio->flags, LRU_REFS_MASK, BIT(PG_referenced));
> +               set_mask_bits(&folio->flags.f, LRU_REFS_MASK, BIT(PG_referenced));
>                 return false;
>         }
>
> -       set_mask_bits(&folio->flags, LRU_REFS_FLAGS, BIT(PG_workingset));
> +       set_mask_bits(&folio->flags.f, LRU_REFS_FLAGS, BIT(PG_workingset));
>         return true;
>  }
>  #else
> @@ -3257,13 +3257,13 @@ static bool positive_ctrl_err(struct ctrl_pos *sp, struct ctrl_pos *pv)
>  /* promote pages accessed through page tables */
>  static int folio_update_gen(struct folio *folio, int gen)
>  {
> -       unsigned long new_flags, old_flags = READ_ONCE(folio->flags);
> +       unsigned long new_flags, old_flags = READ_ONCE(folio->flags.f);
>
>         VM_WARN_ON_ONCE(gen >= MAX_NR_GENS);
>
>         /* see the comment on LRU_REFS_FLAGS */
>         if (!folio_test_referenced(folio) && !folio_test_workingset(folio)) {
> -               set_mask_bits(&folio->flags, LRU_REFS_MASK, BIT(PG_referenced));
> +               set_mask_bits(&folio->flags.f, LRU_REFS_MASK, BIT(PG_referenced));
>                 return -1;
>         }
>
> @@ -3274,7 +3274,7 @@ static int folio_update_gen(struct folio *folio, int gen)
>
>                 new_flags = old_flags & ~(LRU_GEN_MASK | LRU_REFS_FLAGS);
>                 new_flags |= ((gen + 1UL) << LRU_GEN_PGOFF) | BIT(PG_workingset);
> -       } while (!try_cmpxchg(&folio->flags, &old_flags, new_flags));
> +       } while (!try_cmpxchg(&folio->flags.f, &old_flags, new_flags));
>
>         return ((old_flags & LRU_GEN_MASK) >> LRU_GEN_PGOFF) - 1;
>  }
> @@ -3285,7 +3285,7 @@ static int folio_inc_gen(struct lruvec *lruvec, struct folio *folio, bool reclai
>         int type = folio_is_file_lru(folio);
>         struct lru_gen_folio *lrugen = &lruvec->lrugen;
>         int new_gen, old_gen = lru_gen_from_seq(lrugen->min_seq[type]);
> -       unsigned long new_flags, old_flags = READ_ONCE(folio->flags);
> +       unsigned long new_flags, old_flags = READ_ONCE(folio->flags.f);
>
>         VM_WARN_ON_ONCE_FOLIO(!(old_flags & LRU_GEN_MASK), folio);
>
> @@ -3302,7 +3302,7 @@ static int folio_inc_gen(struct lruvec *lruvec, struct folio *folio, bool reclai
>                 /* for folio_end_writeback() */
>                 if (reclaiming)
>                         new_flags |= BIT(PG_reclaim);
> -       } while (!try_cmpxchg(&folio->flags, &old_flags, new_flags));
> +       } while (!try_cmpxchg(&folio->flags.f, &old_flags, new_flags));
>
>         lru_gen_update_size(lruvec, folio, old_gen, new_gen);
>
> @@ -4553,7 +4553,7 @@ static bool isolate_folio(struct lruvec *lruvec, struct folio *folio, struct sca
>
>         /* see the comment on LRU_REFS_FLAGS */
>         if (!folio_test_referenced(folio))
> -               set_mask_bits(&folio->flags, LRU_REFS_MASK, 0);
> +               set_mask_bits(&folio->flags.f, LRU_REFS_MASK, 0);
>
>         /* for shrink_folio_list() */
>         folio_clear_reclaim(folio);
> @@ -4766,7 +4766,7 @@ static int evict_folios(unsigned long nr_to_scan, struct lruvec *lruvec,
>
>                 /* don't add rejected folios to the oldest generation */
>                 if (lru_gen_folio_seq(lruvec, folio, false) == min_seq[type])
> -                       set_mask_bits(&folio->flags, LRU_REFS_FLAGS, BIT(PG_active));
> +                       set_mask_bits(&folio->flags.f, LRU_REFS_FLAGS, BIT(PG_active));
>         }
>
>         spin_lock_irq(&lruvec->lru_lock);
> diff --git a/mm/workingset.c b/mm/workingset.c
> index 6e7f4cb1b9a7..68a76a91111f 100644
> --- a/mm/workingset.c
> +++ b/mm/workingset.c
> @@ -318,7 +318,7 @@ static void lru_gen_refault(struct folio *folio, void *shadow)
>                 folio_set_workingset(folio);
>                 mod_lruvec_state(lruvec, WORKINGSET_RESTORE_BASE + type, delta);
>         } else
> -               set_mask_bits(&folio->flags, LRU_REFS_MASK, (refs - 1UL) << LRU_REFS_PGOFF);
> +               set_mask_bits(&folio->flags.f, LRU_REFS_MASK, (refs - 1UL) << LRU_REFS_PGOFF);
>  unlock:
>         rcu_read_unlock();
>  }
> --
> 2.47.2
>
>

Hi.

I'm rebasing on mm-new, and I'm seeing the build error below after this patch:

./arch/arm64/include/asm/mte.h:207:2: error: operand of type 'typeof (_Generic((*&folio->flags), char: (char)0, unsigned char: (unsigned char)0, signed char: (signed char)0, unsigned short: (unsigned short)0, short: (short)0, unsigned int: (unsigned int)0, int: (int)0, unsigned long: (unsigned long)0, long: (long)0, unsigned long long: (unsigned long long)0, long long: (long long)0, default: (*&folio->flags)))' (aka 'memdesc_flags_t') where arithmetic or pointer type is required
  207 |         smp_cond_load_acquire(&folio->flags, VAL & (1UL << PG_mte_tagged));
      |         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
./arch/arm64/include/asm/barrier.h:217:3: note: expanded from macro 'smp_cond_load_acquire'
  217 |                 __cmpwait_relaxed(__PTR, VAL);                          \
      |                 ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
./arch/arm64/include/asm/cmpxchg.h:262:34: note: expanded from macro '__cmpwait_relaxed'
  262 |         __cmpwait((ptr), (unsigned long)(val), sizeof(*(ptr)))
      |                                         ^~~~~

Error is reproducible with:
clang --version
clang version 20.1.8 (Fedora 20.1.8-3.fc43)
Target: aarch64-redhat-linux-gnu
Thread model: posix
InstalledDir: /usr/bin
Configuration file: /etc/clang/aarch64-redhat-linux-gnu-clang.cfg


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 01/11] mm: Introduce memdesc_flags_t
  2025-08-19 17:49   ` Kairui Song
@ 2025-08-19 17:58     ` Matthew Wilcox
  2025-08-19 18:03       ` Kairui Song
  0 siblings, 1 reply; 29+ messages in thread
From: Matthew Wilcox @ 2025-08-19 17:58 UTC (permalink / raw)
  To: Kairui Song; +Cc: Andrew Morton, linux-mm

On Wed, Aug 20, 2025 at 01:49:54AM +0800, Kairui Song wrote:
> On Wed, Aug 6, 2025 at 2:20 AM Matthew Wilcox (Oracle)
> <willy@infradead.org> wrote:

[you really didn't need to quote the entire 900 lines of patch]

> I'm rebasing on mm-new, and I'm seeing the build error below after this patch:

See https://lore.kernel.org/linux-mm/aKMgPRLD-WnkPxYm@casper.infradead.org/


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 01/11] mm: Introduce memdesc_flags_t
  2025-08-19 17:58     ` Matthew Wilcox
@ 2025-08-19 18:03       ` Kairui Song
  0 siblings, 0 replies; 29+ messages in thread
From: Kairui Song @ 2025-08-19 18:03 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: Andrew Morton, linux-mm

On Wed, Aug 20, 2025 at 1:58 AM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Wed, Aug 20, 2025 at 01:49:54AM +0800, Kairui Song wrote:
> > On Wed, Aug 6, 2025 at 2:20 AM Matthew Wilcox (Oracle)
> > <willy@infradead.org> wrote:
>
> [you really didn't need to quote the entire 900 lines of patch]

Sorry, forgot to truncate that.

>
> > I'm rebasing on mm-new, and I'm seeing the build error below after this patch:
>
> See https://lore.kernel.org/linux-mm/aKMgPRLD-WnkPxYm@casper.infradead.org/

Thanks, this worked for me.


^ permalink raw reply	[flat|nested] 29+ messages in thread

end of thread, other threads:[~2025-08-19 18:03 UTC | newest]

Thread overview: 29+ messages (download: mbox.gz / follow: Atom feed
  -- links below jump to the message on this page --)
2025-08-05 17:22 [PATCH 00/11] Add and use memdesc_flags_t Matthew Wilcox (Oracle)
2025-08-05 17:22 ` [PATCH 01/11] mm: Introduce memdesc_flags_t Matthew Wilcox (Oracle)
2025-08-06 18:24   ` Zi Yan
2025-08-19 17:49   ` Kairui Song
2025-08-19 17:58     ` Matthew Wilcox
2025-08-19 18:03       ` Kairui Song
2025-08-05 17:22 ` [PATCH 02/11] mm: Convert page_to_section() to memdesc_section() Matthew Wilcox (Oracle)
2025-08-06 18:31   ` Zi Yan
2025-08-05 17:22 ` [PATCH 03/11] mm: Introduce memdesc_nid() Matthew Wilcox (Oracle)
2025-08-06 18:47   ` Zi Yan
2025-08-06 19:04     ` Matthew Wilcox
2025-08-06 19:07       ` Zi Yan
2025-08-05 17:22 ` [PATCH 04/11] mm: Introduce memdesc_zonenum() Matthew Wilcox (Oracle)
2025-08-06 18:57   ` Zi Yan
2025-08-05 17:22 ` [PATCH 05/11] slab: Use memdesc_flags_t Matthew Wilcox (Oracle)
2025-08-06 19:16   ` Zi Yan
2025-08-05 17:22 ` [PATCH 06/11] slab: Use memdesc_nid() Matthew Wilcox (Oracle)
2025-08-06 19:17   ` Zi Yan
2025-08-05 17:22 ` [PATCH 07/11] mm: Introduce memdesc_is_zone_device() Matthew Wilcox (Oracle)
2025-08-06 19:22   ` Zi Yan
2025-08-05 17:22 ` [PATCH 08/11] mm: Reimplement folio_is_device_private() Matthew Wilcox (Oracle)
2025-08-06 19:25   ` Zi Yan
2025-08-05 17:22 ` [PATCH 09/11] mm: Reimplement folio_is_device_coherent() Matthew Wilcox (Oracle)
2025-08-06 19:27   ` Zi Yan
2025-08-05 17:23 ` [PATCH 10/11] mm: Reimplement folio_is_fsdax() Matthew Wilcox (Oracle)
2025-08-06 19:27   ` Zi Yan
2025-08-05 17:23 ` [PATCH 11/11] mm: Add folio_is_pci_p2pdma() Matthew Wilcox (Oracle)
2025-08-05 21:40 ` [PATCH 00/11] Add and use memdesc_flags_t Shakeel Butt
2025-08-06 12:55   ` Matthew Wilcox

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).