* [PATCH 0/4] Increase the number of bits available in page_type
@ 2024-08-21 17:39 Matthew Wilcox (Oracle)
2024-08-21 17:39 ` [PATCH 1/4] printf: Remove %pGt support Matthew Wilcox (Oracle)
` (4 more replies)
0 siblings, 5 replies; 17+ messages in thread
From: Matthew Wilcox (Oracle) @ 2024-08-21 17:39 UTC (permalink / raw)
To: Andrew Morton
Cc: Matthew Wilcox (Oracle), David Hildenbrand, Hyeonggon Yoo,
linux-mm, Kent Overstreet
Kent wants more than 16 bits in page_type, so I resurrected this old patch
and expanded it a bit. It's a bit more efficient than our current scheme
(1 4-byte insn vs 3 insns of 13 bytes total) to test a single page type.
Survives a simple smoke test, but I haven't done any specific testing
of these changes.
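The single-byte test can be sketched in plain C. This is a simplified model with `SK_` names, not the kernel API; the constant values match the `PGTY_` values introduced in patch 3, and the byte compare is what lets the compiler emit one `cmpb` instead of a mask-and-compare pair:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Simplified model of the new encoding (not the kernel code): the type
 * lives in the top byte of the 32-bit page_type word, so testing for
 * one type is a single byte compare.
 */
enum pagetype_sketch {
	SK_buddy = 0xf0,	/* matches PGTY_buddy in patch 3 */
	SK_slab  = 0xf5,	/* matches PGTY_slab in patch 3 */
};

static bool is_type(unsigned int page_type, unsigned int type)
{
	return (page_type >> 24) == type;
}

static unsigned int set_type(unsigned int type, unsigned int data)
{
	/* the low 24 bits stay available for the type's owner */
	return (type << 24) | (data & 0xffffff);
}
```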
Matthew Wilcox (Oracle) (4):
printf: Remove %pGt support
mm: Introduce page_mapcount_is_type()
mm: Support only one page_type per page
zsmalloc: Use all available 24 bits of page_type
Documentation/core-api/printk-formats.rst | 4 +-
drivers/block/zram/Kconfig | 1 -
fs/proc/internal.h | 3 +-
include/linux/mm.h | 3 +-
include/linux/page-flags.h | 76 +++++++++++------------
include/trace/events/mmflags.h | 10 ---
kernel/vmcore_info.c | 8 +--
lib/test_printf.c | 26 --------
lib/vsprintf.c | 21 -------
mm/Kconfig | 10 +--
mm/debug.c | 31 ++++++---
mm/internal.h | 1 -
mm/zsmalloc.c | 15 ++---
13 files changed, 72 insertions(+), 137 deletions(-)
--
2.43.0
^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH 1/4] printf: Remove %pGt support
2024-08-21 17:39 [PATCH 0/4] Increase the number of bits available in page_type Matthew Wilcox (Oracle)
@ 2024-08-21 17:39 ` Matthew Wilcox (Oracle)
2024-08-21 21:00 ` David Hildenbrand
2024-08-21 17:39 ` [PATCH 2/4] mm: Introduce page_mapcount_is_type() Matthew Wilcox (Oracle)
` (3 subsequent siblings)
4 siblings, 1 reply; 17+ messages in thread
From: Matthew Wilcox (Oracle) @ 2024-08-21 17:39 UTC (permalink / raw)
To: Andrew Morton
Cc: Matthew Wilcox (Oracle), David Hildenbrand, Hyeonggon Yoo,
linux-mm, Kent Overstreet
An upcoming patch will convert page type from being a bitfield to
a single bit, so we will not be able to use %pG to print the page
type any more. The printing of the symbolic name will be restored
in that patch.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
Documentation/core-api/printk-formats.rst | 4 +---
include/trace/events/mmflags.h | 10 ---------
lib/test_printf.c | 26 -----------------------
lib/vsprintf.c | 21 ------------------
mm/debug.c | 2 +-
mm/internal.h | 1 -
6 files changed, 2 insertions(+), 62 deletions(-)
diff --git a/Documentation/core-api/printk-formats.rst b/Documentation/core-api/printk-formats.rst
index 4451ef501936..14e093da3ccd 100644
--- a/Documentation/core-api/printk-formats.rst
+++ b/Documentation/core-api/printk-formats.rst
@@ -576,13 +576,12 @@ The field width is passed by value, the bitmap is passed by reference.
Helper macros cpumask_pr_args() and nodemask_pr_args() are available to ease
printing cpumask and nodemask.
-Flags bitfields such as page flags, page_type, gfp_flags
+Flags bitfields such as page flags and gfp_flags
--------------------------------------------------------
::
%pGp 0x17ffffc0002036(referenced|uptodate|lru|active|private|node=0|zone=2|lastcpupid=0x1fffff)
- %pGt 0xffffff7f(buddy)
%pGg GFP_USER|GFP_DMA32|GFP_NOWARN
%pGv read|exec|mayread|maywrite|mayexec|denywrite
@@ -591,7 +590,6 @@ would construct the value. The type of flags is given by the third
character. Currently supported are:
- p - [p]age flags, expects value of type (``unsigned long *``)
- - t - page [t]ype, expects value of type (``unsigned int *``)
- v - [v]ma_flags, expects value of type (``unsigned long *``)
- g - [g]fp_flags, expects value of type (``gfp_t *``)
diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h
index b5c4da370a50..c151cc21d367 100644
--- a/include/trace/events/mmflags.h
+++ b/include/trace/events/mmflags.h
@@ -130,16 +130,6 @@ IF_HAVE_PG_ARCH_X(arch_3)
__def_pageflag_names \
) : "none"
-#define DEF_PAGETYPE_NAME(_name) { PG_##_name, __stringify(_name) }
-
-#define __def_pagetype_names \
- DEF_PAGETYPE_NAME(slab), \
- DEF_PAGETYPE_NAME(hugetlb), \
- DEF_PAGETYPE_NAME(offline), \
- DEF_PAGETYPE_NAME(guard), \
- DEF_PAGETYPE_NAME(table), \
- DEF_PAGETYPE_NAME(buddy)
-
#if defined(CONFIG_X86)
#define __VM_ARCH_SPECIFIC_1 {VM_PAT, "pat" }
#elif defined(CONFIG_PPC)
diff --git a/lib/test_printf.c b/lib/test_printf.c
index 965cb6f28527..8448b6d02bd9 100644
--- a/lib/test_printf.c
+++ b/lib/test_printf.c
@@ -641,26 +641,12 @@ page_flags_test(int section, int node, int zone, int last_cpupid,
test(cmp_buf, "%pGp", &flags);
}
-static void __init page_type_test(unsigned int page_type, const char *name,
- char *cmp_buf)
-{
- unsigned long size;
-
- size = scnprintf(cmp_buf, BUF_SIZE, "%#x(", page_type);
- if (page_type_has_type(page_type))
- size += scnprintf(cmp_buf + size, BUF_SIZE - size, "%s", name);
-
- snprintf(cmp_buf + size, BUF_SIZE - size, ")");
- test(cmp_buf, "%pGt", &page_type);
-}
-
static void __init
flags(void)
{
unsigned long flags;
char *cmp_buffer;
gfp_t gfp;
- unsigned int page_type;
cmp_buffer = kmalloc(BUF_SIZE, GFP_KERNEL);
if (!cmp_buffer)
@@ -700,18 +686,6 @@ flags(void)
gfp |= __GFP_HIGH;
test(cmp_buffer, "%pGg", &gfp);
- page_type = ~0;
- page_type_test(page_type, "", cmp_buffer);
-
- page_type = 10;
- page_type_test(page_type, "", cmp_buffer);
-
- page_type = ~PG_buddy;
- page_type_test(page_type, "buddy", cmp_buffer);
-
- page_type = ~(PG_table | PG_buddy);
- page_type_test(page_type, "table|buddy", cmp_buffer);
-
kfree(cmp_buffer);
}
diff --git a/lib/vsprintf.c b/lib/vsprintf.c
index 2d71b1115916..09f022ba1c05 100644
--- a/lib/vsprintf.c
+++ b/lib/vsprintf.c
@@ -2054,25 +2054,6 @@ char *format_page_flags(char *buf, char *end, unsigned long flags)
return buf;
}
-static
-char *format_page_type(char *buf, char *end, unsigned int page_type)
-{
- buf = number(buf, end, page_type, default_flag_spec);
-
- if (buf < end)
- *buf = '(';
- buf++;
-
- if (page_type_has_type(page_type))
- buf = format_flags(buf, end, ~page_type, pagetype_names);
-
- if (buf < end)
- *buf = ')';
- buf++;
-
- return buf;
-}
-
static noinline_for_stack
char *flags_string(char *buf, char *end, void *flags_ptr,
struct printf_spec spec, const char *fmt)
@@ -2086,8 +2067,6 @@ char *flags_string(char *buf, char *end, void *flags_ptr,
switch (fmt[1]) {
case 'p':
return format_page_flags(buf, end, *(unsigned long *)flags_ptr);
- case 't':
- return format_page_type(buf, end, *(unsigned int *)flags_ptr);
case 'v':
flags = *(unsigned long *)flags_ptr;
names = vmaflag_names;
diff --git a/mm/debug.c b/mm/debug.c
index 69e524c3e601..9f8e34537957 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -92,7 +92,7 @@ static void __dump_folio(struct folio *folio, struct page *page,
pr_warn("%sflags: %pGp%s\n", type, &folio->flags,
is_migrate_cma_folio(folio, pfn) ? " CMA" : "");
if (page_has_type(&folio->page))
- pr_warn("page_type: %pGt\n", &folio->page.page_type);
+ pr_warn("page_type: %x\n", folio->page.page_type);
print_hex_dump(KERN_WARNING, "raw: ", DUMP_PREFIX_NONE, 32,
sizeof(unsigned long), page,
diff --git a/mm/internal.h b/mm/internal.h
index cde62b16b71c..a70547d33e0e 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1153,7 +1153,6 @@ static inline void flush_tlb_batched_pending(struct mm_struct *mm)
#endif /* CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH */
extern const struct trace_print_flags pageflag_names[];
-extern const struct trace_print_flags pagetype_names[];
extern const struct trace_print_flags vmaflag_names[];
extern const struct trace_print_flags gfpflag_names[];
--
2.43.0
* [PATCH 2/4] mm: Introduce page_mapcount_is_type()
2024-08-21 17:39 [PATCH 0/4] Increase the number of bits available in page_type Matthew Wilcox (Oracle)
2024-08-21 17:39 ` [PATCH 1/4] printf: Remove %pGt support Matthew Wilcox (Oracle)
@ 2024-08-21 17:39 ` Matthew Wilcox (Oracle)
2024-08-21 21:01 ` David Hildenbrand
2024-08-21 17:39 ` [PATCH 3/4] mm: Support only one page_type per page Matthew Wilcox (Oracle)
` (2 subsequent siblings)
4 siblings, 1 reply; 17+ messages in thread
From: Matthew Wilcox (Oracle) @ 2024-08-21 17:39 UTC (permalink / raw)
To: Andrew Morton
Cc: Matthew Wilcox (Oracle), David Hildenbrand, Hyeonggon Yoo,
linux-mm, Kent Overstreet
Resolve the awkward "and add one to this opaque constant" test into a
self-documenting inline function.
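The conversion can be checked in isolation. This standalone sketch (a model, not the kernel code) uses the pre-patch value of PAGE_MAPCOUNT_RESERVE (~0x0000ffff); it shows that the new helper is exactly the old `mapcount < PAGE_MAPCOUNT_RESERVE + 1` test:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Model of the helper below, using the pre-patch constant. _mapcount
 * is stored biased by -1, so callers pass atomic_read(&_mapcount) + 1.
 * Out-of-range unsigned-to-int conversion is assumed to wrap, as the
 * kernel relies on.
 */
#define PAGE_MAPCOUNT_RESERVE	(~0x0000ffff)

static bool page_type_has_type(int page_type)
{
	return page_type < PAGE_MAPCOUNT_RESERVE;
}

/* This takes a mapcount which is one more than page->_mapcount */
static bool page_mapcount_is_type(unsigned int mapcount)
{
	return page_type_has_type(mapcount - 1);
}
```

An unmapped page (_mapcount == -1, so mapcount == 0) and a genuinely mapped page both test false; a typed page such as ~PG_slab (0xfdffffff) tests true.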
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
fs/proc/internal.h | 3 +--
include/linux/mm.h | 3 +--
include/linux/page-flags.h | 12 +++++++++---
3 files changed, 11 insertions(+), 7 deletions(-)
diff --git a/fs/proc/internal.h b/fs/proc/internal.h
index a8a8576d8592..cc520168f8b6 100644
--- a/fs/proc/internal.h
+++ b/fs/proc/internal.h
@@ -166,8 +166,7 @@ static inline int folio_precise_page_mapcount(struct folio *folio,
{
int mapcount = atomic_read(&page->_mapcount) + 1;
- /* Handle page_has_type() pages */
- if (mapcount < PAGE_MAPCOUNT_RESERVE + 1)
+ if (page_mapcount_is_type(mapcount))
mapcount = 0;
if (folio_test_large(folio))
mapcount += folio_entire_mapcount(folio);
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 00501f85f45f..47aa932075a8 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1232,8 +1232,7 @@ static inline int folio_mapcount(const struct folio *folio)
if (likely(!folio_test_large(folio))) {
mapcount = atomic_read(&folio->_mapcount) + 1;
- /* Handle page_has_type() pages */
- if (mapcount < PAGE_MAPCOUNT_RESERVE + 1)
+ if (page_mapcount_is_type(mapcount))
mapcount = 0;
return mapcount;
}
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 17175a328e6b..998a99441e4f 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -956,12 +956,18 @@ enum pagetype {
#define folio_test_type(folio, flag) \
((READ_ONCE(folio->page.page_type) & (PAGE_TYPE_BASE | flag)) == PAGE_TYPE_BASE)
-static inline int page_type_has_type(unsigned int page_type)
+static inline bool page_type_has_type(int page_type)
{
- return (int)page_type < PAGE_MAPCOUNT_RESERVE;
+ return page_type < PAGE_MAPCOUNT_RESERVE;
}
-static inline int page_has_type(const struct page *page)
+/* This takes a mapcount which is one more than page->_mapcount */
+static inline bool page_mapcount_is_type(unsigned int mapcount)
+{
+ return page_type_has_type(mapcount - 1);
+}
+
+static inline bool page_has_type(const struct page *page)
{
return page_type_has_type(READ_ONCE(page->page_type));
}
--
2.43.0
* [PATCH 3/4] mm: Support only one page_type per page
2024-08-21 17:39 [PATCH 0/4] Increase the number of bits available in page_type Matthew Wilcox (Oracle)
2024-08-21 17:39 ` [PATCH 1/4] printf: Remove %pGt support Matthew Wilcox (Oracle)
2024-08-21 17:39 ` [PATCH 2/4] mm: Introduce page_mapcount_is_type() Matthew Wilcox (Oracle)
@ 2024-08-21 17:39 ` Matthew Wilcox (Oracle)
2024-08-21 21:05 ` David Hildenbrand
2024-08-28 3:35 ` Kefeng Wang
2024-08-21 17:39 ` [PATCH 4/4] zsmalloc: Use all available 24 bits of page_type Matthew Wilcox (Oracle)
2024-08-21 21:15 ` [PATCH 0/4] Increase the number of bits available in page_type Kent Overstreet
4 siblings, 2 replies; 17+ messages in thread
From: Matthew Wilcox (Oracle) @ 2024-08-21 17:39 UTC (permalink / raw)
To: Andrew Morton
Cc: Matthew Wilcox (Oracle), David Hildenbrand, Hyeonggon Yoo,
linux-mm, Kent Overstreet
By using a few values in the top byte, users of page_type can store up
to 24 bits of additional data in page_type. It also reduces the code
size as (with replacement of READ_ONCE() with data_race()), the kernel
can check just a single byte. eg:
ffffffff811e3a79: 8b 47 30 mov 0x30(%rdi),%eax
ffffffff811e3a7c: 55 push %rbp
ffffffff811e3a7d: 48 89 e5 mov %rsp,%rbp
ffffffff811e3a80: 25 00 00 00 82 and $0x82000000,%eax
ffffffff811e3a85: 3d 00 00 00 80 cmp $0x80000000,%eax
ffffffff811e3a8a: 74 4d je ffffffff811e3ad9 <folio_mapping+0x69>
becomes:
ffffffff811e3a69: 80 7f 33 f5 cmpb $0xf5,0x33(%rdi)
ffffffff811e3a6d: 55 push %rbp
ffffffff811e3a6e: 48 89 e5 mov %rsp,%rbp
ffffffff811e3a71: 74 4d je ffffffff811e3ac0 <folio_mapping+0x60>
replacing three instructions with one.
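The set/clear protocol implied by the new macros can be modelled as follows (a sketch with simplified `sk_` names, not the kernel code): an untyped page holds page_type == UINT_MAX (-1, the _mapcount bias), __SetPageFoo writes the type into the top byte with the low 24 bits zeroed, and __ClearPageFoo restores UINT_MAX, discarding any per-type data in the low bits:

```c
#include <assert.h>
#include <limits.h>

/* Toy stand-in for struct page's page_type field */
struct page_sketch { unsigned int page_type; };

#define SK_PGTY_slab	0xf5u	/* matches PGTY_slab below */

static void sk_set_slab(struct page_sketch *p)
{
	assert(p->page_type == UINT_MAX);	/* must be untyped */
	p->page_type = SK_PGTY_slab << 24;	/* low 24 bits now usable */
}

static void sk_clear_slab(struct page_sketch *p)
{
	assert(p->page_type >> 24 == SK_PGTY_slab);
	p->page_type = UINT_MAX;	/* low bits are overwritten too */
}
```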
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
include/linux/page-flags.h | 68 ++++++++++++++++----------------------
kernel/vmcore_info.c | 8 ++---
mm/debug.c | 31 +++++++++++++----
3 files changed, 56 insertions(+), 51 deletions(-)
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 998a99441e4f..0c738bda5d98 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -923,42 +923,29 @@ PAGEFLAG_FALSE(HasHWPoisoned, has_hwpoisoned)
#endif
/*
- * For pages that are never mapped to userspace,
- * page_type may be used. Because it is initialised to -1, we invert the
- * sense of the bit, so __SetPageFoo *clears* the bit used for PageFoo, and
- * __ClearPageFoo *sets* the bit used for PageFoo. We reserve a few high and
- * low bits so that an underflow or overflow of _mapcount won't be
- * mistaken for a page type value.
+ * For pages that do not use mapcount, page_type may be used.
+ * The low 24 bits of pagetype may be used for your own purposes, as long
+ * as you are careful to not affect the top 8 bits. The low bits of
+ * pagetype will be overwritten when you clear the page_type from the page.
*/
-
enum pagetype {
- PG_buddy = 0x40000000,
- PG_offline = 0x20000000,
- PG_table = 0x10000000,
- PG_guard = 0x08000000,
- PG_hugetlb = 0x04000000,
- PG_slab = 0x02000000,
- PG_zsmalloc = 0x01000000,
- PG_unaccepted = 0x00800000,
-
- PAGE_TYPE_BASE = 0x80000000,
-
- /*
- * Reserve 0xffff0000 - 0xfffffffe to catch _mapcount underflows and
- * allow owners that set a type to reuse the lower 16 bit for their own
- * purposes.
- */
- PAGE_MAPCOUNT_RESERVE = ~0x0000ffff,
+ /* 0x00-0x7f are positive numbers, ie mapcount */
+ /* Reserve 0x80-0xef for mapcount overflow. */
+ PGTY_buddy = 0xf0,
+ PGTY_offline = 0xf1,
+ PGTY_table = 0xf2,
+ PGTY_guard = 0xf3,
+ PGTY_hugetlb = 0xf4,
+ PGTY_slab = 0xf5,
+ PGTY_zsmalloc = 0xf6,
+ PGTY_unaccepted = 0xf7,
+
+ PGTY_mapcount_underflow = 0xff
};
-#define PageType(page, flag) \
- ((READ_ONCE(page->page_type) & (PAGE_TYPE_BASE | flag)) == PAGE_TYPE_BASE)
-#define folio_test_type(folio, flag) \
- ((READ_ONCE(folio->page.page_type) & (PAGE_TYPE_BASE | flag)) == PAGE_TYPE_BASE)
-
static inline bool page_type_has_type(int page_type)
{
- return page_type < PAGE_MAPCOUNT_RESERVE;
+ return page_type < (PGTY_mapcount_underflow << 24);
}
/* This takes a mapcount which is one more than page->_mapcount */
@@ -969,40 +956,41 @@ static inline bool page_mapcount_is_type(unsigned int mapcount)
static inline bool page_has_type(const struct page *page)
{
- return page_type_has_type(READ_ONCE(page->page_type));
+ return page_mapcount_is_type(data_race(page->page_type));
}
#define FOLIO_TYPE_OPS(lname, fname) \
-static __always_inline bool folio_test_##fname(const struct folio *folio)\
+static __always_inline bool folio_test_##fname(const struct folio *folio) \
{ \
- return folio_test_type(folio, PG_##lname); \
+ return data_race(folio->page.page_type >> 24) == PGTY_##lname; \
} \
static __always_inline void __folio_set_##fname(struct folio *folio) \
{ \
- VM_BUG_ON_FOLIO(!folio_test_type(folio, 0), folio); \
- folio->page.page_type &= ~PG_##lname; \
+ VM_BUG_ON_FOLIO(data_race(folio->page.page_type) != UINT_MAX, \
+ folio); \
+ folio->page.page_type = PGTY_##lname << 24; \
} \
static __always_inline void __folio_clear_##fname(struct folio *folio) \
{ \
VM_BUG_ON_FOLIO(!folio_test_##fname(folio), folio); \
- folio->page.page_type |= PG_##lname; \
+ folio->page.page_type = UINT_MAX; \
}
#define PAGE_TYPE_OPS(uname, lname, fname) \
FOLIO_TYPE_OPS(lname, fname) \
static __always_inline int Page##uname(const struct page *page) \
{ \
- return PageType(page, PG_##lname); \
+ return data_race(page->page_type >> 24) == PGTY_##lname; \
} \
static __always_inline void __SetPage##uname(struct page *page) \
{ \
- VM_BUG_ON_PAGE(!PageType(page, 0), page); \
- page->page_type &= ~PG_##lname; \
+ VM_BUG_ON_PAGE(data_race(page->page_type) != UINT_MAX, page); \
+ page->page_type = PGTY_##lname << 24; \
} \
static __always_inline void __ClearPage##uname(struct page *page) \
{ \
VM_BUG_ON_PAGE(!Page##uname(page), page); \
- page->page_type |= PG_##lname; \
+ page->page_type = UINT_MAX; \
}
/*
diff --git a/kernel/vmcore_info.c b/kernel/vmcore_info.c
index 8b4f8cc2e0ec..1fec61603ef3 100644
--- a/kernel/vmcore_info.c
+++ b/kernel/vmcore_info.c
@@ -198,17 +198,17 @@ static int __init crash_save_vmcoreinfo_init(void)
VMCOREINFO_NUMBER(PG_private);
VMCOREINFO_NUMBER(PG_swapcache);
VMCOREINFO_NUMBER(PG_swapbacked);
-#define PAGE_SLAB_MAPCOUNT_VALUE (~PG_slab)
+#define PAGE_SLAB_MAPCOUNT_VALUE (PGTY_slab << 24)
VMCOREINFO_NUMBER(PAGE_SLAB_MAPCOUNT_VALUE);
#ifdef CONFIG_MEMORY_FAILURE
VMCOREINFO_NUMBER(PG_hwpoison);
#endif
VMCOREINFO_NUMBER(PG_head_mask);
-#define PAGE_BUDDY_MAPCOUNT_VALUE (~PG_buddy)
+#define PAGE_BUDDY_MAPCOUNT_VALUE (PGTY_buddy << 24)
VMCOREINFO_NUMBER(PAGE_BUDDY_MAPCOUNT_VALUE);
-#define PAGE_HUGETLB_MAPCOUNT_VALUE (~PG_hugetlb)
+#define PAGE_HUGETLB_MAPCOUNT_VALUE (PGTY_hugetlb << 24)
VMCOREINFO_NUMBER(PAGE_HUGETLB_MAPCOUNT_VALUE);
-#define PAGE_OFFLINE_MAPCOUNT_VALUE (~PG_offline)
+#define PAGE_OFFLINE_MAPCOUNT_VALUE (PGTY_offline << 24)
VMCOREINFO_NUMBER(PAGE_OFFLINE_MAPCOUNT_VALUE);
#ifdef CONFIG_KALLSYMS
diff --git a/mm/debug.c b/mm/debug.c
index 9f8e34537957..aa57d3ffd4ed 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -36,11 +36,6 @@ const struct trace_print_flags pageflag_names[] = {
{0, NULL}
};
-const struct trace_print_flags pagetype_names[] = {
- __def_pagetype_names,
- {0, NULL}
-};
-
const struct trace_print_flags gfpflag_names[] = {
__def_gfpflag_names,
{0, NULL}
@@ -51,6 +46,27 @@ const struct trace_print_flags vmaflag_names[] = {
{0, NULL}
};
+#define DEF_PAGETYPE_NAME(_name) [PGTY_##_name - 0xf0] = __stringify(_name)
+
+static const char *page_type_names[] = {
+ DEF_PAGETYPE_NAME(slab),
+ DEF_PAGETYPE_NAME(hugetlb),
+ DEF_PAGETYPE_NAME(offline),
+ DEF_PAGETYPE_NAME(guard),
+ DEF_PAGETYPE_NAME(table),
+ DEF_PAGETYPE_NAME(buddy),
+ DEF_PAGETYPE_NAME(unaccepted),
+};
+
+static const char *page_type_name(unsigned int page_type)
+{
+ unsigned i = (page_type >> 24) - 0xf0;
+
+ if (i >= ARRAY_SIZE(page_type_names))
+ return "unknown";
+ return page_type_names[i];
+}
+
static void __dump_folio(struct folio *folio, struct page *page,
unsigned long pfn, unsigned long idx)
{
@@ -58,7 +74,7 @@ static void __dump_folio(struct folio *folio, struct page *page,
int mapcount = atomic_read(&page->_mapcount);
char *type = "";
- mapcount = page_type_has_type(mapcount) ? 0 : mapcount + 1;
+ mapcount = page_mapcount_is_type(mapcount) ? 0 : mapcount + 1;
pr_warn("page: refcount:%d mapcount:%d mapping:%p index:%#lx pfn:%#lx\n",
folio_ref_count(folio), mapcount, mapping,
folio->index + idx, pfn);
@@ -92,7 +108,8 @@ static void __dump_folio(struct folio *folio, struct page *page,
pr_warn("%sflags: %pGp%s\n", type, &folio->flags,
is_migrate_cma_folio(folio, pfn) ? " CMA" : "");
if (page_has_type(&folio->page))
- pr_warn("page_type: %x\n", folio->page.page_type);
+ pr_warn("page_type: %x(%s)\n", folio->page.page_type >> 24,
+ page_type_name(folio->page.page_type));
print_hex_dump(KERN_WARNING, "raw: ", DUMP_PREFIX_NONE, 32,
sizeof(unsigned long), page,
--
2.43.0
* [PATCH 4/4] zsmalloc: Use all available 24 bits of page_type
2024-08-21 17:39 [PATCH 0/4] Increase the number of bits available in page_type Matthew Wilcox (Oracle)
` (2 preceding siblings ...)
2024-08-21 17:39 ` [PATCH 3/4] mm: Support only one page_type per page Matthew Wilcox (Oracle)
@ 2024-08-21 17:39 ` Matthew Wilcox (Oracle)
2024-08-21 21:06 ` David Hildenbrand
2024-08-21 21:15 ` [PATCH 0/4] Increase the number of bits available in page_type Kent Overstreet
4 siblings, 1 reply; 17+ messages in thread
From: Matthew Wilcox (Oracle) @ 2024-08-21 17:39 UTC (permalink / raw)
To: Andrew Morton
Cc: Matthew Wilcox (Oracle), David Hildenbrand, Hyeonggon Yoo,
linux-mm, Kent Overstreet
Now that we have an extra 8 bits, we don't need to limit ourselves to
supporting a 64KiB page size. I'm sure both Hexagon users are grateful,
but it does reduce complexity a little. We can also remove
reset_first_obj_offset() as calling __ClearPageZsmalloc() will now reset
all 32 bits of page_type.
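The widened offset packing can be sketched as follows (a standalone model with `sk_` names, not the zsmalloc code): the first-object offset shares page_type with PGTY_zsmalloc in the top byte, so the owner's mask simply grows from 16 to 24 bits, and clearing the type rewrites all 32 bits at once:

```c
#include <assert.h>

#define SK_PGTY_zsmalloc		0xf6u		/* matches PGTY_zsmalloc */
#define SK_FIRST_OBJ_PAGE_TYPE_MASK	0xffffffu	/* was 0xffff */

static unsigned int sk_set_first_obj_offset(unsigned int page_type,
					    unsigned int offset)
{
	assert(!(offset & ~SK_FIRST_OBJ_PAGE_TYPE_MASK));
	page_type &= ~SK_FIRST_OBJ_PAGE_TYPE_MASK;	/* keep the top byte */
	return page_type | offset;
}

static unsigned int sk_get_first_obj_offset(unsigned int page_type)
{
	return page_type & SK_FIRST_OBJ_PAGE_TYPE_MASK;
}
```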
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
drivers/block/zram/Kconfig | 1 -
mm/Kconfig | 10 ++--------
mm/zsmalloc.c | 15 ++++-----------
3 files changed, 6 insertions(+), 20 deletions(-)
diff --git a/drivers/block/zram/Kconfig b/drivers/block/zram/Kconfig
index 40e035468de2..6aea609b795c 100644
--- a/drivers/block/zram/Kconfig
+++ b/drivers/block/zram/Kconfig
@@ -2,7 +2,6 @@
config ZRAM
tristate "Compressed RAM block device support"
depends on BLOCK && SYSFS && MMU
- depends on HAVE_ZSMALLOC
select ZSMALLOC
help
Creates virtual block devices called /dev/zramX (X = 0, 1, ...).
diff --git a/mm/Kconfig b/mm/Kconfig
index 3936fe4d26d9..5946dcdcaeda 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -128,7 +128,7 @@ config ZSWAP_COMPRESSOR_DEFAULT
choice
prompt "Default allocator"
depends on ZSWAP
- default ZSWAP_ZPOOL_DEFAULT_ZSMALLOC if HAVE_ZSMALLOC
+ default ZSWAP_ZPOOL_DEFAULT_ZSMALLOC if MMU
default ZSWAP_ZPOOL_DEFAULT_ZBUD
help
Selects the default allocator for the compressed cache for
@@ -154,7 +154,6 @@ config ZSWAP_ZPOOL_DEFAULT_Z3FOLD
config ZSWAP_ZPOOL_DEFAULT_ZSMALLOC
bool "zsmalloc"
- depends on HAVE_ZSMALLOC
select ZSMALLOC
help
Use the zsmalloc allocator as the default allocator.
@@ -187,15 +186,10 @@ config Z3FOLD
page. It is a ZBUD derivative so the simplicity and determinism are
still there.
-config HAVE_ZSMALLOC
- def_bool y
- depends on MMU
- depends on PAGE_SIZE_LESS_THAN_256KB # we want <= 64 KiB
-
config ZSMALLOC
tristate
prompt "N:1 compression allocator (zsmalloc)" if ZSWAP
- depends on HAVE_ZSMALLOC
+ depends on MMU
help
zsmalloc is a slab-based memory allocator designed to store
pages of various compression levels efficiently. It achieves
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 2d3163e4da96..73a3ec5b21ad 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -20,7 +20,7 @@
* page->index: links together all component pages of a zspage
* For the huge page, this is always 0, so we use this field
* to store handle.
- * page->page_type: PG_zsmalloc, lower 16 bit locate the first object
+ * page->page_type: PGTY_zsmalloc, lower 24 bits locate the first object
* offset in a subpage of a zspage
*
* Usage of struct page flags:
@@ -452,13 +452,7 @@ static inline struct page *get_first_page(struct zspage *zspage)
return first_page;
}
-#define FIRST_OBJ_PAGE_TYPE_MASK 0xffff
-
-static inline void reset_first_obj_offset(struct page *page)
-{
- VM_WARN_ON_ONCE(!PageZsmalloc(page));
- page->page_type |= FIRST_OBJ_PAGE_TYPE_MASK;
-}
+#define FIRST_OBJ_PAGE_TYPE_MASK 0xffffff
static inline unsigned int get_first_obj_offset(struct page *page)
{
@@ -468,8 +462,8 @@ static inline unsigned int get_first_obj_offset(struct page *page)
static inline void set_first_obj_offset(struct page *page, unsigned int offset)
{
- /* With 16 bit available, we can support offsets into 64 KiB pages. */
- BUILD_BUG_ON(PAGE_SIZE > SZ_64K);
+ /* With 24 bits available, we can support offsets into 16 MiB pages. */
+ BUILD_BUG_ON(PAGE_SIZE > SZ_16M);
VM_WARN_ON_ONCE(!PageZsmalloc(page));
VM_WARN_ON_ONCE(offset & ~FIRST_OBJ_PAGE_TYPE_MASK);
page->page_type &= ~FIRST_OBJ_PAGE_TYPE_MASK;
@@ -808,7 +802,6 @@ static void reset_page(struct page *page)
ClearPagePrivate(page);
set_page_private(page, 0);
page->index = 0;
- reset_first_obj_offset(page);
__ClearPageZsmalloc(page);
}
--
2.43.0
* Re: [PATCH 1/4] printf: Remove %pGt support
2024-08-21 17:39 ` [PATCH 1/4] printf: Remove %pGt support Matthew Wilcox (Oracle)
@ 2024-08-21 21:00 ` David Hildenbrand
2024-08-21 21:08 ` Matthew Wilcox
0 siblings, 1 reply; 17+ messages in thread
From: David Hildenbrand @ 2024-08-21 21:00 UTC (permalink / raw)
To: Matthew Wilcox (Oracle), Andrew Morton
Cc: Hyeonggon Yoo, linux-mm, Kent Overstreet
On 21.08.24 19:39, Matthew Wilcox (Oracle) wrote:
> An upcoming patch will convert page type from being a bitfield to
> a single bit
Not sure what you are trying to say with the "a single bit". Did you
mean "single byte" ?
In any case
Acked-by: David Hildenbrand <david@redhat.com>
--
Cheers,
David / dhildenb
* Re: [PATCH 2/4] mm: Introduce page_mapcount_is_type()
2024-08-21 17:39 ` [PATCH 2/4] mm: Introduce page_mapcount_is_type() Matthew Wilcox (Oracle)
@ 2024-08-21 21:01 ` David Hildenbrand
0 siblings, 0 replies; 17+ messages in thread
From: David Hildenbrand @ 2024-08-21 21:01 UTC (permalink / raw)
To: Matthew Wilcox (Oracle), Andrew Morton
Cc: Hyeonggon Yoo, linux-mm, Kent Overstreet
On 21.08.24 19:39, Matthew Wilcox (Oracle) wrote:
> Resolve the awkward "and add one to this opaque constant" test into a
> self-documenting inline function.
Acked-by: David Hildenbrand <david@redhat.com>
--
Cheers,
David / dhildenb
* Re: [PATCH 3/4] mm: Support only one page_type per page
2024-08-21 17:39 ` [PATCH 3/4] mm: Support only one page_type per page Matthew Wilcox (Oracle)
@ 2024-08-21 21:05 ` David Hildenbrand
2024-08-28 3:35 ` Kefeng Wang
1 sibling, 0 replies; 17+ messages in thread
From: David Hildenbrand @ 2024-08-21 21:05 UTC (permalink / raw)
To: Matthew Wilcox (Oracle), Andrew Morton
Cc: Hyeonggon Yoo, linux-mm, Kent Overstreet
On 21.08.24 19:39, Matthew Wilcox (Oracle) wrote:
> By using a few values in the top byte, users of page_type can store up
> to 24 bits of additional data in page_type. It also reduces the code
> size as (with replacement of READ_ONCE() with data_race()), the kernel
> can check just a single byte. eg:
>
> ffffffff811e3a79: 8b 47 30 mov 0x30(%rdi),%eax
> ffffffff811e3a7c: 55 push %rbp
> ffffffff811e3a7d: 48 89 e5 mov %rsp,%rbp
> ffffffff811e3a80: 25 00 00 00 82 and $0x82000000,%eax
> ffffffff811e3a85: 3d 00 00 00 80 cmp $0x80000000,%eax
> ffffffff811e3a8a: 74 4d je ffffffff811e3ad9 <folio_mapping+0x69>
>
> becomes:
>
> ffffffff811e3a69: 80 7f 33 f5 cmpb $0xf5,0x33(%rdi)
> ffffffff811e3a6d: 55 push %rbp
> ffffffff811e3a6e: 48 89 e5 mov %rsp,%rbp
> ffffffff811e3a71: 74 4d je ffffffff811e3ac0 <folio_mapping+0x60>
>
> replacing three instructions with one.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: David Hildenbrand <david@redhat.com>
--
Cheers,
David / dhildenb
* Re: [PATCH 4/4] zsmalloc: Use all available 24 bits of page_type
2024-08-21 17:39 ` [PATCH 4/4] zsmalloc: Use all available 24 bits of page_type Matthew Wilcox (Oracle)
@ 2024-08-21 21:06 ` David Hildenbrand
0 siblings, 0 replies; 17+ messages in thread
From: David Hildenbrand @ 2024-08-21 21:06 UTC (permalink / raw)
To: Matthew Wilcox (Oracle), Andrew Morton
Cc: Hyeonggon Yoo, linux-mm, Kent Overstreet
On 21.08.24 19:39, Matthew Wilcox (Oracle) wrote:
> Now that we have an extra 8 bits, we don't need to limit ourselves to
> supporting a 64KiB page size. I'm sure both Hexagon users are grateful,
> but it does reduce complexity a little. We can also remove
> reset_first_obj_offset() as calling __ClearPageZsmalloc() will now reset
> all 32 bits of page_type.
Acked-by: David Hildenbrand <david@redhat.com>
--
Cheers,
David / dhildenb
* Re: [PATCH 1/4] printf: Remove %pGt support
2024-08-21 21:00 ` David Hildenbrand
@ 2024-08-21 21:08 ` Matthew Wilcox
0 siblings, 0 replies; 17+ messages in thread
From: Matthew Wilcox @ 2024-08-21 21:08 UTC (permalink / raw)
To: David Hildenbrand; +Cc: Andrew Morton, Hyeonggon Yoo, linux-mm, Kent Overstreet
On Wed, Aug 21, 2024 at 11:00:23PM +0200, David Hildenbrand wrote:
> On 21.08.24 19:39, Matthew Wilcox (Oracle) wrote:
> > An upcoming patch will convert page type from being a bitfield to
> > a single bit
>
> Not sure what you are trying to say with the "a single bit". Did you mean
> "single byte" ?
... I did ...
> In any case
>
> Acked-by: David Hildenbrand <david@redhat.com>
Thanks!
* Re: [PATCH 0/4] Increase the number of bits available in page_type
2024-08-21 17:39 [PATCH 0/4] Increase the number of bits available in page_type Matthew Wilcox (Oracle)
` (3 preceding siblings ...)
2024-08-21 17:39 ` [PATCH 4/4] zsmalloc: Use all available 24 bits of page_type Matthew Wilcox (Oracle)
@ 2024-08-21 21:15 ` Kent Overstreet
4 siblings, 0 replies; 17+ messages in thread
From: Kent Overstreet @ 2024-08-21 21:15 UTC (permalink / raw)
To: Matthew Wilcox (Oracle)
Cc: Andrew Morton, David Hildenbrand, Hyeonggon Yoo, linux-mm
On Wed, Aug 21, 2024 at 06:39:08PM GMT, Matthew Wilcox (Oracle) wrote:
> Kent wants more than 16 bits in page_type, so I resurrected this old patch
> and expanded it a bit. It's a bit more efficient than our current scheme
> (1 4-byte insn vs 3 insns of 13 bytes total) to test a single page type.
> Survives a simple smoke test, but I haven't done any specific testing
> of these changes.
not page_type itself, but - if we can get 24 bits for that field in
struct slab, we can add the object size there and make __ksize() faster,
which we can use to get actual numbers on amount of memory stranded by
RCU (and faster __ksize() will no doubt be useful in other places...)
* Re: [PATCH 3/4] mm: Support only one page_type per page
2024-08-21 17:39 ` [PATCH 3/4] mm: Support only one page_type per page Matthew Wilcox (Oracle)
2024-08-21 21:05 ` David Hildenbrand
@ 2024-08-28 3:35 ` Kefeng Wang
2025-08-01 2:43 ` Matthew Wilcox
1 sibling, 1 reply; 17+ messages in thread
From: Kefeng Wang @ 2024-08-28 3:35 UTC (permalink / raw)
To: Matthew Wilcox (Oracle), Andrew Morton
Cc: David Hildenbrand, Hyeonggon Yoo, linux-mm, Kent Overstreet
Hi Matthew,
On 2024/8/22 1:39, Matthew Wilcox (Oracle) wrote:
> By using a few values in the top byte, users of page_type can store up
> to 24 bits of additional data in page_type. It also reduces the code
> size as (with replacement of READ_ONCE() with data_race()), the kernel
> can check just a single byte. eg:
>
> ffffffff811e3a79: 8b 47 30 mov 0x30(%rdi),%eax
> ffffffff811e3a7c: 55 push %rbp
> ffffffff811e3a7d: 48 89 e5 mov %rsp,%rbp
> ffffffff811e3a80: 25 00 00 00 82 and $0x82000000,%eax
> ffffffff811e3a85: 3d 00 00 00 80 cmp $0x80000000,%eax
> ffffffff811e3a8a: 74 4d je ffffffff811e3ad9 <folio_mapping+0x69>
>
> becomes:
>
> ffffffff811e3a69: 80 7f 33 f5 cmpb $0xf5,0x33(%rdi)
> ffffffff811e3a6d: 55 push %rbp
> ffffffff811e3a6e: 48 89 e5 mov %rsp,%rbp
> ffffffff811e3a71: 74 4d je ffffffff811e3ac0 <folio_mapping+0x60>
>
> replacing three instructions with one.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
> include/linux/page-flags.h | 68 ++++++++++++++++----------------------
> kernel/vmcore_info.c | 8 ++---
> mm/debug.c | 31 +++++++++++++----
> 3 files changed, 56 insertions(+), 51 deletions(-)
>
> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> index 998a99441e4f..0c738bda5d98 100644
> --- a/include/linux/page-flags.h
> +++ b/include/linux/page-flags.h
> @@ -923,42 +923,29 @@ PAGEFLAG_FALSE(HasHWPoisoned, has_hwpoisoned)
> #endif
>
> /*
> - * For pages that are never mapped to userspace,
> - * page_type may be used. Because it is initialised to -1, we invert the
> - * sense of the bit, so __SetPageFoo *clears* the bit used for PageFoo, and
> - * __ClearPageFoo *sets* the bit used for PageFoo. We reserve a few high and
> - * low bits so that an underflow or overflow of _mapcount won't be
> - * mistaken for a page type value.
> + * For pages that do not use mapcount, page_type may be used.
> + * The low 24 bits of pagetype may be used for your own purposes, as long
> + * as you are careful to not affect the top 8 bits. The low bits of
> + * pagetype will be overwritten when you clear the page_type from the page.
> */
> -
> enum pagetype {
> - PG_buddy = 0x40000000,
> - PG_offline = 0x20000000,
> - PG_table = 0x10000000,
> - PG_guard = 0x08000000,
> - PG_hugetlb = 0x04000000,
> - PG_slab = 0x02000000,
> - PG_zsmalloc = 0x01000000,
> - PG_unaccepted = 0x00800000,
> -
> - PAGE_TYPE_BASE = 0x80000000,
> -
> - /*
> - * Reserve 0xffff0000 - 0xfffffffe to catch _mapcount underflows and
> - * allow owners that set a type to reuse the lower 16 bit for their own
> - * purposes.
> - */
> - PAGE_MAPCOUNT_RESERVE = ~0x0000ffff,
> + /* 0x00-0x7f are positive numbers, ie mapcount */
> + /* Reserve 0x80-0xef for mapcount overflow. */
> + PGTY_buddy = 0xf0,
> + PGTY_offline = 0xf1,
> + PGTY_table = 0xf2,
> + PGTY_guard = 0xf3,
> + PGTY_hugetlb = 0xf4,
> + PGTY_slab = 0xf5,
> + PGTY_zsmalloc = 0xf6,
> + PGTY_unaccepted = 0xf7,
> +
> + PGTY_mapcount_underflow = 0xff
> };
>
> -#define PageType(page, flag) \
> - ((READ_ONCE(page->page_type) & (PAGE_TYPE_BASE | flag)) == PAGE_TYPE_BASE)
> -#define folio_test_type(folio, flag) \
> - ((READ_ONCE(folio->page.page_type) & (PAGE_TYPE_BASE | flag)) == PAGE_TYPE_BASE)
> -
> static inline bool page_type_has_type(int page_type)
> {
> - return page_type < PAGE_MAPCOUNT_RESERVE;
> + return page_type < (PGTY_mapcount_underflow << 24);
> }
>
> /* This takes a mapcount which is one more than page->_mapcount */
> @@ -969,40 +956,41 @@ static inline bool page_mapcount_is_type(unsigned int mapcount)
>
> static inline bool page_has_type(const struct page *page)
> {
> - return page_type_has_type(READ_ONCE(page->page_type));
> + return page_mapcount_is_type(data_race(page->page_type));
> }
>
> #define FOLIO_TYPE_OPS(lname, fname) \
> -static __always_inline bool folio_test_##fname(const struct folio *folio)\
> +static __always_inline bool folio_test_##fname(const struct folio *folio) \
> { \
> - return folio_test_type(folio, PG_##lname); \
> + return data_race(folio->page.page_type >> 24) == PGTY_##lname; \
> } \
> static __always_inline void __folio_set_##fname(struct folio *folio) \
> { \
> - VM_BUG_ON_FOLIO(!folio_test_type(folio, 0), folio); \
> - folio->page.page_type &= ~PG_##lname; \
> + VM_BUG_ON_FOLIO(data_race(folio->page.page_type) != UINT_MAX, \
> + folio); \
> + folio->page.page_type = PGTY_##lname << 24; \
> } \
> static __always_inline void __folio_clear_##fname(struct folio *folio) \
> { \
> VM_BUG_ON_FOLIO(!folio_test_##fname(folio), folio); \
> - folio->page.page_type |= PG_##lname; \
> + folio->page.page_type = UINT_MAX; \
> }
>
> #define PAGE_TYPE_OPS(uname, lname, fname) \
> FOLIO_TYPE_OPS(lname, fname) \
> static __always_inline int Page##uname(const struct page *page) \
> { \
> - return PageType(page, PG_##lname); \
> + return data_race(page->page_type >> 24) == PGTY_##lname; \
> } \
> static __always_inline void __SetPage##uname(struct page *page) \
> { \
> - VM_BUG_ON_PAGE(!PageType(page, 0), page); \
> - page->page_type &= ~PG_##lname; \
> + VM_BUG_ON_PAGE(data_race(page->page_type) != UINT_MAX, page); \
> + page->page_type = PGTY_##lname << 24; \
> } \
> static __always_inline void __ClearPage##uname(struct page *page) \
> { \
> VM_BUG_ON_PAGE(!Page##uname(page), page); \
> - page->page_type |= PG_##lname; \
> + page->page_type = UINT_MAX; \
> }
>
There are some UBSAN warnings about __folio_set_##fname/__SetPage##uname:

UBSAN: shift-out-of-bounds in ../include/linux/page-flags.h:998:1
left shift of 240 by 24 places cannot be represented in type 'int'
...

Casting the enum constant to unsigned before the shift fixes it.
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 28d2337525d1..2175ebceb41c 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -966,7 +966,7 @@ static __always_inline void __folio_set_##fname(struct folio *folio) \
 { \
 	VM_BUG_ON_FOLIO(data_race(folio->page.page_type) != UINT_MAX, \
 			folio); \
-	folio->page.page_type = PGTY_##lname << 24; \
+	folio->page.page_type = (unsigned int)PGTY_##lname << 24; \
 } \
 static __always_inline void __folio_clear_##fname(struct folio *folio) \
 { \
@@ -983,7 +983,7 @@ static __always_inline int Page##uname(const struct page *page) \
 static __always_inline void __SetPage##uname(struct page *page) \
 { \
 	VM_BUG_ON_PAGE(data_race(page->page_type) != UINT_MAX, page); \
-	page->page_type = PGTY_##lname << 24; \
+	page->page_type = (unsigned int)PGTY_##lname << 24; \
 } \
^ permalink raw reply related [flat|nested] 17+ messages in thread
* Re: [PATCH 3/4] mm: Support only one page_type per page
2024-08-28 3:35 ` Kefeng Wang
@ 2025-08-01 2:43 ` Matthew Wilcox
2025-08-01 2:49 ` Kent Overstreet
2025-08-01 8:13 ` Kefeng Wang
0 siblings, 2 replies; 17+ messages in thread
From: Matthew Wilcox @ 2025-08-01 2:43 UTC (permalink / raw)
To: Kefeng Wang
Cc: Andrew Morton, David Hildenbrand, Hyeonggon Yoo, linux-mm,
Kent Overstreet
On Wed, Aug 28, 2024 at 11:35:28AM +0800, Kefeng Wang wrote:
> There are some UBSAN warning about __folio_set_##fname/__SetPage##uname,
>
> UBSAN: shift-out-of-bounds in ../include/linux/page-flags.h:998:1
> left shift of 240 by 24 places cannot be represented in type 'int'
I can't reproduce this. I know Andrew merged this patch in, but I
tried backing it out and enabling UBSAN and it doesn't show up for me.
Relevant part of .config:
CONFIG_ARCH_HAS_UBSAN=y
CONFIG_UBSAN=y
CONFIG_UBSAN_TRAP=y
CONFIG_CC_HAS_UBSAN_BOUNDS_STRICT=y
CONFIG_UBSAN_BOUNDS=y
CONFIG_UBSAN_BOUNDS_STRICT=y
CONFIG_UBSAN_SHIFT=y
# CONFIG_UBSAN_DIV_ZERO is not set
CONFIG_UBSAN_BOOL=y
CONFIG_UBSAN_ENUM=y
# CONFIG_TEST_UBSAN is not set
(I tried CONFIG_UBSAN_TRAP both on and off)
> Changing significant bit to unsigned to fix it.
>
> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> index 28d2337525d1..2175ebceb41c 100644
> --- a/include/linux/page-flags.h
> +++ b/include/linux/page-flags.h
> @@ -966,7 +966,7 @@ static __always_inline void __folio_set_##fname(struct
> folio *folio) \
> { \
> VM_BUG_ON_FOLIO(data_race(folio->page.page_type) != UINT_MAX, \
> folio); \
> - folio->page.page_type = PGTY_##lname << 24; \
> + folio->page.page_type = (unsigned int)PGTY_##lname << 24; \
> } \
> static __always_inline void __folio_clear_##fname(struct folio *folio) \
> { \
> @@ -983,7 +983,7 @@ static __always_inline int Page##uname(const struct page
> *page) \
> static __always_inline void __SetPage##uname(struct page *page) \
> { \
> VM_BUG_ON_PAGE(data_race(page->page_type) != UINT_MAX, page); \
> - page->page_type = PGTY_##lname << 24; \
> + page->page_type = (unsigned int)PGTY_##lname << 24; \
> } \
>
>
>
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH 3/4] mm: Support only one page_type per page
2025-08-01 2:43 ` Matthew Wilcox
@ 2025-08-01 2:49 ` Kent Overstreet
2025-08-01 8:13 ` Kefeng Wang
1 sibling, 0 replies; 17+ messages in thread
From: Kent Overstreet @ 2025-08-01 2:49 UTC (permalink / raw)
To: Matthew Wilcox
Cc: Kefeng Wang, Andrew Morton, David Hildenbrand, Hyeonggon Yoo,
linux-mm
On Fri, Aug 01, 2025 at 03:43:47AM +0100, Matthew Wilcox wrote:
> On Wed, Aug 28, 2024 at 11:35:28AM +0800, Kefeng Wang wrote:
> > There are some UBSAN warning about __folio_set_##fname/__SetPage##uname,
> >
> > UBSAN: shift-out-of-bounds in ../include/linux/page-flags.h:998:1
> > left shift of 240 by 24 places cannot be represented in type 'int'
>
> I can't reproduce this. I know Andrew merged this patch in, but I
> tried backing it out and enabling UBSAN and it doesn't show up for me.
> Relevant part of .config:
>
> CONFIG_ARCH_HAS_UBSAN=y
> CONFIG_UBSAN=y
> CONFIG_UBSAN_TRAP=y
> CONFIG_CC_HAS_UBSAN_BOUNDS_STRICT=y
> CONFIG_UBSAN_BOUNDS=y
> CONFIG_UBSAN_BOUNDS_STRICT=y
> CONFIG_UBSAN_SHIFT=y
> # CONFIG_UBSAN_DIV_ZERO is not set
> CONFIG_UBSAN_BOOL=y
> CONFIG_UBSAN_ENUM=y
> # CONFIG_TEST_UBSAN is not set
>
> (I tried CONFIG_UBSAN_TRAP both on and off)
It's an integer promotion bug, so arch dependent?
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH 3/4] mm: Support only one page_type per page
2025-08-01 2:43 ` Matthew Wilcox
2025-08-01 2:49 ` Kent Overstreet
@ 2025-08-01 8:13 ` Kefeng Wang
2025-08-01 14:27 ` Matthew Wilcox
1 sibling, 1 reply; 17+ messages in thread
From: Kefeng Wang @ 2025-08-01 8:13 UTC (permalink / raw)
To: Matthew Wilcox
Cc: Andrew Morton, David Hildenbrand, Hyeonggon Yoo, linux-mm,
Kent Overstreet
On 2025/8/1 10:43, Matthew Wilcox wrote:
> On Wed, Aug 28, 2024 at 11:35:28AM +0800, Kefeng Wang wrote:
>> There are some UBSAN warning about __folio_set_##fname/__SetPage##uname,
>>
>> UBSAN: shift-out-of-bounds in ../include/linux/page-flags.h:998:1
>> left shift of 240 by 24 places cannot be represented in type 'int'
>
> I can't reproduce this. I know Andrew merged this patch in, but I
> tried backing it out and enabling UBSAN and it doesn't show up for me.
> Relevant part of .config:
>
> CONFIG_ARCH_HAS_UBSAN=y
> CONFIG_UBSAN=y
> CONFIG_UBSAN_TRAP=y
> CONFIG_CC_HAS_UBSAN_BOUNDS_STRICT=y
> CONFIG_UBSAN_BOUNDS=y
> CONFIG_UBSAN_BOUNDS_STRICT=y
> CONFIG_UBSAN_SHIFT=y
> # CONFIG_UBSAN_DIV_ZERO is not set
> CONFIG_UBSAN_BOOL=y
> CONFIG_UBSAN_ENUM=y
> # CONFIG_TEST_UBSAN is not set
>
> (I tried CONFIG_UBSAN_TRAP both on and off)
Hi Matthew, I work on arm64, so the above shift-out-of-bounds should have
occurred on arm64. I remember the warning appeared on linux-next, but I
tried the latest kernel and some older kernel versions and can't reproduce
it either; it was too long ago, maybe it is gcc-version related or some
other change...
>
>> Changing significant bit to unsigned to fix it.
>>
>> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
>> index 28d2337525d1..2175ebceb41c 100644
>> --- a/include/linux/page-flags.h
>> +++ b/include/linux/page-flags.h
>> @@ -966,7 +966,7 @@ static __always_inline void __folio_set_##fname(struct
>> folio *folio) \
>> { \
>> VM_BUG_ON_FOLIO(data_race(folio->page.page_type) != UINT_MAX, \
>> folio); \
>> - folio->page.page_type = PGTY_##lname << 24; \
>> + folio->page.page_type = (unsigned int)PGTY_##lname << 24; \
>> } \
>> static __always_inline void __folio_clear_##fname(struct folio *folio) \
>> { \
>> @@ -983,7 +983,7 @@ static __always_inline int Page##uname(const struct page
>> *page) \
>> static __always_inline void __SetPage##uname(struct page *page) \
>> { \
>> VM_BUG_ON_PAGE(data_race(page->page_type) != UINT_MAX, page); \
>> - page->page_type = PGTY_##lname << 24; \
>> + page->page_type = (unsigned int)PGTY_##lname << 24; \
>> } \
>>
>>
>>
>
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH 3/4] mm: Support only one page_type per page
2025-08-01 8:13 ` Kefeng Wang
@ 2025-08-01 14:27 ` Matthew Wilcox
2025-08-02 2:00 ` Kefeng Wang
0 siblings, 1 reply; 17+ messages in thread
From: Matthew Wilcox @ 2025-08-01 14:27 UTC (permalink / raw)
To: Kefeng Wang
Cc: Andrew Morton, David Hildenbrand, Hyeonggon Yoo, linux-mm,
Kent Overstreet
On Fri, Aug 01, 2025 at 04:13:33PM +0800, Kefeng Wang wrote:
> On 2025/8/1 10:43, Matthew Wilcox wrote:
> > On Wed, Aug 28, 2024 at 11:35:28AM +0800, Kefeng Wang wrote:
> > > There are some UBSAN warning about __folio_set_##fname/__SetPage##uname,
> > >
> > > UBSAN: shift-out-of-bounds in ../include/linux/page-flags.h:998:1
> > > left shift of 240 by 24 places cannot be represented in type 'int'
> >
> > I can't reproduce this. I know Andrew merged this patch in, but I
> > tried backing it out and enabling UBSAN and it doesn't show up for me.
> > Relevant part of .config:
> >
> > CONFIG_ARCH_HAS_UBSAN=y
> > CONFIG_UBSAN=y
> > CONFIG_UBSAN_TRAP=y
> > CONFIG_CC_HAS_UBSAN_BOUNDS_STRICT=y
> > CONFIG_UBSAN_BOUNDS=y
> > CONFIG_UBSAN_BOUNDS_STRICT=y
> > CONFIG_UBSAN_SHIFT=y
> > # CONFIG_UBSAN_DIV_ZERO is not set
> > CONFIG_UBSAN_BOOL=y
> > CONFIG_UBSAN_ENUM=y
> > # CONFIG_TEST_UBSAN is not set
> >
> > (I tried CONFIG_UBSAN_TRAP both on and off)
>
> Hi Matthrew, I work on arm64, so the above OOB should occur on arm64,
> I remember the OOB appears on linux-next, but I tried last kernel and
> some old kernel version, but can't reproduce this too, it was too long ago,
> maybe gcc version related or some other changes...
Yes, I can't reproduce it either with an arm64 build. I'll ignore this
issue in my refactoring, and if it pops up again, we'll deal with it.
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH 3/4] mm: Support only one page_type per page
2025-08-01 14:27 ` Matthew Wilcox
@ 2025-08-02 2:00 ` Kefeng Wang
0 siblings, 0 replies; 17+ messages in thread
From: Kefeng Wang @ 2025-08-02 2:00 UTC (permalink / raw)
To: Matthew Wilcox
Cc: Andrew Morton, David Hildenbrand, Hyeonggon Yoo, linux-mm,
Kent Overstreet
On 2025/8/1 22:27, Matthew Wilcox wrote:
> On Fri, Aug 01, 2025 at 04:13:33PM +0800, Kefeng Wang wrote:
>> On 2025/8/1 10:43, Matthew Wilcox wrote:
>>> On Wed, Aug 28, 2024 at 11:35:28AM +0800, Kefeng Wang wrote:
>>>> There are some UBSAN warning about __folio_set_##fname/__SetPage##uname,
>>>>
>>>> UBSAN: shift-out-of-bounds in ../include/linux/page-flags.h:998:1
>>>> left shift of 240 by 24 places cannot be represented in type 'int'
>>>
>>> I can't reproduce this. I know Andrew merged this patch in, but I
>>> tried backing it out and enabling UBSAN and it doesn't show up for me.
>>> Relevant part of .config:
>>>
>>> CONFIG_ARCH_HAS_UBSAN=y
>>> CONFIG_UBSAN=y
>>> CONFIG_UBSAN_TRAP=y
>>> CONFIG_CC_HAS_UBSAN_BOUNDS_STRICT=y
>>> CONFIG_UBSAN_BOUNDS=y
>>> CONFIG_UBSAN_BOUNDS_STRICT=y
>>> CONFIG_UBSAN_SHIFT=y
>>> # CONFIG_UBSAN_DIV_ZERO is not set
>>> CONFIG_UBSAN_BOOL=y
>>> CONFIG_UBSAN_ENUM=y
>>> # CONFIG_TEST_UBSAN is not set
>>>
>>> (I tried CONFIG_UBSAN_TRAP both on and off)
>>
>> Hi Matthrew, I work on arm64, so the above OOB should occur on arm64,
>> I remember the OOB appears on linux-next, but I tried last kernel and
>> some old kernel version, but can't reproduce this too, it was too long ago,
>> maybe gcc version related or some other changes...
>
> Yes, I can't reproduce it either with an arm64 build. I'll ignore this
> issue in my refactoring, and if it pops up again, we'll deal with it.
Agreed, it is easy to fix once it pops up again.
^ permalink raw reply [flat|nested] 17+ messages in thread
Thread overview: 17+ messages
2024-08-21 17:39 [PATCH 0/4] Increase the number of bits available in page_type Matthew Wilcox (Oracle)
2024-08-21 17:39 ` [PATCH 1/4] printf: Remove %pGt support Matthew Wilcox (Oracle)
2024-08-21 21:00 ` David Hildenbrand
2024-08-21 21:08 ` Matthew Wilcox
2024-08-21 17:39 ` [PATCH 2/4] mm: Introduce page_mapcount_is_type() Matthew Wilcox (Oracle)
2024-08-21 21:01 ` David Hildenbrand
2024-08-21 17:39 ` [PATCH 3/4] mm: Support only one page_type per page Matthew Wilcox (Oracle)
2024-08-21 21:05 ` David Hildenbrand
2024-08-28 3:35 ` Kefeng Wang
2025-08-01 2:43 ` Matthew Wilcox
2025-08-01 2:49 ` Kent Overstreet
2025-08-01 8:13 ` Kefeng Wang
2025-08-01 14:27 ` Matthew Wilcox
2025-08-02 2:00 ` Kefeng Wang
2024-08-21 17:39 ` [PATCH 4/4] zsmalloc: Use all available 24 bits of page_type Matthew Wilcox (Oracle)
2024-08-21 21:06 ` David Hildenbrand
2024-08-21 21:15 ` [PATCH 0/4] Increase the number of bits available in page_type Kent Overstreet