* [PATCH 00/10] Various slab improvements
@ 2025-06-06 22:22 Matthew Wilcox (Oracle)
  2025-06-06 22:22 ` [PATCH 01/10] doc: Move SLUB documentation to the admin guide Matthew Wilcox (Oracle)
                   ` (9 more replies)
  0 siblings, 10 replies; 31+ messages in thread
From: Matthew Wilcox (Oracle) @ 2025-06-06 22:22 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Matthew Wilcox (Oracle), Christoph Lameter, David Rientjes,
	linux-mm

This started out as fixing up slab->__page_flags and then I started
checking the documentation build and kept finding more and more problems.
So this is now mostly documentation changes and getting rid of the last
mentions of PG_slab.

Matthew Wilcox (Oracle) (10):
  doc: Move SLUB documentation to the admin guide
  slab: Rename slab->__page_flags to slab->flags
  slab: Add SL_private flag
  slab: Add SL_pfmemalloc flag
  doc: Add slab internal kernel-doc
  vmcoreinfo: Remove documentation of PG_slab and PG_hugetlb
  proc: Remove mention of PG_slab
  kfence: Remove mention of PG_slab
  memcg_slabinfo: Fix use of PG_slab
  slab: Fix MAINTAINERS entry

 Documentation/ABI/testing/sysfs-kernel-slab   |  4 +-
 .../admin-guide/kdump/vmcoreinfo.rst          |  8 ++--
 .../admin-guide/kernel-parameters.txt         | 12 ++---
 Documentation/admin-guide/mm/index.rst        |  1 +
 .../{mm/slub.rst => admin-guide/mm/slab.rst}  |  6 +--
 Documentation/mm/index.rst                    |  1 -
 Documentation/mm/slab.rst                     |  7 +++
 MAINTAINERS                                   |  6 ++-
 fs/proc/page.c                                |  5 +--
 mm/kfence/core.c                              |  4 +-
 mm/slab.h                                     | 44 ++++++++----------
 mm/slub.c                                     | 45 ++++++++++++-------
 tools/cgroup/memcg_slabinfo.py                |  4 +-
 13 files changed, 80 insertions(+), 67 deletions(-)
 rename Documentation/{mm/slub.rst => admin-guide/mm/slab.rst} (98%)

-- 
2.47.2




* [PATCH 01/10] doc: Move SLUB documentation to the admin guide
  2025-06-06 22:22 [PATCH 00/10] Various slab improvements Matthew Wilcox (Oracle)
@ 2025-06-06 22:22 ` Matthew Wilcox (Oracle)
  2025-06-09  1:42   ` Harry Yoo
  2025-06-09 12:13   ` Vlastimil Babka
  2025-06-06 22:22 ` [PATCH 02/10] slab: Rename slab->__page_flags to slab->flags Matthew Wilcox (Oracle)
                   ` (8 subsequent siblings)
  9 siblings, 2 replies; 31+ messages in thread
From: Matthew Wilcox (Oracle) @ 2025-06-06 22:22 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Matthew Wilcox (Oracle), Christoph Lameter, David Rientjes,
	linux-mm

This section is supposed to be for internal documentation, while the
document is advice for sysadmins.  Move it to the appropriate place.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 Documentation/ABI/testing/sysfs-kernel-slab          |  4 ++--
 Documentation/admin-guide/kernel-parameters.txt      | 12 +++++++-----
 Documentation/admin-guide/mm/index.rst               |  1 +
 .../{mm/slub.rst => admin-guide/mm/slab.rst}         |  6 ++----
 Documentation/mm/index.rst                           |  1 -
 5 files changed, 12 insertions(+), 12 deletions(-)
 rename Documentation/{mm/slub.rst => admin-guide/mm/slab.rst} (98%)

diff --git a/Documentation/ABI/testing/sysfs-kernel-slab b/Documentation/ABI/testing/sysfs-kernel-slab
index 658999be5164..355550edf431 100644
--- a/Documentation/ABI/testing/sysfs-kernel-slab
+++ b/Documentation/ABI/testing/sysfs-kernel-slab
@@ -37,7 +37,7 @@ Description:
 		The alloc_calls file is read-only and lists the kernel code
 		locations from which allocations for this cache were performed.
 		The alloc_calls file only contains information if debugging is
-		enabled for that cache (see Documentation/mm/slub.rst).
+		enabled for that cache (see Documentation/mm/slab.rst).
 
 What:		/sys/kernel/slab/<cache>/alloc_fastpath
 Date:		February 2008
@@ -219,7 +219,7 @@ Contact:	Pekka Enberg <penberg@cs.helsinki.fi>,
 Description:
 		The free_calls file is read-only and lists the locations of
 		object frees if slab debugging is enabled (see
-		Documentation/mm/slub.rst).
+		Documentation/mm/slab.rst).
 
 What:		/sys/kernel/slab/<cache>/free_fastpath
 Date:		February 2008
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index a3ea40b22fb9..5f530edc4312 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -6555,14 +6555,14 @@
 			slab_debug can create guard zones around objects and
 			may poison objects when not in use. Also tracks the
 			last alloc / free. For more information see
-			Documentation/mm/slub.rst.
+			Documentation/admin-guide/mm/slab.rst.
 			(slub_debug legacy name also accepted for now)
 
 	slab_max_order= [MM]
 			Determines the maximum allowed order for slabs.
 			A high setting may cause OOMs due to memory
 			fragmentation. For more information see
-			Documentation/mm/slub.rst.
+			Documentation/admin-guide/mm/slab.rst.
 			(slub_max_order legacy name also accepted for now)
 
 	slab_merge	[MM]
@@ -6577,13 +6577,14 @@
 			the number of objects indicated. The higher the number
 			of objects the smaller the overhead of tracking slabs
 			and the less frequently locks need to be acquired.
-			For more information see Documentation/mm/slub.rst.
+			For more information see
+			Documentation/admin-guide/mm/slab.rst.
 			(slub_min_objects legacy name also accepted for now)
 
 	slab_min_order=	[MM]
 			Determines the minimum page order for slabs. Must be
 			lower or equal to slab_max_order. For more information see
-			Documentation/mm/slub.rst.
+			Documentation/admin-guide/mm/slab.rst.
 			(slub_min_order legacy name also accepted for now)
 
 	slab_nomerge	[MM]
@@ -6597,7 +6598,8 @@
 			cache (risks via metadata attacks are mostly
 			unchanged). Debug options disable merging on their
 			own.
-			For more information see Documentation/mm/slub.rst.
+			For more information see
+			Documentation/admin-guide/mm/slab.rst.
 			(slub_nomerge legacy name also accepted for now)
 
 	slab_strict_numa	[MM]
diff --git a/Documentation/admin-guide/mm/index.rst b/Documentation/admin-guide/mm/index.rst
index 2d2f6c222308..ebc83ca20fdc 100644
--- a/Documentation/admin-guide/mm/index.rst
+++ b/Documentation/admin-guide/mm/index.rst
@@ -37,6 +37,7 @@ the Linux memory management.
    numaperf
    pagemap
    shrinker_debugfs
+   slab
    soft-dirty
    swap_numa
    transhuge
diff --git a/Documentation/mm/slub.rst b/Documentation/admin-guide/mm/slab.rst
similarity index 98%
rename from Documentation/mm/slub.rst
rename to Documentation/admin-guide/mm/slab.rst
index 84ca1dc94e5e..16933b7b3377 100644
--- a/Documentation/mm/slub.rst
+++ b/Documentation/admin-guide/mm/slab.rst
@@ -1,10 +1,8 @@
 ==========================
-Short users guide for SLUB
+Short users guide for SLAB
 ==========================
 
-The basic philosophy of SLUB is very different from SLAB. SLAB
-requires rebuilding the kernel to activate debug options for all
-slab caches. SLUB always includes full debugging but it is off by default.
+SLUB always includes full debugging but it is off by default.
 SLUB can enable debugging only for selected slabs in order to avoid
 an impact on overall system performance which may make a bug more
 difficult to find.
diff --git a/Documentation/mm/index.rst b/Documentation/mm/index.rst
index d3ada3e45e10..fb45acba16ac 100644
--- a/Documentation/mm/index.rst
+++ b/Documentation/mm/index.rst
@@ -56,7 +56,6 @@ documentation, or deleted if it has served its purpose.
    page_owner
    page_table_check
    remap_file_pages
-   slub
    split_page_table_lock
    transhuge
    unevictable-lru
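
As an aside, the rename does not change any syntax; the parameters are
used as before, e.g. (illustrative values, not part of this patch):

	slab_debug=FZPU,dentry	# F=sanity checks, Z=red zoning,
				# P=poisoning, U=alloc/free tracking,
				# applied to the dentry cache only
	slab_max_order=1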
-- 
2.47.2




* [PATCH 02/10] slab: Rename slab->__page_flags to slab->flags
  2025-06-06 22:22 [PATCH 00/10] Various slab improvements Matthew Wilcox (Oracle)
  2025-06-06 22:22 ` [PATCH 01/10] doc: Move SLUB documentation to the admin guide Matthew Wilcox (Oracle)
@ 2025-06-06 22:22 ` Matthew Wilcox (Oracle)
  2025-06-09  2:15   ` Harry Yoo
  2025-06-09 13:12   ` Vlastimil Babka
  2025-06-06 22:22 ` [PATCH 03/10] slab: Add SL_private flag Matthew Wilcox (Oracle)
                   ` (7 subsequent siblings)
  9 siblings, 2 replies; 31+ messages in thread
From: Matthew Wilcox (Oracle) @ 2025-06-06 22:22 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Matthew Wilcox (Oracle), Christoph Lameter, David Rientjes,
	linux-mm

Slab has its own reasons for using flag bits; they aren't just
the page bits.  Maybe this won't be the ultimate solution, but
we should be clear that these bits are in use.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/slab.h | 16 ++++++++++++++--
 mm/slub.c |  6 +++---
 2 files changed, 17 insertions(+), 5 deletions(-)

diff --git a/mm/slab.h b/mm/slab.h
index 05a21dc796e0..a25f12244b6c 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -50,7 +50,7 @@ typedef union {
 
 /* Reuses the bits in struct page */
 struct slab {
-	unsigned long __page_flags;
+	unsigned long flags;
 
 	struct kmem_cache *slab_cache;
 	union {
@@ -99,7 +99,7 @@ struct slab {
 
 #define SLAB_MATCH(pg, sl)						\
 	static_assert(offsetof(struct page, pg) == offsetof(struct slab, sl))
-SLAB_MATCH(flags, __page_flags);
+SLAB_MATCH(flags, flags);
 SLAB_MATCH(compound_head, slab_cache);	/* Ensure bit 0 is clear */
 SLAB_MATCH(_refcount, __page_refcount);
 #ifdef CONFIG_MEMCG
@@ -113,6 +113,18 @@ static_assert(sizeof(struct slab) <= sizeof(struct page));
 static_assert(IS_ALIGNED(offsetof(struct slab, freelist), sizeof(freelist_aba_t)));
 #endif
 
+/**
+ * enum slab_flags - How the slab flags bits are used.
+ * @SL_locked: Is locked with slab_lock()
+ *
+ * The slab flags share space with the page flags but some bits have
+ * different interpretations.  The high bits are used for information
+ * like zone/node/section.
+ */
+enum slab_flags {
+	SL_locked,
+};
+
 /**
  * folio_slab - Converts from folio to slab.
  * @folio: The folio.
diff --git a/mm/slub.c b/mm/slub.c
index 31e11ef256f9..e9cbacee406d 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -639,12 +639,12 @@ static inline unsigned int slub_get_cpu_partial(struct kmem_cache *s)
  */
 static __always_inline void slab_lock(struct slab *slab)
 {
-	bit_spin_lock(PG_locked, &slab->__page_flags);
+	bit_spin_lock(SL_locked, &slab->flags);
 }
 
 static __always_inline void slab_unlock(struct slab *slab)
 {
-	bit_spin_unlock(PG_locked, &slab->__page_flags);
+	bit_spin_unlock(SL_locked, &slab->flags);
 }
 
 static inline bool
@@ -1010,7 +1010,7 @@ static void print_slab_info(const struct slab *slab)
 {
 	pr_err("Slab 0x%p objects=%u used=%u fp=0x%p flags=%pGp\n",
 	       slab, slab->objects, slab->inuse, slab->freelist,
-	       &slab->__page_flags);
+	       &slab->flags);
 }
 
 void skip_orig_size_check(struct kmem_cache *s, const void *object)
-- 
2.47.2




* [PATCH 03/10] slab: Add SL_private flag
  2025-06-06 22:22 [PATCH 00/10] Various slab improvements Matthew Wilcox (Oracle)
  2025-06-06 22:22 ` [PATCH 01/10] doc: Move SLUB documentation to the admin guide Matthew Wilcox (Oracle)
  2025-06-06 22:22 ` [PATCH 02/10] slab: Rename slab->__page_flags to slab->flags Matthew Wilcox (Oracle)
@ 2025-06-06 22:22 ` Matthew Wilcox (Oracle)
  2025-06-09  2:25   ` Harry Yoo
  2025-06-06 22:22 ` [PATCH 04/10] slab: Add SL_pfmemalloc flag Matthew Wilcox (Oracle)
                   ` (6 subsequent siblings)
  9 siblings, 1 reply; 31+ messages in thread
From: Matthew Wilcox (Oracle) @ 2025-06-06 22:22 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Matthew Wilcox (Oracle), Christoph Lameter, David Rientjes,
	linux-mm

Give slab its own name for this flag.  Keep the PG_workingset alias
information in one place.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/slab.h |  2 ++
 mm/slub.c | 20 ++++++++------------
 2 files changed, 10 insertions(+), 12 deletions(-)

diff --git a/mm/slab.h b/mm/slab.h
index a25f12244b6c..fca818011f7d 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -116,6 +116,7 @@ static_assert(IS_ALIGNED(offsetof(struct slab, freelist), sizeof(freelist_aba_t)
 /**
  * enum slab_flags - How the slab flags bits are used.
  * @SL_locked: Is locked with slab_lock()
+ * @SL_partial: On the per-node partial list
  *
  * The slab flags share space with the page flags but some bits have
  * different interpretations.  The high bits are used for information
@@ -123,6 +124,7 @@ static_assert(IS_ALIGNED(offsetof(struct slab, freelist), sizeof(freelist_aba_t)
  */
 enum slab_flags {
 	SL_locked,
+	SL_partial = PG_workingset,	/* Historical reasons for this bit */
 };
 
 /**
diff --git a/mm/slub.c b/mm/slub.c
index e9cbacee406d..804b39d06fa0 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -91,14 +91,14 @@
  *   The partially empty slabs cached on the CPU partial list are used
  *   for performance reasons, which speeds up the allocation process.
  *   These slabs are not frozen, but are also exempt from list management,
- *   by clearing the PG_workingset flag when moving out of the node
+ *   by clearing the SL_partial flag when moving out of the node
  *   partial list. Please see __slab_free() for more details.
  *
  *   To sum up, the current scheme is:
- *   - node partial slab: PG_Workingset && !frozen
- *   - cpu partial slab: !PG_Workingset && !frozen
- *   - cpu slab: !PG_Workingset && frozen
- *   - full slab: !PG_Workingset && !frozen
+ *   - node partial slab: SL_partial && !frozen
+ *   - cpu partial slab: !SL_partial && !frozen
+ *   - cpu slab: !SL_partial && frozen
+ *   - full slab: !SL_partial && !frozen
  *
  *   list_lock
  *
@@ -2717,23 +2717,19 @@ static void discard_slab(struct kmem_cache *s, struct slab *slab)
 	free_slab(s, slab);
 }
 
-/*
- * SLUB reuses PG_workingset bit to keep track of whether it's on
- * the per-node partial list.
- */
 static inline bool slab_test_node_partial(const struct slab *slab)
 {
-	return folio_test_workingset(slab_folio(slab));
+	return test_bit(SL_partial, &slab->flags);
 }
 
 static inline void slab_set_node_partial(struct slab *slab)
 {
-	set_bit(PG_workingset, folio_flags(slab_folio(slab), 0));
+	set_bit(SL_partial, &slab->flags);
 }
 
 static inline void slab_clear_node_partial(struct slab *slab)
 {
-	clear_bit(PG_workingset, folio_flags(slab_folio(slab), 0));
+	clear_bit(SL_partial, &slab->flags);
 }
 
 /*
-- 
2.47.2




* [PATCH 04/10] slab: Add SL_pfmemalloc flag
  2025-06-06 22:22 [PATCH 00/10] Various slab improvements Matthew Wilcox (Oracle)
                   ` (2 preceding siblings ...)
  2025-06-06 22:22 ` [PATCH 03/10] slab: Add SL_private flag Matthew Wilcox (Oracle)
@ 2025-06-06 22:22 ` Matthew Wilcox (Oracle)
  2025-06-09  2:27   ` Harry Yoo
  2025-06-06 22:22 ` [PATCH 05/10] doc: Add slab internal kernel-doc Matthew Wilcox (Oracle)
                   ` (5 subsequent siblings)
  9 siblings, 1 reply; 31+ messages in thread
From: Matthew Wilcox (Oracle) @ 2025-06-06 22:22 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Matthew Wilcox (Oracle), Christoph Lameter, David Rientjes,
	linux-mm

Give slab its own name for this flag.  Move the implementation from
slab.h to slub.c since it's only used inside slub.c.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/slab.h | 26 ++------------------------
 mm/slub.c | 19 +++++++++++++++++++
 2 files changed, 21 insertions(+), 24 deletions(-)

diff --git a/mm/slab.h b/mm/slab.h
index fca818011f7d..aa991b1b059d 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -117,6 +117,7 @@ static_assert(IS_ALIGNED(offsetof(struct slab, freelist), sizeof(freelist_aba_t)
  * enum slab_flags - How the slab flags bits are used.
  * @SL_locked: Is locked with slab_lock()
  * @SL_partial: On the per-node partial list
+ * @SL_pfmemalloc: Was allocated from PF_MEMALLOC reserves
  *
  * The slab flags share space with the page flags but some bits have
  * different interpretations.  The high bits are used for information
@@ -125,6 +126,7 @@ static_assert(IS_ALIGNED(offsetof(struct slab, freelist), sizeof(freelist_aba_t)
 enum slab_flags {
 	SL_locked,
 	SL_partial = PG_workingset,	/* Historical reasons for this bit */
+	SL_pfmemalloc = PG_active,	/* Historical reasons for this bit */
 };
 
 /**
@@ -181,30 +183,6 @@ enum slab_flags {
  */
 #define slab_page(s) folio_page(slab_folio(s), 0)
 
-/*
- * If network-based swap is enabled, sl*b must keep track of whether pages
- * were allocated from pfmemalloc reserves.
- */
-static inline bool slab_test_pfmemalloc(const struct slab *slab)
-{
-	return folio_test_active(slab_folio(slab));
-}
-
-static inline void slab_set_pfmemalloc(struct slab *slab)
-{
-	folio_set_active(slab_folio(slab));
-}
-
-static inline void slab_clear_pfmemalloc(struct slab *slab)
-{
-	folio_clear_active(slab_folio(slab));
-}
-
-static inline void __slab_clear_pfmemalloc(struct slab *slab)
-{
-	__folio_clear_active(slab_folio(slab));
-}
-
 static inline void *slab_address(const struct slab *slab)
 {
 	return folio_address(slab_folio(slab));
diff --git a/mm/slub.c b/mm/slub.c
index 804b39d06fa0..bbd96431a50a 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -634,6 +634,25 @@ static inline unsigned int slub_get_cpu_partial(struct kmem_cache *s)
 }
 #endif /* CONFIG_SLUB_CPU_PARTIAL */
 
+/*
+ * If network-based swap is enabled, slub must keep track of whether
+ * memory was allocated from pfmemalloc reserves.
+ */
+static inline bool slab_test_pfmemalloc(const struct slab *slab)
+{
+	return test_bit(SL_pfmemalloc, &slab->flags);
+}
+
+static inline void slab_set_pfmemalloc(struct slab *slab)
+{
+	set_bit(SL_pfmemalloc, &slab->flags);
+}
+
+static inline void __slab_clear_pfmemalloc(struct slab *slab)
+{
+	__clear_bit(SL_pfmemalloc, &slab->flags);
+}
+
 /*
  * Per slab locking using the pagelock
  */
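
For context, the setter is driven from the slab allocation path;
roughly (a sketch from memory of alloc_slab_page(), details may
differ):

	/* After allocating the backing folio for a new slab: */
	slab = folio_slab(folio);
	__folio_set_slab(folio);
	if (folio_is_pfmemalloc(folio))
		slab_set_pfmemalloc(slab);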
-- 
2.47.2




* [PATCH 05/10] doc: Add slab internal kernel-doc
  2025-06-06 22:22 [PATCH 00/10] Various slab improvements Matthew Wilcox (Oracle)
                   ` (3 preceding siblings ...)
  2025-06-06 22:22 ` [PATCH 04/10] slab: Add SL_pfmemalloc flag Matthew Wilcox (Oracle)
@ 2025-06-06 22:22 ` Matthew Wilcox (Oracle)
  2025-06-09  2:37   ` Harry Yoo
  2025-06-06 22:22 ` [PATCH 06/10] vmcoreinfo: Remove documentation of PG_slab and PG_hugetlb Matthew Wilcox (Oracle)
                   ` (4 subsequent siblings)
  9 siblings, 1 reply; 31+ messages in thread
From: Matthew Wilcox (Oracle) @ 2025-06-06 22:22 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Matthew Wilcox (Oracle), Christoph Lameter, David Rientjes,
	linux-mm

We don't have much real internal documentation to extract yet, but
let's make sure that what we do have is available.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 Documentation/mm/slab.rst | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/Documentation/mm/slab.rst b/Documentation/mm/slab.rst
index 87d5a5bb172f..2bcc58ada302 100644
--- a/Documentation/mm/slab.rst
+++ b/Documentation/mm/slab.rst
@@ -3,3 +3,10 @@
 ===============
 Slab Allocation
 ===============
+
+Functions and structures
+========================
+
+.. kernel-doc:: mm/slab.h
+.. kernel-doc:: mm/slub.c
+   :internal:
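
For reference, these directives extract structured comments like the
enum slab_flags one added earlier in this series:

	/**
	 * enum slab_flags - How the slab flags bits are used.
	 * @SL_locked: Is locked with slab_lock()
	 * ...
	 */

The :internal: option additionally includes documentation for symbols
that are not exported.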
-- 
2.47.2




* [PATCH 06/10] vmcoreinfo: Remove documentation of PG_slab and PG_hugetlb
  2025-06-06 22:22 [PATCH 00/10] Various slab improvements Matthew Wilcox (Oracle)
                   ` (4 preceding siblings ...)
  2025-06-06 22:22 ` [PATCH 05/10] doc: Add slab internal kernel-doc Matthew Wilcox (Oracle)
@ 2025-06-06 22:22 ` Matthew Wilcox (Oracle)
  2025-06-09  2:44   ` Harry Yoo
  2025-06-06 22:22 ` [PATCH 07/10] proc: Remove mention of PG_slab Matthew Wilcox (Oracle)
                   ` (3 subsequent siblings)
  9 siblings, 1 reply; 31+ messages in thread
From: Matthew Wilcox (Oracle) @ 2025-06-06 22:22 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Matthew Wilcox (Oracle), Christoph Lameter, David Rientjes,
	linux-mm

The changes to kernel/vmcore_info.c were sadly not reflected in the
documentation.  Rectify that for both of these flags, and add
PAGE_UNACCEPTED_MAPCOUNT_VALUE.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 Documentation/admin-guide/kdump/vmcoreinfo.rst | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/Documentation/admin-guide/kdump/vmcoreinfo.rst b/Documentation/admin-guide/kdump/vmcoreinfo.rst
index 8cf4614385b7..56700d9a3e30 100644
--- a/Documentation/admin-guide/kdump/vmcoreinfo.rst
+++ b/Documentation/admin-guide/kdump/vmcoreinfo.rst
@@ -325,14 +325,14 @@ NR_FREE_PAGES
 On linux-2.6.21 or later, the number of free pages is in
 vm_stat[NR_FREE_PAGES]. Used to get the number of free pages.
 
-PG_lru|PG_private|PG_swapcache|PG_swapbacked|PG_slab|PG_hwpoision|PG_head_mask|PG_hugetlb
------------------------------------------------------------------------------------------
+PG_lru|PG_private|PG_swapcache|PG_swapbacked|PG_hwpoision|PG_head_mask
+--------------------------------------------------------------------------
 
 Page attributes. These flags are used to filter various unnecessary for
 dumping pages.
 
-PAGE_BUDDY_MAPCOUNT_VALUE(~PG_buddy)|PAGE_OFFLINE_MAPCOUNT_VALUE(~PG_offline)|PAGE_OFFLINE_MAPCOUNT_VALUE(~PG_unaccepted)
--------------------------------------------------------------------------------------------------------------------------
+PAGE_SLAB_MAPCOUNT_VALUE|PAGE_BUDDY_MAPCOUNT_VALUE|PAGE_OFFLINE_MAPCOUNT_VALUE|PAGE_HUGETLB_MAPCOUNT_VALUE|PAGE_UNACCEPTED_MAPCOUNT_VALUE
+------------------------------------------------------------------------------------------------------------------------------------------
 
 More page attributes. These flags are used to filter various unnecessary for
 dumping pages.
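
For anyone cross-checking against kernel/vmcore_info.c, the values are
exported along these lines (a sketch; the exact definitions in the
tree may differ):

	#define PAGE_UNACCEPTED_MAPCOUNT_VALUE	(PGTY_unaccepted << 24)
	VMCOREINFO_NUMBER(PAGE_UNACCEPTED_MAPCOUNT_VALUE);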
-- 
2.47.2




* [PATCH 07/10] proc: Remove mention of PG_slab
  2025-06-06 22:22 [PATCH 00/10] Various slab improvements Matthew Wilcox (Oracle)
                   ` (5 preceding siblings ...)
  2025-06-06 22:22 ` [PATCH 06/10] vmcoreinfo: Remove documentation of PG_slab and PG_hugetlb Matthew Wilcox (Oracle)
@ 2025-06-06 22:22 ` Matthew Wilcox (Oracle)
  2025-06-06 22:22 ` [PATCH 08/10] kfence: " Matthew Wilcox (Oracle)
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 31+ messages in thread
From: Matthew Wilcox (Oracle) @ 2025-06-06 22:22 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Matthew Wilcox (Oracle), Christoph Lameter, David Rientjes,
	linux-mm

Improve the documentation of buddy pages while I'm editing this comment.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/proc/page.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/fs/proc/page.c b/fs/proc/page.c
index 999af26c7298..d7d0f0ca1e9b 100644
--- a/fs/proc/page.c
+++ b/fs/proc/page.c
@@ -183,10 +183,7 @@ u64 stable_page_flags(const struct page *page)
 		u |= 1 << KPF_ZERO_PAGE;
 	}
 
-	/*
-	 * Caveats on high order pages: PG_buddy and PG_slab will only be set
-	 * on the head page.
-	 */
+	/* KPF_BUDDY is only set on the first buddy page */
 	if (PageBuddy(page))
 		u |= 1 << KPF_BUDDY;
 	else if (page_count(page) == 0 && is_free_buddy_page(page))
-- 
2.47.2




* [PATCH 08/10] kfence: Remove mention of PG_slab
  2025-06-06 22:22 [PATCH 00/10] Various slab improvements Matthew Wilcox (Oracle)
                   ` (6 preceding siblings ...)
  2025-06-06 22:22 ` [PATCH 07/10] proc: Remove mention of PG_slab Matthew Wilcox (Oracle)
@ 2025-06-06 22:22 ` Matthew Wilcox (Oracle)
  2025-06-09  3:42   ` Harry Yoo
  2025-06-09 13:33   ` Vlastimil Babka
  2025-06-06 22:22 ` [PATCH 09/10] memcg_slabinfo: Fix use " Matthew Wilcox (Oracle)
  2025-06-06 22:22 ` [PATCH 10/10] slab: Fix MAINTAINERS entry Matthew Wilcox (Oracle)
  9 siblings, 2 replies; 31+ messages in thread
From: Matthew Wilcox (Oracle) @ 2025-06-06 22:22 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Matthew Wilcox (Oracle), Christoph Lameter, David Rientjes,
	linux-mm

Improve the documentation slightly, assuming I understood it correctly.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/kfence/core.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/kfence/core.c b/mm/kfence/core.c
index 102048821c22..0ed3be100963 100644
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -605,8 +605,8 @@ static unsigned long kfence_init_pool(void)
 	pages = virt_to_page(__kfence_pool);
 
 	/*
-	 * Set up object pages: they must have PG_slab set, to avoid freeing
-	 * these as real pages.
+	 * Set up object pages: they must have PGTY_slab set to avoid freeing
+	 * them as real pages.
 	 *
 	 * We also want to avoid inserting kfence_free() in the kfree()
 	 * fast-path in SLUB, and therefore need to ensure kfree() correctly
-- 
2.47.2




* [PATCH 09/10] memcg_slabinfo: Fix use of PG_slab
  2025-06-06 22:22 [PATCH 00/10] Various slab improvements Matthew Wilcox (Oracle)
                   ` (7 preceding siblings ...)
  2025-06-06 22:22 ` [PATCH 08/10] kfence: " Matthew Wilcox (Oracle)
@ 2025-06-06 22:22 ` Matthew Wilcox (Oracle)
  2025-06-09  3:08   ` Harry Yoo
  2025-06-06 22:22 ` [PATCH 10/10] slab: Fix MAINTAINERS entry Matthew Wilcox (Oracle)
  9 siblings, 1 reply; 31+ messages in thread
From: Matthew Wilcox (Oracle) @ 2025-06-06 22:22 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Matthew Wilcox (Oracle), Christoph Lameter, David Rientjes,
	linux-mm, Roman Gushchin

Check PGTY_slab instead of PG_slab.

Fixes: 4ffca5a96678 ("mm: support only one page_type per page")
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Tested-by: Roman Gushchin <roman.gushchin@linux.dev>
Reviewed-by: Roman Gushchin <roman.gushchin@linux.dev>
---
 tools/cgroup/memcg_slabinfo.py | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/cgroup/memcg_slabinfo.py b/tools/cgroup/memcg_slabinfo.py
index 270c28a0d098..6bf4bde77903 100644
--- a/tools/cgroup/memcg_slabinfo.py
+++ b/tools/cgroup/memcg_slabinfo.py
@@ -146,11 +146,11 @@ def detect_kernel_config():
 
 
 def for_each_slab(prog):
-    PGSlab = ~prog.constant('PG_slab')
+    slabtype = prog.constant('PGTY_slab')
 
     for page in for_each_page(prog):
         try:
-            if page.page_type.value_() == PGSlab:
+            if (page.page_type.value_() >> 24) == slabtype:
                 yield cast('struct slab *', page)
         except FaultError:
             pass
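
For reference, the kernel-side encoding this mirrors looks roughly
like this (an illustrative helper, not a quote of
include/linux/page-flags.h):

	static inline bool page_is_slab_typed(const struct page *page)
	{
		/* Since commit 4ffca5a96678, the page type occupies
		 * the top byte of page->page_type. */
		return (READ_ONCE(page->page_type) >> 24) == PGTY_slab;
	}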
-- 
2.47.2




* [PATCH 10/10] slab: Fix MAINTAINERS entry
  2025-06-06 22:22 [PATCH 00/10] Various slab improvements Matthew Wilcox (Oracle)
                   ` (8 preceding siblings ...)
  2025-06-06 22:22 ` [PATCH 09/10] memcg_slabinfo: Fix use " Matthew Wilcox (Oracle)
@ 2025-06-06 22:22 ` Matthew Wilcox (Oracle)
  2025-06-09  3:21   ` Harry Yoo
                     ` (2 more replies)
  9 siblings, 3 replies; 31+ messages in thread
From: Matthew Wilcox (Oracle) @ 2025-06-06 22:22 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Matthew Wilcox (Oracle), Christoph Lameter, David Rientjes,
	linux-mm

Add the two Documentation files to the maintainers entry.
Simplify the match for header files now that there is only slab.h.
Move Vlastimil to the top of the list of maintainers since he's
the one doing the pull requests.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 MAINTAINERS | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index 2f13e1602ae6..daf3bed30a20 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -22787,16 +22787,18 @@ F:	Documentation/devicetree/bindings/nvmem/layouts/kontron,sl28-vpd.yaml
 F:	drivers/nvmem/layouts/sl28vpd.c
 
 SLAB ALLOCATOR
+M:	Vlastimil Babka <vbabka@suse.cz>
 M:	Christoph Lameter <cl@gentwo.org>
 M:	David Rientjes <rientjes@google.com>
 M:	Andrew Morton <akpm@linux-foundation.org>
-M:	Vlastimil Babka <vbabka@suse.cz>
 R:	Roman Gushchin <roman.gushchin@linux.dev>
 R:	Harry Yoo <harry.yoo@oracle.com>
 L:	linux-mm@kvack.org
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab.git
-F:	include/linux/sl?b*.h
+F:	Documentation/admin-guide/mm/slab.rst
+F:	Documentation/mm/slab.rst
+F:	include/linux/slab.h
 F:	mm/sl?b*
 
 SLCAN CAN NETWORK DRIVER
-- 
2.47.2




* Re: [PATCH 01/10] doc: Move SLUB documentation to the admin guide
  2025-06-06 22:22 ` [PATCH 01/10] doc: Move SLUB documentation to the admin guide Matthew Wilcox (Oracle)
@ 2025-06-09  1:42   ` Harry Yoo
  2025-06-09 12:13   ` Vlastimil Babka
  1 sibling, 0 replies; 31+ messages in thread
From: Harry Yoo @ 2025-06-09  1:42 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle)
  Cc: Vlastimil Babka, Christoph Lameter, David Rientjes, linux-mm

On Fri, Jun 06, 2025 at 11:22:03PM +0100, Matthew Wilcox (Oracle) wrote:
> This section is supposed to be for internal documentation, while the
> document is advice for sysadmins.  Move it to the appropriate place.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---

Acked-by: Harry Yoo <harry.yoo@oracle.com>

With a nit below:

>  Documentation/ABI/testing/sysfs-kernel-slab          |  4 ++--
>  Documentation/admin-guide/kernel-parameters.txt      | 12 +++++++-----
>  Documentation/admin-guide/mm/index.rst               |  1 +
>  .../{mm/slub.rst => admin-guide/mm/slab.rst}         |  6 ++----
>  Documentation/mm/index.rst                           |  1 -
>  5 files changed, 12 insertions(+), 12 deletions(-)
>  rename Documentation/{mm/slub.rst => admin-guide/mm/slab.rst} (98%)
> 
> diff --git a/Documentation/admin-guide/mm/index.rst b/Documentation/admin-guide/mm/index.rst
> index 2d2f6c222308..ebc83ca20fdc 100644
> --- a/Documentation/admin-guide/mm/index.rst
> +++ b/Documentation/admin-guide/mm/index.rst
> @@ -37,6 +37,7 @@ the Linux memory management.
>     numaperf
>     pagemap
>     shrinker_debugfs
> +   slab
>     soft-dirty
>     swap_numa
>     transhuge
> diff --git a/Documentation/mm/slub.rst b/Documentation/admin-guide/mm/slab.rst
> similarity index 98%
> rename from Documentation/mm/slub.rst
> rename to Documentation/admin-guide/mm/slab.rst
> index 84ca1dc94e5e..16933b7b3377 100644
> --- a/Documentation/mm/slub.rst
> +++ b/Documentation/admin-guide/mm/slab.rst
> @@ -1,10 +1,8 @@
>  ==========================
> -Short users guide for SLUB
> +Short users guide for SLAB

nit: Maybe keep the word "SLUB" or use "Slab" instead?

We've never used "SLAB" to refer to the SLUB allocator before; it might
be confusing. We're not resurrecting the old SLAB allocator ;-)

-- 
Cheers,
Harry / Hyeonggon



* Re: [PATCH 02/10] slab: Rename slab->__page_flags to slab->flags
  2025-06-06 22:22 ` [PATCH 02/10] slab: Rename slab->__page_flags to slab->flags Matthew Wilcox (Oracle)
@ 2025-06-09  2:15   ` Harry Yoo
  2025-06-09 12:45     ` Matthew Wilcox
  2025-06-09 13:12   ` Vlastimil Babka
  1 sibling, 1 reply; 31+ messages in thread
From: Harry Yoo @ 2025-06-09  2:15 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle)
  Cc: Vlastimil Babka, Christoph Lameter, David Rientjes, linux-mm

On Fri, Jun 06, 2025 at 11:22:04PM +0100, Matthew Wilcox (Oracle) wrote:
> Slab has its own reasons for using flag bits; they aren't just
> the page bits.  Maybe this won't be the ultimate solution, but
> we should be clear that these bits are in use.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  mm/slab.h | 16 ++++++++++++++--
>  mm/slub.c |  6 +++---
>  2 files changed, 17 insertions(+), 5 deletions(-)
> 
> diff --git a/mm/slab.h b/mm/slab.h
> index 05a21dc796e0..a25f12244b6c 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -50,7 +50,7 @@ typedef union {
>  
>  /* Reuses the bits in struct page */
>  struct slab {
> -	unsigned long __page_flags;
> +	unsigned long flags;
>  
>  	struct kmem_cache *slab_cache;
>  	union {
> @@ -99,7 +99,7 @@ struct slab {
>  
>  #define SLAB_MATCH(pg, sl)						\
>  	static_assert(offsetof(struct page, pg) == offsetof(struct slab, sl))
> -SLAB_MATCH(flags, __page_flags);
> +SLAB_MATCH(flags, flags);
>  SLAB_MATCH(compound_head, slab_cache);	/* Ensure bit 0 is clear */
>  SLAB_MATCH(_refcount, __page_refcount);
>  #ifdef CONFIG_MEMCG
> @@ -113,6 +113,18 @@ static_assert(sizeof(struct slab) <= sizeof(struct page));
>  static_assert(IS_ALIGNED(offsetof(struct slab, freelist), sizeof(freelist_aba_t)));
>  #endif
>  
> +/**
> + * enum slab_flags - How the slab flags bits are used.
> + * @SL_locked: Is locked with slab_lock()
> + *
> + * The slab flags share space with the page flags but some bits have
> + * different interpretations.  The high bits are used for information
> + * like zone/node/section.
> + */
> +enum slab_flags {
> +	SL_locked,
> +};

I think we need to make sure SL_locked uses the same bit as PG_locked,
at least for now?

I'm not sure what prevents the MM code from checking the page flags on a
slab and getting confused when the SL_locked bit is set.

Or the other way around: a slab might have a page flag (e.g., PG_head)
set, and slab code might mistakenly interpret it as SL_locked.
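
A minimal guard, if the two are in fact meant to alias (just a sketch,
in the spirit of the SLAB_MATCH() asserts):

	static_assert(SL_locked == PG_locked,
		      "slab lock bit must alias the page lock bit");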

> +
>  /**
>   * folio_slab - Converts from folio to slab.
>   * @folio: The folio.
> diff --git a/mm/slub.c b/mm/slub.c
> index 31e11ef256f9..e9cbacee406d 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -639,12 +639,12 @@ static inline unsigned int slub_get_cpu_partial(struct kmem_cache *s)
>   */
>  static __always_inline void slab_lock(struct slab *slab)
>  {
> -	bit_spin_lock(PG_locked, &slab->__page_flags);
> +	bit_spin_lock(SL_locked, &slab->flags);
>  }
>  
>  static __always_inline void slab_unlock(struct slab *slab)
>  {
> -	bit_spin_unlock(PG_locked, &slab->__page_flags);
> +	bit_spin_unlock(SL_locked, &slab->flags);
>  }
>  
>  static inline bool
> @@ -1010,7 +1010,7 @@ static void print_slab_info(const struct slab *slab)
>  {
>  	pr_err("Slab 0x%p objects=%u used=%u fp=0x%p flags=%pGp\n",
>  	       slab, slab->objects, slab->inuse, slab->freelist,
> -	       &slab->__page_flags);
> +	       &slab->flags);
>  }
>  
>  void skip_orig_size_check(struct kmem_cache *s, const void *object)
> -- 
> 2.47.2
> 
> 

-- 
Cheers,
Harry / Hyeonggon



* Re: [PATCH 03/10] slab: Add SL_private flag
  2025-06-06 22:22 ` [PATCH 03/10] slab: Add SL_private flag Matthew Wilcox (Oracle)
@ 2025-06-09  2:25   ` Harry Yoo
  0 siblings, 0 replies; 31+ messages in thread
From: Harry Yoo @ 2025-06-09  2:25 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle)
  Cc: Vlastimil Babka, Christoph Lameter, David Rientjes, linux-mm

On Fri, Jun 06, 2025 at 11:22:05PM +0100, Matthew Wilcox (Oracle) wrote:
> Give slab its own name for this flag.  Keep the PG_workingset alias
> information in one place.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---

nit: subject should be "slab: Add SL_partial flag"

Otherwise looks good to me.

Acked-by: Harry Yoo <harry.yoo@oracle.com>

-- 
Cheers,
Harry / Hyeonggon

>  mm/slab.h |  2 ++
>  mm/slub.c | 20 ++++++++------------
>  2 files changed, 10 insertions(+), 12 deletions(-)
> 
> diff --git a/mm/slab.h b/mm/slab.h
> index a25f12244b6c..fca818011f7d 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -116,6 +116,7 @@ static_assert(IS_ALIGNED(offsetof(struct slab, freelist), sizeof(freelist_aba_t)
>  /**
>   * enum slab_flags - How the slab flags bits are used.
>   * @SL_locked: Is locked with slab_lock()
> + * @SL_partial: On the per-node partial list
>   *
>   * The slab flags share space with the page flags but some bits have
>   * different interpretations.  The high bits are used for information
> @@ -123,6 +124,7 @@ static_assert(IS_ALIGNED(offsetof(struct slab, freelist), sizeof(freelist_aba_t)
>   */
>  enum slab_flags {
>  	SL_locked,
> +	SL_partial = PG_workingset,	/* Historical reasons for this bit */
>  };
>  
>  /**
> diff --git a/mm/slub.c b/mm/slub.c
> index e9cbacee406d..804b39d06fa0 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -91,14 +91,14 @@
>   *   The partially empty slabs cached on the CPU partial list are used
>   *   for performance reasons, which speeds up the allocation process.
>   *   These slabs are not frozen, but are also exempt from list management,
> - *   by clearing the PG_workingset flag when moving out of the node
> + *   by clearing the SL_partial flag when moving out of the node
>   *   partial list. Please see __slab_free() for more details.
>   *
>   *   To sum up, the current scheme is:
> - *   - node partial slab: PG_Workingset && !frozen
> - *   - cpu partial slab: !PG_Workingset && !frozen
> - *   - cpu slab: !PG_Workingset && frozen
> - *   - full slab: !PG_Workingset && !frozen
> + *   - node partial slab: SL_partial && !frozen
> + *   - cpu partial slab: !SL_partial && !frozen
> + *   - cpu slab: !SL_partial && frozen
> + *   - full slab: !SL_partial && !frozen
>   *
>   *   list_lock
>   *
> @@ -2717,23 +2717,19 @@ static void discard_slab(struct kmem_cache *s, struct slab *slab)
>  	free_slab(s, slab);
>  }
>  
> -/*
> - * SLUB reuses PG_workingset bit to keep track of whether it's on
> - * the per-node partial list.
> - */
>  static inline bool slab_test_node_partial(const struct slab *slab)
>  {
> -	return folio_test_workingset(slab_folio(slab));
> +	return test_bit(SL_partial, &slab->flags);
>  }
>  
>  static inline void slab_set_node_partial(struct slab *slab)
>  {
> -	set_bit(PG_workingset, folio_flags(slab_folio(slab), 0));
> +	set_bit(SL_partial, &slab->flags);
>  }
>  
>  static inline void slab_clear_node_partial(struct slab *slab)
>  {
> -	clear_bit(PG_workingset, folio_flags(slab_folio(slab), 0));
> +	clear_bit(SL_partial, &slab->flags);
>  }
>  
>  /*
> -- 
> 2.47.2
> 
> 



* Re: [PATCH 04/10] slab: Add SL_pfmemalloc flag
  2025-06-06 22:22 ` [PATCH 04/10] slab: Add SL_pfmemalloc flag Matthew Wilcox (Oracle)
@ 2025-06-09  2:27   ` Harry Yoo
  0 siblings, 0 replies; 31+ messages in thread
From: Harry Yoo @ 2025-06-09  2:27 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle)
  Cc: Vlastimil Babka, Christoph Lameter, David Rientjes, linux-mm

On Fri, Jun 06, 2025 at 11:22:06PM +0100, Matthew Wilcox (Oracle) wrote:
> Give slab its own name for this flag.  Move the implementation from
> slab.h to slub.c since it's only used inside slub.c.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---

Acked-by: Harry Yoo <harry.yoo@oracle.com>

-- 
Cheers,
Harry / Hyeonggon

>  mm/slab.h | 26 ++------------------------
>  mm/slub.c | 19 +++++++++++++++++++
>  2 files changed, 21 insertions(+), 24 deletions(-)
> 
> diff --git a/mm/slab.h b/mm/slab.h
> index fca818011f7d..aa991b1b059d 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -117,6 +117,7 @@ static_assert(IS_ALIGNED(offsetof(struct slab, freelist), sizeof(freelist_aba_t)
>   * enum slab_flags - How the slab flags bits are used.
>   * @SL_locked: Is locked with slab_lock()
>   * @SL_partial: On the per-node partial list
> + * @SL_pfmemalloc: Was allocated from PF_MEMALLOC reserves
>   *
>   * The slab flags share space with the page flags but some bits have
>   * different interpretations.  The high bits are used for information
> @@ -125,6 +126,7 @@ static_assert(IS_ALIGNED(offsetof(struct slab, freelist), sizeof(freelist_aba_t)
>  enum slab_flags {
>  	SL_locked,
>  	SL_partial = PG_workingset,	/* Historical reasons for this bit */
> +	SL_pfmemalloc = PG_active,	/* Historical reasons for this bit */
>  };
>  
>  /**
> @@ -181,30 +183,6 @@ enum slab_flags {
>   */
>  #define slab_page(s) folio_page(slab_folio(s), 0)
>  
> -/*
> - * If network-based swap is enabled, sl*b must keep track of whether pages
> - * were allocated from pfmemalloc reserves.
> - */
> -static inline bool slab_test_pfmemalloc(const struct slab *slab)
> -{
> -	return folio_test_active(slab_folio(slab));
> -}
> -
> -static inline void slab_set_pfmemalloc(struct slab *slab)
> -{
> -	folio_set_active(slab_folio(slab));
> -}
> -
> -static inline void slab_clear_pfmemalloc(struct slab *slab)
> -{
> -	folio_clear_active(slab_folio(slab));
> -}
> -
> -static inline void __slab_clear_pfmemalloc(struct slab *slab)
> -{
> -	__folio_clear_active(slab_folio(slab));
> -}
> -
>  static inline void *slab_address(const struct slab *slab)
>  {
>  	return folio_address(slab_folio(slab));
> diff --git a/mm/slub.c b/mm/slub.c
> index 804b39d06fa0..bbd96431a50a 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -634,6 +634,25 @@ static inline unsigned int slub_get_cpu_partial(struct kmem_cache *s)
>  }
>  #endif /* CONFIG_SLUB_CPU_PARTIAL */
>  
> +/*
> + * If network-based swap is enabled, slub must keep track of whether
> + * memory was allocated from pfmemalloc reserves.
> + */
> +static inline bool slab_test_pfmemalloc(const struct slab *slab)
> +{
> +	return test_bit(SL_pfmemalloc, &slab->flags);
> +}
> +
> +static inline void slab_set_pfmemalloc(struct slab *slab)
> +{
> +	set_bit(SL_pfmemalloc, &slab->flags);
> +}
> +
> +static inline void __slab_clear_pfmemalloc(struct slab *slab)
> +{
> +	__clear_bit(SL_pfmemalloc, &slab->flags);
> +}
> +
>  /*
>   * Per slab locking using the pagelock
>   */
> -- 
> 2.47.2
> 
> 



* Re: [PATCH 05/10] doc: Add slab internal kernel-doc
  2025-06-06 22:22 ` [PATCH 05/10] doc: Add slab internal kernel-doc Matthew Wilcox (Oracle)
@ 2025-06-09  2:37   ` Harry Yoo
  2025-06-09 15:22     ` Matthew Wilcox
  0 siblings, 1 reply; 31+ messages in thread
From: Harry Yoo @ 2025-06-09  2:37 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle)
  Cc: Vlastimil Babka, Christoph Lameter, David Rientjes, linux-mm

On Fri, Jun 06, 2025 at 11:22:07PM +0100, Matthew Wilcox (Oracle) wrote:
> We don't have much real internal documentation to extract yet, but
> let's make sure that what we do have is available.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---

Acked-by: Harry Yoo <harry.yoo@oracle.com>

Yeah, we don't have much internal documentation... except some comments.

Or perhaps we could turn the top comment in mm/slub.c into a DOC:
section and expose it in the documentation?
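
Something like this, I mean (illustrative):

	/**
	 * DOC: SLUB internals
	 *
	 * (the existing big comment at the top of mm/slub.c)
	 */

plus, in Documentation/mm/slab.rst:

	.. kernel-doc:: mm/slub.c
	   :doc: SLUB internals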

-- 
Cheers,
Harry / Hyeonggon

>  Documentation/mm/slab.rst | 7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/Documentation/mm/slab.rst b/Documentation/mm/slab.rst
> index 87d5a5bb172f..2bcc58ada302 100644
> --- a/Documentation/mm/slab.rst
> +++ b/Documentation/mm/slab.rst
> @@ -3,3 +3,10 @@
>  ===============
>  Slab Allocation
>  ===============
> +
> +Functions and structures
> +========================
> +
> +.. kernel-doc:: mm/slab.h
> +.. kernel-doc:: mm/slub.c
> +   :internal:
> -- 
> 2.47.2
> 
> 



* Re: [PATCH 06/10] vmcoreinfo: Remove documentation of PG_slab and PG_hugetlb
  2025-06-06 22:22 ` [PATCH 06/10] vmcoreinfo: Remove documentation of PG_slab and PG_hugetlb Matthew Wilcox (Oracle)
@ 2025-06-09  2:44   ` Harry Yoo
  0 siblings, 0 replies; 31+ messages in thread
From: Harry Yoo @ 2025-06-09  2:44 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle)
  Cc: Vlastimil Babka, Christoph Lameter, David Rientjes, linux-mm

On Fri, Jun 06, 2025 at 11:22:08PM +0100, Matthew Wilcox (Oracle) wrote:
> The changes to kernel/vmcore_info.c were sadly not reflected in the
> documentation.  Rectify that for both of these flags, and add
> PAGE_UNACCEPTED_MAPCOUNT_VALUE.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---

Acked-by: Harry Yoo <harry.yoo@oracle.com>

>  Documentation/admin-guide/kdump/vmcoreinfo.rst | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/Documentation/admin-guide/kdump/vmcoreinfo.rst b/Documentation/admin-guide/kdump/vmcoreinfo.rst
> index 8cf4614385b7..56700d9a3e30 100644
> --- a/Documentation/admin-guide/kdump/vmcoreinfo.rst
> +++ b/Documentation/admin-guide/kdump/vmcoreinfo.rst
> @@ -325,14 +325,14 @@ NR_FREE_PAGES
>  On linux-2.6.21 or later, the number of free pages is in
>  vm_stat[NR_FREE_PAGES]. Used to get the number of free pages.
>  
> -PG_lru|PG_private|PG_swapcache|PG_swapbacked|PG_slab|PG_hwpoision|PG_head_mask|PG_hugetlb
> ------------------------------------------------------------------------------------------
> +PG_lru|PG_private|PG_swapcache|PG_swapbacked|PG_hwpoision|PG_head_mask
> +--------------------------------------------------------------------------

nit: maybe you want to fix PG_hwpoision typo as well!

>  
>  Page attributes. These flags are used to filter various unnecessary for
>  dumping pages.
>  
> -PAGE_BUDDY_MAPCOUNT_VALUE(~PG_buddy)|PAGE_OFFLINE_MAPCOUNT_VALUE(~PG_offline)|PAGE_OFFLINE_MAPCOUNT_VALUE(~PG_unaccepted)
> --------------------------------------------------------------------------------------------------------------------------
> +PAGE_SLAB_MAPCOUNT_VALUE|PAGE_BUDDY_MAPCOUNT_VALUE|PAGE_OFFLINE_MAPCOUNT_VALUE|PAGE_HUGETLB_MAPCOUNT_VALUE|PAGE_UNACCEPTED_MAPCOUNT_VALUE
> +------------------------------------------------------------------------------------------------------------------------------------------
>  
>  More page attributes. These flags are used to filter various unnecessary for
>  dumping pages.
> -- 
> 2.47.2
> 
> 

-- 
Cheers,
Harry / Hyeonggon



* Re: [PATCH 09/10] memcg_slabinfo: Fix use of PG_slab
  2025-06-06 22:22 ` [PATCH 09/10] memcg_slabinfo: Fix use " Matthew Wilcox (Oracle)
@ 2025-06-09  3:08   ` Harry Yoo
  0 siblings, 0 replies; 31+ messages in thread
From: Harry Yoo @ 2025-06-09  3:08 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle)
  Cc: Vlastimil Babka, Christoph Lameter, David Rientjes, linux-mm,
	Roman Gushchin

On Fri, Jun 06, 2025 at 11:22:11PM +0100, Matthew Wilcox (Oracle) wrote:
> Check PGTY_slab instead of PG_slab.
> 
> Fixes: 4ffca5a96678 ("mm: support only one page_type per page")
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Tested-by: Roman Gushchin <roman.gushchin@linux.dev>
> Reviewed-by: Roman Gushchin <roman.gushchin@linux.dev>
> ---

Reviewed-by: Harry Yoo <harry.yoo@oracle.com>

>  tools/cgroup/memcg_slabinfo.py | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/cgroup/memcg_slabinfo.py b/tools/cgroup/memcg_slabinfo.py
> index 270c28a0d098..6bf4bde77903 100644
> --- a/tools/cgroup/memcg_slabinfo.py
> +++ b/tools/cgroup/memcg_slabinfo.py
> @@ -146,11 +146,11 @@ def detect_kernel_config():
>  
>  
>  def for_each_slab(prog):
> -    PGSlab = ~prog.constant('PG_slab')
> +    slabtype = prog.constant('PGTY_slab')
>  
>      for page in for_each_page(prog):
>          try:
> -            if page.page_type.value_() == PGSlab:
> +            if (page.page_type.value_() >> 24) == slabtype:
>                  yield cast('struct slab *', page)
>          except FaultError:
>              pass
> -- 
> 2.47.2
> 
> 

-- 
Cheers,
Harry / Hyeonggon



* Re: [PATCH 10/10] slab: Fix MAINTAINERS entry
  2025-06-06 22:22 ` [PATCH 10/10] slab: Fix MAINTAINERS entry Matthew Wilcox (Oracle)
@ 2025-06-09  3:21   ` Harry Yoo
  2025-06-09 13:38   ` Vlastimil Babka
  2025-06-09 13:59   ` Lorenzo Stoakes
  2 siblings, 0 replies; 31+ messages in thread
From: Harry Yoo @ 2025-06-09  3:21 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle)
  Cc: Vlastimil Babka, Christoph Lameter, David Rientjes, linux-mm

On Fri, Jun 06, 2025 at 11:22:12PM +0100, Matthew Wilcox (Oracle) wrote:
> Add the two Documentation files to the maintainers entry.
> Simplify the match for header files now that there is only slab.h.
> Move Vlastimil to the top of the list of maintainers since he's
> the one doing the pull requests.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---

Sounds fair to me.

Acked-by: Harry Yoo <harry.yoo@oracle.com>

-- 
Cheers,
Harry / Hyeonggon

>  MAINTAINERS | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 2f13e1602ae6..daf3bed30a20 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -22787,16 +22787,18 @@ F:	Documentation/devicetree/bindings/nvmem/layouts/kontron,sl28-vpd.yaml
>  F:	drivers/nvmem/layouts/sl28vpd.c
>  
>  SLAB ALLOCATOR
> +M:	Vlastimil Babka <vbabka@suse.cz>
>  M:	Christoph Lameter <cl@gentwo.org>
>  M:	David Rientjes <rientjes@google.com>
>  M:	Andrew Morton <akpm@linux-foundation.org>
> -M:	Vlastimil Babka <vbabka@suse.cz>
>  R:	Roman Gushchin <roman.gushchin@linux.dev>
>  R:	Harry Yoo <harry.yoo@oracle.com>
>  L:	linux-mm@kvack.org
>  S:	Maintained
>  T:	git git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab.git
> -F:	include/linux/sl?b*.h
> +F:	Documentation/admin-guide/mm/slab.rst
> +F:	Documentation/mm/slab.rst
> +F:	include/linux/slab.h
>  F:	mm/sl?b*
>  
>  SLCAN CAN NETWORK DRIVER
> -- 
> 2.47.2
> 
> 



* Re: [PATCH 08/10] kfence: Remove mention of PG_slab
  2025-06-06 22:22 ` [PATCH 08/10] kfence: " Matthew Wilcox (Oracle)
@ 2025-06-09  3:42   ` Harry Yoo
  2025-06-09 13:33   ` Vlastimil Babka
  1 sibling, 0 replies; 31+ messages in thread
From: Harry Yoo @ 2025-06-09  3:42 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle)
  Cc: Vlastimil Babka, Christoph Lameter, David Rientjes, linux-mm

On Fri, Jun 06, 2025 at 11:22:10PM +0100, Matthew Wilcox (Oracle) wrote:
> Improve the documentation slightly, assuming I understood it correctly.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---

Acked-by: Harry Yoo <harry.yoo@oracle.com>

>  mm/kfence/core.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/kfence/core.c b/mm/kfence/core.c
> index 102048821c22..0ed3be100963 100644
> --- a/mm/kfence/core.c
> +++ b/mm/kfence/core.c
> @@ -605,8 +605,8 @@ static unsigned long kfence_init_pool(void)
>  	pages = virt_to_page(__kfence_pool);
>  
>  	/*
> -	 * Set up object pages: they must have PG_slab set, to avoid freeing
> -	 * these as real pages.
> +	 * Set up object pages: they must have PGTY_slab set to avoid freeing
> +	 * them as real pages.
>  	 *
>  	 * We also want to avoid inserting kfence_free() in the kfree()
>  	 * fast-path in SLUB, and therefore need to ensure kfree() correctly
> -- 
> 2.47.2
> 
> 

-- 
Cheers,
Harry / Hyeonggon



* Re: [PATCH 01/10] doc: Move SLUB documentation to the admin guide
  2025-06-06 22:22 ` [PATCH 01/10] doc: Move SLUB documentation to the admin guide Matthew Wilcox (Oracle)
  2025-06-09  1:42   ` Harry Yoo
@ 2025-06-09 12:13   ` Vlastimil Babka
  1 sibling, 0 replies; 31+ messages in thread
From: Vlastimil Babka @ 2025-06-09 12:13 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle)
  Cc: Christoph Lameter, David Rientjes, linux-mm, Harry Yoo

On 6/7/25 00:22, Matthew Wilcox (Oracle) wrote:
> This section is supposed to be for internal documentation, while the
> document is advice for sysadmins.  Move it to the appropriate place.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

Makes sense.

> --- a/Documentation/mm/slub.rst
> +++ b/Documentation/admin-guide/mm/slab.rst
> @@ -1,10 +1,8 @@
>  ==========================
> -Short users guide for SLUB
> +Short users guide for SLAB

Agreed with Harry. We could say "for the slab allocator".
Ideally all SLUB references in user-facing text would be removed at some
point, but we can get there later.

>  ==========================
>  
> -The basic philosophy of SLUB is very different from SLAB. SLAB
> -requires rebuilding the kernel to activate debug options for all
> -slab caches. SLUB always includes full debugging but it is off by default.
> +SLUB always includes full debugging but it is off by default.

How about:

The slab allocator includes full debugging support (when built with
CONFIG_SLUB_DEBUG=y) but it is off by default (unless built with
CONFIG_SLUB_DEBUG_ON=y).

>  SLUB can enable debugging only for selected slabs in order to avoid
>  an impact on overall system performance which may make a bug more
>  difficult to find.
> diff --git a/Documentation/mm/index.rst b/Documentation/mm/index.rst
> index d3ada3e45e10..fb45acba16ac 100644
> --- a/Documentation/mm/index.rst
> +++ b/Documentation/mm/index.rst
> @@ -56,7 +56,6 @@ documentation, or deleted if it has served its purpose.
>     page_owner
>     page_table_check
>     remap_file_pages
> -   slub
>     split_page_table_lock
>     transhuge
>     unevictable-lru




* Re: [PATCH 02/10] slab: Rename slab->__page_flags to slab->flags
  2025-06-09  2:15   ` Harry Yoo
@ 2025-06-09 12:45     ` Matthew Wilcox
  0 siblings, 0 replies; 31+ messages in thread
From: Matthew Wilcox @ 2025-06-09 12:45 UTC (permalink / raw)
  To: Harry Yoo; +Cc: Vlastimil Babka, Christoph Lameter, David Rientjes, linux-mm

On Mon, Jun 09, 2025 at 11:15:07AM +0900, Harry Yoo wrote:
> > +/**
> > + * enum slab_flags - How the slab flags bits are used.
> > + * @SL_locked: Is locked with slab_lock()
> > + *
> > + * The slab flags share space with the page flags but some bits have
> > + * different interpretations.  The high bits are used for information
> > + * like zone/node/section.
> > + */
> > +enum slab_flags {
> > +	SL_locked,
> > +};
> 
> I think we need to make sure SL_locked uses the same bit as PG_locked,
> at least for now?
> 
> I'm not sure what prevents the MM code from checking the page flags on
> a slab and getting confused when the SL_locked bit is set.

The lock bit is actually used very differently between page lock and
slab lock.  If there is any code inadvertently checking one when it's
looking for the other, it's already broken.  I can't think of anywhere
which would look for it; we're pretty fastidious about checking for
is_slab() before going anywhere near the lock bit.

> Or the other way around: a slab might have a page flag (e.g., PG_head)
> set, and slab code might mistakenly interpret it as SL_locked.

PG_head is a good one, but I don't think we're going to run into it
before we finish the page diet.




* Re: [PATCH 02/10] slab: Rename slab->__page_flags to slab->flags
  2025-06-06 22:22 ` [PATCH 02/10] slab: Rename slab->__page_flags to slab->flags Matthew Wilcox (Oracle)
  2025-06-09  2:15   ` Harry Yoo
@ 2025-06-09 13:12   ` Vlastimil Babka
  1 sibling, 0 replies; 31+ messages in thread
From: Vlastimil Babka @ 2025-06-09 13:12 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle)
  Cc: Christoph Lameter, David Rientjes, linux-mm, Harry Yoo

On 6/7/25 00:22, Matthew Wilcox (Oracle) wrote:
> Slab has its own reasons for using flag bits; they aren't just
> the page bits.  Maybe this won't be the ultimate solution, but
> we should be clear that these bits are in use.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  mm/slab.h | 16 ++++++++++++++--
>  mm/slub.c |  6 +++---
>  2 files changed, 17 insertions(+), 5 deletions(-)
> 
> diff --git a/mm/slab.h b/mm/slab.h
> index 05a21dc796e0..a25f12244b6c 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -50,7 +50,7 @@ typedef union {
>  
>  /* Reuses the bits in struct page */
>  struct slab {
> -	unsigned long __page_flags;
> +	unsigned long flags;
>  
>  	struct kmem_cache *slab_cache;
>  	union {
> @@ -99,7 +99,7 @@ struct slab {
>  
>  #define SLAB_MATCH(pg, sl)						\
>  	static_assert(offsetof(struct page, pg) == offsetof(struct slab, sl))
> -SLAB_MATCH(flags, __page_flags);
> +SLAB_MATCH(flags, flags);
>  SLAB_MATCH(compound_head, slab_cache);	/* Ensure bit 0 is clear */
>  SLAB_MATCH(_refcount, __page_refcount);
>  #ifdef CONFIG_MEMCG
> @@ -113,6 +113,18 @@ static_assert(sizeof(struct slab) <= sizeof(struct page));
>  static_assert(IS_ALIGNED(offsetof(struct slab, freelist), sizeof(freelist_aba_t)));
>  #endif
>  
> +/**
> + * enum slab_flags - How the slab flags bits are used.
> + * @SL_locked: Is locked with slab_lock()
> + *
> + * The slab flags share space with the page flags but some bits have
> + * different interpretations.  The high bits are used for information
> + * like zone/node/section.
> + */
> +enum slab_flags {
> +	SL_locked,

Given how in patch 3 you do SL_partial = PG_workingset, we could just
use PG_locked here too, as a flag known to be safe. I've read your
discussion with Harry but I'd simply do that for now?
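
Something like this, maybe (untested sketch; assumes PG_workingset is
otherwise unused on slab pages, per patch 3):

	enum slab_flags {
		SL_locked = PG_locked,		/* bit known to be safe to reuse */
		SL_partial = PG_workingset,	/* as in patch 3 */
	};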

Also I think the whole enum could be moved to mm/slub.c? Patch 4 adds
SL_pfmemalloc but also moves all its uses to mm/slub.c.

> +};
> +
>  /**
>   * folio_slab - Converts from folio to slab.
>   * @folio: The folio.
> diff --git a/mm/slub.c b/mm/slub.c
> index 31e11ef256f9..e9cbacee406d 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -639,12 +639,12 @@ static inline unsigned int slub_get_cpu_partial(struct kmem_cache *s)
>   */
>  static __always_inline void slab_lock(struct slab *slab)
>  {
> -	bit_spin_lock(PG_locked, &slab->__page_flags);
> +	bit_spin_lock(SL_locked, &slab->flags);
>  }
>  
>  static __always_inline void slab_unlock(struct slab *slab)
>  {
> -	bit_spin_unlock(PG_locked, &slab->__page_flags);
> +	bit_spin_unlock(SL_locked, &slab->flags);
>  }
>  
>  static inline bool
> @@ -1010,7 +1010,7 @@ static void print_slab_info(const struct slab *slab)
>  {
>  	pr_err("Slab 0x%p objects=%u used=%u fp=0x%p flags=%pGp\n",
>  	       slab, slab->objects, slab->inuse, slab->freelist,
> -	       &slab->__page_flags);
> +	       &slab->flags);
>  }
>  
>  void skip_orig_size_check(struct kmem_cache *s, const void *object)



^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 08/10] kfence: Remove mention of PG_slab
  2025-06-06 22:22 ` [PATCH 08/10] kfence: " Matthew Wilcox (Oracle)
  2025-06-09  3:42   ` Harry Yoo
@ 2025-06-09 13:33   ` Vlastimil Babka
  2025-06-09 15:02     ` Matthew Wilcox
  1 sibling, 1 reply; 31+ messages in thread
From: Vlastimil Babka @ 2025-06-09 13:33 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle)
  Cc: Christoph Lameter, David Rientjes, linux-mm, Harry Yoo, kasan-dev,
	Alexander Potapenko, Marco Elver

On 6/7/25 00:22, Matthew Wilcox (Oracle) wrote:
> Improve the documentation slightly, assuming I understood it correctly.

Assuming I understood it correctly, this is going to be a fun part of
splitting struct slab from struct page. It gets __kfence_pool from the
memblock allocator and then makes the corresponding struct pages look
like slab pages. Maybe it will be possible to simplify things so it
won't have to allocate a struct slab for each page...
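
IIUC the relevant part of kfence_init_pool() looks roughly like this
(simplified from memory, details may be off):

	for (i = 0; i < KFENCE_POOL_SIZE / PAGE_SIZE; i++) {
		if (!i || (i % 2))
			continue;	/* guard page, stays untyped */
		/* object page: type it as slab so kfree() accepts it */
		__folio_set_slab(page_folio(nth_page(pages, i)));
	}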

> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  mm/kfence/core.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/kfence/core.c b/mm/kfence/core.c
> index 102048821c22..0ed3be100963 100644
> --- a/mm/kfence/core.c
> +++ b/mm/kfence/core.c
> @@ -605,8 +605,8 @@ static unsigned long kfence_init_pool(void)
>  	pages = virt_to_page(__kfence_pool);
>  
>  	/*
> -	 * Set up object pages: they must have PG_slab set, to avoid freeing
> -	 * these as real pages.
> +	 * Set up object pages: they must have PGTY_slab set to avoid freeing
> +	 * them as real pages.
>  	 *
>  	 * We also want to avoid inserting kfence_free() in the kfree()
>  	 * fast-path in SLUB, and therefore need to ensure kfree() correctly
	


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 10/10] slab: Fix MAINTAINERS entry
  2025-06-06 22:22 ` [PATCH 10/10] slab: Fix MAINTAINERS entry Matthew Wilcox (Oracle)
  2025-06-09  3:21   ` Harry Yoo
@ 2025-06-09 13:38   ` Vlastimil Babka
  2025-06-09 13:59   ` Lorenzo Stoakes
  2 siblings, 0 replies; 31+ messages in thread
From: Vlastimil Babka @ 2025-06-09 13:38 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle); +Cc: Christoph Lameter, David Rientjes, linux-mm

On 6/7/25 00:22, Matthew Wilcox (Oracle) wrote:
> Add the two Documentation files to the maintainers entry.
> Simplify the match for header files now that there is only slab.h.
> Move Vlastimil to the top of the list of maintainers since he's
> the one doing the pull requests.

Thanks :)

> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  MAINTAINERS | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 2f13e1602ae6..daf3bed30a20 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -22787,16 +22787,18 @@ F:	Documentation/devicetree/bindings/nvmem/layouts/kontron,sl28-vpd.yaml
>  F:	drivers/nvmem/layouts/sl28vpd.c
>  
>  SLAB ALLOCATOR
> +M:	Vlastimil Babka <vbabka@suse.cz>
>  M:	Christoph Lameter <cl@gentwo.org>
>  M:	David Rientjes <rientjes@google.com>
>  M:	Andrew Morton <akpm@linux-foundation.org>
> -M:	Vlastimil Babka <vbabka@suse.cz>
>  R:	Roman Gushchin <roman.gushchin@linux.dev>
>  R:	Harry Yoo <harry.yoo@oracle.com>
>  L:	linux-mm@kvack.org
>  S:	Maintained
>  T:	git git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab.git
> -F:	include/linux/sl?b*.h
> +F:	Documentation/admin-guide/mm/slab.rst
> +F:	Documentation/mm/slab.rst
> +F:	include/linux/slab.h
>  F:	mm/sl?b*

While at it, can we expand this too? It's matching just 3 files now.
Tools don't care, but if anyone greps MAINTAINERS manually for "slub"
they would be able to find something.
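
E.g. something like this (assuming the glob currently matches only
mm/slab.h, mm/slab_common.c and mm/slub.c):

	-F:	mm/sl?b*
	+F:	mm/slab.h
	+F:	mm/slab_common.c
	+F:	mm/slub.c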

For the patches I didn't reply to, they LGTM.

>  
>  SLCAN CAN NETWORK DRIVER



^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 10/10] slab: Fix MAINTAINERS entry
  2025-06-06 22:22 ` [PATCH 10/10] slab: Fix MAINTAINERS entry Matthew Wilcox (Oracle)
  2025-06-09  3:21   ` Harry Yoo
  2025-06-09 13:38   ` Vlastimil Babka
@ 2025-06-09 13:59   ` Lorenzo Stoakes
  2025-06-09 16:42     ` Christoph Lameter (Ampere)
  2 siblings, 1 reply; 31+ messages in thread
From: Lorenzo Stoakes @ 2025-06-09 13:59 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle)
  Cc: Vlastimil Babka, Christoph Lameter, David Rientjes, linux-mm,
	Andrew Morton, Harry Yoo

+cc Harry, Andrew

On Fri, Jun 06, 2025 at 11:22:12PM +0100, Matthew Wilcox (Oracle) wrote:
> Add the two Documentation files to the maintainers entry.
> Simplify the match for header files now that there is only slab.h.
> Move Vlastimil to the top of the list of maintainers since he's
> the one doing the pull requests.

With absolutely no disrespect intended to Christoph or David, both of whom
are fantastically talented kernel developers whom I admire - I feel like we
need to go further here.

I raise this point as I have been heavily involved in actioning the path
decided upon by the community at LSF/MM with regard to ensuring we have
accurate entries for each part of mm.

As part of this effort, it strikes me that the slab section is simply now
inaccurate.

Informally, as discussed at LSF/MM, the broad perception is that a
maintainer MUST be actively involved, whereas a reviewer has expertise and
MAY optionally get involved in the process.

While Christoph and David have both been heavily involved in slab
development in the past, and are both clearly experts and (rightly) VERY
well regarded by the community, it seems clear to me that they are not
maintaining this code, but rather act as reviewers.

So I propose we go further here and update this section to reflect this
reality.

I base this on the fact that Christoph has made 2 patches (other than
updating email) since 2020 and David hasn't contributed patches since 2016
(please forgive me if I missed anything, this was a _rough_ git log check).

Whereas Vlastimil has essentially handled _all_ slab maintainership duties
for a very long time.

According to the maintainer handbook [0]:

	"That said, being a maintainer is an active role. The MAINTAINERS
	file is not a list of credits (in fact a separate CREDITS file
	exists), it is a list of those who will actively help with the
	code. If the author does not have the time, interest or ability to
	maintain the code, a different maintainer must be selected."

[0]: https://docs.kernel.org/maintainer/feature-and-driver-maintainers.html

Again, to be absolutely clear - this is not intended to be disrespectful in
the slightest but rather is a functional thing - we ought to ensure
MAINTAINERS is as accurate as we can make it.

Thanks, Lorenzo

>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  MAINTAINERS | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 2f13e1602ae6..daf3bed30a20 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -22787,16 +22787,18 @@ F:	Documentation/devicetree/bindings/nvmem/layouts/kontron,sl28-vpd.yaml
>  F:	drivers/nvmem/layouts/sl28vpd.c
>
>  SLAB ALLOCATOR
> +M:	Vlastimil Babka <vbabka@suse.cz>
>  M:	Christoph Lameter <cl@gentwo.org>
>  M:	David Rientjes <rientjes@google.com>
>  M:	Andrew Morton <akpm@linux-foundation.org>
> -M:	Vlastimil Babka <vbabka@suse.cz>
>  R:	Roman Gushchin <roman.gushchin@linux.dev>
>  R:	Harry Yoo <harry.yoo@oracle.com>
>  L:	linux-mm@kvack.org
>  S:	Maintained
>  T:	git git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab.git
> -F:	include/linux/sl?b*.h
> +F:	Documentation/admin-guide/mm/slab.rst
> +F:	Documentation/mm/slab.rst
> +F:	include/linux/slab.h
>  F:	mm/sl?b*
>
>  SLCAN CAN NETWORK DRIVER
> --
> 2.47.2
>
>
>


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 08/10] kfence: Remove mention of PG_slab
  2025-06-09 13:33   ` Vlastimil Babka
@ 2025-06-09 15:02     ` Matthew Wilcox
  2025-06-10 13:23       ` Marco Elver
  0 siblings, 1 reply; 31+ messages in thread
From: Matthew Wilcox @ 2025-06-09 15:02 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Christoph Lameter, David Rientjes, linux-mm, Harry Yoo, kasan-dev,
	Alexander Potapenko, Marco Elver, Dmitry Vyukov

On Mon, Jun 09, 2025 at 03:33:41PM +0200, Vlastimil Babka wrote:
> On 6/7/25 00:22, Matthew Wilcox (Oracle) wrote:
> > Improve the documentation slightly, assuming I understood it correctly.
> 
> Assuming I understood it correctly, this is going to be fun part of
> splitting struct slab from struct page. It gets __kfence_pool from memblock
> allocator and then makes the corresponding struct pages look like slab
> pages. Maybe it will be possible to simplify things so it won't have to
> allocate struct slab for each page...

I've been looking at this and I'm not sure I understand it correctly
either.  Perhaps the kfence people can weigh in.  It seems like the
kfence pages are being marked as slab pages, but not being assigned to
any particular slab cache?

Perhaps the right thing to do would be to allocate slabs for kfence
objects.  Or kfence objects could get their own memdesc type.  It's hard to
say at this point.  My plan was to disable kfence (along with almost
everything else) when CONFIG_PAGE_DIET is enabled, and then someone
who understands what's going on can come in and do the necessary work
to re-enable it.

> > Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> > ---
> >  mm/kfence/core.c | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> > 
> > diff --git a/mm/kfence/core.c b/mm/kfence/core.c
> > index 102048821c22..0ed3be100963 100644
> > --- a/mm/kfence/core.c
> > +++ b/mm/kfence/core.c
> > @@ -605,8 +605,8 @@ static unsigned long kfence_init_pool(void)
> >  	pages = virt_to_page(__kfence_pool);
> >  
> >  	/*
> > -	 * Set up object pages: they must have PG_slab set, to avoid freeing
> > -	 * these as real pages.
> > +	 * Set up object pages: they must have PGTY_slab set to avoid freeing
> > +	 * them as real pages.
> >  	 *
> >  	 * We also want to avoid inserting kfence_free() in the kfree()
> >  	 * fast-path in SLUB, and therefore need to ensure kfree() correctly
> 	


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 05/10] doc: Add slab internal kernel-doc
  2025-06-09  2:37   ` Harry Yoo
@ 2025-06-09 15:22     ` Matthew Wilcox
  0 siblings, 0 replies; 31+ messages in thread
From: Matthew Wilcox @ 2025-06-09 15:22 UTC (permalink / raw)
  To: Harry Yoo; +Cc: Vlastimil Babka, Christoph Lameter, David Rientjes, linux-mm

On Mon, Jun 09, 2025 at 11:37:21AM +0900, Harry Yoo wrote:
> On Fri, Jun 06, 2025 at 11:22:07PM +0100, Matthew Wilcox (Oracle) wrote:
> > We don't have much real internal documentation to extract yet, but
> > let's make sure that what we do have is available.
> > 
> > Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> > ---
> 
> Acked-by: Harry Yoo <harry.yoo@oracle.com>
> 
> Yeah, we don't have much internal documentation... except some comments.
> 
> Or probably we can turn the top comment in mm/slub.c into DOC: and
> expose it to the documentation?

Sounds like a good follow-up patch for somebody not me ;-)


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 10/10] slab: Fix MAINTAINERS entry
  2025-06-09 13:59   ` Lorenzo Stoakes
@ 2025-06-09 16:42     ` Christoph Lameter (Ampere)
  2025-06-09 17:44       ` Matthew Wilcox
  0 siblings, 1 reply; 31+ messages in thread
From: Christoph Lameter (Ampere) @ 2025-06-09 16:42 UTC (permalink / raw)
  To: Lorenzo Stoakes
  Cc: Matthew Wilcox (Oracle), Vlastimil Babka, David Rientjes,
	linux-mm, Andrew Morton, Harry Yoo

On Mon, 9 Jun 2025, Lorenzo Stoakes wrote:

> Informally, as discussed at LSF/MM, the broad perception is that a
> maintainer MUST be actively involved, whereas a reviewer has expertise and
> MAY optionally get involved in the process.

Both Andrew and I are actively involved in various ways, but we have
other priorities as well. If there were an issue with maintainership,
we would address it.

> I base this on the fact that Christoph has made 2 patches (other than
> updating email) since 2020 and David hasn't contributed patches since 2016
> (please forgive me if I missed anything, this was a _rough_ git log check).

The two patches were in 2024 and there was one in 2020... The last
large-scale change I did to slab was in 2015. If you look at Andrew's
changes, it's a similar thing.

> Whereas Vlastimil has essentially handled _all_ slab maintainership duties
> for a very long time.

Yes, he is doing a great job and has done so for a long time. That is
one reason why we can spend more time on other things and limit
ourselves to mostly commenting here and there.

> According to the maintainer handbook [0]:
>
> 	"That said, being a maintainer is an active role. The MAINTAINERS
> 	file is not a list of credits (in fact a separate CREDITS file
> 	exists), it is a list of those who will actively help with the
> 	code. If the author does not have the time, interest or ability to
> 	maintain the code, a different maintainer must be selected."

Well, both Andrew and I fit that role as maintainers.


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 10/10] slab: Fix MAINTAINERS entry
  2025-06-09 16:42     ` Christoph Lameter (Ampere)
@ 2025-06-09 17:44       ` Matthew Wilcox
  0 siblings, 0 replies; 31+ messages in thread
From: Matthew Wilcox @ 2025-06-09 17:44 UTC (permalink / raw)
  To: Christoph Lameter (Ampere)
  Cc: Lorenzo Stoakes, Vlastimil Babka, David Rientjes, linux-mm,
	Andrew Morton, Harry Yoo

On Mon, Jun 09, 2025 at 09:42:37AM -0700, Christoph Lameter (Ampere) wrote:
> On Mon, 9 Jun 2025, Lorenzo Stoakes wrote:
> > Informally, as discussed at LSF/MM, the broad perception is that a
> > maintainer MUST be actively involved, whereas a reviewer has expertise and
> > MAY optionally get involved in the process.
> 
> Both Andrew and I are actively involved in various ways but we have other
> priorities as well. If there would be an issue with maintainership then we
> would address the issue.

I mean, you're not being fired.  This is just an acknowledgement that
your current role in slab is as a reviewer.  People's roles change over
time and maybe your role will shift back into being primary maintainer
in the future.

From the point of view of someone who doesn't know the MM area at all,
Vlastimil is the maintainer, Andrew is his backup and you, David & Harry
have expertise in this area.  That's the current state of affairs and
we should document it as such.



^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 08/10] kfence: Remove mention of PG_slab
  2025-06-09 15:02     ` Matthew Wilcox
@ 2025-06-10 13:23       ` Marco Elver
  0 siblings, 0 replies; 31+ messages in thread
From: Marco Elver @ 2025-06-10 13:23 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Vlastimil Babka, Christoph Lameter, David Rientjes, linux-mm,
	Harry Yoo, kasan-dev, Alexander Potapenko, Dmitry Vyukov

On Mon, 9 Jun 2025 at 17:02, Matthew Wilcox <willy@infradead.org> wrote:
>
> On Mon, Jun 09, 2025 at 03:33:41PM +0200, Vlastimil Babka wrote:
> > On 6/7/25 00:22, Matthew Wilcox (Oracle) wrote:
> > > Improve the documentation slightly, assuming I understood it correctly.
> >
> > Assuming I understood it correctly, this is going to be fun part of
> > splitting struct slab from struct page. It gets __kfence_pool from memblock
> > allocator and then makes the corresponding struct pages look like slab
> > pages. Maybe it will be possible to simplify things so it won't have to
> > allocate struct slab for each page...
>
> I've been looking at this and I'm not sure I understand it correctly
> either.  Perhaps the kfence people can weigh in.  It seems like the
> kfence pages are being marked as slab pages, but not being assigned to
> any particular slab cache?

They are marked as slab pages because kfree() tests folio_test_slab()
and goes to free_large_kmalloc() if it's not a slab page. But the
kfence pool pages cannot be deallocated, given that we want to reuse
them due to the particular layout of the pool (object pages
interleaved with guard pages; freed pages are only marked not-present
to catch use-after-free).
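
For reference, the dispatch is roughly this (simplified from
mm/slub.c, from memory):

	void kfree(const void *object)
	{
		struct folio *folio;
		struct slab *slab;

		if (unlikely(ZERO_OR_NULL_PTR(object)))
			return;

		folio = virt_to_folio(object);
		if (unlikely(!folio_test_slab(folio))) {
			/*
			 * Not typed as slab: assumed to be a large kmalloc
			 * folio, so the underlying pages get freed. That
			 * must never happen to the kfence pool.
			 */
			free_large_kmalloc(folio, (void *)object);
			return;
		}

		slab = folio_slab(folio);
		slab_free(slab->slab_cache, slab, (void *)object, _RET_IP_);
	}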

But besides that they aren't real slab caches, given it's all managed
by kfence (each page can host at most 1 object, but those objects may
be of sizes up to PAGE_SIZE).

Some of this could be solved by adding more is_kfence_address()
checks, but that adds more branches in hot paths and more complexity
since some of the accounting and hooks are naturally shared with the
current design.
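
E.g. a hypothetical extra branch at the top of kfree(), which we'd
rather avoid:

	if (is_kfence_address(object)) {
		__kfence_free((void *)object);
		return;
	}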

> Perhaps the right thing to do will be to allocate slabs for kfence
> objects.  Or kfence objects get their own memdesc type.  It's hard to
> say at this point.  My plan was to disable kfence (along with almost
> everything else) when CONFIG_PAGE_DIET is enabled, and then someone
> who understands what's going on can come in and do the necessary to
> re-enable it.

Assuming it's not a new default, and if this new feature disables most
slab debugging anyway, this appears reasonable.

> > > Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> > > ---
> > >  mm/kfence/core.c | 4 ++--
> > >  1 file changed, 2 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/mm/kfence/core.c b/mm/kfence/core.c
> > > index 102048821c22..0ed3be100963 100644
> > > --- a/mm/kfence/core.c
> > > +++ b/mm/kfence/core.c
> > > @@ -605,8 +605,8 @@ static unsigned long kfence_init_pool(void)
> > >     pages = virt_to_page(__kfence_pool);
> > >
> > >     /*
> > > -    * Set up object pages: they must have PG_slab set, to avoid freeing
> > > -    * these as real pages.
> > > +    * Set up object pages: they must have PGTY_slab set to avoid freeing
> > > +    * them as real pages.

Acked-by: Marco Elver <elver@google.com>


^ permalink raw reply	[flat|nested] 31+ messages in thread

end of thread, other threads:[~2025-06-10 13:23 UTC | newest]

Thread overview: 31+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-06-06 22:22 [PATCH 00/10] Various slab improvements Matthew Wilcox (Oracle)
2025-06-06 22:22 ` [PATCH 01/10] doc: Move SLUB documentation to the admin guide Matthew Wilcox (Oracle)
2025-06-09  1:42   ` Harry Yoo
2025-06-09 12:13   ` Vlastimil Babka
2025-06-06 22:22 ` [PATCH 02/10] slab: Rename slab->__page_flags to slab->flags Matthew Wilcox (Oracle)
2025-06-09  2:15   ` Harry Yoo
2025-06-09 12:45     ` Matthew Wilcox
2025-06-09 13:12   ` Vlastimil Babka
2025-06-06 22:22 ` [PATCH 03/10] slab: Add SL_private flag Matthew Wilcox (Oracle)
2025-06-09  2:25   ` Harry Yoo
2025-06-06 22:22 ` [PATCH 04/10] slab: Add SL_pfmemalloc flag Matthew Wilcox (Oracle)
2025-06-09  2:27   ` Harry Yoo
2025-06-06 22:22 ` [PATCH 05/10] doc: Add slab internal kernel-doc Matthew Wilcox (Oracle)
2025-06-09  2:37   ` Harry Yoo
2025-06-09 15:22     ` Matthew Wilcox
2025-06-06 22:22 ` [PATCH 06/10] vmcoreinfo: Remove documentation of PG_slab and PG_hugetlb Matthew Wilcox (Oracle)
2025-06-09  2:44   ` Harry Yoo
2025-06-06 22:22 ` [PATCH 07/10] proc: Remove mention of PG_slab Matthew Wilcox (Oracle)
2025-06-06 22:22 ` [PATCH 08/10] kfence: " Matthew Wilcox (Oracle)
2025-06-09  3:42   ` Harry Yoo
2025-06-09 13:33   ` Vlastimil Babka
2025-06-09 15:02     ` Matthew Wilcox
2025-06-10 13:23       ` Marco Elver
2025-06-06 22:22 ` [PATCH 09/10] memcg_slabinfo: Fix use " Matthew Wilcox (Oracle)
2025-06-09  3:08   ` Harry Yoo
2025-06-06 22:22 ` [PATCH 10/10] slab: Fix MAINTAINERS entry Matthew Wilcox (Oracle)
2025-06-09  3:21   ` Harry Yoo
2025-06-09 13:38   ` Vlastimil Babka
2025-06-09 13:59   ` Lorenzo Stoakes
2025-06-09 16:42     ` Christoph Lameter (Ampere)
2025-06-09 17:44       ` Matthew Wilcox
