* [PATCH v2 00/33] Separate struct slab from struct page
@ 2021-12-01 18:14 Vlastimil Babka
       [not found] ` <20211201181510.18784-1-vbabka-AlSwsSmVLrQ@public.gmane.org>
  0 siblings, 1 reply; 28+ messages in thread
From: Vlastimil Babka @ 2021-12-01 18:14 UTC (permalink / raw)
  To: Matthew Wilcox, Christoph Lameter, David Rientjes, Joonsoo Kim,
	Pekka Enberg
  Cc: Peter Zijlstra, Dave Hansen, Michal Hocko,
	linux-mm-Bw31MaZKKs3YtjvyW6yDsg, Andrey Ryabinin,
	Alexander Potapenko, H. Peter Anvin, Will Deacon,
	Sergey Senozhatsky, x86-DgEjT+Ai2ygdnm+yROfE0A,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	kasan-dev-/JYPxA39Uh5TLH3MbocFFw, Ingo Molnar, Vlastimil Babka,
	Nitin Gupta, Vladimir Davydov, Marco Elver, Borislav Petkov,
	Andy Lutomirski, cgroups-u79uwXL29TY76Z2rM5mHXA, Thomas Gleixner,
	Dmitry Vyukov, Andrey Konovalov, patches-cunTk1MwBs/YUNznpcFYbw,
	Julia Lawall, Minchan Kim

Folks from non-slab subsystems are Cc'd only to patches affecting them, and
this cover letter.

Series also available in git, based on 5.16-rc3:
https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2

The plan: as with my SLUB PREEMPT_RT series in 5.15, I would prefer to
eventually merge this via a git pull request, as it's again not a small
series. I will thus reply to this mail asking for my branch to be included in
linux-next.

As stated in the v1/RFC cover letter, I wouldn't mind then continuing to
maintain a git tree for all slab patches in general. It was apparently
already done that way before, by Pekka:
https://lore.kernel.org/linux-mm/alpine.DEB.2.00.1107221108190.2996@tiger/

Changes from v1/RFC:
https://lore.kernel.org/all/20211116001628.24216-1-vbabka-AlSwsSmVLrQ@public.gmane.org/
- Added virt_to_folio() and folio_address() in the new Patch 1.
- Addressed feedback from Andrey Konovalov and Matthew Wilcox (Thanks!)
- Added Tested-by: Marco Elver for the KFENCE parts (Thanks!)

Previous version from Matthew Wilcox:
https://lore.kernel.org/all/20211004134650.4031813-1-willy-wEGCiKHe2LqWVfeAwA7xHQ@public.gmane.org/

LWN coverage of the above:
https://lwn.net/Articles/871982/

This is originally an offshoot of the folio work by Matthew. One of the more
complex parts of the struct page definition is the part used by the slab
allocators. It would be good for MM in general if struct slab were its own
data type, and it also helps to prevent tail pages from slipping in anywhere.
As Matthew requested in his proof-of-concept series, I have taken over the
development of this series, so it's a mix of patches from him (often modified
by me) and my own.

One big difference is the use of coccinelle to perform the relatively trivial
parts of the conversions automatically and at once, instead of a larger number
of smaller incremental reviewable steps. Thanks to Julia Lawall and Luis
Chamberlain for all their help!

Another notable difference (based also on review feedback) is that I don't
represent large kmalloc allocations with a struct slab; they are not really
slabs, but use the page allocator directly. When going from an object address
to a struct slab, the code first tests the folio slab flag, and only if it's
set does it convert to struct slab. This makes the struct slab type stronger.
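
Going from an object address to a struct slab thus looks roughly like this (a
condensed sketch of the virt_to_slab() helper the series adds to mm/slab.h;
large kmalloc objects have no struct slab and yield NULL):

static inline struct slab *virt_to_slab(const void *addr)
{
	struct folio *folio = virt_to_folio(addr);

	if (!folio_test_slab(folio))
		return NULL;

	return folio_slab(folio);
}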

Finally, although Matthew's version didn't use any of the folio work, the
initial support has been merged in the meantime, so my version builds on top
of it where appropriate. This eliminates some of the redundant compound_head()
calls performed e.g. when testing the slab flag.

To sum up, after this series the struct page fields used by slab allocators
are moved from struct page to a new struct slab that uses the same physical
storage; a sketch of how this works follows the list below. The availability
of the fields is further distinguished by the selected slab allocator
implementation. The advantages include:

- Similar to folios, if the slab is of order > 0, struct slab is always
  guaranteed to correspond to the head page. Additionally, it's guaranteed to
  be an actual slab page, not a large kmalloc allocation. This removes
  uncertainty and potential for bugs.
- It's not possible to accidentally use fields of a slab implementation that
  isn't configured.
- Other subsystems can no longer use slab's fields in struct page (some
  existing non-slab usages had to be adjusted in this series), so slab
  implementations have more freedom to rearrange them within struct slab.
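
To make the "same physical storage" point concrete, here is a trimmed sketch
of the pattern the series uses in mm/slab.h (the field list is abbreviated):
struct slab re-declares the fields it needs, and build-time asserts check
that fields shared with struct page keep matching offsets:

struct slab {
	unsigned long __page_flags;
	union {
		struct list_head slab_list;
		struct rcu_head rcu_head;
	};
	struct kmem_cache *slab_cache;
	void *freelist;
	/* ... further SLAB/SLUB/SLOB-specific fields elided ... */
	atomic_t __page_refcount;
#ifdef CONFIG_MEMCG
	unsigned long memcg_data;
#endif
};

#define SLAB_MATCH(pg, sl) \
	static_assert(offsetof(struct page, pg) == offsetof(struct slab, sl))
SLAB_MATCH(flags, __page_flags);
SLAB_MATCH(_refcount, __page_refcount);
#ifdef CONFIG_MEMCG
SLAB_MATCH(memcg_data, memcg_data);
#endif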

Matthew Wilcox (Oracle) (16):
  mm: Split slab into its own type
  mm: Add account_slab() and unaccount_slab()
  mm: Convert virt_to_cache() to use struct slab
  mm: Convert __ksize() to struct slab
  mm: Use struct slab in kmem_obj_info()
  mm: Convert check_heap_object() to use struct slab
  mm/slub: Convert detached_freelist to use a struct slab
  mm/slub: Convert kfree() to use a struct slab
  mm/slub: Convert print_page_info() to print_slab_info()
  mm/slub: Convert pfmemalloc_match() to take a struct slab
  mm/slob: Convert SLOB to use struct slab
  mm/kasan: Convert to struct folio and struct slab
  zsmalloc: Stop using slab fields in struct page
  bootmem: Use page->index instead of page->freelist
  iommu: Use put_pages_list
  mm: Remove slab from struct page

Vlastimil Babka (17):
  mm: add virt_to_folio() and folio_address()
  mm/slab: Dissolve slab_map_pages() in its caller
  mm/slub: Make object_err() static
  mm/slub: Convert __slab_lock() and __slab_unlock() to struct slab
  mm/slub: Convert alloc_slab_page() to return a struct slab
  mm/slub: Convert __free_slab() to use struct slab
  mm/slub: Convert most struct page to struct slab by spatch
  mm/slub: Finish struct page to struct slab conversion
  mm/slab: Convert kmem_getpages() and kmem_freepages() to struct slab
  mm/slab: Convert most struct page to struct slab by spatch
  mm/slab: Finish struct page to struct slab conversion
  mm: Convert struct page to struct slab in functions used by other
    subsystems
  mm/memcg: Convert slab objcgs from struct page to struct slab
  mm/kfence: Convert kfence_guarded_alloc() to struct slab
  mm/sl*b: Differentiate struct slab fields by sl*b implementations
  mm/slub: Simplify struct slab slabs field definition
  mm/slub: Define struct slab fields for CONFIG_SLUB_CPU_PARTIAL only
    when enabled

 arch/x86/mm/init_64.c          |    2 +-
 drivers/iommu/amd/io_pgtable.c |   59 +-
 drivers/iommu/dma-iommu.c      |   11 +-
 drivers/iommu/intel/iommu.c    |   89 +--
 include/linux/bootmem_info.h   |    2 +-
 include/linux/iommu.h          |    3 +-
 include/linux/kasan.h          |    9 +-
 include/linux/memcontrol.h     |   48 --
 include/linux/mm.h             |   12 +
 include/linux/mm_types.h       |   38 +-
 include/linux/page-flags.h     |   37 -
 include/linux/slab.h           |    8 -
 include/linux/slab_def.h       |   16 +-
 include/linux/slub_def.h       |   29 +-
 mm/bootmem_info.c              |    7 +-
 mm/kasan/common.c              |   27 +-
 mm/kasan/generic.c             |    8 +-
 mm/kasan/kasan.h               |    1 +
 mm/kasan/quarantine.c          |    2 +-
 mm/kasan/report.c              |   13 +-
 mm/kasan/report_tags.c         |   10 +-
 mm/kfence/core.c               |   17 +-
 mm/kfence/kfence_test.c        |    6 +-
 mm/memcontrol.c                |   43 +-
 mm/slab.c                      |  455 ++++++-------
 mm/slab.h                      |  322 ++++++++-
 mm/slab_common.c               |    8 +-
 mm/slob.c                      |   46 +-
 mm/slub.c                      | 1164 ++++++++++++++++----------------
 mm/sparse.c                    |    2 +-
 mm/usercopy.c                  |   13 +-
 mm/zsmalloc.c                  |   18 +-
 32 files changed, 1317 insertions(+), 1208 deletions(-)

-- 
2.33.1


* [PATCH v2 22/33] mm: Convert struct page to struct slab in functions used by other subsystems
       [not found] ` <20211201181510.18784-1-vbabka-AlSwsSmVLrQ@public.gmane.org>
@ 2021-12-01 18:14   ` Vlastimil Babka
       [not found]     ` <20211201181510.18784-23-vbabka-AlSwsSmVLrQ@public.gmane.org>
  2021-12-01 18:15   ` [PATCH v2 23/33] mm/memcg: Convert slab objcgs from struct page to struct slab Vlastimil Babka
                     ` (2 subsequent siblings)
  3 siblings, 1 reply; 28+ messages in thread
From: Vlastimil Babka @ 2021-12-01 18:14 UTC (permalink / raw)
  To: Matthew Wilcox, Christoph Lameter, David Rientjes, Joonsoo Kim,
	Pekka Enberg
  Cc: linux-mm-Bw31MaZKKs3YtjvyW6yDsg, Andrew Morton,
	patches-cunTk1MwBs/YUNznpcFYbw, Vlastimil Babka, Julia Lawall,
	Luis Chamberlain, Andrey Ryabinin, Alexander Potapenko,
	Andrey Konovalov, Dmitry Vyukov, Marco Elver, Johannes Weiner,
	Michal Hocko, Vladimir Davydov, kasan-dev-/JYPxA39Uh5TLH3MbocFFw,
	cgroups-u79uwXL29TY76Z2rM5mHXA

KASAN, KFENCE and memcg interact with SLAB or SLUB internals through the
functions nearest_obj(), obj_to_index() and objs_per_slab(), which take
struct page as a parameter. This patch converts them to struct slab,
including all callers, through a coccinelle semantic patch.

// Options: --include-headers --no-includes --smpl-spacing include/linux/slab_def.h include/linux/slub_def.h mm/slab.h mm/kasan/*.c mm/kfence/kfence_test.c mm/memcontrol.c mm/slab.c mm/slub.c
// Note: needs coccinelle 1.1.1 to avoid breaking whitespace

@@
@@

-objs_per_slab_page(
+objs_per_slab(
 ...
 )
 { ... }

@@
@@

-objs_per_slab_page(
+objs_per_slab(
 ...
 )

@@
identifier fn =~ "obj_to_index|objs_per_slab";
@@

 fn(...,
-   const struct page *page
+   const struct slab *slab
    ,...)
 {
<...
(
- page_address(page)
+ slab_address(slab)
|
- page
+ slab
)
...>
 }

@@
identifier fn =~ "nearest_obj";
@@

 fn(...,
-   struct page *page
+   const struct slab *slab
    ,...)
 {
<...
(
- page_address(page)
+ slab_address(slab)
|
- page
+ slab
)
...>
 }

@@
identifier fn =~ "nearest_obj|obj_to_index|objs_per_slab";
expression E;
@@

 fn(...,
(
- slab_page(E)
+ E
|
- virt_to_page(E)
+ virt_to_slab(E)
|
- virt_to_head_page(E)
+ virt_to_slab(E)
|
- page
+ page_slab(page)
)
  ,...)
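
For reference, with the options listed at the top, a semantic patch like the
above is applied with an spatch invocation along these lines (the .cocci file
name here is made up):

spatch --sp-file slab_convert.cocci --include-headers --no-includes \
	--smpl-spacing --in-place \
	include/linux/slab_def.h include/linux/slub_def.h mm/slab.h \
	mm/kasan/*.c mm/kfence/kfence_test.c mm/memcontrol.c mm/slab.c mm/slub.c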

Signed-off-by: Vlastimil Babka <vbabka-AlSwsSmVLrQ@public.gmane.org>
Cc: Julia Lawall <julia.lawall-MZpvjPyXg2s@public.gmane.org>
Cc: Luis Chamberlain <mcgrof-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
Cc: Andrey Ryabinin <ryabinin.a.a-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
Cc: Alexander Potapenko <glider-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
Cc: Andrey Konovalov <andreyknvl-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
Cc: Dmitry Vyukov <dvyukov-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
Cc: Marco Elver <elver-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
Cc: Johannes Weiner <hannes-druUgvl0LCNAfugRpC6u6w@public.gmane.org>
Cc: Michal Hocko <mhocko-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
Cc: Vladimir Davydov <vdavydov.dev-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
Cc: <kasan-dev-/JYPxA39Uh5TLH3MbocFFw@public.gmane.org>
Cc: <cgroups-u79uwXL29TY76Z2rM5mHXA@public.gmane.org>
---
 include/linux/slab_def.h | 16 ++++++++--------
 include/linux/slub_def.h | 18 +++++++++---------
 mm/kasan/common.c        |  4 ++--
 mm/kasan/generic.c       |  2 +-
 mm/kasan/report.c        |  2 +-
 mm/kasan/report_tags.c   |  2 +-
 mm/kfence/kfence_test.c  |  4 ++--
 mm/memcontrol.c          |  4 ++--
 mm/slab.c                | 10 +++++-----
 mm/slab.h                |  4 ++--
 mm/slub.c                |  2 +-
 11 files changed, 34 insertions(+), 34 deletions(-)

diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h
index 3aa5e1e73ab6..e24c9aff6fed 100644
--- a/include/linux/slab_def.h
+++ b/include/linux/slab_def.h
@@ -87,11 +87,11 @@ struct kmem_cache {
 	struct kmem_cache_node *node[MAX_NUMNODES];
 };
 
-static inline void *nearest_obj(struct kmem_cache *cache, struct page *page,
+static inline void *nearest_obj(struct kmem_cache *cache, const struct slab *slab,
 				void *x)
 {
-	void *object = x - (x - page->s_mem) % cache->size;
-	void *last_object = page->s_mem + (cache->num - 1) * cache->size;
+	void *object = x - (x - slab->s_mem) % cache->size;
+	void *last_object = slab->s_mem + (cache->num - 1) * cache->size;
 
 	if (unlikely(object > last_object))
 		return last_object;
@@ -106,16 +106,16 @@ static inline void *nearest_obj(struct kmem_cache *cache, struct page *page,
  *   reciprocal_divide(offset, cache->reciprocal_buffer_size)
  */
 static inline unsigned int obj_to_index(const struct kmem_cache *cache,
-					const struct page *page, void *obj)
+					const struct slab *slab, void *obj)
 {
-	u32 offset = (obj - page->s_mem);
+	u32 offset = (obj - slab->s_mem);
 	return reciprocal_divide(offset, cache->reciprocal_buffer_size);
 }
 
-static inline int objs_per_slab_page(const struct kmem_cache *cache,
-				     const struct page *page)
+static inline int objs_per_slab(const struct kmem_cache *cache,
+				     const struct slab *slab)
 {
-	if (is_kfence_address(page_address(page)))
+	if (is_kfence_address(slab_address(slab)))
 		return 1;
 	return cache->num;
 }
diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 8a9c2876ca89..33c5c0e3bd8d 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -158,11 +158,11 @@ static inline void sysfs_slab_release(struct kmem_cache *s)
 
 void *fixup_red_left(struct kmem_cache *s, void *p);
 
-static inline void *nearest_obj(struct kmem_cache *cache, struct page *page,
+static inline void *nearest_obj(struct kmem_cache *cache, const struct slab *slab,
 				void *x) {
-	void *object = x - (x - page_address(page)) % cache->size;
-	void *last_object = page_address(page) +
-		(page->objects - 1) * cache->size;
+	void *object = x - (x - slab_address(slab)) % cache->size;
+	void *last_object = slab_address(slab) +
+		(slab->objects - 1) * cache->size;
 	void *result = (unlikely(object > last_object)) ? last_object : object;
 
 	result = fixup_red_left(cache, result);
@@ -178,16 +178,16 @@ static inline unsigned int __obj_to_index(const struct kmem_cache *cache,
 }
 
 static inline unsigned int obj_to_index(const struct kmem_cache *cache,
-					const struct page *page, void *obj)
+					const struct slab *slab, void *obj)
 {
 	if (is_kfence_address(obj))
 		return 0;
-	return __obj_to_index(cache, page_address(page), obj);
+	return __obj_to_index(cache, slab_address(slab), obj);
 }
 
-static inline int objs_per_slab_page(const struct kmem_cache *cache,
-				     const struct page *page)
+static inline int objs_per_slab(const struct kmem_cache *cache,
+				     const struct slab *slab)
 {
-	return page->objects;
+	return slab->objects;
 }
 #endif /* _LINUX_SLUB_DEF_H */
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 8428da2aaf17..6a1cd2d38bff 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -298,7 +298,7 @@ static inline u8 assign_tag(struct kmem_cache *cache,
 	/* For caches that either have a constructor or SLAB_TYPESAFE_BY_RCU: */
 #ifdef CONFIG_SLAB
 	/* For SLAB assign tags based on the object index in the freelist. */
-	return (u8)obj_to_index(cache, virt_to_head_page(object), (void *)object);
+	return (u8)obj_to_index(cache, virt_to_slab(object), (void *)object);
 #else
 	/*
 	 * For SLUB assign a random tag during slab creation, otherwise reuse
@@ -341,7 +341,7 @@ static inline bool ____kasan_slab_free(struct kmem_cache *cache, void *object,
 	if (is_kfence_address(object))
 		return false;
 
-	if (unlikely(nearest_obj(cache, virt_to_head_page(object), object) !=
+	if (unlikely(nearest_obj(cache, virt_to_slab(object), object) !=
 	    object)) {
 		kasan_report_invalid_free(tagged_object, ip);
 		return true;
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index 84a038b07c6f..5d0b79416c4e 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -339,7 +339,7 @@ static void __kasan_record_aux_stack(void *addr, bool can_alloc)
 		return;
 
 	cache = page->slab_cache;
-	object = nearest_obj(cache, page, addr);
+	object = nearest_obj(cache, page_slab(page), addr);
 	alloc_meta = kasan_get_alloc_meta(cache, object);
 	if (!alloc_meta)
 		return;
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 0bc10f452f7e..e00999dc6499 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -249,7 +249,7 @@ static void print_address_description(void *addr, u8 tag)
 
 	if (page && PageSlab(page)) {
 		struct kmem_cache *cache = page->slab_cache;
-		void *object = nearest_obj(cache, page,	addr);
+		void *object = nearest_obj(cache, page_slab(page),	addr);
 
 		describe_object(cache, object, addr, tag);
 	}
diff --git a/mm/kasan/report_tags.c b/mm/kasan/report_tags.c
index 8a319fc16dab..06c21dd77493 100644
--- a/mm/kasan/report_tags.c
+++ b/mm/kasan/report_tags.c
@@ -23,7 +23,7 @@ const char *kasan_get_bug_type(struct kasan_access_info *info)
 	page = kasan_addr_to_page(addr);
 	if (page && PageSlab(page)) {
 		cache = page->slab_cache;
-		object = nearest_obj(cache, page, (void *)addr);
+		object = nearest_obj(cache, page_slab(page), (void *)addr);
 		alloc_meta = kasan_get_alloc_meta(cache, object);
 
 		if (alloc_meta) {
diff --git a/mm/kfence/kfence_test.c b/mm/kfence/kfence_test.c
index 695030c1fff8..f7276711d7b9 100644
--- a/mm/kfence/kfence_test.c
+++ b/mm/kfence/kfence_test.c
@@ -291,8 +291,8 @@ static void *test_alloc(struct kunit *test, size_t size, gfp_t gfp, enum allocat
 			 * even for KFENCE objects; these are required so that
 			 * memcg accounting works correctly.
 			 */
-			KUNIT_EXPECT_EQ(test, obj_to_index(s, page, alloc), 0U);
-			KUNIT_EXPECT_EQ(test, objs_per_slab_page(s, page), 1);
+			KUNIT_EXPECT_EQ(test, obj_to_index(s, page_slab(page), alloc), 0U);
+			KUNIT_EXPECT_EQ(test, objs_per_slab(s, page_slab(page)), 1);
 
 			if (policy == ALLOCATE_ANY)
 				return alloc;
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 6863a834ed42..906edbd92436 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2819,7 +2819,7 @@ static struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg)
 int memcg_alloc_page_obj_cgroups(struct page *page, struct kmem_cache *s,
 				 gfp_t gfp, bool new_page)
 {
-	unsigned int objects = objs_per_slab_page(s, page);
+	unsigned int objects = objs_per_slab(s, page_slab(page));
 	unsigned long memcg_data;
 	void *vec;
 
@@ -2881,7 +2881,7 @@ struct mem_cgroup *mem_cgroup_from_obj(void *p)
 		struct obj_cgroup *objcg;
 		unsigned int off;
 
-		off = obj_to_index(page->slab_cache, page, p);
+		off = obj_to_index(page->slab_cache, page_slab(page), p);
 		objcg = page_objcgs(page)[off];
 		if (objcg)
 			return obj_cgroup_memcg(objcg);
diff --git a/mm/slab.c b/mm/slab.c
index f0447b087d02..785fffd527fe 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1560,7 +1560,7 @@ static void check_poison_obj(struct kmem_cache *cachep, void *objp)
 		struct slab *slab = virt_to_slab(objp);
 		unsigned int objnr;
 
-		objnr = obj_to_index(cachep, slab_page(slab), objp);
+		objnr = obj_to_index(cachep, slab, objp);
 		if (objnr) {
 			objp = index_to_obj(cachep, slab, objnr - 1);
 			realobj = (char *)objp + obj_offset(cachep);
@@ -2530,7 +2530,7 @@ static void *slab_get_obj(struct kmem_cache *cachep, struct slab *slab)
 static void slab_put_obj(struct kmem_cache *cachep,
 			struct slab *slab, void *objp)
 {
-	unsigned int objnr = obj_to_index(cachep, slab_page(slab), objp);
+	unsigned int objnr = obj_to_index(cachep, slab, objp);
 #if DEBUG
 	unsigned int i;
 
@@ -2717,7 +2717,7 @@ static void *cache_free_debugcheck(struct kmem_cache *cachep, void *objp,
 	if (cachep->flags & SLAB_STORE_USER)
 		*dbg_userword(cachep, objp) = (void *)caller;
 
-	objnr = obj_to_index(cachep, slab_page(slab), objp);
+	objnr = obj_to_index(cachep, slab, objp);
 
 	BUG_ON(objnr >= cachep->num);
 	BUG_ON(objp != index_to_obj(cachep, slab, objnr));
@@ -3663,7 +3663,7 @@ void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
 	objp = object - obj_offset(cachep);
 	kpp->kp_data_offset = obj_offset(cachep);
 	slab = virt_to_slab(objp);
-	objnr = obj_to_index(cachep, slab_page(slab), objp);
+	objnr = obj_to_index(cachep, slab, objp);
 	objp = index_to_obj(cachep, slab, objnr);
 	kpp->kp_objp = objp;
 	if (DEBUG && cachep->flags & SLAB_STORE_USER)
@@ -4181,7 +4181,7 @@ void __check_heap_object(const void *ptr, unsigned long n,
 
 	/* Find and validate object. */
 	cachep = slab->slab_cache;
-	objnr = obj_to_index(cachep, slab_page(slab), (void *)ptr);
+	objnr = obj_to_index(cachep, slab, (void *)ptr);
 	BUG_ON(objnr >= cachep->num);
 
 	/* Find offset within object. */
diff --git a/mm/slab.h b/mm/slab.h
index 7376c9d8aa2b..15d109d8ec89 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -483,7 +483,7 @@ static inline void memcg_slab_post_alloc_hook(struct kmem_cache *s,
 				continue;
 			}
 
-			off = obj_to_index(s, page, p[i]);
+			off = obj_to_index(s, page_slab(page), p[i]);
 			obj_cgroup_get(objcg);
 			page_objcgs(page)[off] = objcg;
 			mod_objcg_state(objcg, page_pgdat(page),
@@ -522,7 +522,7 @@ static inline void memcg_slab_free_hook(struct kmem_cache *s_orig,
 		else
 			s = s_orig;
 
-		off = obj_to_index(s, page, p[i]);
+		off = obj_to_index(s, page_slab(page), p[i]);
 		objcg = objcgs[off];
 		if (!objcg)
 			continue;
diff --git a/mm/slub.c b/mm/slub.c
index f5344211d8cc..61aaaa662c5e 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4342,7 +4342,7 @@ void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
 #else
 	objp = objp0;
 #endif
-	objnr = obj_to_index(s, slab_page(slab), objp);
+	objnr = obj_to_index(s, slab, objp);
 	kpp->kp_data_offset = (unsigned long)((char *)objp0 - (char *)objp);
 	objp = base + s->size * objnr;
 	kpp->kp_objp = objp;
-- 
2.33.1



* [PATCH v2 23/33] mm/memcg: Convert slab objcgs from struct page to struct slab
       [not found] ` <20211201181510.18784-1-vbabka-AlSwsSmVLrQ@public.gmane.org>
  2021-12-01 18:14   ` [PATCH v2 22/33] mm: Convert struct page to struct slab in functions used by other subsystems Vlastimil Babka
@ 2021-12-01 18:15   ` Vlastimil Babka
       [not found]     ` <20211201181510.18784-24-vbabka-AlSwsSmVLrQ@public.gmane.org>
  2021-12-02 12:25   ` [PATCH v2 00/33] Separate struct slab from struct page Vlastimil Babka
  2021-12-14 12:57   ` Vlastimil Babka
  3 siblings, 1 reply; 28+ messages in thread
From: Vlastimil Babka @ 2021-12-01 18:15 UTC (permalink / raw)
  To: Matthew Wilcox, Christoph Lameter, David Rientjes, Joonsoo Kim,
	Pekka Enberg
  Cc: linux-mm-Bw31MaZKKs3YtjvyW6yDsg, Andrew Morton,
	patches-cunTk1MwBs/YUNznpcFYbw, Vlastimil Babka, Johannes Weiner,
	Michal Hocko, Vladimir Davydov, cgroups-u79uwXL29TY76Z2rM5mHXA

page->memcg_data is used with the MEMCG_DATA_OBJCGS flag only for slab pages,
so convert all the related infrastructure to struct slab.

To avoid include cycles, move the inline definitions of slab_objcgs() and
slab_objcgs_check() from memcontrol.h to mm/slab.h.

This is not just a mechanistic change of types and names. Now in
mem_cgroup_from_obj() we use the PageSlab flag to decide whether to interpret
the page as a slab, instead of relying on the MEMCG_DATA_OBJCGS bit checked in
page_objcgs_check() (now slab_objcgs_check()). Similarly in
memcg_slab_free_hook(), where we can encounter kmalloc_large() pages (here the
PageSlab flag check is implied by virt_to_slab()).
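
Condensed from the mem_cgroup_from_obj() hunk below, the resulting lookup is
roughly (a sketch; the surrounding checks are trimmed):

struct folio *folio = virt_to_folio(p);

if (folio_test_slab(folio)) {
	/* A slab object: consult the per-object objcg vector. */
	struct slab *slab = folio_slab(folio);
	struct obj_cgroup **objcgs = slab_objcgs_check(slab);
	unsigned int off;

	if (!objcgs)
		return NULL;

	off = obj_to_index(slab->slab_cache, slab, p);
	return objcgs[off] ? obj_cgroup_memcg(objcgs[off]) : NULL;
}

/* Not a slab, e.g. a kmalloc_large() page: use per-page memcg data. */
return page_memcg_check(folio_page(folio, 0));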

Signed-off-by: Vlastimil Babka <vbabka-AlSwsSmVLrQ@public.gmane.org>
Cc: Johannes Weiner <hannes-druUgvl0LCNAfugRpC6u6w@public.gmane.org>
Cc: Michal Hocko <mhocko-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
Cc: Vladimir Davydov <vdavydov.dev-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
Cc: <cgroups-u79uwXL29TY76Z2rM5mHXA@public.gmane.org>
---
 include/linux/memcontrol.h |  48 ------------------
 mm/memcontrol.c            |  43 +++++++++-------
 mm/slab.h                  | 101 ++++++++++++++++++++++++++++---------
 3 files changed, 103 insertions(+), 89 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 0c5c403f4be6..e34112f6a369 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -536,45 +536,6 @@ static inline bool folio_memcg_kmem(struct folio *folio)
 	return folio->memcg_data & MEMCG_DATA_KMEM;
 }
 
-/*
- * page_objcgs - get the object cgroups vector associated with a page
- * @page: a pointer to the page struct
- *
- * Returns a pointer to the object cgroups vector associated with the page,
- * or NULL. This function assumes that the page is known to have an
- * associated object cgroups vector. It's not safe to call this function
- * against pages, which might have an associated memory cgroup: e.g.
- * kernel stack pages.
- */
-static inline struct obj_cgroup **page_objcgs(struct page *page)
-{
-	unsigned long memcg_data = READ_ONCE(page->memcg_data);
-
-	VM_BUG_ON_PAGE(memcg_data && !(memcg_data & MEMCG_DATA_OBJCGS), page);
-	VM_BUG_ON_PAGE(memcg_data & MEMCG_DATA_KMEM, page);
-
-	return (struct obj_cgroup **)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
-}
-
-/*
- * page_objcgs_check - get the object cgroups vector associated with a page
- * @page: a pointer to the page struct
- *
- * Returns a pointer to the object cgroups vector associated with the page,
- * or NULL. This function is safe to use if the page can be directly associated
- * with a memory cgroup.
- */
-static inline struct obj_cgroup **page_objcgs_check(struct page *page)
-{
-	unsigned long memcg_data = READ_ONCE(page->memcg_data);
-
-	if (!memcg_data || !(memcg_data & MEMCG_DATA_OBJCGS))
-		return NULL;
-
-	VM_BUG_ON_PAGE(memcg_data & MEMCG_DATA_KMEM, page);
-
-	return (struct obj_cgroup **)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
-}
 
 #else
 static inline bool folio_memcg_kmem(struct folio *folio)
@@ -582,15 +543,6 @@ static inline bool folio_memcg_kmem(struct folio *folio)
 	return false;
 }
 
-static inline struct obj_cgroup **page_objcgs(struct page *page)
-{
-	return NULL;
-}
-
-static inline struct obj_cgroup **page_objcgs_check(struct page *page)
-{
-	return NULL;
-}
 #endif
 
 static inline bool PageMemcgKmem(struct page *page)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 906edbd92436..522fff11d6d1 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2816,31 +2816,31 @@ static struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg)
  */
 #define OBJCGS_CLEAR_MASK	(__GFP_DMA | __GFP_RECLAIMABLE | __GFP_ACCOUNT)
 
-int memcg_alloc_page_obj_cgroups(struct page *page, struct kmem_cache *s,
-				 gfp_t gfp, bool new_page)
+int memcg_alloc_slab_cgroups(struct slab *slab, struct kmem_cache *s,
+				 gfp_t gfp, bool new_slab)
 {
-	unsigned int objects = objs_per_slab(s, page_slab(page));
+	unsigned int objects = objs_per_slab(s, slab);
 	unsigned long memcg_data;
 	void *vec;
 
 	gfp &= ~OBJCGS_CLEAR_MASK;
 	vec = kcalloc_node(objects, sizeof(struct obj_cgroup *), gfp,
-			   page_to_nid(page));
+			   slab_nid(slab));
 	if (!vec)
 		return -ENOMEM;
 
 	memcg_data = (unsigned long) vec | MEMCG_DATA_OBJCGS;
-	if (new_page) {
+	if (new_slab) {
 		/*
-		 * If the slab page is brand new and nobody can yet access
-		 * it's memcg_data, no synchronization is required and
-		 * memcg_data can be simply assigned.
+		 * If the slab is brand new and nobody can yet access its
+		 * memcg_data, no synchronization is required and memcg_data can
+		 * be simply assigned.
 		 */
-		page->memcg_data = memcg_data;
-	} else if (cmpxchg(&page->memcg_data, 0, memcg_data)) {
+		slab->memcg_data = memcg_data;
+	} else if (cmpxchg(&slab->memcg_data, 0, memcg_data)) {
 		/*
-		 * If the slab page is already in use, somebody can allocate
-		 * and assign obj_cgroups in parallel. In this case the existing
+		 * If the slab is already in use, somebody can allocate and
+		 * assign obj_cgroups in parallel. In this case the existing
 		 * objcg vector should be reused.
 		 */
 		kfree(vec);
@@ -2865,24 +2865,31 @@ int memcg_alloc_page_obj_cgroups(struct page *page, struct kmem_cache *s,
  */
 struct mem_cgroup *mem_cgroup_from_obj(void *p)
 {
-	struct page *page;
+	struct folio *folio;
 
 	if (mem_cgroup_disabled())
 		return NULL;
 
-	page = virt_to_head_page(p);
+	folio = virt_to_folio(p);
 
 	/*
 	 * Slab objects are accounted individually, not per-page.
 	 * Memcg membership data for each individual object is saved in
 	 * the page->obj_cgroups.
 	 */
-	if (page_objcgs_check(page)) {
+	if (folio_test_slab(folio)) {
+		struct obj_cgroup **objcgs;
 		struct obj_cgroup *objcg;
+		struct slab *slab;
 		unsigned int off;
 
-		off = obj_to_index(page->slab_cache, page_slab(page), p);
-		objcg = page_objcgs(page)[off];
+		slab = folio_slab(folio);
+		objcgs = slab_objcgs_check(slab);
+		if (!objcgs)
+			return NULL;
+
+		off = obj_to_index(slab->slab_cache, slab, p);
+		objcg = objcgs[off];
 		if (objcg)
 			return obj_cgroup_memcg(objcg);
 
@@ -2896,7 +2903,7 @@ struct mem_cgroup *mem_cgroup_from_obj(void *p)
 	 * page_memcg_check(page) will guarantee that a proper memory
 	 * cgroup pointer or NULL will be returned.
 	 */
-	return page_memcg_check(page);
+	return page_memcg_check(folio_page(folio, 0));
 }
 
 __always_inline struct obj_cgroup *get_obj_cgroup_from_current(void)
diff --git a/mm/slab.h b/mm/slab.h
index 15d109d8ec89..0760f20686a7 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -412,15 +412,56 @@ static inline bool kmem_cache_debug_flags(struct kmem_cache *s, slab_flags_t fla
 }
 
 #ifdef CONFIG_MEMCG_KMEM
-int memcg_alloc_page_obj_cgroups(struct page *page, struct kmem_cache *s,
-				 gfp_t gfp, bool new_page);
+/*
+ * slab_objcgs - get the object cgroups vector associated with a slab
+ * @slab: a pointer to the slab struct
+ *
+ * Returns a pointer to the object cgroups vector associated with the slab,
+ * or NULL. This function assumes that the slab is known to have an
+ * associated object cgroups vector. It's not safe to call this function
+ * against slabs with underlying pages, which might have an associated memory
+ * cgroup: e.g.  kernel stack pages.
+ */
+static inline struct obj_cgroup **slab_objcgs(struct slab *slab)
+{
+	unsigned long memcg_data = READ_ONCE(slab->memcg_data);
+
+	VM_BUG_ON_PAGE(memcg_data && !(memcg_data & MEMCG_DATA_OBJCGS),
+							slab_page(slab));
+	VM_BUG_ON_PAGE(memcg_data & MEMCG_DATA_KMEM, slab_page(slab));
+
+	return (struct obj_cgroup **)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
+}
+
+/*
+ * slab_objcgs_check - get the object cgroups vector associated with a slab
+ * @slab: a pointer to the slab struct
+ *
+ * Returns a pointer to the object cgroups vector associated with the slab, or
+ * NULL. This function is safe to use if the underlying page can be directly
+ * associated with a memory cgroup.
+ */
+static inline struct obj_cgroup **slab_objcgs_check(struct slab *slab)
+{
+	unsigned long memcg_data = READ_ONCE(slab->memcg_data);
+
+	if (!memcg_data || !(memcg_data & MEMCG_DATA_OBJCGS))
+		return NULL;
+
+	VM_BUG_ON_PAGE(memcg_data & MEMCG_DATA_KMEM, slab_page(slab));
+
+	return (struct obj_cgroup **)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
+}
+
+int memcg_alloc_slab_cgroups(struct slab *slab, struct kmem_cache *s,
+				 gfp_t gfp, bool new_slab);
 void mod_objcg_state(struct obj_cgroup *objcg, struct pglist_data *pgdat,
 		     enum node_stat_item idx, int nr);
 
-static inline void memcg_free_page_obj_cgroups(struct page *page)
+static inline void memcg_free_slab_cgroups(struct slab *slab)
 {
-	kfree(page_objcgs(page));
-	page->memcg_data = 0;
+	kfree(slab_objcgs(slab));
+	slab->memcg_data = 0;
 }
 
 static inline size_t obj_full_size(struct kmem_cache *s)
@@ -465,7 +506,7 @@ static inline void memcg_slab_post_alloc_hook(struct kmem_cache *s,
 					      gfp_t flags, size_t size,
 					      void **p)
 {
-	struct page *page;
+	struct slab *slab;
 	unsigned long off;
 	size_t i;
 
@@ -474,19 +515,19 @@ static inline void memcg_slab_post_alloc_hook(struct kmem_cache *s,
 
 	for (i = 0; i < size; i++) {
 		if (likely(p[i])) {
-			page = virt_to_head_page(p[i]);
+			slab = virt_to_slab(p[i]);
 
-			if (!page_objcgs(page) &&
-			    memcg_alloc_page_obj_cgroups(page, s, flags,
+			if (!slab_objcgs(slab) &&
+			    memcg_alloc_slab_cgroups(slab, s, flags,
 							 false)) {
 				obj_cgroup_uncharge(objcg, obj_full_size(s));
 				continue;
 			}
 
-			off = obj_to_index(s, page_slab(page), p[i]);
+			off = obj_to_index(s, slab, p[i]);
 			obj_cgroup_get(objcg);
-			page_objcgs(page)[off] = objcg;
-			mod_objcg_state(objcg, page_pgdat(page),
+			slab_objcgs(slab)[off] = objcg;
+			mod_objcg_state(objcg, slab_pgdat(slab),
 					cache_vmstat_idx(s), obj_full_size(s));
 		} else {
 			obj_cgroup_uncharge(objcg, obj_full_size(s));
@@ -501,7 +542,7 @@ static inline void memcg_slab_free_hook(struct kmem_cache *s_orig,
 	struct kmem_cache *s;
 	struct obj_cgroup **objcgs;
 	struct obj_cgroup *objcg;
-	struct page *page;
+	struct slab *slab;
 	unsigned int off;
 	int i;
 
@@ -512,43 +553,57 @@ static inline void memcg_slab_free_hook(struct kmem_cache *s_orig,
 		if (unlikely(!p[i]))
 			continue;
 
-		page = virt_to_head_page(p[i]);
-		objcgs = page_objcgs_check(page);
+		slab = virt_to_slab(p[i]);
+		/* we could be given a kmalloc_large() object, skip those */
+		if (!slab)
+			continue;
+
+		objcgs = slab_objcgs_check(slab);
 		if (!objcgs)
 			continue;
 
 		if (!s_orig)
-			s = page->slab_cache;
+			s = slab->slab_cache;
 		else
 			s = s_orig;
 
-		off = obj_to_index(s, page_slab(page), p[i]);
+		off = obj_to_index(s, slab, p[i]);
 		objcg = objcgs[off];
 		if (!objcg)
 			continue;
 
 		objcgs[off] = NULL;
 		obj_cgroup_uncharge(objcg, obj_full_size(s));
-		mod_objcg_state(objcg, page_pgdat(page), cache_vmstat_idx(s),
+		mod_objcg_state(objcg, slab_pgdat(slab), cache_vmstat_idx(s),
 				-obj_full_size(s));
 		obj_cgroup_put(objcg);
 	}
 }
 
 #else /* CONFIG_MEMCG_KMEM */
+static inline struct obj_cgroup **slab_objcgs(struct slab *slab)
+{
+	return NULL;
+}
+
+static inline struct obj_cgroup **slab_objcgs_check(struct slab *slab)
+{
+	return NULL;
+}
+
 static inline struct mem_cgroup *memcg_from_slab_obj(void *ptr)
 {
 	return NULL;
 }
 
-static inline int memcg_alloc_page_obj_cgroups(struct page *page,
+static inline int memcg_alloc_slab_cgroups(struct slab *slab,
 					       struct kmem_cache *s, gfp_t gfp,
-					       bool new_page)
+					       bool new_slab)
 {
 	return 0;
 }
 
-static inline void memcg_free_page_obj_cgroups(struct page *page)
+static inline void memcg_free_slab_cgroups(struct slab *slab)
 {
 }
 
@@ -587,7 +642,7 @@ static __always_inline void account_slab(struct slab *slab, int order,
 					 struct kmem_cache *s, gfp_t gfp)
 {
 	if (memcg_kmem_enabled() && (s->flags & SLAB_ACCOUNT))
-		memcg_alloc_page_obj_cgroups(slab_page(slab), s, gfp, true);
+		memcg_alloc_slab_cgroups(slab, s, gfp, true);
 
 	mod_node_page_state(slab_pgdat(slab), cache_vmstat_idx(s),
 			    PAGE_SIZE << order);
@@ -597,7 +652,7 @@ static __always_inline void unaccount_slab(struct slab *slab, int order,
 					   struct kmem_cache *s)
 {
 	if (memcg_kmem_enabled())
-		memcg_free_page_obj_cgroups(slab_page(slab));
+		memcg_free_slab_cgroups(slab);
 
 	mod_node_page_state(slab_pgdat(slab), cache_vmstat_idx(s),
 			    -(PAGE_SIZE << order));
-- 
2.33.1



* Re: [PATCH v2 00/33] Separate struct slab from struct page
       [not found] ` <20211201181510.18784-1-vbabka-AlSwsSmVLrQ@public.gmane.org>
  2021-12-01 18:14   ` [PATCH v2 22/33] mm: Convert struct page to struct slab in functions used by other subsystems Vlastimil Babka
  2021-12-01 18:15   ` [PATCH v2 23/33] mm/memcg: Convert slab objcgs from struct page to struct slab Vlastimil Babka
@ 2021-12-02 12:25   ` Vlastimil Babka
  2021-12-14 12:57   ` Vlastimil Babka
  3 siblings, 0 replies; 28+ messages in thread
From: Vlastimil Babka @ 2021-12-02 12:25 UTC (permalink / raw)
  To: Matthew Wilcox, Christoph Lameter, David Rientjes, Joonsoo Kim,
	Pekka Enberg
  Cc: linux-mm-Bw31MaZKKs3YtjvyW6yDsg, Andrew Morton,
	patches-cunTk1MwBs/YUNznpcFYbw, Alexander Potapenko,
	Andrey Konovalov, Andrey Ryabinin, Andy Lutomirski,
	Borislav Petkov, cgroups-u79uwXL29TY76Z2rM5mHXA, Dave Hansen,
	David Woodhouse, Dmitry Vyukov, H. Peter Anvin, Ingo Molnar,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA, Joerg Roedel,
	Johannes Weiner, Julia Lawall, kasan-dev-/JYPxA39Uh5TLH3MbocFFw,
	Lu Baolu, Luis Chamberlain, Marco Elver, Mic

On 12/1/21 19:14, Vlastimil Babka wrote:
> Folks from non-slab subsystems are Cc'd only to patches affecting them, and
> this cover letter.
> 
> Series also available in git, based on 5.16-rc3:
> https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2

I have pushed a v3, but am not going to resend immediately, to avoid
unnecessary spamming. The difference is just that some patches are removed and
others reordered, so the current v2 posting should still be sufficient for
on-list review:

https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v3r1

patch 29/33 iommu: Use put_pages_list
- removed, as this version is broken and Robin Murphy has meanwhile partially
incorporated it into his series:
https://lore.kernel.org/lkml/cover.1637671820.git.robin.murphy-5wv7dgnIgG8@public.gmane.org/

patch 30/33 mm: Remove slab from struct page
- removed and postponed for later, as this can only be applied after the
iommu use of page.freelist is resolved

patch 27/33 zsmalloc: Stop using slab fields in struct page
patch 28/33 bootmem: Use page->index instead of page->freelist
- moved towards the end of the series, to further separate out the part that
adjusts non-slab users of slab fields, in preparation for removing those
fields from struct page.


* Re: [PATCH v2 22/33] mm: Convert struct page to struct slab in functions used by other subsystems
       [not found]     ` <20211201181510.18784-23-vbabka-AlSwsSmVLrQ@public.gmane.org>
@ 2021-12-02 17:16       ` Andrey Konovalov
  2021-12-14 14:31       ` Johannes Weiner
  1 sibling, 0 replies; 28+ messages in thread
From: Andrey Konovalov @ 2021-12-02 17:16 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Matthew Wilcox, Christoph Lameter, David Rientjes, Joonsoo Kim,
	Pekka Enberg, Linux Memory Management List, Andrew Morton,
	patches-cunTk1MwBs/YUNznpcFYbw, Julia Lawall, Luis Chamberlain,
	Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Marco Elver,
	Johannes Weiner, Michal Hocko, Vladimir Davydov, kasan-dev,
	cgroups-u79uwXL29TY76Z2rM5mHXA

On Wed, Dec 1, 2021 at 7:15 PM Vlastimil Babka <vbabka-AlSwsSmVLrQ@public.gmane.org> wrote:
>
> KASAN, KFENCE and memcg interact with SLAB or SLUB internals through the
> functions nearest_obj(), obj_to_index() and objs_per_slab(), which take
> struct page as a parameter. This patch converts them to struct slab,
> including all callers, through a coccinelle semantic patch.
>
> [...]

Reviewed-by: Andrey Konovalov <andreyknvl-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>

Thanks!


* Re: [PATCH v2 00/33] Separate struct slab from struct page
       [not found] ` <20211201181510.18784-1-vbabka-AlSwsSmVLrQ@public.gmane.org>
                     ` (2 preceding siblings ...)
  2021-12-02 12:25   ` [PATCH v2 00/33] Separate struct slab from struct page Vlastimil Babka
@ 2021-12-14 12:57   ` Vlastimil Babka
       [not found]     ` <4c3dfdfa-2e19-a9a7-7945-3d75bc87ca05-AlSwsSmVLrQ@public.gmane.org>
  3 siblings, 1 reply; 28+ messages in thread
From: Vlastimil Babka @ 2021-12-14 12:57 UTC (permalink / raw)
  To: Matthew Wilcox, Christoph Lameter, David Rientjes, Joonsoo Kim,
	Pekka Enberg
  Cc: linux-mm-Bw31MaZKKs3YtjvyW6yDsg, Andrew Morton,
	patches-cunTk1MwBs/YUNznpcFYbw, Alexander Potapenko,
	Andrey Konovalov, Andrey Ryabinin, Andy Lutomirski,
	Borislav Petkov, cgroups-u79uwXL29TY76Z2rM5mHXA, Dave Hansen,
	David Woodhouse, Dmitry Vyukov, H. Peter Anvin, Ingo Molnar,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA, Joerg Roedel,
	Johannes Weiner, Julia Lawall, kasan-dev-/JYPxA39Uh5TLH3MbocFFw,
	Lu Baolu, Luis Chamberlain, Marco Elver, Mic

On 12/1/21 19:14, Vlastimil Babka wrote:
> Folks from non-slab subsystems are Cc'd only to patches affecting them, and
> this cover letter.
> 
> Series also available in git, based on 5.16-rc3:
> https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2

Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes, small
tweaks, and a new patch from Hyeonggon Yoo on top. To avoid too much spam,
here's a range-diff:

 1:  10b656f9eb1e =  1:  10b656f9eb1e mm: add virt_to_folio() and folio_address()
 2:  5e6ad846acf1 =  2:  5e6ad846acf1 mm/slab: Dissolve slab_map_pages() in its caller
 3:  48d4e9407aa0 =  3:  48d4e9407aa0 mm/slub: Make object_err() static
 4:  fe1e19081321 =  4:  fe1e19081321 mm: Split slab into its own type
 5:  af7fd46fbb9b =  5:  af7fd46fbb9b mm: Add account_slab() and unaccount_slab()
 6:  7ed088d601d9 =  6:  7ed088d601d9 mm: Convert virt_to_cache() to use struct slab
 7:  1d41188b9401 =  7:  1d41188b9401 mm: Convert __ksize() to struct slab
 8:  5d9d1231461f !  8:  8fd22e0b086e mm: Use struct slab in kmem_obj_info()
    @@ Commit message
         slab type instead of the page type, we make it obvious that this can
         only be called for slabs.
     
    +    [ vbabka-AlSwsSmVLrQ@public.gmane.org: also convert the related kmem_valid_obj() to folios ]
    +
         Signed-off-by: Matthew Wilcox (Oracle) <willy-wEGCiKHe2LqWVfeAwA7xHQ@public.gmane.org>
         Signed-off-by: Vlastimil Babka <vbabka-AlSwsSmVLrQ@public.gmane.org>
     
    @@ mm/slab.h: struct kmem_obj_info {
      #endif /* MM_SLAB_H */
     
      ## mm/slab_common.c ##
    +@@ mm/slab_common.c: bool slab_is_available(void)
    +  */
    + bool kmem_valid_obj(void *object)
    + {
    +-	struct page *page;
    ++	struct folio *folio;
    + 
    + 	/* Some arches consider ZERO_SIZE_PTR to be a valid address. */
    + 	if (object < (void *)PAGE_SIZE || !virt_addr_valid(object))
    + 		return false;
    +-	page = virt_to_head_page(object);
    +-	return PageSlab(page);
    ++	folio = virt_to_folio(object);
    ++	return folio_test_slab(folio);
    + }
    + EXPORT_SYMBOL_GPL(kmem_valid_obj);
    + 
     @@ mm/slab_common.c: void kmem_dump_obj(void *object)
      {
      	char *cp = IS_ENABLED(CONFIG_MMU) ? "" : "/vmalloc";
    @@ mm/slub.c: int __kmem_cache_shutdown(struct kmem_cache *s)
      	objp = base + s->size * objnr;
      	kpp->kp_objp = objp;
     -	if (WARN_ON_ONCE(objp < base || objp >= base + page->objects * s->size || (objp - base) % s->size) ||
    -+	if (WARN_ON_ONCE(objp < base || objp >= base + slab->objects * s->size || (objp - base) % s->size) ||
    ++	if (WARN_ON_ONCE(objp < base || objp >= base + slab->objects * s->size
    ++			 || (objp - base) % s->size) ||
      	    !(s->flags & SLAB_STORE_USER))
      		return;
      #ifdef CONFIG_SLUB_DEBUG
 9:  3aef771be335 !  9:  c97e73c3b6c2 mm: Convert check_heap_object() to use struct slab
    @@ mm/slab.h: struct kmem_obj_info {
     +#else
     +static inline
     +void __check_heap_object(const void *ptr, unsigned long n,
    -+			 const struct slab *slab, bool to_user) { }
    ++			 const struct slab *slab, bool to_user)
    ++{
    ++}
     +#endif
     +
      #endif /* MM_SLAB_H */
10:  2253e45e6bef = 10:  da05e0f7179c mm/slub: Convert detached_freelist to use a struct slab
11:  f28202bc27ba = 11:  383887e77104 mm/slub: Convert kfree() to use a struct slab
12:  31b58b1e914f = 12:  c46be093c637 mm/slub: Convert __slab_lock() and __slab_unlock() to struct slab
13:  636406a3ad59 = 13:  49dbbf917052 mm/slub: Convert print_page_info() to print_slab_info()
14:  3b49efda3b6f = 14:  4bb0c932156a mm/slub: Convert alloc_slab_page() to return a struct slab
15:  61a195526d3b ! 15:  4b9761b5cfab mm/slub: Convert __free_slab() to use struct slab
    @@ mm/slub.c: static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int n
      
     -	__ClearPageSlabPfmemalloc(page);
     -	__ClearPageSlab(page);
    +-	/* In union with page->mapping where page allocator expects NULL */
    +-	page->slab_cache = NULL;
     +	__slab_clear_pfmemalloc(slab);
     +	__folio_clear_slab(folio);
    - 	/* In union with page->mapping where page allocator expects NULL */
    --	page->slab_cache = NULL;
    -+	slab->slab_cache = NULL;
    ++	folio->mapping = NULL;
      	if (current->reclaim_state)
      		current->reclaim_state->reclaimed_slab += pages;
     -	unaccount_slab(page_slab(page), order, s);
16:  987c7ed31580 = 16:  f384ec918065 mm/slub: Convert pfmemalloc_match() to take a struct slab
17:  cc742564237e ! 17:  06738ade4e17 mm/slub: Convert most struct page to struct slab by spatch
    @@ Commit message
     
         // Options: --include-headers --no-includes --smpl-spacing include/linux/slub_def.h mm/slub.c
         // Note: needs coccinelle 1.1.1 to avoid breaking whitespace, and ocaml for the
    -    // embedded script script
    +    // embedded script
     
         // build list of functions to exclude from applying the next rule
         @initialize:ocaml@
18:  b45acac9aace = 18:  1a4f69a4cced mm/slub: Finish struct page to struct slab conversion
19:  76c3eeb39684 ! 19:  1d62d706e884 mm/slab: Convert kmem_getpages() and kmem_freepages() to struct slab
    @@ mm/slab.c: slab_out_of_memory(struct kmem_cache *cachep, gfp_t gfpflags, int nod
     -	__ClearPageSlabPfmemalloc(page);
     -	__ClearPageSlab(page);
     -	page_mapcount_reset(page);
    +-	/* In union with page->mapping where page allocator expects NULL */
    +-	page->slab_cache = NULL;
     +	BUG_ON(!folio_test_slab(folio));
     +	__slab_clear_pfmemalloc(slab);
     +	__folio_clear_slab(folio);
     +	page_mapcount_reset(folio_page(folio, 0));
    - 	/* In union with page->mapping where page allocator expects NULL */
    --	page->slab_cache = NULL;
    -+	slab->slab_cache = NULL;
    ++	folio->mapping = NULL;
      
      	if (current->reclaim_state)
      		current->reclaim_state->reclaimed_slab += 1 << order;
20:  ed6144dbebce ! 20:  fd4c3aabacd3 mm/slab: Convert most struct page to struct slab by spatch
    @@ Commit message
     
         // Options: --include-headers --no-includes --smpl-spacing mm/slab.c
         // Note: needs coccinelle 1.1.1 to avoid breaking whitespace, and ocaml for the
    -    // embedded script script
    +    // embedded script
     
         // build list of functions for applying the next rule
         @initialize:ocaml@
21:  17fb81e601e6 = 21:  b59720b2edba mm/slab: Finish struct page to struct slab conversion
22:  4e8d1faebc24 ! 22:  65ced071c3e7 mm: Convert struct page to struct slab in functions used by other subsystems
    @@ Commit message
           ,...)
     
         Signed-off-by: Vlastimil Babka <vbabka-AlSwsSmVLrQ@public.gmane.org>
    +    Reviewed-by: Andrey Konovalov <andreyknvl-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
         Cc: Julia Lawall <julia.lawall-MZpvjPyXg2s@public.gmane.org>
         Cc: Luis Chamberlain <mcgrof-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
         Cc: Andrey Ryabinin <ryabinin.a.a-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
23:  eefa12e18a92 = 23:  c9c8dee01e5d mm/memcg: Convert slab objcgs from struct page to struct slab
24:  fa5ba4107ce2 ! 24:  def731137335 mm/slob: Convert SLOB to use struct slab
    @@ Metadata
     Author: Matthew Wilcox (Oracle) <willy-wEGCiKHe2LqWVfeAwA7xHQ@public.gmane.org>
     
      ## Commit message ##
    -    mm/slob: Convert SLOB to use struct slab
    +    mm/slob: Convert SLOB to use struct slab and struct folio
     
    -    Use struct slab throughout the slob allocator.
    +    Use struct slab throughout the slob allocator. Where non-slab page can appear
    +    use struct folio instead of struct page.
     
         [ vbabka-AlSwsSmVLrQ@public.gmane.org: don't introduce wrappers for PageSlobFree in mm/slab.h just
           for the single callers being wrappers in mm/slob.c ]
     
    +    [ Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>: fix NULL pointer deference ]
    +
         Signed-off-by: Matthew Wilcox (Oracle) <willy-wEGCiKHe2LqWVfeAwA7xHQ@public.gmane.org>
         Signed-off-by: Vlastimil Babka <vbabka-AlSwsSmVLrQ@public.gmane.org>
     
      ## mm/slob.c ##
    +@@
    +  * If kmalloc is asked for objects of PAGE_SIZE or larger, it calls
    +  * alloc_pages() directly, allocating compound pages so the page order
    +  * does not have to be separately tracked.
    +- * These objects are detected in kfree() because PageSlab()
    ++ * These objects are detected in kfree() because folio_test_slab()
    +  * is false for them.
    +  *
    +  * SLAB is emulated on top of SLOB by simply calling constructors and
     @@ mm/slob.c: static LIST_HEAD(free_slob_large);
      /*
       * slob_page_free: true for pages on free_slob_pages list.
    @@ mm/slob.c: static void *slob_page_alloc(struct page *sp, size_t size, int align,
      							int align_offset)
      {
     -	struct page *sp;
    ++	struct folio *folio;
     +	struct slab *sp;
      	struct list_head *slob_list;
      	slob_t *b = NULL;
    @@ mm/slob.c: static void *slob_alloc(size_t size, gfp_t gfp, int align, int node,
      			return NULL;
     -		sp = virt_to_page(b);
     -		__SetPageSlab(sp);
    -+		sp = virt_to_slab(b);
    -+		__SetPageSlab(slab_page(sp));
    ++		folio = virt_to_folio(b);
    ++		__folio_set_slab(folio);
    ++		sp = folio_slab(folio);
      
      		spin_lock_irqsave(&slob_lock, flags);
      		sp->units = SLOB_UNITS(PAGE_SIZE);
    @@ mm/slob.c: static void slob_free(void *block, int size)
      		spin_unlock_irqrestore(&slob_lock, flags);
     -		__ClearPageSlab(sp);
     -		page_mapcount_reset(sp);
    -+		__ClearPageSlab(slab_page(sp));
    ++		__folio_clear_slab(slab_folio(sp));
     +		page_mapcount_reset(slab_page(sp));
      		slob_free_pages(b, 0);
      		return;
      	}
    +@@ mm/slob.c: EXPORT_SYMBOL(__kmalloc_node_track_caller);
    + 
    + void kfree(const void *block)
    + {
    +-	struct page *sp;
    ++	struct folio *sp;
    + 
    + 	trace_kfree(_RET_IP_, block);
    + 
    +@@ mm/slob.c: void kfree(const void *block)
    + 		return;
    + 	kmemleak_free(block);
    + 
    +-	sp = virt_to_page(block);
    +-	if (PageSlab(sp)) {
    ++	sp = virt_to_folio(block);
    ++	if (folio_test_slab(sp)) {
    + 		int align = max_t(size_t, ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN);
    + 		unsigned int *m = (unsigned int *)(block - align);
    + 		slob_free(m, *m + align);
    + 	} else {
    +-		unsigned int order = compound_order(sp);
    +-		mod_node_page_state(page_pgdat(sp), NR_SLAB_UNRECLAIMABLE_B,
    ++		unsigned int order = folio_order(sp);
    ++
    ++		mod_node_page_state(folio_pgdat(sp), NR_SLAB_UNRECLAIMABLE_B,
    + 				    -(PAGE_SIZE << order));
    +-		__free_pages(sp, order);
    ++		__free_pages(folio_page(sp, 0), order);
    + 
    + 	}
    + }
25:  aa4f573a4c96 ! 25:  466b9fb1f6e5 mm/kasan: Convert to struct folio and struct slab
    @@ Commit message
     
         Signed-off-by: Matthew Wilcox (Oracle) <willy-wEGCiKHe2LqWVfeAwA7xHQ@public.gmane.org>
         Signed-off-by: Vlastimil Babka <vbabka-AlSwsSmVLrQ@public.gmane.org>
    +    Reviewed-by: Andrey Konovalov <andreyknvl-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
         Cc: Andrey Ryabinin <ryabinin.a.a-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
         Cc: Alexander Potapenko <glider-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
         Cc: Andrey Konovalov <andreyknvl-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
26:  67b7966d2fb6 = 26:  b8159ae8e5cd mm/kfence: Convert kfence_guarded_alloc() to struct slab
31:  d64dfe49c1e7 ! 27:  4525180926f9 mm/sl*b: Differentiate struct slab fields by sl*b implementations
    @@ Commit message
         possible.
     
         This should also prevent accidental use of fields that don't exist in given
    -    implementation. Before this patch virt_to_cache() and and cache_from_obj() was
    -    visible for SLOB (albeit not used), although it relies on the slab_cache field
    +    implementation. Before this patch virt_to_cache() and cache_from_obj() were
    +    visible for SLOB (albeit not used), although they rely on the slab_cache field
         that isn't set by SLOB. With this patch it's now a compile error, so these
         functions are now hidden behind #ifndef CONFIG_SLOB.
     
    @@ mm/kfence/core.c: static void *kfence_guarded_alloc(struct kmem_cache *cache, si
     -		slab->s_mem = addr;
     +#if defined(CONFIG_SLUB)
     +	slab->objects = 1;
    -+#elif defined (CONFIG_SLAB)
    ++#elif defined(CONFIG_SLAB)
     +	slab->s_mem = addr;
     +#endif
      
    @@ mm/slab.h
     +
     +#if defined(CONFIG_SLAB)
     +
    -+	union {
    -+		struct list_head slab_list;
    + 	union {
    + 		struct list_head slab_list;
    +-		struct {	/* Partial pages */
     +		struct rcu_head rcu_head;
     +	};
     +	struct kmem_cache *slab_cache;
     +	void *freelist;	/* array of free object indexes */
    -+	void * s_mem;	/* first object */
    ++	void *s_mem;	/* first object */
     +	unsigned int active;
     +
     +#elif defined(CONFIG_SLUB)
     +
    - 	union {
    - 		struct list_head slab_list;
    --		struct {	/* Partial pages */
    ++	union {
    ++		struct list_head slab_list;
     +		struct rcu_head rcu_head;
     +		struct {
      			struct slab *next;
    @@ mm/slab.h: struct slab {
     +#elif defined(CONFIG_SLOB)
     +
     +	struct list_head slab_list;
    -+	void * __unused_1;
    ++	void *__unused_1;
     +	void *freelist;		/* first free block */
    -+	void * __unused_2;
    ++	void *__unused_2;
     +	int units;
     +
     +#else
    @@ mm/slab.h: struct slab {
      #ifdef CONFIG_MEMCG
      	unsigned long memcg_data;
     @@ mm/slab.h: struct slab {
    - 	static_assert(offsetof(struct page, pg) == offsetof(struct slab, sl))
      SLAB_MATCH(flags, __page_flags);
      SLAB_MATCH(compound_head, slab_list);	/* Ensure bit 0 is clear */
    + SLAB_MATCH(slab_list, slab_list);
     +#ifndef CONFIG_SLOB
      SLAB_MATCH(rcu_head, rcu_head);
    + SLAB_MATCH(slab_cache, slab_cache);
    ++#endif
    ++#ifdef CONFIG_SLAB
    + SLAB_MATCH(s_mem, s_mem);
    + SLAB_MATCH(active, active);
     +#endif
      SLAB_MATCH(_refcount, __page_refcount);
      #ifdef CONFIG_MEMCG
32:  0abf87bae67e = 28:  94b78948d53f mm/slub: Simplify struct slab slabs field definition
33:  813c304f18e4 = 29:  f5261e6375f0 mm/slub: Define struct slab fields for CONFIG_SLUB_CPU_PARTIAL only when enabled
27:  ebce4b5b5ced ! 30:  1414e8c87de6 zsmalloc: Stop using slab fields in struct page
    @@ Commit message
     
         Signed-off-by: Matthew Wilcox (Oracle) <willy-wEGCiKHe2LqWVfeAwA7xHQ@public.gmane.org>
         Signed-off-by: Vlastimil Babka <vbabka-AlSwsSmVLrQ@public.gmane.org>
    -    Cc: Minchan Kim <minchan-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
    +    Acked-by: Minchan Kim <minchan-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
         Cc: Nitin Gupta <ngupta-KNmc09w0p+Ednm+yROfE0A@public.gmane.org>
         Cc: Sergey Senozhatsky <senozhatsky-F7+t8E8rja9g9hUCZPvPmw@public.gmane.org>
     
28:  f124425ae7de = 31:  8a3cda6b38eb bootmem: Use page->index instead of page->freelist
29:  82da48c73b2e <  -:  ------------ iommu: Use put_pages_list
30:  181e16dfefbb <  -:  ------------ mm: Remove slab from struct page
 -:  ------------ > 32:  91e069ba116b mm/slob: Remove unnecessary page_mapcount_reset() function call

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 22/33] mm: Convert struct page to struct slab in functions used by other subsystems
       [not found]     ` <20211201181510.18784-23-vbabka-AlSwsSmVLrQ@public.gmane.org>
  2021-12-02 17:16       ` Andrey Konovalov
@ 2021-12-14 14:31       ` Johannes Weiner
  1 sibling, 0 replies; 28+ messages in thread
From: Johannes Weiner @ 2021-12-14 14:31 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Matthew Wilcox, Christoph Lameter, David Rientjes, Joonsoo Kim,
	Pekka Enberg, linux-mm-Bw31MaZKKs3YtjvyW6yDsg, Andrew Morton,
	patches-cunTk1MwBs/YUNznpcFYbw, Julia Lawall, Luis Chamberlain,
	Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov,
	Dmitry Vyukov, Marco Elver, Michal Hocko, Vladimir Davydov,
	kasan-dev-/JYPxA39Uh5TLH3MbocFFw, cgroups-u79uwXL29TY76Z2rM5mHXA

On Wed, Dec 01, 2021 at 07:14:59PM +0100, Vlastimil Babka wrote:
> KASAN, KFENCE and memcg interact with SLAB or SLUB internals through functions
> nearest_obj(), obj_to_index() and objs_per_slab() that use struct page as
> a parameter. This patch converts them to struct slab, including all callers,
> through a coccinelle semantic patch.
> 
> // Options: --include-headers --no-includes --smpl-spacing include/linux/slab_def.h include/linux/slub_def.h mm/slab.h mm/kasan/*.c mm/kfence/kfence_test.c mm/memcontrol.c mm/slab.c mm/slub.c
> // Note: needs coccinelle 1.1.1 to avoid breaking whitespace
> 
> @@
> @@
> 
> -objs_per_slab_page(
> +objs_per_slab(
>  ...
>  )
>  { ... }
> 
> @@
> @@
> 
> -objs_per_slab_page(
> +objs_per_slab(
>  ...
>  )
> 
> @@
> identifier fn =~ "obj_to_index|objs_per_slab";
> @@
> 
>  fn(...,
> -   const struct page *page
> +   const struct slab *slab
>     ,...)
>  {
> <...
> (
> - page_address(page)
> + slab_address(slab)
> |
> - page
> + slab
> )
> ...>
>  }
> 
> @@
> identifier fn =~ "nearest_obj";
> @@
> 
>  fn(...,
> -   struct page *page
> +   const struct slab *slab
>     ,...)
>  {
> <...
> (
> - page_address(page)
> + slab_address(slab)
> |
> - page
> + slab
> )
> ...>
>  }
> 
> @@
> identifier fn =~ "nearest_obj|obj_to_index|objs_per_slab";
> expression E;
> @@
> 
>  fn(...,
> (
> - slab_page(E)
> + E
> |
> - virt_to_page(E)
> + virt_to_slab(E)
> |
> - virt_to_head_page(E)
> + virt_to_slab(E)
> |
> - page
> + page_slab(page)
> )
>   ,...)
> 
> Signed-off-by: Vlastimil Babka <vbabka-AlSwsSmVLrQ@public.gmane.org>
> Cc: Julia Lawall <julia.lawall-MZpvjPyXg2s@public.gmane.org>
> Cc: Luis Chamberlain <mcgrof-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
> Cc: Andrey Ryabinin <ryabinin.a.a-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
> Cc: Alexander Potapenko <glider-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
> Cc: Andrey Konovalov <andreyknvl-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
> Cc: Dmitry Vyukov <dvyukov-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
> Cc: Marco Elver <elver-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
> Cc: Johannes Weiner <hannes-druUgvl0LCNAfugRpC6u6w@public.gmane.org>
> Cc: Michal Hocko <mhocko-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
> Cc: Vladimir Davydov <vdavydov.dev-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
> Cc: <kasan-dev-/JYPxA39Uh5TLH3MbocFFw@public.gmane.org>
> Cc: <cgroups-u79uwXL29TY76Z2rM5mHXA@public.gmane.org>

LGTM.

Acked-by: Johannes Weiner <hannes-druUgvl0LCNAfugRpC6u6w@public.gmane.org>
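
(For illustration: via the "- slab_page(E) / + E" rule in the final @@ block,
the semantic patch above mechanically rewrites call sites such as this one
from the kmem_obj_info() hunk quoted earlier in the thread.)

-	objnr = obj_to_index(s, slab_page(slab), objp);
+	objnr = obj_to_index(s, slab, objp);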

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 00/33] Separate struct slab from struct page
       [not found]     ` <4c3dfdfa-2e19-a9a7-7945-3d75bc87ca05-AlSwsSmVLrQ@public.gmane.org>
@ 2021-12-14 14:38       ` Hyeonggon Yoo
  2021-12-14 14:43         ` Vlastimil Babka
  2021-12-15  1:03       ` Roman Gushchin via iommu
                         ` (2 subsequent siblings)
  3 siblings, 1 reply; 28+ messages in thread
From: Hyeonggon Yoo @ 2021-12-14 14:38 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Matthew Wilcox, Christoph Lameter, David Rientjes, Joonsoo Kim,
	Pekka Enberg, linux-mm-Bw31MaZKKs3YtjvyW6yDsg, Andrew Morton,
	patches-cunTk1MwBs/YUNznpcFYbw, Alexander Potapenko,
	Andrey Konovalov, Andrey Ryabinin, Andy Lutomirski,
	Borislav Petkov, cgroups-u79uwXL29TY76Z2rM5mHXA, Dave Hansen,
	David Woodhouse, Dmitry Vyukov, H. Peter Anvin, Ingo Molnar,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA, Joerg Roedel,
	Johannes Weiner

On Tue, Dec 14, 2021 at 01:57:22PM +0100, Vlastimil Babka wrote:
> On 12/1/21 19:14, Vlastimil Babka wrote:
> > Folks from non-slab subsystems are Cc'd only to patches affecting them, and
> > this cover letter.
> > 
> > Series also available in git, based on 5.16-rc3:
> > https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
> 
> Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes and small tweaks
> and a new patch from Hyeonggon Yoo on top. To avoid too much spam, here's a range diff:
> 

Hello Vlastimil, thank you for the nice work.
I'm going to review and test the new version soon in my free time.

Btw, I gave you some review and test tags that seem to be missing in the new
series. Did I do the review/test process wrongly? It's my first time reviewing
patches, so please let me know if I did something wrong.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 23/33] mm/memcg: Convert slab objcgs from struct page to struct slab
       [not found]     ` <20211201181510.18784-24-vbabka-AlSwsSmVLrQ@public.gmane.org>
@ 2021-12-14 14:43       ` Johannes Weiner
       [not found]         ` <Ybite9s1TS7cS67J-druUgvl0LCNAfugRpC6u6w@public.gmane.org>
  0 siblings, 1 reply; 28+ messages in thread
From: Johannes Weiner @ 2021-12-14 14:43 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Matthew Wilcox, Christoph Lameter, David Rientjes, Joonsoo Kim,
	Pekka Enberg, linux-mm-Bw31MaZKKs3YtjvyW6yDsg, Andrew Morton,
	patches-cunTk1MwBs/YUNznpcFYbw, Michal Hocko, Vladimir Davydov,
	cgroups-u79uwXL29TY76Z2rM5mHXA

On Wed, Dec 01, 2021 at 07:15:00PM +0100, Vlastimil Babka wrote:
> page->memcg_data is used with the MEMCG_DATA_OBJCGS flag only for slab pages,
> so convert all the related infrastructure to struct slab.
> 
> To avoid include cycles, move the inline definitions of slab_objcgs() and
> slab_objcgs_check() from memcontrol.h to mm/slab.h.
> 
> This is not just a mechanistic change of types and names. Now in
> mem_cgroup_from_obj() we use the PageSlab flag to decide whether to interpret
> the page as a slab, instead of relying on the MEMCG_DATA_OBJCGS bit checked in
> page_objcgs_check() (now slab_objcgs_check()). Similarly in
> memcg_slab_free_hook(), where we can encounter kmalloc_large() pages (here the
> PageSlab flag check is implied by virt_to_slab()).

Yup, this is great.
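
(For reference, a sketch of the check implied by virt_to_slab(), assuming it
is built on the virt_to_folio()/folio_test_slab() helpers added earlier in
the series:

	static inline struct slab *virt_to_slab(const void *addr)
	{
		struct folio *folio = virt_to_folio(addr);

		/* e.g. a large kmalloc() page, not a slab */
		if (!folio_test_slab(folio))
			return NULL;

		return folio_slab(folio);
	}

hence the PageSlab check being implied in memcg_slab_free_hook().)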

> @@ -2865,24 +2865,31 @@ int memcg_alloc_page_obj_cgroups(struct page *page, struct kmem_cache *s,
>   */
>  struct mem_cgroup *mem_cgroup_from_obj(void *p)
>  {
> -	struct page *page;
> +	struct folio *folio;
>  
>  	if (mem_cgroup_disabled())
>  		return NULL;
>  
> -	page = virt_to_head_page(p);
> +	folio = virt_to_folio(p);
>  
>  	/*
>  	 * Slab objects are accounted individually, not per-page.
>  	 * Memcg membership data for each individual object is saved in
>  	 * the page->obj_cgroups.
>  	 */
> -	if (page_objcgs_check(page)) {
> +	if (folio_test_slab(folio)) {
> +		struct obj_cgroup **objcgs;
>  		struct obj_cgroup *objcg;
> +		struct slab *slab;
>  		unsigned int off;
>  
> -		off = obj_to_index(page->slab_cache, page_slab(page), p);
> -		objcg = page_objcgs(page)[off];
> +		slab = folio_slab(folio);
> +		objcgs = slab_objcgs_check(slab);

AFAICS the change to the _check() variant was accidental.

folio_test_slab() makes sure it's a slab page, so the legit options
for memcg_data are NULL or |MEMCG_DATA_OBJCGS; using slab_objcgs()
here would include the proper asserts, like page_objcgs() used to.
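
A minimal sketch of the suggested change, assuming slab_objcgs() carries the
same asserts that page_objcgs() used to:

		slab = folio_slab(folio);
		/* would assert memcg_data actually has MEMCG_DATA_OBJCGS set */
		objcgs = slab_objcgs(slab);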

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 00/33] Separate struct slab from struct page
  2021-12-14 14:38       ` Hyeonggon Yoo
@ 2021-12-14 14:43         ` Vlastimil Babka
       [not found]           ` <87584294-b1bc-aabe-d86a-1a8b93a7f4d4-AlSwsSmVLrQ@public.gmane.org>
  0 siblings, 1 reply; 28+ messages in thread
From: Vlastimil Babka @ 2021-12-14 14:43 UTC (permalink / raw)
  To: Hyeonggon Yoo
  Cc: Matthew Wilcox, Christoph Lameter, David Rientjes, Joonsoo Kim,
	Pekka Enberg, linux-mm-Bw31MaZKKs3YtjvyW6yDsg, Andrew Morton,
	patches-cunTk1MwBs/YUNznpcFYbw, Alexander Potapenko,
	Andrey Konovalov, Andrey Ryabinin, Andy Lutomirski,
	Borislav Petkov, cgroups-u79uwXL29TY76Z2rM5mHXA, Dave Hansen,
	David Woodhouse, Dmitry Vyukov, H. Peter Anvin, Ingo Molnar,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA, Joerg Roedel,
	Johannes Weiner

On 12/14/21 15:38, Hyeonggon Yoo wrote:
> On Tue, Dec 14, 2021 at 01:57:22PM +0100, Vlastimil Babka wrote:
>> On 12/1/21 19:14, Vlastimil Babka wrote:
>> > Folks from non-slab subsystems are Cc'd only to patches affecting them, and
>> > this cover letter.
>> > 
>> > Series also available in git, based on 5.16-rc3:
>> > https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
>> 
>> Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes and small tweaks
>> and a new patch from Hyeonggon Yoo on top. To avoid too much spam, here's a range diff:
>> 
> 
> Hello Vlastimil, thank you for the nice work.
> I'm going to review and test the new version soon in my free time.

Thanks!

> Btw, I gave you some review and test tags that seem to be missing in the new
> series. Did I do the review/test process wrongly? It's my first time reviewing
> patches, so please let me know if I did something wrong.

You did it right, sorry! I didn't include them because those were for patches that I
was additionally changing after your review/test, and the decision of what is a
substantial enough change to need a new test/review is often fuzzy. So if
you can recheck the new versions, it would be great, and then I will pick that
up, thanks!

> --
> Thank you.
> Hyeonggon.


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 00/33] Separate struct slab from struct page
       [not found]     ` <4c3dfdfa-2e19-a9a7-7945-3d75bc87ca05-AlSwsSmVLrQ@public.gmane.org>
  2021-12-14 14:38       ` Hyeonggon Yoo
@ 2021-12-15  1:03       ` Roman Gushchin via iommu
       [not found]         ` <Ybk+0LKrsAJatILE-cx5fftMpWqeCjSd+JxjunQ2O0Ztt9esIQQ4Iyu8u01E@public.gmane.org>
  2021-12-16 15:00       ` Hyeonggon Yoo
  2021-12-22 16:56       ` Vlastimil Babka
  3 siblings, 1 reply; 28+ messages in thread
From: Roman Gushchin via iommu @ 2021-12-15  1:03 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Peter Zijlstra, Dave Hansen, Michal Hocko,
	linux-mm-Bw31MaZKKs3YtjvyW6yDsg, Andrey Ryabinin,
	Alexander Potapenko, kasan-dev-/JYPxA39Uh5TLH3MbocFFw,
	H. Peter Anvin, Hyeonggon Yoo, Christoph Lameter, Will Deacon,
	Julia Lawall, Sergey Senozhatsky, x86-DgEjT+Ai2ygdnm+yROfE0A,
	Luis Chamberlain, Matthew Wilcox, Ingo Molnar, Vladimir Davydov,
	David Rientjes, Nitin Gupta, Marco Elver, Borislav Petkov,
	Andy Lutomirski, cgroups-u79uwXL29TY76Z2rM5mHXA, Thomas Gleixner,
	Joonsoo Kim <ia>

On Tue, Dec 14, 2021 at 01:57:22PM +0100, Vlastimil Babka wrote:
> On 12/1/21 19:14, Vlastimil Babka wrote:
> > Folks from non-slab subsystems are Cc'd only to patches affecting them, and
> > this cover letter.
> > 
> > Series also available in git, based on 5.16-rc3:
> > https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
> 
> Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes and small tweaks
> and a new patch from Hyeonggon Yoo on top. To avoid too much spam, here's a range diff:

Hi Vlastimil!

I've started to review this patchset (btw, really nice work, I like
the resulting code way more). Because I'm looking at v3 and I don't have
the whole v2 in my mailbox, here is what I have so far:

* mm: add virt_to_folio() and folio_address()
Reviewed-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>

* mm/slab: Dissolve slab_map_pages() in its caller
Reviewed-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>

* mm/slub: Make object_err() static
Reviewed-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>

* mm: Split slab into its own type
1) Shouldn't the SLAB_MATCH() macro use struct folio instead of struct page for the
comparison?
2) page_slab() is used only in kasan and only in one place, so maybe it's better
to not introduce it as a generic helper?
Other than that
Reviewed-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>
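
(For context, SLAB_MATCH() in that patch asserts layout compatibility between
struct page and struct slab, along the lines of what the range diff shows:

	#define SLAB_MATCH(pg, sl) \
		static_assert(offsetof(struct page, pg) == offsetof(struct slab, sl))

so matching against struct folio would only work for fields that folio also
defines.)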

* mm: Add account_slab() and unaccount_slab()
1) maybe change the title to convert/replace instead of add?
2) maybe move later changes to memcg_alloc_page_obj_cgroups() to this patch?
Reviewed-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>

* mm: Convert virt_to_cache() to use struct slab
Reviewed-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>

* mm: Convert __ksize() to struct slab
It looks like certain parts of __ksize() can be merged between slab, slub and slob?
Reviewed-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>

* mm: Use struct slab in kmem_obj_info()
Reviewed-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>


I'll try to finish reviewing the patchset by the end of the week.

Thanks!

Roman

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 00/33] Separate struct slab from struct page
       [not found]           ` <87584294-b1bc-aabe-d86a-1a8b93a7f4d4-AlSwsSmVLrQ@public.gmane.org>
@ 2021-12-15  3:47             ` Hyeonggon Yoo
  0 siblings, 0 replies; 28+ messages in thread
From: Hyeonggon Yoo @ 2021-12-15  3:47 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Matthew Wilcox, Christoph Lameter, David Rientjes, Joonsoo Kim,
	Pekka Enberg, linux-mm-Bw31MaZKKs3YtjvyW6yDsg, Andrew Morton,
	patches-cunTk1MwBs/YUNznpcFYbw, Alexander Potapenko,
	Andrey Konovalov, Andrey Ryabinin, Andy Lutomirski,
	Borislav Petkov, cgroups-u79uwXL29TY76Z2rM5mHXA, Dave Hansen,
	David Woodhouse, Dmitry Vyukov, H. Peter Anvin, Ingo Molnar,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA, Joerg Roedel,
	Johannes Weiner

On Tue, Dec 14, 2021 at 03:43:35PM +0100, Vlastimil Babka wrote:
> On 12/14/21 15:38, Hyeonggon Yoo wrote:
> > On Tue, Dec 14, 2021 at 01:57:22PM +0100, Vlastimil Babka wrote:
> >> On 12/1/21 19:14, Vlastimil Babka wrote:
> >> > Folks from non-slab subsystems are Cc'd only to patches affecting them, and
> >> > this cover letter.
> >> > 
> >> > Series also available in git, based on 5.16-rc3:
> >> > https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
> >> 
> >> Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes and small tweaks
> >> and a new patch from Hyeonggon Yoo on top. To avoid too much spam, here's a range diff:
> >> 
> > 
> > Hello Vlastimil, thank you for the nice work.
> > I'm going to review and test the new version soon in my free time.
> 
> Thanks!
> 

You're welcome!

> > Btw, I gave you some review and test tags that seem to be missing in the new
> > series. Did I do the review/test process wrongly? It's my first time reviewing
> > patches, so please let me know if I did something wrong.
> 
> You did it right, sorry! I didn't include them because those were for patches that I
> was additionally changing after your review/test, and the decision of what is a
> substantial enough change to need a new test/review is often fuzzy.

Ah, okay. Review/test tags become invalid after further changes.
That's fine. I was just unfamiliar with the process. Thank you!

> So if you can recheck the new versions, it would be great, and then I will pick that
> up, thanks!

Okay. I'll recheck the new versions.

> 
> > --
> > Thank you.
> > Hyeonggon.
> 

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 00/33] Separate struct slab from struct page
       [not found]         ` <Ybk+0LKrsAJatILE-cx5fftMpWqeCjSd+JxjunQ2O0Ztt9esIQQ4Iyu8u01E@public.gmane.org>
@ 2021-12-15 23:38           ` Roman Gushchin
       [not found]             ` <Ybp8a5JNndgCLy2w-cx5fftMpWqeCjSd+JxjunQ2O0Ztt9esIQQ4Iyu8u01E@public.gmane.org>
  2021-12-20  0:24           ` Vlastimil Babka
  1 sibling, 1 reply; 28+ messages in thread
From: Roman Gushchin @ 2021-12-15 23:38 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Matthew Wilcox, Christoph Lameter, David Rientjes, Joonsoo Kim,
	Pekka Enberg, linux-mm-Bw31MaZKKs3YtjvyW6yDsg, Andrew Morton,
	patches-cunTk1MwBs/YUNznpcFYbw, Alexander Potapenko,
	Andrey Konovalov, Andrey Ryabinin, Andy Lutomirski,
	Borislav Petkov, cgroups-u79uwXL29TY76Z2rM5mHXA, Dave Hansen,
	David Woodhouse, Dmitry Vyukov, H. Peter Anvin, Ingo Molnar,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA, Joerg Roedel,
	Johannes Weiner <hannes>

On Tue, Dec 14, 2021 at 05:03:12PM -0800, Roman Gushchin wrote:
> On Tue, Dec 14, 2021 at 01:57:22PM +0100, Vlastimil Babka wrote:
> > On 12/1/21 19:14, Vlastimil Babka wrote:
> > > Folks from non-slab subsystems are Cc'd only to patches affecting them, and
> > > this cover letter.
> > > 
> > > Series also available in git, based on 5.16-rc3:
> > > https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
> > 
> > Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes and small tweaks
> > and a new patch from Hyeonggon Yoo on top. To avoid too much spam, here's a range diff:
> 
> Hi Vlastimil!
> 
> I've started to review this patchset (btw, really nice work, I like
> the resulting code way more). Because I'm looking at v3 and I don't have
> the whole v2 in my mailbox, here is what I have so far:
> 
> * mm: add virt_to_folio() and folio_address()
> Reviewed-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>
> 
> * mm/slab: Dissolve slab_map_pages() in its caller
> Reviewed-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>
> 
> * mm/slub: Make object_err() static
> Reviewed-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>
> 
> * mm: Split slab into its own type
> 1) Shouldn't the SLAB_MATCH() macro use struct folio instead of struct page for the
> comparison?
> 2) page_slab() is used only in kasan and only in one place, so maybe it's better
> to not introduce it as a generic helper?
> Other than that
> Reviewed-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>
> 
> * mm: Add account_slab() and unaccount_slab()
> 1) maybe change the title to convert/replace instead of add?
> 2) maybe move later changes to memcg_alloc_page_obj_cgroups() to this patch?
> Reviewed-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>
> 
> * mm: Convert virt_to_cache() to use struct slab
> Reviewed-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>
> 
> * mm: Convert __ksize() to struct slab
> It looks like certain parts of __ksize() can be merged between slab, slub and slob?
> Reviewed-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>
> 
> * mm: Use struct slab in kmem_obj_info()
> Reviewed-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>

Part 2:

* mm: Convert check_heap_object() to use struct slab
Reviewed-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>

* mm/slub: Convert detached_freelist to use a struct slab
How about converting free_nonslab_page() to free_nonslab_folio()?
And maybe renaming it to something like free_large_kmalloc()?
If I'm not missing something, large kmallocs are the only way we can end up
there with a !slab folio/page.
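
A rough sketch of the suggested helper (free_large_kmalloc() being the
hypothetical name proposed above; the body mirrors the !folio_test_slab()
branch of SLOB's kfree() in the range diff):

	static void free_large_kmalloc(struct folio *folio)
	{
		unsigned int order = folio_order(folio);

		mod_node_page_state(folio_pgdat(folio), NR_SLAB_UNRECLAIMABLE_B,
				    -(PAGE_SIZE << order));
		__free_pages(folio_page(folio, 0), order);
	}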

* mm/slub: Convert kfree() to use a struct slab
Reviewed-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>

* mm/slub: Convert __slab_lock() and __slab_unlock() to struct slab
Reviewed-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>

* mm/slub: Convert print_page_info() to print_slab_info()
Do we really need to explicitly convert slab_folio()'s result to (struct folio *)?
Reviewed-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>

* mm/slub: Convert alloc_slab_page() to return a struct slab
Reviewed-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>

* mm/slub: Convert __free_slab() to use struct slab
Reviewed-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>

* mm/slub: Convert pfmemalloc_match() to take a struct slab
Cool! Removing pfmemalloc_unsafe() is really nice.
Reviewed-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>

* mm/slub: Convert most struct page to struct slab by spatch
Reviewed-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>

* mm/slub: Finish struct page to struct slab conversion
Reviewed-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>

* mm/slab: Convert kmem_getpages() and kmem_freepages() to struct slab
Reviewed-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>

* mm/slab: Convert most struct page to struct slab by spatch

Another patch with the same title? Rebase error?

* mm/slab: Finish struct page to struct slab conversion

And this one too?


Thanks!

Roman

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 00/33] Separate struct slab from struct page
       [not found]             ` <Ybp8a5JNndgCLy2w-cx5fftMpWqeCjSd+JxjunQ2O0Ztt9esIQQ4Iyu8u01E@public.gmane.org>
@ 2021-12-16  9:19               ` Vlastimil Babka
  2021-12-20  0:47               ` Vlastimil Babka
  1 sibling, 0 replies; 28+ messages in thread
From: Vlastimil Babka @ 2021-12-16  9:19 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: Matthew Wilcox, Christoph Lameter, David Rientjes, Joonsoo Kim,
	Pekka Enberg, linux-mm-Bw31MaZKKs3YtjvyW6yDsg, Andrew Morton,
	patches-cunTk1MwBs/YUNznpcFYbw, Alexander Potapenko,
	Andrey Konovalov, Andrey Ryabinin, Andy Lutomirski,
	Borislav Petkov, cgroups-u79uwXL29TY76Z2rM5mHXA, Dave Hansen,
	David Woodhouse, Dmitry Vyukov, H. Peter Anvin, Ingo Molnar,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA, Joerg Roedel,
	Johannes Weiner

On 12/16/21 00:38, Roman Gushchin wrote:
> On Tue, Dec 14, 2021 at 05:03:12PM -0800, Roman Gushchin wrote:
>> On Tue, Dec 14, 2021 at 01:57:22PM +0100, Vlastimil Babka wrote:
>> > On 12/1/21 19:14, Vlastimil Babka wrote:
>> > > Folks from non-slab subsystems are Cc'd only to patches affecting them, and
>> > > this cover letter.
>> > > 
>> > > Series also available in git, based on 5.16-rc3:
>> > > https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
>> > 
>> > Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes and small tweaks
>> > and a new patch from Hyeonggon Yoo on top. To avoid too much spam, here's a range diff:
>> 
>> Hi Vlastimil!
>> 
>> I've started to review this patchset (btw, really nice work, I like
>> the resulting code way more). Because I'm looking at v3 and I don't have

Thanks a lot, Roman!

...

> 
> * mm/slab: Convert most struct page to struct slab by spatch
> 
> Another patch with the same title? Rebase error?
> 
> * mm/slab: Finish struct page to struct slab conversion
> 
> And this one too?

No, these are for mm/slab.c, the previous were for mm/slub.c :)

> 
> Thanks!
> 
> Roman


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 00/33] Separate struct slab from struct page
       [not found]     ` <4c3dfdfa-2e19-a9a7-7945-3d75bc87ca05-AlSwsSmVLrQ@public.gmane.org>
  2021-12-14 14:38       ` Hyeonggon Yoo
  2021-12-15  1:03       ` Roman Gushchin via iommu
@ 2021-12-16 15:00       ` Hyeonggon Yoo
       [not found]         ` <YbtUmi5kkhmlXEB1-UAkeum2rnor1owfsx/q8xBTA2CDgPq3U3JN/6IHv09UQhXZLsr/E6M+yXz+rYK7V@public.gmane.org>
  2021-12-22 16:56       ` Vlastimil Babka
  3 siblings, 1 reply; 28+ messages in thread
From: Hyeonggon Yoo @ 2021-12-16 15:00 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Matthew Wilcox, Christoph Lameter, David Rientjes, Joonsoo Kim,
	Pekka Enberg, linux-mm-Bw31MaZKKs3YtjvyW6yDsg, Andrew Morton,
	patches-cunTk1MwBs/YUNznpcFYbw, Alexander Potapenko,
	Andrey Konovalov, Andrey Ryabinin, Andy Lutomirski,
	Borislav Petkov, cgroups-u79uwXL29TY76Z2rM5mHXA, Dave Hansen,
	David Woodhouse, Dmitry Vyukov, H. Peter Anvin, Ingo Molnar,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA, Joerg Roedel,
	Johannes Weiner

On Tue, Dec 14, 2021 at 01:57:22PM +0100, Vlastimil Babka wrote:
> On 12/1/21 19:14, Vlastimil Babka wrote:
> > Folks from non-slab subsystems are Cc'd only to patches affecting them, and
> > this cover letter.
> > 
> > Series also available in git, based on 5.16-rc3:
> > https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
> 
> Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes and small tweaks
> and a new patch from Hyeonggon Yoo on top. To avoid too much spam, here's a range diff:

Reviewing the whole patch series takes longer than I thought.
I'll try to review and test the rest of the patches when I have time.

I added Tested-by if the kernel builds okay and kselftests
do not break the kernel on my machine
(with CONFIG_SLAB/SLUB/SLOB depending on the patch).
Let me know if you know a better way to test a patch.

# mm/slub: Define struct slab fields for CONFIG_SLUB_CPU_PARTIAL only when enabled

Reviewed-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
Tested-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>

Comment:
Works on both SLUB_CPU_PARTIAL and !SLUB_CPU_PARTIAL.
btw, do we need the slabs_cpu_partial attribute when we don't use
cpu partials? (!SLUB_CPU_PARTIAL)
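
(If it isn't needed, a hypothetical follow-up could compile the sysfs
attribute out as well, e.g.:

	#ifdef CONFIG_SLUB_CPU_PARTIAL
	SLAB_ATTR_RO(slabs_cpu_partial);
	#endif

assuming the attribute is defined via SLAB_ATTR_RO() like the other SLUB
sysfs files.)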

# mm/slub: Simplify struct slab slabs field definition
Comment:

This is how struct page looks on top of the v3r3 branch:
struct page {
[...]
                struct {        /* slab, slob and slub */
                        union {
                                struct list_head slab_list;
                                struct {        /* Partial pages */
                                        struct page *next;
#ifdef CONFIG_64BIT
                                        int pages;      /* Nr of pages left */
#else
                                        short int pages;
#endif
                                };
                        };
[...]

It's not consistent with struct slab.
I think this is because "mm: Remove slab from struct page" was dropped.
Would you update some of the patches?

# mm/sl*b: Differentiate struct slab fields by sl*b implementations
Reviewed-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
Tested-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
Works with SL[AUO]B on my machine and makes the code much better.

# mm/slob: Convert SLOB to use struct slab and struct folio
Reviewed-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
Tested-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
It still works fine on SLOB.

# mm/slab: Convert kmem_getpages() and kmem_freepages() to struct slab
Reviewed-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
Tested-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>

# mm/slub: Convert __free_slab() to use struct slab
Reviewed-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
Tested-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>

Thanks,
Hyeonggon.

> 
>  1:  10b656f9eb1e =  1:  10b656f9eb1e mm: add virt_to_folio() and folio_address()
>  2:  5e6ad846acf1 =  2:  5e6ad846acf1 mm/slab: Dissolve slab_map_pages() in its caller
>  3:  48d4e9407aa0 =  3:  48d4e9407aa0 mm/slub: Make object_err() static
>  4:  fe1e19081321 =  4:  fe1e19081321 mm: Split slab into its own type
>  5:  af7fd46fbb9b =  5:  af7fd46fbb9b mm: Add account_slab() and unaccount_slab()
>  6:  7ed088d601d9 =  6:  7ed088d601d9 mm: Convert virt_to_cache() to use struct slab
>  7:  1d41188b9401 =  7:  1d41188b9401 mm: Convert __ksize() to struct slab
>  8:  5d9d1231461f !  8:  8fd22e0b086e mm: Use struct slab in kmem_obj_info()
>     @@ Commit message
>          slab type instead of the page type, we make it obvious that this can
>          only be called for slabs.
>      
>     +    [ vbabka-AlSwsSmVLrQ@public.gmane.org: also convert the related kmem_valid_obj() to folios ]
>     +
>          Signed-off-by: Matthew Wilcox (Oracle) <willy-wEGCiKHe2LqWVfeAwA7xHQ@public.gmane.org>
>          Signed-off-by: Vlastimil Babka <vbabka-AlSwsSmVLrQ@public.gmane.org>
>      
>     @@ mm/slab.h: struct kmem_obj_info {
>       #endif /* MM_SLAB_H */
>      
>       ## mm/slab_common.c ##
>     +@@ mm/slab_common.c: bool slab_is_available(void)
>     +  */
>     + bool kmem_valid_obj(void *object)
>     + {
>     +-	struct page *page;
>     ++	struct folio *folio;
>     + 
>     + 	/* Some arches consider ZERO_SIZE_PTR to be a valid address. */
>     + 	if (object < (void *)PAGE_SIZE || !virt_addr_valid(object))
>     + 		return false;
>     +-	page = virt_to_head_page(object);
>     +-	return PageSlab(page);
>     ++	folio = virt_to_folio(object);
>     ++	return folio_test_slab(folio);
>     + }
>     + EXPORT_SYMBOL_GPL(kmem_valid_obj);
>     + 
>      @@ mm/slab_common.c: void kmem_dump_obj(void *object)
>       {
>       	char *cp = IS_ENABLED(CONFIG_MMU) ? "" : "/vmalloc";
>     @@ mm/slub.c: int __kmem_cache_shutdown(struct kmem_cache *s)
>       	objp = base + s->size * objnr;
>       	kpp->kp_objp = objp;
>      -	if (WARN_ON_ONCE(objp < base || objp >= base + page->objects * s->size || (objp - base) % s->size) ||
>     -+	if (WARN_ON_ONCE(objp < base || objp >= base + slab->objects * s->size || (objp - base) % s->size) ||
>     ++	if (WARN_ON_ONCE(objp < base || objp >= base + slab->objects * s->size
>     ++			 || (objp - base) % s->size) ||
>       	    !(s->flags & SLAB_STORE_USER))
>       		return;
>       #ifdef CONFIG_SLUB_DEBUG
>  9:  3aef771be335 !  9:  c97e73c3b6c2 mm: Convert check_heap_object() to use struct slab
>     @@ mm/slab.h: struct kmem_obj_info {
>      +#else
>      +static inline
>      +void __check_heap_object(const void *ptr, unsigned long n,
>     -+			 const struct slab *slab, bool to_user) { }
>     ++			 const struct slab *slab, bool to_user)
>     ++{
>     ++}
>      +#endif
>      +
>       #endif /* MM_SLAB_H */
> 10:  2253e45e6bef = 10:  da05e0f7179c mm/slub: Convert detached_freelist to use a struct slab
> 11:  f28202bc27ba = 11:  383887e77104 mm/slub: Convert kfree() to use a struct slab
> 12:  31b58b1e914f = 12:  c46be093c637 mm/slub: Convert __slab_lock() and __slab_unlock() to struct slab
> 13:  636406a3ad59 = 13:  49dbbf917052 mm/slub: Convert print_page_info() to print_slab_info()
> 14:  3b49efda3b6f = 14:  4bb0c932156a mm/slub: Convert alloc_slab_page() to return a struct slab
> 15:  61a195526d3b ! 15:  4b9761b5cfab mm/slub: Convert __free_slab() to use struct slab
>     @@ mm/slub.c: static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int n
>       
>      -	__ClearPageSlabPfmemalloc(page);
>      -	__ClearPageSlab(page);
>     +-	/* In union with page->mapping where page allocator expects NULL */
>     +-	page->slab_cache = NULL;
>      +	__slab_clear_pfmemalloc(slab);
>      +	__folio_clear_slab(folio);
>     - 	/* In union with page->mapping where page allocator expects NULL */
>     --	page->slab_cache = NULL;
>     -+	slab->slab_cache = NULL;
>     ++	folio->mapping = NULL;
>       	if (current->reclaim_state)
>       		current->reclaim_state->reclaimed_slab += pages;
>      -	unaccount_slab(page_slab(page), order, s);
> 16:  987c7ed31580 = 16:  f384ec918065 mm/slub: Convert pfmemalloc_match() to take a struct slab
> 17:  cc742564237e ! 17:  06738ade4e17 mm/slub: Convert most struct page to struct slab by spatch
>     @@ Commit message
>      
>          // Options: --include-headers --no-includes --smpl-spacing include/linux/slub_def.h mm/slub.c
>          // Note: needs coccinelle 1.1.1 to avoid breaking whitespace, and ocaml for the
>     -    // embedded script script
>     +    // embedded script
>      
>          // build list of functions to exclude from applying the next rule
>          @initialize:ocaml@
> 18:  b45acac9aace = 18:  1a4f69a4cced mm/slub: Finish struct page to struct slab conversion
> 19:  76c3eeb39684 ! 19:  1d62d706e884 mm/slab: Convert kmem_getpages() and kmem_freepages() to struct slab
>     @@ mm/slab.c: slab_out_of_memory(struct kmem_cache *cachep, gfp_t gfpflags, int nod
>      -	__ClearPageSlabPfmemalloc(page);
>      -	__ClearPageSlab(page);
>      -	page_mapcount_reset(page);
>     +-	/* In union with page->mapping where page allocator expects NULL */
>     +-	page->slab_cache = NULL;
>      +	BUG_ON(!folio_test_slab(folio));
>      +	__slab_clear_pfmemalloc(slab);
>      +	__folio_clear_slab(folio);
>      +	page_mapcount_reset(folio_page(folio, 0));
>     - 	/* In union with page->mapping where page allocator expects NULL */
>     --	page->slab_cache = NULL;
>     -+	slab->slab_cache = NULL;
>     ++	folio->mapping = NULL;
>       
>       	if (current->reclaim_state)
>       		current->reclaim_state->reclaimed_slab += 1 << order;
> 20:  ed6144dbebce ! 20:  fd4c3aabacd3 mm/slab: Convert most struct page to struct slab by spatch
>     @@ Commit message
>      
>          // Options: --include-headers --no-includes --smpl-spacing mm/slab.c
>          // Note: needs coccinelle 1.1.1 to avoid breaking whitespace, and ocaml for the
>     -    // embedded script script
>     +    // embedded script
>      
>          // build list of functions for applying the next rule
>          @initialize:ocaml@
> 21:  17fb81e601e6 = 21:  b59720b2edba mm/slab: Finish struct page to struct slab conversion
> 22:  4e8d1faebc24 ! 22:  65ced071c3e7 mm: Convert struct page to struct slab in functions used by other subsystems
>     @@ Commit message
>            ,...)
>      
>          Signed-off-by: Vlastimil Babka <vbabka-AlSwsSmVLrQ@public.gmane.org>
>     +    Reviewed-by: Andrey Konovalov <andreyknvl-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
>          Cc: Julia Lawall <julia.lawall-MZpvjPyXg2s@public.gmane.org>
>          Cc: Luis Chamberlain <mcgrof-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
>          Cc: Andrey Ryabinin <ryabinin.a.a-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
> 23:  eefa12e18a92 = 23:  c9c8dee01e5d mm/memcg: Convert slab objcgs from struct page to struct slab
> 24:  fa5ba4107ce2 ! 24:  def731137335 mm/slob: Convert SLOB to use struct slab
>     @@ Metadata
>      Author: Matthew Wilcox (Oracle) <willy-wEGCiKHe2LqWVfeAwA7xHQ@public.gmane.org>
>      
>       ## Commit message ##
>     -    mm/slob: Convert SLOB to use struct slab
>     +    mm/slob: Convert SLOB to use struct slab and struct folio
>      
>     -    Use struct slab throughout the slob allocator.
>     +    Use struct slab throughout the slob allocator. Where non-slab page can appear
>     +    use struct folio instead of struct page.
>      
>          [ vbabka-AlSwsSmVLrQ@public.gmane.org: don't introduce wrappers for PageSlobFree in mm/slab.h just
>            for the single callers being wrappers in mm/slob.c ]
>      
>     +    [ Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>: fix NULL pointer deference ]
>     +
>          Signed-off-by: Matthew Wilcox (Oracle) <willy-wEGCiKHe2LqWVfeAwA7xHQ@public.gmane.org>
>          Signed-off-by: Vlastimil Babka <vbabka-AlSwsSmVLrQ@public.gmane.org>
>      
>       ## mm/slob.c ##
>     +@@
>     +  * If kmalloc is asked for objects of PAGE_SIZE or larger, it calls
>     +  * alloc_pages() directly, allocating compound pages so the page order
>     +  * does not have to be separately tracked.
>     +- * These objects are detected in kfree() because PageSlab()
>     ++ * These objects are detected in kfree() because folio_test_slab()
>     +  * is false for them.
>     +  *
>     +  * SLAB is emulated on top of SLOB by simply calling constructors and
>      @@ mm/slob.c: static LIST_HEAD(free_slob_large);
>       /*
>        * slob_page_free: true for pages on free_slob_pages list.
>     @@ mm/slob.c: static void *slob_page_alloc(struct page *sp, size_t size, int align,
>       							int align_offset)
>       {
>      -	struct page *sp;
>     ++	struct folio *folio;
>      +	struct slab *sp;
>       	struct list_head *slob_list;
>       	slob_t *b = NULL;
>     @@ mm/slob.c: static void *slob_alloc(size_t size, gfp_t gfp, int align, int node,
>       			return NULL;
>      -		sp = virt_to_page(b);
>      -		__SetPageSlab(sp);
>     -+		sp = virt_to_slab(b);
>     -+		__SetPageSlab(slab_page(sp));
>     ++		folio = virt_to_folio(b);
>     ++		__folio_set_slab(folio);
>     ++		sp = folio_slab(folio);
>       
>       		spin_lock_irqsave(&slob_lock, flags);
>       		sp->units = SLOB_UNITS(PAGE_SIZE);
>     @@ mm/slob.c: static void slob_free(void *block, int size)
>       		spin_unlock_irqrestore(&slob_lock, flags);
>      -		__ClearPageSlab(sp);
>      -		page_mapcount_reset(sp);
>     -+		__ClearPageSlab(slab_page(sp));
>     ++		__folio_clear_slab(slab_folio(sp));
>      +		page_mapcount_reset(slab_page(sp));
>       		slob_free_pages(b, 0);
>       		return;
>       	}
>     +@@ mm/slob.c: EXPORT_SYMBOL(__kmalloc_node_track_caller);
>     + 
>     + void kfree(const void *block)
>     + {
>     +-	struct page *sp;
>     ++	struct folio *sp;
>     + 
>     + 	trace_kfree(_RET_IP_, block);
>     + 
>     +@@ mm/slob.c: void kfree(const void *block)
>     + 		return;
>     + 	kmemleak_free(block);
>     + 
>     +-	sp = virt_to_page(block);
>     +-	if (PageSlab(sp)) {
>     ++	sp = virt_to_folio(block);
>     ++	if (folio_test_slab(sp)) {
>     + 		int align = max_t(size_t, ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN);
>     + 		unsigned int *m = (unsigned int *)(block - align);
>     + 		slob_free(m, *m + align);
>     + 	} else {
>     +-		unsigned int order = compound_order(sp);
>     +-		mod_node_page_state(page_pgdat(sp), NR_SLAB_UNRECLAIMABLE_B,
>     ++		unsigned int order = folio_order(sp);
>     ++
>     ++		mod_node_page_state(folio_pgdat(sp), NR_SLAB_UNRECLAIMABLE_B,
>     + 				    -(PAGE_SIZE << order));
>     +-		__free_pages(sp, order);
>     ++		__free_pages(folio_page(sp, 0), order);
>     + 
>     + 	}
>     + }
> 25:  aa4f573a4c96 ! 25:  466b9fb1f6e5 mm/kasan: Convert to struct folio and struct slab
>     @@ Commit message
>      
>          Signed-off-by: Matthew Wilcox (Oracle) <willy-wEGCiKHe2LqWVfeAwA7xHQ@public.gmane.org>
>          Signed-off-by: Vlastimil Babka <vbabka-AlSwsSmVLrQ@public.gmane.org>
>     +    Reviewed-by: Andrey Konovalov <andreyknvl-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
>          Cc: Andrey Ryabinin <ryabinin.a.a-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
>          Cc: Alexander Potapenko <glider-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
>          Cc: Andrey Konovalov <andreyknvl-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
> 26:  67b7966d2fb6 = 26:  b8159ae8e5cd mm/kfence: Convert kfence_guarded_alloc() to struct slab
> 31:  d64dfe49c1e7 ! 27:  4525180926f9 mm/sl*b: Differentiate struct slab fields by sl*b implementations
>     @@ Commit message
>          possible.
>      
>          This should also prevent accidental use of fields that don't exist in given
>     -    implementation. Before this patch virt_to_cache() and and cache_from_obj() was
>     -    visible for SLOB (albeit not used), although it relies on the slab_cache field
>     +    implementation. Before this patch virt_to_cache() and cache_from_obj() were
>     +    visible for SLOB (albeit not used), although they rely on the slab_cache field
>          that isn't set by SLOB. With this patch it's now a compile error, so these
>          functions are now hidden behind #ifndef CONFIG_SLOB.
>      
>     @@ mm/kfence/core.c: static void *kfence_guarded_alloc(struct kmem_cache *cache, si
>      -		slab->s_mem = addr;
>      +#if defined(CONFIG_SLUB)
>      +	slab->objects = 1;
>     -+#elif defined (CONFIG_SLAB)
>     ++#elif defined(CONFIG_SLAB)
>      +	slab->s_mem = addr;
>      +#endif
>       
>     @@ mm/slab.h
>      +
>      +#if defined(CONFIG_SLAB)
>      +
>     -+	union {
>     -+		struct list_head slab_list;
>     + 	union {
>     + 		struct list_head slab_list;
>     +-		struct {	/* Partial pages */
>      +		struct rcu_head rcu_head;
>      +	};
>      +	struct kmem_cache *slab_cache;
>      +	void *freelist;	/* array of free object indexes */
>     -+	void * s_mem;	/* first object */
>     ++	void *s_mem;	/* first object */
>      +	unsigned int active;
>      +
>      +#elif defined(CONFIG_SLUB)
>      +
>     - 	union {
>     - 		struct list_head slab_list;
>     --		struct {	/* Partial pages */
>     ++	union {
>     ++		struct list_head slab_list;
>      +		struct rcu_head rcu_head;
>      +		struct {
>       			struct slab *next;
>     @@ mm/slab.h: struct slab {
>      +#elif defined(CONFIG_SLOB)
>      +
>      +	struct list_head slab_list;
>     -+	void * __unused_1;
>     ++	void *__unused_1;
>      +	void *freelist;		/* first free block */
>     -+	void * __unused_2;
>     ++	void *__unused_2;
>      +	int units;
>      +
>      +#else
>     @@ mm/slab.h: struct slab {
>       #ifdef CONFIG_MEMCG
>       	unsigned long memcg_data;
>      @@ mm/slab.h: struct slab {
>     - 	static_assert(offsetof(struct page, pg) == offsetof(struct slab, sl))
>       SLAB_MATCH(flags, __page_flags);
>       SLAB_MATCH(compound_head, slab_list);	/* Ensure bit 0 is clear */
>     + SLAB_MATCH(slab_list, slab_list);
>      +#ifndef CONFIG_SLOB
>       SLAB_MATCH(rcu_head, rcu_head);
>     + SLAB_MATCH(slab_cache, slab_cache);
>     ++#endif
>     ++#ifdef CONFIG_SLAB
>     + SLAB_MATCH(s_mem, s_mem);
>     + SLAB_MATCH(active, active);
>      +#endif
>       SLAB_MATCH(_refcount, __page_refcount);
>       #ifdef CONFIG_MEMCG
> 32:  0abf87bae67e = 28:  94b78948d53f mm/slub: Simplify struct slab slabs field definition
> 33:  813c304f18e4 = 29:  f5261e6375f0 mm/slub: Define struct slab fields for CONFIG_SLUB_CPU_PARTIAL only when enabled
> 27:  ebce4b5b5ced ! 30:  1414e8c87de6 zsmalloc: Stop using slab fields in struct page
>     @@ Commit message
>      
>          Signed-off-by: Matthew Wilcox (Oracle) <willy-wEGCiKHe2LqWVfeAwA7xHQ@public.gmane.org>
>          Signed-off-by: Vlastimil Babka <vbabka-AlSwsSmVLrQ@public.gmane.org>
>     -    Cc: Minchan Kim <minchan-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
>     +    Acked-by: Minchan Kim <minchan-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
>          Cc: Nitin Gupta <ngupta-KNmc09w0p+Ednm+yROfE0A@public.gmane.org>
>          Cc: Sergey Senozhatsky <senozhatsky-F7+t8E8rja9g9hUCZPvPmw@public.gmane.org>
>      
> 28:  f124425ae7de = 31:  8a3cda6b38eb bootmem: Use page->index instead of page->freelist
> 29:  82da48c73b2e <  -:  ------------ iommu: Use put_pages_list
> 30:  181e16dfefbb <  -:  ------------ mm: Remove slab from struct page
>  -:  ------------ > 32:  91e069ba116b mm/slob: Remove unnecessary page_mapcount_reset() function call

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 00/33] Separate struct slab from struct page
       [not found]         ` <Ybk+0LKrsAJatILE-cx5fftMpWqeCjSd+JxjunQ2O0Ztt9esIQQ4Iyu8u01E@public.gmane.org>
  2021-12-15 23:38           ` Roman Gushchin
@ 2021-12-20  0:24           ` Vlastimil Babka
  1 sibling, 0 replies; 28+ messages in thread
From: Vlastimil Babka @ 2021-12-20  0:24 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: Matthew Wilcox, Christoph Lameter, David Rientjes, Joonsoo Kim,
	Pekka Enberg, linux-mm-Bw31MaZKKs3YtjvyW6yDsg, Andrew Morton,
	patches-cunTk1MwBs/YUNznpcFYbw, Alexander Potapenko,
	Andrey Konovalov, Andrey Ryabinin, Andy Lutomirski,
	Borislav Petkov, cgroups-u79uwXL29TY76Z2rM5mHXA, Dave Hansen,
	David Woodhouse, Dmitry Vyukov, H. Peter Anvin, Ingo Molnar,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA, Joerg Roedel,
	Johannes Weiner

On 12/15/21 02:03, Roman Gushchin wrote:
> On Tue, Dec 14, 2021 at 01:57:22PM +0100, Vlastimil Babka wrote:
>> On 12/1/21 19:14, Vlastimil Babka wrote:
>> > Folks from non-slab subsystems are Cc'd only to patches affecting them, and
>> > this cover letter.
>> > 
>> > Series also available in git, based on 5.16-rc3:
>> > https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
>> 
>> Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes and small tweaks
>> and a new patch from Hyeonggon Yoo on top. To avoid too much spam, here's a range diff:
> 
> Hi Vlastimil!
> 
> I've started to review this patchset (btw, really nice work, I like
> the resulting code way more). Because I'm looking at v3 and I don't have
> the whole v2 in my mailbox, here is what I have now:

Thanks a lot, Roman!

> * mm: add virt_to_folio() and folio_address()
> Reviewed-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>
> 
> * mm/slab: Dissolve slab_map_pages() in its caller
> Reviewed-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>
> 
> * mm/slub: Make object_err() static
> Reviewed-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>
> 
> * mm: Split slab into its own type
> 1) Shouldn't the SLAB_MATCH() macro use struct folio instead of struct page
> for the comparison?

Folio doesn't define most of the fields, and matching some to page and
others to folio seems like unnecessary complication. Maybe as part of the
final struct page cleanup, when the slab fields are gone from struct page,
the rest could all be in folio; I'll check once we get there.
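
For reference, the kind of layout assertion SLAB_MATCH() performs (a minimal
sketch based on the hunk quoted above; the field pair is just an example):

#define SLAB_MATCH(pg, sl)						\
	static_assert(offsetof(struct page, pg) == offsetof(struct slab, sl))

/* e.g. struct slab's __page_flags must overlay struct page's flags */
SLAB_MATCH(flags, __page_flags);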

> 2) page_slab() is used only in kasan and only in one place, so maybe it's better
> to not introduce it as a generic helper?

Yeah that's the case after the series, but as part of the incremental steps,
page_slab() gets used in many places. I'll consider removing it on top though.
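
For context, the address-to-slab lookup this relies on looks roughly like
this (a sketch going by how virt_to_slab() is described later in the thread,
not the verbatim patch):

static inline struct slab *virt_to_slab(const void *addr)
{
	struct folio *folio = virt_to_folio(addr);

	/* large kmalloc and other non-slab memory: no struct slab here */
	if (!folio_test_slab(folio))
		return NULL;

	return folio_slab(folio);
}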

> Other than that
> Reviewed-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>
> 
> * mm: Add account_slab() and unaccount_slab()
> 1) maybe change the title to convert/replace instead of add?

Done.

> 2) maybe move later changes to memcg_alloc_page_obj_cgroups() to this patch?

Maybe possible, but that would distort the series more than I'd like at
this rc6 time.

> Reviewed-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>
> 
> * mm: Convert virt_to_cache() to use struct slab
> Reviewed-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>
> 
> * mm: Convert __ksize() to struct slab
> It looks like certain parts of __ksize() can be merged between slab, slub and slob?
> Reviewed-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>
> 
> * mm: Use struct slab in kmem_obj_info()
> Reviewed-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>
> 
> 
> I'll try to finish reviewing the patchset by the end of the week.
> 
> Thanks!
> 
> Roman



* Re: [PATCH v2 00/33] Separate struct slab from struct page
       [not found]             ` <Ybp8a5JNndgCLy2w-cx5fftMpWqeCjSd+JxjunQ2O0Ztt9esIQQ4Iyu8u01E@public.gmane.org>
  2021-12-16  9:19               ` Vlastimil Babka
@ 2021-12-20  0:47               ` Vlastimil Babka
       [not found]                 ` <86617be0-8aa8-67d2-08bd-1e06c3d12785-AlSwsSmVLrQ@public.gmane.org>
  1 sibling, 1 reply; 28+ messages in thread
From: Vlastimil Babka @ 2021-12-20  0:47 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: Matthew Wilcox, Christoph Lameter, David Rientjes, Joonsoo Kim,
	Pekka Enberg, linux-mm-Bw31MaZKKs3YtjvyW6yDsg, Andrew Morton,
	patches-cunTk1MwBs/YUNznpcFYbw, Alexander Potapenko,
	Andrey Konovalov, Andrey Ryabinin, Andy Lutomirski,
	Borislav Petkov, cgroups-u79uwXL29TY76Z2rM5mHXA, Dave Hansen,
	David Woodhouse, Dmitry Vyukov, H. Peter Anvin, Ingo Molnar,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA, Joerg Roedel,
	Johannes Weiner

On 12/16/21 00:38, Roman Gushchin wrote:
> Part 2:
> 
> * mm: Convert check_heap_object() to use struct slab
> Reviewed-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>
> 
> * mm/slub: Convert detached_freelist to use a struct slab
> How about converting free_nonslab_page() to free_nonslab_folio()?
> And maybe renaming it to something like free_large_kmalloc()?
> If I'm not missing something, large kmallocs are the only way we can end up
> there with a !slab folio/page.

Good point, thanks! But I did that as part of the following patch, where it
fits logically better.

> * mm/slub: Convert kfree() to use a struct slab
> Reviewed-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>

Didn't add your tag because of the addition of the free_large_kmalloc() change.

> * mm/slub: Convert __slab_lock() and __slab_unlock() to struct slab
> Reviewed-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>
> 
> * mm/slub: Convert print_page_info() to print_slab_info()
> Do we really need to explicitly convert slab_folio()'s result to (struct folio *)?

Unfortunately yes, as long as folio_flags() doesn't take a const struct folio *,
which will need some yak shaving.
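
To illustrate the cast in question (a sketch; the exact print_slab_info()
code may differ):

static void print_slab_info(const struct slab *slab)
{
	/* no const-taking folio_flags() yet, so cast the const away */
	struct folio *folio = (struct folio *)slab_folio(slab);

	pr_err("Slab 0x%p flags=%pGp\n", slab, &folio->flags);
}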

> Reviewed-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>
> 
> * mm/slub: Convert alloc_slab_page() to return a struct slab
> Reviewed-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>
> 
> * mm/slub: Convert __free_slab() to use struct slab
> Reviewed-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>
> 
> * mm/slub: Convert pfmemalloc_match() to take a struct slab
> Cool! Removing pfmemalloc_unsafe() is really nice.
> Reviewed-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>
> 
> * mm/slub: Convert most struct page to struct slab by spatch
> Reviewed-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>
> 
> * mm/slub: Finish struct page to struct slab conversion
> Reviewed-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>
> 
> * mm/slab: Convert kmem_getpages() and kmem_freepages() to struct slab
> Reviewed-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>

Thanks again!

> * mm/slab: Convert most struct page to struct slab by spatch
> 
> Another patch with the same title? Rebase error?
> 
> * mm/slab: Finish struct page to struct slab conversion
> 
> And this one too?
> 
> 
> Thanks!
> 
> Roman



* Re: [PATCH v2 00/33] Separate struct slab from struct page
       [not found]                 ` <86617be0-8aa8-67d2-08bd-1e06c3d12785-AlSwsSmVLrQ@public.gmane.org>
@ 2021-12-20  1:42                   ` Matthew Wilcox
  0 siblings, 0 replies; 28+ messages in thread
From: Matthew Wilcox @ 2021-12-20  1:42 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Roman Gushchin, Christoph Lameter, David Rientjes, Joonsoo Kim,
	Pekka Enberg, linux-mm-Bw31MaZKKs3YtjvyW6yDsg, Andrew Morton,
	patches-cunTk1MwBs/YUNznpcFYbw, Alexander Potapenko,
	Andrey Konovalov, Andrey Ryabinin, Andy Lutomirski,
	Borislav Petkov, cgroups-u79uwXL29TY76Z2rM5mHXA, Dave Hansen,
	David Woodhouse, Dmitry Vyukov, H. Peter Anvin, Ingo Molnar,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA, Joerg Roedel,
	Johannes Weiner, J

On Mon, Dec 20, 2021 at 01:47:54AM +0100, Vlastimil Babka wrote:
> > * mm/slub: Convert print_page_info() to print_slab_info()
> > Do we really need to explicitly convert slab_folio()'s result to (struct folio *)?
> 
> Unfortunately yes, as long as folio_flags() doesn't take a const struct folio *,
> which will need some yak shaving.

In case anyone's interested ...

folio_flags calls VM_BUG_ON_PGFLAGS() which would need its second
argument to be const.

That means dump_page() needs to take a const struct page, which
means __dump_page() needs its argument to be const.

That calls ...

is_migrate_cma_page()
page_mapping()
page_mapcount()
page_ref_count()
page_to_pgoff()
page_to_pfn()
hpage_pincount_available()
head_compound_mapcount()
head_compound_pincount()
compound_order()
PageKsm()
PageAnon()
PageCompound()

... and at that point, I ran out of motivation to track down some parts
of this tarbaby that could be fixed.  I did do:

    mm: constify page_count and page_ref_count
    mm: constify get_pfnblock_flags_mask and get_pfnblock_migratetype
    mm: make compound_head const-preserving
    mm/page_owner: constify dump_page_owner

so some of those are already done.  But a lot of them just need to be
done at the same time.  For example, page_mapping() calls
folio_mapping() which calls folio_test_slab() which calls folio_flags(),
so dump_page() and page_mapping() need to be done at the same time.
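
To make the chain concrete (simplified signatures, for illustration only):

/* once __dump_page() takes a const pointer ... */
static void __dump_page(const struct page *page)
{
	/* ... this only compiles if page_mapcount() (and everything it
	 * calls in turn) also accepts a const struct page * */
	pr_warn("mapcount:%d\n", page_mapcount(page));
}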

One bit that could be broken off easily (I think ...) is PageTransTail(),
PageTail(), PageCompound(), PageHuge(), page_to_pgoff() and
page_to_index().  One wrinkle is that a temporary
TESTPAGEFLAGS_FALSE_CONST would be needed.  But I haven't tried it yet.



* Re: [PATCH v2 23/33] mm/memcg: Convert slab objcgs from struct page to struct slab
       [not found]         ` <Ybite9s1TS7cS67J-druUgvl0LCNAfugRpC6u6w@public.gmane.org>
@ 2021-12-20 23:31           ` Vlastimil Babka
  0 siblings, 0 replies; 28+ messages in thread
From: Vlastimil Babka @ 2021-12-20 23:31 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Matthew Wilcox, Christoph Lameter, David Rientjes, Joonsoo Kim,
	Pekka Enberg, linux-mm-Bw31MaZKKs3YtjvyW6yDsg, Andrew Morton,
	patches-cunTk1MwBs/YUNznpcFYbw, Michal Hocko, Vladimir Davydov,
	cgroups-u79uwXL29TY76Z2rM5mHXA

On 12/14/21 15:43, Johannes Weiner wrote:
> On Wed, Dec 01, 2021 at 07:15:00PM +0100, Vlastimil Babka wrote:
>> page->memcg_data is used with MEMCG_DATA_OBJCGS flag only for slab pages
>> so convert all the related infrastructure to struct slab.
>> 
>> To avoid include cycles, move the inline definitions of slab_objcgs() and
>> slab_objcgs_check() from memcontrol.h to mm/slab.h.
>> 
>> This is not just a mechanical change of types and names. Now in
>> mem_cgroup_from_obj() we use the PageSlab flag to decide whether to interpret
>> the page as a slab, instead of relying on the MEMCG_DATA_OBJCGS bit checked in
>> page_objcgs_check() (now slab_objcgs_check()). Similarly in
>> memcg_slab_free_hook(), where we can encounter kmalloc_large() pages (here the
>> PageSlab flag check is implied by virt_to_slab()).
> 
> Yup, this is great.
> 
>> @@ -2865,24 +2865,31 @@ int memcg_alloc_page_obj_cgroups(struct page *page, struct kmem_cache *s,
>>   */
>>  struct mem_cgroup *mem_cgroup_from_obj(void *p)
>>  {
>> -	struct page *page;
>> +	struct folio *folio;
>>  
>>  	if (mem_cgroup_disabled())
>>  		return NULL;
>>  
>> -	page = virt_to_head_page(p);
>> +	folio = virt_to_folio(p);
>>  
>>  	/*
>>  	 * Slab objects are accounted individually, not per-page.
>>  	 * Memcg membership data for each individual object is saved in
>>  	 * the page->obj_cgroups.
>>  	 */
>> -	if (page_objcgs_check(page)) {
>> +	if (folio_test_slab(folio)) {
>> +		struct obj_cgroup **objcgs;
>>  		struct obj_cgroup *objcg;
>> +		struct slab *slab;
>>  		unsigned int off;
>>  
>> -		off = obj_to_index(page->slab_cache, page_slab(page), p);
>> -		objcg = page_objcgs(page)[off];
>> +		slab = folio_slab(folio);
>> +		objcgs = slab_objcgs_check(slab);
> 
> AFAICS the change to the _check() variant was accidental.
> 
> folio_test_slab() makes sure it's a slab page, so the legit options
> for memcg_data are NULL or |MEMCG_DATA_OBJCGS; using slab_objcgs()
> here would include the proper asserts, like page_objcgs() used to.

Well spotted. On closer look, it's actually the same in
memcg_slab_free_hook(), where I also added a folio_test_slab() check as part
of virt_to_slab(). In fact it means page_objcgs_check() can just be deleted
instead of being replaced by slab_objcgs_check().
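
Concretely, the hunk above would then end up as (a sketch of the intended
result):

	slab = folio_slab(folio);
	/* plain slab_objcgs(): the folio_test_slab() check above already
	 * guarantees this is a slab, so no _check() variant is needed */
	objcgs = slab_objcgs(slab);

Thanks!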




* Re: [PATCH v2 00/33] Separate struct slab from struct page
       [not found]         ` <YbtUmi5kkhmlXEB1-UAkeum2rnor1owfsx/q8xBTA2CDgPq3U3JN/6IHv09UQhXZLsr/E6M+yXz+rYK7V@public.gmane.org>
@ 2021-12-20 23:58           ` Vlastimil Babka
       [not found]             ` <38976607-b9f9-1bce-9db9-60c23da65d2e-AlSwsSmVLrQ@public.gmane.org>
  0 siblings, 1 reply; 28+ messages in thread
From: Vlastimil Babka @ 2021-12-20 23:58 UTC (permalink / raw)
  To: Hyeonggon Yoo
  Cc: Matthew Wilcox, Christoph Lameter, David Rientjes, Joonsoo Kim,
	Pekka Enberg, linux-mm-Bw31MaZKKs3YtjvyW6yDsg, Andrew Morton,
	patches-cunTk1MwBs/YUNznpcFYbw, Alexander Potapenko,
	Andrey Konovalov, Andrey Ryabinin, Andy Lutomirski,
	Borislav Petkov, cgroups-u79uwXL29TY76Z2rM5mHXA, Dave Hansen,
	David Woodhouse, Dmitry Vyukov, H. Peter Anvin, Ingo Molnar,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA, Joerg Roedel,
	Johannes Weiner

On 12/16/21 16:00, Hyeonggon Yoo wrote:
> On Tue, Dec 14, 2021 at 01:57:22PM +0100, Vlastimil Babka wrote:
>> On 12/1/21 19:14, Vlastimil Babka wrote:
>> > Folks from non-slab subsystems are Cc'd only to patches affecting them, and
>> > this cover letter.
>> > 
>> > Series also available in git, based on 5.16-rc3:
>> > https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
>> 
>> Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes and small tweaks
>> and a new patch from Hyeonggon Yoo on top. To avoid too much spam, here's a range diff:
> 
> Reviewing the whole patch series takes longer than I thought.
> I'll try to review and test the rest of the patches when I have time.
> 
> I added Tested-by if the kernel builds okay and kselftests
> do not break the kernel on my machine
> (with CONFIG_SLAB/SLUB/SLOB depending on the patch).

Thanks!

> Let me know if you know a better way to test a patch.

Testing on your machine is just fine.

> # mm/slub: Define struct slab fields for CONFIG_SLUB_CPU_PARTIAL only when enabled
> 
> Reviewed-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
> Tested-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
> 
> Comment:
> Works on both SLUB_CPU_PARTIAL and !SLUB_CPU_PARTIAL.
> btw, do we need the slabs_cpu_partial attribute when we don't use
> cpu partials? (!SLUB_CPU_PARTIAL)

The sysfs attribute? Yeah we should be consistent to userspace expecting to
read it (even with zeroes), regardless of config.

> # mm/slub: Simplify struct slab slabs field definition
> Comment:
> 
> This is how struct page looks on the top of v3r3 branch:
> struct page {
> [...]
>                 struct {        /* slab, slob and slub */
>                         union {
>                                 struct list_head slab_list;
>                                 struct {        /* Partial pages */
>                                         struct page *next;
> #ifdef CONFIG_64BIT
>                                         int pages;      /* Nr of pages left */
> #else
>                                         short int pages;
> #endif
>                                 };
>                         };
> [...]
> 
> It's not consistent with struct slab.

Hm right. But as we don't actually use the struct page version anymore, and
it's not one of the fields checked by SLAB_MATCH(), we can ignore this.

> I think this is because "mm: Remove slab from struct page" was dropped.

That was just postponed until the iommu changes are in. Matthew mentioned those
might be merged too, so that final cleanup will then happen and take care of
the discrepancy above; no need for extra churn to address it specifically.

> Would you update some of the patches?
> 
> # mm/sl*b: Differentiate struct slab fields by sl*b implementations
> Reviewed-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
> Tested-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
> Works with SL[AUO]B on my machine and makes the code much better.
> 
> # mm/slob: Convert SLOB to use struct slab and struct folio
> Reviewed-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
> Tested-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
> It still works fine on SLOB.
> 
> # mm/slab: Convert kmem_getpages() and kmem_freepages() to struct slab
> Reviewed-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
> Tested-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
> 
> # mm/slub: Convert __free_slab() to use struct slab
> Reviewed-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
> Tested-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
> 
> Thanks,
> Hyeonggon.

Thanks again,
Vlastimil


* Re: [PATCH v2 00/33] Separate struct slab from struct page
       [not found]             ` <38976607-b9f9-1bce-9db9-60c23da65d2e-AlSwsSmVLrQ@public.gmane.org>
@ 2021-12-21 17:25               ` Robin Murphy
  2021-12-22  7:36               ` Hyeonggon Yoo
  1 sibling, 0 replies; 28+ messages in thread
From: Robin Murphy @ 2021-12-21 17:25 UTC (permalink / raw)
  To: Vlastimil Babka, Hyeonggon Yoo
  Cc: Peter Zijlstra, Dave Hansen, Michal Hocko,
	linux-mm-Bw31MaZKKs3YtjvyW6yDsg, Andrey Ryabinin,
	Alexander Potapenko, kasan-dev-/JYPxA39Uh5TLH3MbocFFw,
	H. Peter Anvin, Christoph Lameter, Will Deacon, Julia Lawall,
	Sergey Senozhatsky, x86-DgEjT+Ai2ygdnm+yROfE0A, Luis Chamberlain,
	Matthew Wilcox, Ingo Molnar, Vladimir Davydov, David Rientjes,
	Nitin Gupta, Marco Elver, Borislav Petkov, Andy Lutomirski

On 2021-12-20 23:58, Vlastimil Babka wrote:
> On 12/16/21 16:00, Hyeonggon Yoo wrote:
>> On Tue, Dec 14, 2021 at 01:57:22PM +0100, Vlastimil Babka wrote:
>>> On 12/1/21 19:14, Vlastimil Babka wrote:
>>>> Folks from non-slab subsystems are Cc'd only to patches affecting them, and
>>>> this cover letter.
>>>>
>>>> Series also available in git, based on 5.16-rc3:
>>>> https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
>>>
>>> Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes and small tweaks
>>> and a new patch from Hyeonggon Yoo on top. To avoid too much spam, here's a range diff:
>>
>> Reviewing the whole patch series takes longer than I thought.
>> I'll try to review and test the rest of the patches when I have time.
>>
>> I added Tested-by if the kernel builds okay and kselftests
>> do not break the kernel on my machine
>> (with CONFIG_SLAB/SLUB/SLOB depending on the patch).
> 
> Thanks!
> 
>> Let me know if you know a better way to test a patch.
> 
> Testing on your machine is just fine.
> 
>> # mm/slub: Define struct slab fields for CONFIG_SLUB_CPU_PARTIAL only when enabled
>>
>> Reviewed-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
>> Tested-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
>>
>> Comment:
>> Works on both SLUB_CPU_PARTIAL and !SLUB_CPU_PARTIAL.
>> btw, do we need the slabs_cpu_partial attribute when we don't use
>> cpu partials? (!SLUB_CPU_PARTIAL)
> 
> The sysfs attribute? Yeah we should be consistent to userspace expecting to
> read it (even with zeroes), regardless of config.
> 
>> # mm/slub: Simplify struct slab slabs field definition
>> Comment:
>>
>> This is how struct page looks on the top of v3r3 branch:
>> struct page {
>> [...]
>>                  struct {        /* slab, slob and slub */
>>                          union {
>>                                  struct list_head slab_list;
>>                                  struct {        /* Partial pages */
>>                                          struct page *next;
>> #ifdef CONFIG_64BIT
>>                                          int pages;      /* Nr of pages left */
>> #else
>>                                          short int pages;
>> #endif
>>                                  };
>>                          };
>> [...]
>>
>> It's not consistent with struct slab.
> 
> Hm right. But as we don't actually use the struct page version anymore, and
> it's not one of the fields checked by SLAB_MATCH(), we can ignore this.
> 
>> I think this is because "mm: Remove slab from struct page" was dropped.
> 
> That was just postponed until the iommu changes are in. Matthew mentioned those
> might be merged too, so that final cleanup will then happen and take care of
> the discrepancy above; no need for extra churn to address it specifically.

FYI the IOMMU changes are now queued in linux-next, so if all goes well 
you might be able to sneak that final patch in too.

Robin.

> 
>> Would you update some of the patches?
>>
>> # mm/sl*b: Differentiate struct slab fields by sl*b implementations
>> Reviewed-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
>> Tested-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
>> Works with SL[AUO]B on my machine and makes the code much better.
>>
>> # mm/slob: Convert SLOB to use struct slab and struct folio
>> Reviewed-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
>> Tested-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
>> It still works fine on SLOB.
>>
>> # mm/slab: Convert kmem_getpages() and kmem_freepages() to struct slab
>> Reviewed-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
>> Tested-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
>>
>> # mm/slub: Convert __free_slab() to use struct slab
>> Reviewed-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
>> Tested-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
>>
>> Thanks,
>> Hyeonggon.
> 
> Thanks again,
> Vlastimil
> _______________________________________________
> iommu mailing list
> iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA@public.gmane.org
> https://lists.linuxfoundation.org/mailman/listinfo/iommu


* Re: [PATCH v2 00/33] Separate struct slab from struct page
       [not found]             ` <38976607-b9f9-1bce-9db9-60c23da65d2e-AlSwsSmVLrQ@public.gmane.org>
  2021-12-21 17:25               ` Robin Murphy
@ 2021-12-22  7:36               ` Hyeonggon Yoo
  1 sibling, 0 replies; 28+ messages in thread
From: Hyeonggon Yoo @ 2021-12-22  7:36 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Matthew Wilcox, Christoph Lameter, David Rientjes, Joonsoo Kim,
	Pekka Enberg, linux-mm-Bw31MaZKKs3YtjvyW6yDsg, Andrew Morton,
	patches-cunTk1MwBs/YUNznpcFYbw, Alexander Potapenko,
	Andrey Konovalov, Andrey Ryabinin, Andy Lutomirski,
	Borislav Petkov, cgroups-u79uwXL29TY76Z2rM5mHXA, Dave Hansen,
	David Woodhouse, Dmitry Vyukov, H. Peter Anvin, Ingo Molnar,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA, Joerg Roedel,
	Johannes Weiner

On Tue, Dec 21, 2021 at 12:58:14AM +0100, Vlastimil Babka wrote:
> On 12/16/21 16:00, Hyeonggon Yoo wrote:
> > On Tue, Dec 14, 2021 at 01:57:22PM +0100, Vlastimil Babka wrote:
> >> On 12/1/21 19:14, Vlastimil Babka wrote:
> >> > Folks from non-slab subsystems are Cc'd only to patches affecting them, and
> >> > this cover letter.
> >> > 
> >> > Series also available in git, based on 5.16-rc3:
> >> > https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
> >> 
> >> Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes and small tweaks
> >> and a new patch from Hyeonggon Yoo on top. To avoid too much spam, here's a range diff:
> > 
> > Reviewing the whole patch series takes longer than I thought.
> > I'll try to review and test the rest of the patches when I have time.
> > 
> > I added Tested-by if the kernel builds okay and kselftests
> > do not break the kernel on my machine
> > (with CONFIG_SLAB/SLUB/SLOB depending on the patch).
> 
> Thanks!
>

:)

> > Let me know if you know a better way to test a patch.
> 
> Testing on your machine is just fine.
>

Good!

> > # mm/slub: Define struct slab fields for CONFIG_SLUB_CPU_PARTIAL only when enabled
> > 
> > Reviewed-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
> > Tested-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
> > 
> > Comment:
> > Works on both SLUB_CPU_PARTIAL and !SLUB_CPU_PARTIAL.
> > btw, do we need the slabs_cpu_partial attribute when we don't use
> > cpu partials? (!SLUB_CPU_PARTIAL)
> 
> The sysfs attribute? Yeah we should be consistent to userspace expecting to
> read it (even with zeroes), regardless of config.
> 

I thought entirely disabling the attribute would be simpler,
but okay if it should be exposed even when it's always zero.

> > # mm/slub: Simplify struct slab slabs field definition
> > Comment:
> > 
> > This is how struct page looks on the top of v3r3 branch:
> > struct page {
> > [...]
> >                 struct {        /* slab, slob and slub */
> >                         union {
> >                                 struct list_head slab_list;
> >                                 struct {        /* Partial pages */
> >                                         struct page *next;
> > #ifdef CONFIG_64BIT
> >                                         int pages;      /* Nr of pages left */
> > #else
> >                                         short int pages;
> > #endif
> >                                 };
> >                         };
> > [...]
> > 
> > It's not consistent with struct slab.
> 
> Hm right. But as we don't actually use the struct page version anymore, and
> it's not one of the fields checked by SLAB_MATCH(), we can ignore this.
>

Yeah, this is not a big problem. I just mentioned it because
it looked weird and I didn't know when the patch "mm: Remove slab from
struct page" would come back.

> > I think this is because "mm: Remove slab from struct page" was dropped.
>
> That was just postponed until the iommu changes are in. Matthew mentioned those
> might be merged too, so that final cleanup will then happen and take care of
> the discrepancy above; no need for extra churn to address it specifically.
> 

Okay, it seems no extra work is needed until the iommu changes are in!

BTW, the patch I sent ("mm/slob: Remove unnecessary page_mapcount_reset()
function call") refers to commit 4525180926f9 ("mm/sl*b: Differentiate struct
slab fields by sl*b implementations"). But the commit hash 4525180926f9
changed after the tree was rebased.

It would be nice to have a script that handles situations like this.

> > Would you update some of the patches?
> > 
> > # mm/sl*b: Differentiate struct slab fields by sl*b implementations
> > Reviewed-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
> > Tested-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
> > Works with SL[AUO]B on my machine and makes the code much better.
> > 
> > # mm/slob: Convert SLOB to use struct slab and struct folio
> > Reviewed-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
> > Tested-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
> > It still works fine on SLOB.
> > 
> > # mm/slab: Convert kmem_getpages() and kmem_freepages() to struct slab
> > Reviewed-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
> > Tested-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
> >
> > # mm/slub: Convert __free_slab() to use struct slab
> > Reviewed-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
> > Tested-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
> > 
> > Thanks,
> > Hyeonggon.
> 
> Thanks again,
> Vlastimil

Have a nice day, thanks!
Hyeonggon.


* Re: [PATCH v2 00/33] Separate struct slab from struct page
       [not found]     ` <4c3dfdfa-2e19-a9a7-7945-3d75bc87ca05-AlSwsSmVLrQ@public.gmane.org>
                         ` (2 preceding siblings ...)
  2021-12-16 15:00       ` Hyeonggon Yoo
@ 2021-12-22 16:56       ` Vlastimil Babka
       [not found]         ` <f3a83708-3f3c-a634-7bee-dcfcaaa7f36e-AlSwsSmVLrQ@public.gmane.org>
  3 siblings, 1 reply; 28+ messages in thread
From: Vlastimil Babka @ 2021-12-22 16:56 UTC (permalink / raw)
  To: Matthew Wilcox, Christoph Lameter, David Rientjes, Joonsoo Kim,
	Pekka Enberg
  Cc: linux-mm-Bw31MaZKKs3YtjvyW6yDsg, Andrew Morton,
	patches-cunTk1MwBs/YUNznpcFYbw, Alexander Potapenko,
	Andrey Konovalov, Andrey Ryabinin, Andy Lutomirski,
	Borislav Petkov, cgroups-u79uwXL29TY76Z2rM5mHXA, Dave Hansen,
	David Woodhouse, Dmitry Vyukov, H. Peter Anvin, Ingo Molnar,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA, Joerg Roedel,
	Johannes Weiner, Julia Lawall, kasan-dev-/JYPxA39Uh5TLH3MbocFFw,
	Lu Baolu, Luis Chamberlain, Marco Elver, Mic

On 12/14/21 13:57, Vlastimil Babka wrote:
> On 12/1/21 19:14, Vlastimil Babka wrote:
>> Folks from non-slab subsystems are Cc'd only to patches affecting them, and
>> this cover letter.
>>
>> Series also available in git, based on 5.16-rc3:
>> https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
> 
> Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes and small tweaks
> and a new patch from Hyeonggon Yoo on top. To avoid too much spam, here's a range diff:

Hi, I've pushed another update branch slab-struct_slab-v4r1, and also to
-next. I've shortened git commit log lines to make checkpatch happier,
so no range-diff as it would be too long. I believe it would be useless
spam to post the whole series now, shortly before xmas, so I will do it
at rc8 time, to hopefully collect remaining reviews. But if anyone wants
a mailed version, I can do that.

Changes in v4:
- rebase to 5.16-rc6 to avoid a conflict with mainline
- collect acks/reviews/tested-by from Johannes, Roman, Hyeonggon Yoo -
thanks!
- in patch "mm/slub: Convert detached_freelist to use a struct slab"
renamed free_nonslab_page() to free_large_kmalloc() and use folio there,
as suggested by Roman
- in "mm/memcg: Convert slab objcgs from struct page to struct slab"
change one caller of slab_objcgs_check() to slab_objcgs() as suggested
by Johannes, realize the other caller should be also changed, and remove
slab_objcgs_check() completely.


* Re: [PATCH v2 00/33] Separate struct slab from struct page
       [not found]         ` <f3a83708-3f3c-a634-7bee-dcfcaaa7f36e-AlSwsSmVLrQ@public.gmane.org>
@ 2021-12-25  9:16           ` Hyeonggon Yoo
       [not found]             ` <Ycbhh5n8TBODWHR+-UAkeum2rnor1owfsx/q8xBTA2CDgPq3U3JN/6IHv09UQhXZLsr/E6M+yXz+rYK7V@public.gmane.org>
  2021-12-29 11:22           ` Hyeonggon Yoo
  1 sibling, 1 reply; 28+ messages in thread
From: Hyeonggon Yoo @ 2021-12-25  9:16 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Matthew Wilcox, Christoph Lameter, David Rientjes, Joonsoo Kim,
	Pekka Enberg, linux-mm-Bw31MaZKKs3YtjvyW6yDsg, Andrew Morton,
	patches-cunTk1MwBs/YUNznpcFYbw, Alexander Potapenko,
	Andrey Konovalov, Andrey Ryabinin, Andy Lutomirski,
	Borislav Petkov, cgroups-u79uwXL29TY76Z2rM5mHXA, Dave Hansen,
	David Woodhouse, Dmitry Vyukov, H. Peter Anvin, Ingo Molnar,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA, Joerg Roedel,
	Johannes Weiner

On Wed, Dec 22, 2021 at 05:56:50PM +0100, Vlastimil Babka wrote:
> On 12/14/21 13:57, Vlastimil Babka wrote:
> > On 12/1/21 19:14, Vlastimil Babka wrote:
> >> Folks from non-slab subsystems are Cc'd only to patches affecting them, and
> >> this cover letter.
> >>
> >> Series also available in git, based on 5.16-rc3:
> >> https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
> > 
> > Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes and small tweaks
> > and a new patch from Hyeonggon Yoo on top. To avoid too much spam, here's a range diff:
> 
> Hi, I've pushed another update branch slab-struct_slab-v4r1, and also to
> -next. I've shortened git commit log lines to make checkpatch happier,
> so no range-diff as it would be too long. I believe it would be useless
> spam to post the whole series now, shortly before xmas, so I will do it
> at rc8 time, to hopefully collect remaining reviews. But if anyone wants
> a mailed version, I can do that.

Hello Vlastimil, Merry Christmas!
This is part 2 of reviewing/testing patches.

# mm/kasan: Convert to struct folio and struct slab
I'm not familiar with kasan yet, but kasan runs well on my machine and
kasan's bug reporting functionality works fine too.
Tested-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>

# mm: Convert struct page to struct slab in functions used by other subsystems
I'm not familiar with kasan, but let me ask:
Does ____kasan_slab_free() detect an invalid free if someone frees
an object that was not allocated from a slab?

@@ -341,7 +341,7 @@ static inline bool ____kasan_slab_free(struct kmem_cache *cache, void *object,
-       if (unlikely(nearest_obj(cache, virt_to_head_page(object), object) !=
+       if (unlikely(nearest_obj(cache, virt_to_slab(object), object) !=
            object)) {
                kasan_report_invalid_free(tagged_object, ip);
                return true;

I'm asking this because virt_to_slab() will return NULL if folio_test_slab()
returns false. That will cause a NULL pointer dereference in nearest_obj().
I don't think this change is intended.
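
Spelled out (a sketch of the concern, not actual code from the series):

	struct slab *slab = virt_to_slab(object); /* NULL for !slab folios */

	/* if slab were NULL here, nearest_obj() would dereference it */
	if (unlikely(nearest_obj(cache, slab, object) != object)) {
		kasan_report_invalid_free(tagged_object, ip);
		return true;
	}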

This makes me think some of the virt_to_head_page() -> virt_to_slab()
conversions need to be reviewed with caution.

# mm/slab: Finish struct page to struct slab conversion
Reviewed-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>

# mm/slab: Convert most struct page to struct slab by spatch
Tested-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>

I'll come back with part 3 :)
Enjoy your Christmas!
Hyeonggon

> Changes in v4:
> - rebase to 5.16-rc6 to avoid a conflict with mainline
> - collect acks/reviews/tested-by from Johannes, Roman, Hyeonggon Yoo -
> thanks!
> - in patch "mm/slub: Convert detached_freelist to use a struct slab"
> renamed free_nonslab_page() to free_large_kmalloc() and use folio there,
> as suggested by Roman
> - in "mm/memcg: Convert slab objcgs from struct page to struct slab"
> change one caller of slab_objcgs_check() to slab_objcgs() as suggested
> by Johannes, realize the other caller should be also changed, and remove
> slab_objcgs_check() completely.


* Re: [PATCH v2 00/33] Separate struct slab from struct page
       [not found]             ` <Ycbhh5n8TBODWHR+-UAkeum2rnor1owfsx/q8xBTA2CDgPq3U3JN/6IHv09UQhXZLsr/E6M+yXz+rYK7V@public.gmane.org>
@ 2021-12-25 17:53               ` Matthew Wilcox
       [not found]                 ` <Ycdak5J48i7CGkHU-FZi0V3Vbi30CUdFEqe4BF2D2FQJk+8+b@public.gmane.org>
  0 siblings, 1 reply; 28+ messages in thread
From: Matthew Wilcox @ 2021-12-25 17:53 UTC (permalink / raw)
  To: Hyeonggon Yoo
  Cc: Vlastimil Babka, Christoph Lameter, David Rientjes, Joonsoo Kim,
	Pekka Enberg, linux-mm-Bw31MaZKKs3YtjvyW6yDsg, Andrew Morton,
	patches-cunTk1MwBs/YUNznpcFYbw, Alexander Potapenko,
	Andrey Konovalov, Andrey Ryabinin, Andy Lutomirski,
	Borislav Petkov, cgroups-u79uwXL29TY76Z2rM5mHXA, Dave Hansen,
	David Woodhouse, Dmitry Vyukov, H. Peter Anvin, Ingo Molnar,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA, Joerg Roedel,
	Johannes Weiner

On Sat, Dec 25, 2021 at 09:16:55AM +0000, Hyeonggon Yoo wrote:
> # mm: Convert struct page to struct slab in functions used by other subsystems
> I'm not familiar with kasan, but let me ask:
> Does ____kasan_slab_free() detect an invalid free if someone frees
> an object that was not allocated from a slab?
> 
> @@ -341,7 +341,7 @@ static inline bool ____kasan_slab_free(struct kmem_cache *cache, void *object,
> -       if (unlikely(nearest_obj(cache, virt_to_head_page(object), object) !=
> +       if (unlikely(nearest_obj(cache, virt_to_slab(object), object) !=
>             object)) {
>                 kasan_report_invalid_free(tagged_object, ip);
>                 return true;
> 
> I'm asking this because virt_to_slab() will return NULL if folio_test_slab()
> returns false. That will cause a NULL pointer dereference in nearest_obj().
> I don't think this change is intended.

You need to track down how this could happen.  As far as I can tell,
it's always called when we know the object is part of a slab.  That's
where the cachep pointer is deduced from.



* Re: [PATCH v2 00/33] Separate struct slab from struct page
       [not found]                 ` <Ycdak5J48i7CGkHU-FZi0V3Vbi30CUdFEqe4BF2D2FQJk+8+b@public.gmane.org>
@ 2021-12-27  2:43                   ` Hyeonggon Yoo
  0 siblings, 0 replies; 28+ messages in thread
From: Hyeonggon Yoo @ 2021-12-27  2:43 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Vlastimil Babka, Christoph Lameter, David Rientjes, Joonsoo Kim,
	Pekka Enberg, linux-mm-Bw31MaZKKs3YtjvyW6yDsg, Andrew Morton,
	patches-cunTk1MwBs/YUNznpcFYbw, Alexander Potapenko,
	Andrey Konovalov, Andrey Ryabinin, Andy Lutomirski,
	Borislav Petkov, cgroups-u79uwXL29TY76Z2rM5mHXA, Dave Hansen,
	David Woodhouse, Dmitry Vyukov, H. Peter Anvin, Ingo Molnar,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA, Joerg Roedel,
	Johannes Weiner

On Sat, Dec 25, 2021 at 05:53:23PM +0000, Matthew Wilcox wrote:
> On Sat, Dec 25, 2021 at 09:16:55AM +0000, Hyeonggon Yoo wrote:
> > # mm: Convert struct page to struct slab in functions used by other subsystems
> > I'm not familiar with kasan, but let me ask:
> > Does ____kasan_slab_free() detect an invalid free if someone frees
> > an object that was not allocated from a slab?
> > 
> > @@ -341,7 +341,7 @@ static inline bool ____kasan_slab_free(struct kmem_cache *cache, void *object,
> > -       if (unlikely(nearest_obj(cache, virt_to_head_page(object), object) !=
> > +       if (unlikely(nearest_obj(cache, virt_to_slab(object), object) !=
> >             object)) {
> >                 kasan_report_invalid_free(tagged_object, ip);
> >                 return true;
> > 
> > I'm asking this because virt_to_slab() will return NULL if folio_test_slab()
> > returns false. That will cause a NULL pointer dereference in nearest_obj().
> > I don't think this change is intended.
> 
> You need to track down how this could happen.  As far as I can tell,
> it's always called when we know the object is part of a slab.  That's
> where the cachep pointer is deduced from.

Thank you Matthew, you are right. I read the code too narrowly.
When we call the kasan hooks, we know that the object was allocated from
the slab cache (through cache_from_obj()).
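
Paraphrasing the free path as a sketch (argument lists abridged from memory,
not the verbatim code):

void kmem_cache_free(struct kmem_cache *s, void *x)
{
	/* resolves and validates the cache from the object's slab;
	 * bails out if x is not a valid slab object of this cache */
	s = cache_from_obj(s, x);
	if (!s)
		return;

	/* the kasan hooks run inside the slab_free() path, i.e. only
	 * after the object is known to live on a slab */
	slab_free(s, virt_to_slab(x), x, NULL, 1, _RET_IP_);
}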

I'll review that patch again in part 3!

Thanks,
Hyeonggon


* Re: [PATCH v2 00/33] Separate struct slab from struct page
       [not found]         ` <f3a83708-3f3c-a634-7bee-dcfcaaa7f36e-AlSwsSmVLrQ@public.gmane.org>
  2021-12-25  9:16           ` Hyeonggon Yoo
@ 2021-12-29 11:22           ` Hyeonggon Yoo
       [not found]             ` <YcxFDuPXlTwrPSPk-UAkeum2rnor1owfsx/q8xBTA2CDgPq3U3JN/6IHv09UQhXZLsr/E6M+yXz+rYK7V@public.gmane.org>
  1 sibling, 1 reply; 28+ messages in thread
From: Hyeonggon Yoo @ 2021-12-29 11:22 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Matthew Wilcox, Christoph Lameter, David Rientjes, Joonsoo Kim,
	Pekka Enberg, linux-mm-Bw31MaZKKs3YtjvyW6yDsg, Andrew Morton,
	patches-cunTk1MwBs/YUNznpcFYbw, Alexander Potapenko,
	Andrey Konovalov, Andrey Ryabinin, Andy Lutomirski,
	Borislav Petkov, cgroups-u79uwXL29TY76Z2rM5mHXA, Dave Hansen,
	David Woodhouse, Dmitry Vyukov, H. Peter Anvin, Ingo Molnar,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA, Joerg Roedel,
	Johannes Weiner

On Wed, Dec 22, 2021 at 05:56:50PM +0100, Vlastimil Babka wrote:
> On 12/14/21 13:57, Vlastimil Babka wrote:
> > On 12/1/21 19:14, Vlastimil Babka wrote:
> >> Folks from non-slab subsystems are Cc'd only to patches affecting them, and
> >> this cover letter.
> >>
> >> Series also available in git, based on 5.16-rc3:
> >> https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
> > 
> > Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes and small tweaks
> > and a new patch from Hyeonggon Yoo on top. To avoid too much spam, here's a range diff:
> 
> Hi, I've pushed another update branch slab-struct_slab-v4r1, and also to
> -next. I've shortened git commit log lines to make checkpatch happier,
> so no range-diff as it would be too long. I believe it would be useless
> spam to post the whole series now, shortly before xmas, so I will do it
> at rc8 time, to hopefully collect remaining reviews. But if anyone wants
> a mailed version, I can do that.
>

Hello Matthew and Vlastimil,
here is part 3 of the review.

# mm: Convert struct page to struct slab in functions used by other subsystems
Reviewed-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>


# mm/slub: Convert most struct page to struct slab by spatch
Reviewed-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
Tested-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
with a question below.

-static int check_slab(struct kmem_cache *s, struct page *page)
+static int check_slab(struct kmem_cache *s, struct slab *slab)
 {
        int maxobj;
 
-       if (!PageSlab(page)) {
-               slab_err(s, page, "Not a valid slab page");
+       if (!folio_test_slab(slab_folio(slab))) {
+               slab_err(s, slab, "Not a valid slab page");
                return 0;
        }

Can't we guarantee that struct slab * always points to a slab?

For a struct page * it can be !PageSlab(page), because a struct page *
can be something other than a slab, but a struct slab * can only be a slab,
unlike struct page. The code would be simpler if we guaranteed that
a struct slab * always points to a slab (or NULL).


# mm/slub: Convert pfmemalloc_match() to take a struct slab
It's confusing to me because the original pfmemalloc_match() was removed,
pfmemalloc_match_unsafe() was renamed to pfmemalloc_match(), and it was
converted to use the slab_test_pfmemalloc() helper.
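
The resulting helper, as I read it (a sketch):

static inline bool pfmemalloc_match(struct slab *slab, gfp_t gfpflags)
{
	if (unlikely(slab_test_pfmemalloc(slab)))
		return gfp_pfmemalloc_allowed(gfpflags);

	return true;
}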

But I agree with the resulting code, so:
Reviewed-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>


# mm/slub: Convert alloc_slab_page() to return a struct slab
Reviewed-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
Tested-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>


# mm/slub: Convert print_page_info() to print_slab_info()
Reviewed-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>

I hope to review the rest of the patches in a week.

Thanks,
Hyeonggon

> Changes in v4:
> - rebase to 5.16-rc6 to avoid a conflict with mainline
> - collect acks/reviews/tested-by from Johannes, Roman, Hyeonggon Yoo -
> thanks!
> - in patch "mm/slub: Convert detached_freelist to use a struct slab"
> renamed free_nonslab_page() to free_large_kmalloc() and use folio there,
> as suggested by Roman
> - in "mm/memcg: Convert slab objcgs from struct page to struct slab"
> change one caller of slab_objcgs_check() to slab_objcgs() as suggested
> by Johannes, realize the other caller should be also changed, and remove
> slab_objcgs_check() completely.


* Re: [PATCH v2 00/33] Separate struct slab from struct page
       [not found]             ` <YcxFDuPXlTwrPSPk-UAkeum2rnor1owfsx/q8xBTA2CDgPq3U3JN/6IHv09UQhXZLsr/E6M+yXz+rYK7V@public.gmane.org>
@ 2022-01-03 17:56               ` Vlastimil Babka
  0 siblings, 0 replies; 28+ messages in thread
From: Vlastimil Babka @ 2022-01-03 17:56 UTC (permalink / raw)
  To: Hyeonggon Yoo
  Cc: Matthew Wilcox, Christoph Lameter, David Rientjes, Joonsoo Kim,
	Pekka Enberg, linux-mm-Bw31MaZKKs3YtjvyW6yDsg, Andrew Morton,
	patches-cunTk1MwBs/YUNznpcFYbw, Alexander Potapenko,
	Andrey Konovalov, Andrey Ryabinin, Andy Lutomirski,
	Borislav Petkov, cgroups-u79uwXL29TY76Z2rM5mHXA, Dave Hansen,
	David Woodhouse, Dmitry Vyukov, H. Peter Anvin, Ingo Molnar,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA, Joerg Roedel,
	Johannes Weiner

On 12/29/21 12:22, Hyeonggon Yoo wrote:
> On Wed, Dec 22, 2021 at 05:56:50PM +0100, Vlastimil Babka wrote:
>> On 12/14/21 13:57, Vlastimil Babka wrote:
>> > On 12/1/21 19:14, Vlastimil Babka wrote:
>> >> Folks from non-slab subsystems are Cc'd only to patches affecting them, and
>> >> this cover letter.
>> >>
>> >> Series also available in git, based on 5.16-rc3:
>> >> https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
>> > 
>> > Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes and small tweaks
>> > and a new patch from Hyeonggon Yoo on top. To avoid too much spam, here's a range diff:
>> 
>> Hi, I've pushed another update branch slab-struct_slab-v4r1, and also to
>> -next. I've shortened git commit log lines to make checkpatch happier,
>> so no range-diff as it would be too long. I believe it would be useless
>> spam to post the whole series now, shortly before xmas, so I will do it
>> at rc8 time, to hopefully collect remaining reviews. But if anyone wants
>> a mailed version, I can do that.
>>
> 
> Hello Matthew and Vlastimil.
> it's part 3 of review.
> 
> # mm: Convert struct page to struct slab in functions used by other subsystems
> Reviewed-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
> 
> 
> # mm/slub: Convert most struct page to struct slab by spatch
> Reviewed-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
> Tested-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
> with a question below.
> 
> -static int check_slab(struct kmem_cache *s, struct page *page)
> +static int check_slab(struct kmem_cache *s, struct slab *slab)
>  {
>         int maxobj;
>  
> -       if (!PageSlab(page)) {
> -               slab_err(s, page, "Not a valid slab page");
> +       if (!folio_test_slab(slab_folio(slab))) {
> +               slab_err(s, slab, "Not a valid slab page");
>                 return 0;
>         }
> 
> Can't we guarantee that struct slab * always points to a slab?

Normally, yes.

> For a struct page * it can be !PageSlab(page), because a struct page *
> can be something other than a slab, but a struct slab * can only be a slab,
> unlike struct page. The code would be simpler if we guaranteed that
> a struct slab * always points to a slab (or NULL).

That's indeed what the code does. But check_slab() is called as part of
various consistency checks, so there we deliberately question all assumptions
in order to find a bug (e.g. memory corruption), such as a page that's
still on the list of slabs while it was already freed and reused and thus
e.g. lacks the slab page flag.

But it's nice how using struct slab makes such a check immediately stand out
as suspicious, right?

> # mm/slub: Convert pfmemalloc_match() to take a struct slab
> It's confusing to me because the original pfmemalloc_match() was removed,
> pfmemalloc_match_unsafe() was renamed to pfmemalloc_match(), and it was
> converted to use the slab_test_pfmemalloc() helper.
> 
> But I agree with the resulting code, so:
> Reviewed-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
> 
> 
> # mm/slub: Convert alloc_slab_page() to return a struct slab
> Reviewed-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
> Tested-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
> 
> 
> # mm/slub: Convert print_page_info() to print_slab_info()
> Reviewed-by: Hyeonggon Yoo <42.hyeyoo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
> 
> I hope to review the rest of the patches in a week.

Thanks for your reviews/tests!
