* [PATCH RFC 1/6] slab: introduce kmem_cache_setup()
2024-09-02 15:31 [PATCH RFC 0/6] slab: add kmem_cache_setup() Christian Brauner
@ 2024-09-02 15:31 ` Christian Brauner
2024-09-02 15:31 ` [PATCH RFC 2/6] slab: port KMEM_CACHE() to kmem_cache_setup() Christian Brauner
` (5 subsequent siblings)
6 siblings, 0 replies; 8+ messages in thread
From: Christian Brauner @ 2024-09-02 15:31 UTC (permalink / raw)
To: Vlastimil Babka, Jens Axboe, Jann Horn, Linus Torvalds
Cc: linux-mm, Christian Brauner
Replace the custom kmem_cache_*() functions with a unified
kmem_cache_setup() function that is based on struct kmem_cache_args and
will replace kmem_cache_create(), kmem_cache_create_usercopy(), and
kmem_cache_create_rcu().
The @name, @object_size, and @flags parameters are passed separately as
they are nearly universally used. The rest of the arguments move into
struct kmem_cache_args.
A new "use_freeptr_offset" boolean is added as zero is a valid freelist
pointer offset. The boolean allows callers to avoid having to do
anything special if they don't care about freelist pointer offsets (most
callers don't).
Signed-off-by: Christian Brauner <brauner@kernel.org>
---
Documentation/core-api/memory-allocation.rst | 10 +--
include/linux/slab.h | 21 ++++++
mm/slab_common.c | 102 +++++++++++++++++----------
3 files changed, 91 insertions(+), 42 deletions(-)
diff --git a/Documentation/core-api/memory-allocation.rst b/Documentation/core-api/memory-allocation.rst
index 8b84eb4bdae7..83ae3021c89a 100644
--- a/Documentation/core-api/memory-allocation.rst
+++ b/Documentation/core-api/memory-allocation.rst
@@ -166,11 +166,11 @@ documentation. Note that `kvmalloc` may return memory that is not
physically contiguous.
If you need to allocate many identical objects you can use the slab
-cache allocator. The cache should be set up with kmem_cache_create() or
-kmem_cache_create_usercopy() before it can be used. The second function
-should be used if a part of the cache might be copied to the userspace.
-After the cache is created kmem_cache_alloc() and its convenience
-wrappers can allocate memory from that cache.
+cache allocator. The cache should be set up with kmem_cache_setup()
+before it can be used. Usercopy regions, constructors, and other rarely
+used options are passed via struct kmem_cache_args. After the cache is
+created kmem_cache_alloc() and its convenience wrappers can allocate
+memory from that cache.
When the allocated memory is no longer needed it must be freed.
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 5b2da2cf31a8..41135ea03299 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -240,6 +240,27 @@ struct mem_cgroup;
*/
bool slab_is_available(void);
+/**
+ * struct kmem_cache_args - Arguments for kmem_cache_setup()
+ * @align: The required alignment for the objects.
+ * @useroffset: Usercopy region offset
+ * @usersize: Usercopy region size
+ * @freeptr_offset: Custom offset for the free pointer in RCU caches
+ * @use_freeptr_offset: Whether a @freeptr_offset is used
+ * @ctor: A constructor for the objects.
+ */
+struct kmem_cache_args {
+ unsigned int align;
+ unsigned int useroffset;
+ unsigned int usersize;
+ unsigned int freeptr_offset;
+ bool use_freeptr_offset;
+ void (*ctor)(void *);
+};
+
+struct kmem_cache *kmem_cache_setup(const char *name, unsigned int object_size,
+ struct kmem_cache_args *args,
+ slab_flags_t flags);
+
struct kmem_cache *kmem_cache_create(const char *name, unsigned int size,
unsigned int align, slab_flags_t flags,
void (*ctor)(void *));
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 95db3702f8d6..515ad422cf30 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -202,22 +202,22 @@ struct kmem_cache *find_mergeable(unsigned int size, unsigned int align,
}
static struct kmem_cache *create_cache(const char *name,
- unsigned int object_size, unsigned int freeptr_offset,
- unsigned int align, slab_flags_t flags,
- unsigned int useroffset, unsigned int usersize,
- void (*ctor)(void *))
+ unsigned int object_size,
+ struct kmem_cache_args *args,
+ slab_flags_t flags)
{
struct kmem_cache *s;
int err;
- if (WARN_ON(useroffset + usersize > object_size))
- useroffset = usersize = 0;
+ if (WARN_ON(args->useroffset + args->usersize > object_size))
+ args->useroffset = args->usersize = 0;
/* If a custom freelist pointer is requested make sure it's sane. */
err = -EINVAL;
- if (freeptr_offset != UINT_MAX &&
- (freeptr_offset >= object_size || !(flags & SLAB_TYPESAFE_BY_RCU) ||
- !IS_ALIGNED(freeptr_offset, sizeof(freeptr_t))))
+ if (args->use_freeptr_offset &&
+ (args->freeptr_offset >= object_size ||
+ !(flags & SLAB_TYPESAFE_BY_RCU) ||
+ !IS_ALIGNED(args->freeptr_offset, sizeof(freeptr_t))))
goto out;
err = -ENOMEM;
@@ -227,12 +227,15 @@ static struct kmem_cache *create_cache(const char *name,
s->name = name;
s->size = s->object_size = object_size;
- s->rcu_freeptr_offset = freeptr_offset;
- s->align = align;
- s->ctor = ctor;
+ if (args->use_freeptr_offset)
+ s->rcu_freeptr_offset = args->freeptr_offset;
+ else
+ s->rcu_freeptr_offset = UINT_MAX;
+ s->align = args->align;
+ s->ctor = args->ctor;
#ifdef CONFIG_HARDENED_USERCOPY
- s->useroffset = useroffset;
- s->usersize = usersize;
+ s->useroffset = args->useroffset;
+ s->usersize = args->usersize;
#endif
err = __kmem_cache_create(s, flags);
if (err)
@@ -248,12 +251,22 @@ static struct kmem_cache *create_cache(const char *name,
return ERR_PTR(err);
}
-static struct kmem_cache *
-do_kmem_cache_create_usercopy(const char *name,
- unsigned int size, unsigned int freeptr_offset,
- unsigned int align, slab_flags_t flags,
- unsigned int useroffset, unsigned int usersize,
- void (*ctor)(void *))
+/**
+ * kmem_cache_setup - Create a kmem cache
+ * @name: A string which is used in /proc/slabinfo to identify this cache.
+ * @object_size: The size of objects to be created in this cache.
+ * @args: Arguments for the cache creation (see struct kmem_cache_args).
+ * @flags: See %SLAB_* flags for an explanation of individual @flags.
+ *
+ * Cannot be called within an interrupt, but can be interrupted.
+ * A constructor given in @args is run when new pages are allocated by
+ * the cache.
+ *
+ * Return: a pointer to the cache on success, NULL on failure.
+ */
+struct kmem_cache *kmem_cache_setup(const char *name, unsigned int object_size,
+ struct kmem_cache_args *args,
+ slab_flags_t flags)
{
struct kmem_cache *s = NULL;
const char *cache_name;
@@ -275,7 +288,7 @@ do_kmem_cache_create_usercopy(const char *name,
mutex_lock(&slab_mutex);
- err = kmem_cache_sanity_check(name, size);
+ err = kmem_cache_sanity_check(name, object_size);
if (err) {
goto out_unlock;
}
@@ -296,12 +309,14 @@ do_kmem_cache_create_usercopy(const char *name,
/* Fail closed on bad usersize of useroffset values. */
if (!IS_ENABLED(CONFIG_HARDENED_USERCOPY) ||
- WARN_ON(!usersize && useroffset) ||
- WARN_ON(size < usersize || size - usersize < useroffset))
- usersize = useroffset = 0;
-
- if (!usersize)
- s = __kmem_cache_alias(name, size, align, flags, ctor);
+ WARN_ON(!args->usersize && args->useroffset) ||
+ WARN_ON(object_size < args->usersize ||
+ object_size - args->usersize < args->useroffset))
+ args->usersize = args->useroffset = 0;
+
+ if (!args->usersize)
+ s = __kmem_cache_alias(name, object_size, args->align, flags,
+ args->ctor);
if (s)
goto out_unlock;
@@ -311,9 +326,8 @@ do_kmem_cache_create_usercopy(const char *name,
goto out_unlock;
}
- s = create_cache(cache_name, size, freeptr_offset,
- calculate_alignment(flags, align, size),
- flags, useroffset, usersize, ctor);
+ args->align = calculate_alignment(flags, args->align, object_size);
+ s = create_cache(cache_name, object_size, args, flags);
if (IS_ERR(s)) {
err = PTR_ERR(s);
kfree_const(cache_name);
@@ -335,6 +349,7 @@ do_kmem_cache_create_usercopy(const char *name,
}
return s;
}
+EXPORT_SYMBOL(kmem_cache_setup);
/**
* kmem_cache_create_usercopy - Create a cache with a region suitable
@@ -370,8 +385,14 @@ kmem_cache_create_usercopy(const char *name, unsigned int size,
unsigned int useroffset, unsigned int usersize,
void (*ctor)(void *))
{
- return do_kmem_cache_create_usercopy(name, size, UINT_MAX, align, flags,
- useroffset, usersize, ctor);
+ return kmem_cache_setup(name, size,
+ &(struct kmem_cache_args){
+ .align = align,
+ .ctor = ctor,
+ .useroffset = useroffset,
+ .usersize = usersize,
+ },
+ flags);
}
EXPORT_SYMBOL(kmem_cache_create_usercopy);
@@ -404,8 +425,12 @@ struct kmem_cache *
kmem_cache_create(const char *name, unsigned int size, unsigned int align,
slab_flags_t flags, void (*ctor)(void *))
{
- return do_kmem_cache_create_usercopy(name, size, UINT_MAX, align, flags,
- 0, 0, ctor);
+ return kmem_cache_setup(name, size,
+ &(struct kmem_cache_args){
+ .align = align,
+ .ctor = ctor,
+ },
+ flags);
}
EXPORT_SYMBOL(kmem_cache_create);
@@ -442,9 +467,12 @@ struct kmem_cache *kmem_cache_create_rcu(const char *name, unsigned int size,
unsigned int freeptr_offset,
slab_flags_t flags)
{
- return do_kmem_cache_create_usercopy(name, size, freeptr_offset, 0,
- flags | SLAB_TYPESAFE_BY_RCU, 0, 0,
- NULL);
+ return kmem_cache_setup(name, size,
+ &(struct kmem_cache_args){
+ .freeptr_offset = freeptr_offset,
+ .use_freeptr_offset = true,
+ },
+ flags | SLAB_TYPESAFE_BY_RCU);
}
EXPORT_SYMBOL(kmem_cache_create_rcu);
--
2.45.2
* [PATCH RFC 2/6] slab: port KMEM_CACHE() to kmem_cache_setup()
From: Christian Brauner @ 2024-09-02 15:31 UTC (permalink / raw)
To: Vlastimil Babka, Jens Axboe, Jann Horn, Linus Torvalds
Cc: linux-mm, Christian Brauner
Make KMEM_CACHE() use the new kmem_cache_setup() helper.
Signed-off-by: Christian Brauner <brauner@kernel.org>
---
include/linux/slab.h | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 41135ea03299..79f4799b7083 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -283,9 +283,13 @@ int kmem_cache_shrink(struct kmem_cache *s);
* f.e. add ____cacheline_aligned_in_smp to the struct declaration
* then the objects will be properly aligned in SMP configurations.
*/
-#define KMEM_CACHE(__struct, __flags) \
- kmem_cache_create(#__struct, sizeof(struct __struct), \
- __alignof__(struct __struct), (__flags), NULL)
+#define KMEM_CACHE(__struct, __flags) \
+ kmem_cache_setup(#__struct, sizeof(struct __struct), \
+ &(struct kmem_cache_args){ \
+ .align = __alignof__(struct __struct), \
+ .ctor = NULL, \
+ }, \
+ (__flags))
/*
* To whitelist a single field for copying to/from usercopy, use this
--
2.45.2
* [PATCH RFC 3/6] slab: port KMEM_CACHE_USERCOPY() to kmem_cache_setup()
From: Christian Brauner @ 2024-09-02 15:31 UTC (permalink / raw)
To: Vlastimil Babka, Jens Axboe, Jann Horn, Linus Torvalds
Cc: linux-mm, Christian Brauner
Make KMEM_CACHE_USERCOPY() use kmem_cache_setup().
Signed-off-by: Christian Brauner <brauner@kernel.org>
---
include/linux/slab.h | 29 +++++++++++++++++------------
1 file changed, 17 insertions(+), 12 deletions(-)
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 79f4799b7083..52928474e6a1 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -283,24 +283,29 @@ int kmem_cache_shrink(struct kmem_cache *s);
* f.e. add ____cacheline_aligned_in_smp to the struct declaration
* then the objects will be properly aligned in SMP configurations.
*/
-#define KMEM_CACHE(__struct, __flags) \
- kmem_cache_setup(#__struct, sizeof(struct __struct), \
- &(struct kmem_cache_args){ \
- .align = __alignof__(struct __struct), \
- .ctor = NULL, \
- }, \
+#define KMEM_CACHE(__struct, __flags) \
+ kmem_cache_setup(#__struct, \
+ sizeof(struct __struct), \
+ &(struct kmem_cache_args) { \
+ .align = __alignof__(struct __struct), \
+ .ctor = NULL, \
+ }, \
(__flags))
/*
* To whitelist a single field for copying to/from usercopy, use this
* macro instead for KMEM_CACHE() above.
*/
-#define KMEM_CACHE_USERCOPY(__struct, __flags, __field) \
- kmem_cache_create_usercopy(#__struct, \
- sizeof(struct __struct), \
- __alignof__(struct __struct), (__flags), \
- offsetof(struct __struct, __field), \
- sizeof_field(struct __struct, __field), NULL)
+#define KMEM_CACHE_USERCOPY(__struct, __flags, __field) \
+ kmem_cache_setup(#__struct, \
+ sizeof(struct __struct), \
+ &(struct kmem_cache_args) { \
+ .align = __alignof__(struct __struct), \
+ .useroffset = offsetof(struct __struct, __field), \
+ .usersize = sizeof_field(struct __struct, __field), \
+ .ctor = NULL, \
+ }, \
+ (__flags))
/*
* Common kmalloc functions provided by all allocators
--
2.45.2
* [PATCH RFC 4/6] file: port to kmem_cache_setup()
From: Christian Brauner @ 2024-09-02 15:31 UTC (permalink / raw)
To: Vlastimil Babka, Jens Axboe, Jann Horn, Linus Torvalds
Cc: linux-mm, Christian Brauner
Replace the custom kmem_cache_create_rcu() method with a regular call to
the new kmem_cache_setup() function.
Signed-off-by: Christian Brauner <brauner@kernel.org>
---
fs/file_table.c | 12 +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/fs/file_table.c b/fs/file_table.c
index 3ef558f27a1c..c20066b3f677 100644
--- a/fs/file_table.c
+++ b/fs/file_table.c
@@ -511,9 +511,15 @@ EXPORT_SYMBOL(__fput_sync);
void __init files_init(void)
{
- filp_cachep = kmem_cache_create_rcu("filp", sizeof(struct file),
- offsetof(struct file, f_freeptr),
- SLAB_HWCACHE_ALIGN | SLAB_PANIC | SLAB_ACCOUNT);
+ filp_cachep = kmem_cache_setup("filp", sizeof(struct file),
+ &(struct kmem_cache_args) {
+ .use_freeptr_offset = true,
+ .freeptr_offset = offsetof(struct file, f_freeptr),
+ },
+ SLAB_HWCACHE_ALIGN |
+ SLAB_PANIC |
+ SLAB_ACCOUNT |
+ SLAB_TYPESAFE_BY_RCU);
percpu_counter_init(&nr_files, 0, GFP_KERNEL);
}
--
2.45.2
* [PATCH RFC 5/6] slab: remove kmem_cache_create_rcu()
From: Christian Brauner @ 2024-09-02 15:31 UTC (permalink / raw)
To: Vlastimil Babka, Jens Axboe, Jann Horn, Linus Torvalds
Cc: linux-mm, Christian Brauner
Now that kmem_cache_setup() exists and all kmem_cache_create_rcu() users
have been ported over to it, the function is unused and can be removed.
Signed-off-by: Christian Brauner <brauner@kernel.org>
---
include/linux/slab.h | 3 ---
mm/slab_common.c | 42 ------------------------------------------
2 files changed, 45 deletions(-)
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 52928474e6a1..7d79410996b8 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -269,9 +269,6 @@ struct kmem_cache *kmem_cache_create_usercopy(const char *name,
slab_flags_t flags,
unsigned int useroffset, unsigned int usersize,
void (*ctor)(void *));
-struct kmem_cache *kmem_cache_create_rcu(const char *name, unsigned int size,
- unsigned int freeptr_offset,
- slab_flags_t flags);
void kmem_cache_destroy(struct kmem_cache *s);
int kmem_cache_shrink(struct kmem_cache *s);
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 515ad422cf30..90180f02a153 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -434,48 +434,6 @@ kmem_cache_create(const char *name, unsigned int size, unsigned int align,
}
EXPORT_SYMBOL(kmem_cache_create);
-/**
- * kmem_cache_create_rcu - Create a SLAB_TYPESAFE_BY_RCU cache.
- * @name: A string which is used in /proc/slabinfo to identify this cache.
- * @size: The size of objects to be created in this cache.
- * @freeptr_offset: The offset into the memory to the free pointer
- * @flags: SLAB flags
- *
- * Cannot be called within an interrupt, but can be interrupted.
- *
- * See kmem_cache_create() for an explanation of possible @flags.
- *
- * By default SLAB_TYPESAFE_BY_RCU caches place the free pointer outside
- * of the object. This might cause the object to grow in size. Callers
- * that have a reason to avoid this can specify a custom free pointer
- * offset in their struct where the free pointer will be placed.
- *
- * Note that placing the free pointer inside the object requires the
- * caller to ensure that no fields are invalidated that are required to
- * guard against object recycling (See SLAB_TYPESAFE_BY_RCU for
- * details.).
- *
- * Using zero as a value for @freeptr_offset is valid. To request no
- * offset UINT_MAX must be specified.
- *
- * Note that @ctor isn't supported with custom free pointers as a @ctor
- * requires an external free pointer.
- *
- * Return: a pointer to the cache on success, NULL on failure.
- */
-struct kmem_cache *kmem_cache_create_rcu(const char *name, unsigned int size,
- unsigned int freeptr_offset,
- slab_flags_t flags)
-{
- return kmem_cache_setup(name, size,
- &(struct kmem_cache_args){
- .freeptr_offset = freeptr_offset,
- .use_freeptr_offset = true,
- },
- flags | SLAB_TYPESAFE_BY_RCU);
-}
-EXPORT_SYMBOL(kmem_cache_create_rcu);
-
static struct kmem_cache *kmem_buckets_cache __ro_after_init;
/**
--
2.45.2
* [PATCH RFC 6/6] io_uring: port to kmem_cache_setup()
From: Christian Brauner @ 2024-09-02 15:31 UTC (permalink / raw)
To: Vlastimil Babka, Jens Axboe, Jann Horn, Linus Torvalds
Cc: linux-mm, Christian Brauner
Replace kmem_cache_create_usercopy() with a call to the new
kmem_cache_setup() function.
Signed-off-by: Christian Brauner <brauner@kernel.org>
---
io_uring/io_uring.c | 15 +++++++++------
1 file changed, 9 insertions(+), 6 deletions(-)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 3942db160f18..092a43a63236 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -3722,12 +3722,15 @@ static int __init io_uring_init(void)
* range, and HARDENED_USERCOPY will complain if we haven't
* correctly annotated this range.
*/
- req_cachep = kmem_cache_create_usercopy("io_kiocb",
- sizeof(struct io_kiocb), 0,
- SLAB_HWCACHE_ALIGN | SLAB_PANIC |
- SLAB_ACCOUNT | SLAB_TYPESAFE_BY_RCU,
- offsetof(struct io_kiocb, cmd.data),
- sizeof_field(struct io_kiocb, cmd.data), NULL);
+ req_cachep = kmem_cache_setup("io_kiocb", sizeof(struct io_kiocb),
+ &(struct kmem_cache_args) {
+ .useroffset = offsetof(struct io_kiocb, cmd.data),
+ .usersize = sizeof_field(struct io_kiocb, cmd.data),
+ },
+ SLAB_HWCACHE_ALIGN |
+ SLAB_PANIC |
+ SLAB_ACCOUNT |
+ SLAB_TYPESAFE_BY_RCU);
io_buf_cachep = KMEM_CACHE(io_buffer,
SLAB_HWCACHE_ALIGN | SLAB_PANIC | SLAB_ACCOUNT);
--
2.45.2
* Re: [PATCH RFC 0/6] slab: add kmem_cache_setup()
From: Vlastimil Babka @ 2024-09-02 18:15 UTC (permalink / raw)
To: Christian Brauner, Jens Axboe, Jann Horn, Linus Torvalds
Cc: linux-mm, Christoph Lameter
On 9/2/24 17:31, Christian Brauner wrote:
> Hey,
>
> As discussed last week the various kmem_cache_*() functions should be
> replaced by a unified function that is based around a struct, with only
> the basic parameters passed separately.
>
> Vlastimil already hinted that he would like to keep core parameters out
> of the struct: name, object size, and flags. I personally don't care
> much and would not object to moving everything into the struct but
> that's a matter of taste and I yield that decision power to the
> maintainer.
Yeah the reason I suggested that is the new struct contains rarely needed
arguments, so most usages won't have to instantiate it and thus look a bit
nicer.
> Here's an RFC built for a kmem_cache_setup() based on struct
> kmem_cache_args.
>
> The choice of name is somewhat forced as kmem_cache_create() is taken
> and the only way to reuse it would be to replace all users in one go.
> Or to do a global sed/kmem_cache_create()/kmem_cache_create2()/g. And
> then introduce kmem_cache_setup(). That doesn't strike me as a viable
> option.
>
> If we really cared about the *_create() suffix then an alternative might
> be to do a sed/kmem_cache_setup()/kmem_cache_create()/g after every user
> in the kernel is ported. I honestly don't think that's worth it but I
> wanted to at least mention it to highlight the fact that this might lead
> to a naming compromise.
I think using macros would allow us to have kmem_cache_create() in both the
current variant (5 args) and new one (4 args) at the same time? But that's
also not ideal so perhaps viable only if we really decided to convert
everything sooner than later and drop that temporary compatibility layer.
But perhaps if we're converting, it should be mainly to KMEM_CACHE() as it
handles alignment.
> From a cursory grep (and not excluding Documentation mentions) we will
> have to replace 44 kmem_cache_create_usercopy() calls and about 463
> kmem_cache_create() calls which makes for a bit above 500 calls to port
> to kmem_cache_setup(). That'll probably be good work for people getting
> into kernel development.
>
> Anyway, I wanted to get an RFC out to get some rough agreement on what
> the struct should look like, get some bikeshedding on the name done, and
> what else needs to be massaged as part of this. I think that
> cache_create() is the deepest we should stuff struct kmem_cache_args
> into the bowels of slab.
Well, if you wanted to be more adventurous... we could pass it also to
__kmem_cache_create(), then remove kmem_cache_open() (move the code to its
only caller __kmem_cache_create(), probably another thing not cleaned up
after SLAB removal).
And then also pass it to calculate_sizes() and at that point
rcu_freeptr_offset can be removed from struct kmem_cache, because we just
use the value from kmem_cache_args to calculate s->offset and then we can
forget that it was requested specifically.
> Signed-off-by: Christian Brauner <brauner@kernel.org>
> ---
> Christian Brauner (6):
> slab: introduce kmem_cache_setup()
> slab: port KMEM_CACHE() to kmem_cache_setup()
> slab: port KMEM_CACHE_USERCOPY() to kmem_cache_setup()
> file: port to kmem_cache_setup()
> slab: remove kmem_cache_create_rcu()
> io_uring: port to kmem_cache_setup()
>
> Documentation/core-api/memory-allocation.rst | 10 +-
> fs/file_table.c | 12 ++-
> include/linux/slab.h | 51 ++++++++---
> io_uring/io_uring.c | 15 +--
> mm/slab_common.c | 132 ++++++++++++---------------
> 5 files changed, 121 insertions(+), 99 deletions(-)
> ---
> base-commit: 6e016babce7c845ed015da25c7a097fa3482d95a
> change-id: 20240902-work-kmem_cache_args-e9760972c7d4
>