* [PATCH v5 0/4] slab: Introduce kmalloc_obj() and family
@ 2025-11-22 1:42 Kees Cook
2025-11-22 1:42 ` [PATCH v5 1/4] compiler_types: Introduce __flex_counter() " Kees Cook
` (3 more replies)
0 siblings, 4 replies; 20+ messages in thread
From: Kees Cook @ 2025-11-22 1:42 UTC (permalink / raw)
To: Vlastimil Babka
Cc: Kees Cook, Randy Dunlap, Miguel Ojeda, Przemek Kitszel,
Gustavo A. R. Silva, Linus Torvalds, Matthew Wilcox,
Christoph Lameter, Marco Elver, Vegard Nossum, Pekka Enberg,
David Rientjes, Joonsoo Kim, Andrew Morton, Roman Gushchin,
Harry Yoo, Bill Wendling, Justin Stitt, Jann Horn,
Greg Kroah-Hartman, Sasha Levin, linux-mm, Nathan Chancellor,
Peter Zijlstra, Nick Desaulniers, Jonathan Corbet, Jakub Kicinski,
Yafang Shao, Tony Ambardar, Alexander Lobakin, Jan Hendrik Farr,
Alexander Potapenko, linux-kernel, linux-hardening, linux-doc,
llvm
Hi,
Here's a refresh and update on the kmalloc_obj() API proposal for
discussion here and at LPC[1]. Please see patch 2 for the bulk of the
details. And note that this is obviously not v6.19 material! :)
The tree-wide patch for conversions is here:
https://git.kernel.org/pub/scm/linux/kernel/git/kees/linux.git/commit/?h=dev/v6.18-rc6/alloc_obj/v5&id=f79ee96ad6a3cafdb274fe15d3ae067724e72327
Thanks!
-Kees
[1] https://lpc.events/event/19/contributions/2136/
v5:
- switch to using assignment with type as first argument (Linus)
- fix various comments, commit logs, and kernel-doc (Randy, Miguel)
- renamed flex_counter internal helpers with "__" prefix (Przemek)
v4: https://lore.kernel.org/lkml/20250315025852.it.568-kees@kernel.org/
v3: https://lore.kernel.org/lkml/20240822231324.make.666-kees@kernel.org/
v2: https://lore.kernel.org/lkml/20240807235433.work.317-kees@kernel.org/
v1: https://lore.kernel.org/lkml/20240719192744.work.264-kees@kernel.org/
Kees Cook (4):
compiler_types: Introduce __flex_counter() and family
slab: Introduce kmalloc_obj() and family
checkpatch: Suggest kmalloc_obj family for sizeof allocations
coccinelle: Add kmalloc_objs conversion script
scripts/checkpatch.pl | 39 ++++-
scripts/coccinelle/api/kmalloc_objs.cocci | 168 +++++++++++++++++++++
Documentation/process/deprecated.rst | 42 ++++++
include/linux/compiler_types.h | 31 ++++
include/linux/overflow.h | 40 +++++
include/linux/slab.h | 172 ++++++++++++++++++++++
6 files changed, 486 insertions(+), 6 deletions(-)
create mode 100644 scripts/coccinelle/api/kmalloc_objs.cocci
--
2.34.1
* [PATCH v5 1/4] compiler_types: Introduce __flex_counter() and family
2025-11-22 1:42 [PATCH v5 0/4] slab: Introduce kmalloc_obj() and family Kees Cook
@ 2025-11-22 1:42 ` Kees Cook
2025-11-22 1:42 ` [PATCH v5 2/4] slab: Introduce kmalloc_obj() " Kees Cook
` (2 subsequent siblings)
3 siblings, 0 replies; 20+ messages in thread
From: Kees Cook @ 2025-11-22 1:42 UTC (permalink / raw)
To: Vlastimil Babka
Cc: Kees Cook, Miguel Ojeda, Gustavo A. R. Silva, Nathan Chancellor,
Peter Zijlstra, Nick Desaulniers, Marco Elver, Przemek Kitszel,
linux-hardening, Randy Dunlap, Linus Torvalds, Matthew Wilcox,
Christoph Lameter, Vegard Nossum, Pekka Enberg, David Rientjes,
Joonsoo Kim, Andrew Morton, Roman Gushchin, Harry Yoo,
Bill Wendling, Justin Stitt, Jann Horn, Greg Kroah-Hartman,
Sasha Levin, linux-mm, Nick Desaulniers, Jonathan Corbet,
Jakub Kicinski, Yafang Shao, Tony Ambardar, Alexander Lobakin,
Jan Hendrik Farr, Alexander Potapenko, linux-kernel, linux-doc,
llvm
Introduce __flex_counter() which wraps __builtin_counted_by_ref(),
as newly introduced by GCC[1] and Clang[2]. Use of __flex_counter()
allows access to the counter member of a struct's flexible array member
when it has been annotated with __counted_by().
Introduce typeof_flex_counter(), __can_set_flex_counter(), and
__set_flex_counter() to provide the needed _Generic() wrappers to get sane
results out of __flex_counter().
For example, with:
struct foo {
int counter;
short array[] __counted_by(counter);
} *p;
__flex_counter(p->array) will resolve to: &p->counter
typeof_flex_counter(p->array) will resolve to "int". (If p->array was not
annotated, it would resolve to "size_t".)
__can_set_flex_counter(p->array, COUNT) is the same as:
COUNT <= type_max(p->counter) && COUNT >= type_min(p->counter)
(If p->array was not annotated it would return true since everything
fits in size_t.)
__set_flex_counter(p->array, COUNT) is the same as:
p->counter = COUNT;
(It is a no-op if p->array is not annotated with __counted_by().)
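Not part of this patch, but as an illustrative sketch of how these
compose, an open-coded allocator for the "struct foo" above could validate
and set the counter without caring whether the annotation is present:
    static struct foo *alloc_foo(int count, gfp_t gfp)
    {
        struct foo *p = NULL;
        /* Only the type of p->array is examined; p is never dereferenced. */
        if (!__can_set_flex_counter(p->array, count))
            return NULL;
        p = kmalloc(struct_size_t(struct foo, array, count), gfp);
        if (p)
            __set_flex_counter(p->array, count);
        return p;
    }
(The "alloc_foo" helper is made up for the sketch; the next patch uses
this same pattern inside its internal helpers.)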
Signed-off-by: Kees Cook <kees@kernel.org>
---
Cc: Miguel Ojeda <ojeda@kernel.org>
Cc: "Gustavo A. R. Silva" <gustavoars@kernel.org>
Cc: Nathan Chancellor <nathan@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Nick Desaulniers <ndesaulniers@google.com>
Cc: Marco Elver <elver@google.com>
Cc: Przemek Kitszel <przemyslaw.kitszel@intel.com>
Cc: linux-hardening@vger.kernel.org
---
include/linux/compiler_types.h | 31 ++++++++++++++++++++++++++
include/linux/overflow.h | 40 ++++++++++++++++++++++++++++++++++
2 files changed, 71 insertions(+)
diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h
index c46855162a8a..a31fe3dbf576 100644
--- a/include/linux/compiler_types.h
+++ b/include/linux/compiler_types.h
@@ -507,6 +507,37 @@ struct ftrace_likely_data {
#define __annotated(var, attr) __builtin_has_attribute(var, attr)
#endif
+/*
+ * Optional: only supported since gcc >= 15, clang >= 19
+ *
+ * gcc: https://gcc.gnu.org/onlinedocs/gcc/Other-Builtins.html#index-_005f_005fbuiltin_005fcounted_005fby_005fref
+ * clang: https://clang.llvm.org/docs/LanguageExtensions.html#builtin-counted-by-ref
+ */
+#if __has_builtin(__builtin_counted_by_ref)
+/**
+ * __flex_counter() - Get pointer to counter member for the given
+ * flexible array, if it was annotated with __counted_by()
+ * @FAM: Pointer to flexible array member of an addressable struct instance
+ *
+ * For example, with:
+ *
+ * struct foo {
+ * int counter;
+ * short array[] __counted_by(counter);
+ * } *p;
+ *
+ * __flex_counter(p->array) will resolve to &p->counter.
+ *
+ * Note that Clang may not allow this to be assigned to a separate
+ * variable; it must be used directly.
+ *
+ * If p->array is unannotated, this returns (void *)NULL.
+ */
+#define __flex_counter(FAM) __builtin_counted_by_ref(FAM)
+#else
+#define __flex_counter(FAM) ((void *)NULL)
+#endif
+
/*
* Some versions of gcc do not mark 'asm goto' volatile:
*
diff --git a/include/linux/overflow.h b/include/linux/overflow.h
index 725f95f7e416..12ca286c0f34 100644
--- a/include/linux/overflow.h
+++ b/include/linux/overflow.h
@@ -540,4 +540,44 @@ static inline size_t __must_check size_sub(size_t minuend, size_t subtrahend)
(__member_size((name)->array) / sizeof(*(name)->array) + \
__must_be_array((name)->array))
+/**
+ * typeof_flex_counter() - Return the type of the counter variable of a given
+ * flexible array member annotated by __counted_by().
+ * @FAM: Pointer to the flexible array member within a given struct.
+ *
+ * Returns: "size_t" if no annotation exists.
+ */
+#define typeof_flex_counter(FAM) \
+ typeof(_Generic(__flex_counter(FAM), \
+ void *: (size_t)0, \
+ default: *__flex_counter(FAM)))
+
+/**
+ * __can_set_flex_counter() - Check if the counter associated with the given
+ * flexible array member can represent a value.
+ * @FAM: Pointer to the flexible array member within a given struct.
+ * @COUNT: Value to check against the __counted_by annotated @FAM's counter.
+ *
+ * Returns: true if @COUNT can be represented in the @FAM counter. When
+ * @FAM is not annotated with __counted_by(), always returns true.
+ */
+#define __can_set_flex_counter(FAM, COUNT) \
+ (!overflows_type(COUNT, typeof_flex_counter(FAM)))
+
+/**
+ * __set_flex_counter() - Set the counter associated with the given flexible
+ * array member that has been annotated by __counted_by().
+ * @FAM: Pointer to the flexible array member within a given struct.
+ * @COUNT: Value to store to the __counted_by annotated @FAM's counter.
+ *
+ * This is a no-op if no annotation exists. Count needs to be checked with
+ * __can_set_flex_counter(@FAM, @COUNT) before using this function.
+ */
+#define __set_flex_counter(FAM, COUNT) \
+({ \
+ *_Generic(__flex_counter(FAM), \
+ void *: &(size_t){ 0 }, \
+ default: __flex_counter(FAM)) = (COUNT); \
+})
+
#endif /* __LINUX_OVERFLOW_H */
--
2.34.1
* [PATCH v5 2/4] slab: Introduce kmalloc_obj() and family
2025-11-22 1:42 [PATCH v5 0/4] slab: Introduce kmalloc_obj() and family Kees Cook
2025-11-22 1:42 ` [PATCH v5 1/4] compiler_types: Introduce __flex_counter() " Kees Cook
@ 2025-11-22 1:42 ` Kees Cook
2025-11-22 19:53 ` Linus Torvalds
2025-11-22 1:42 ` [PATCH v5 3/4] checkpatch: Suggest kmalloc_obj family for sizeof allocations Kees Cook
2025-11-22 1:43 ` [PATCH v5 4/4] coccinelle: Add kmalloc_objs conversion script Kees Cook
3 siblings, 1 reply; 20+ messages in thread
From: Kees Cook @ 2025-11-22 1:42 UTC (permalink / raw)
To: Vlastimil Babka
Cc: Kees Cook, Christoph Lameter, Pekka Enberg, David Rientjes,
Joonsoo Kim, Andrew Morton, Roman Gushchin, Hyeonggon Yoo,
Gustavo A . R . Silva, Bill Wendling, Justin Stitt, Jann Horn,
Przemek Kitszel, Marco Elver, Linus Torvalds, Greg Kroah-Hartman,
Sasha Levin, linux-mm, Randy Dunlap, Miguel Ojeda, Matthew Wilcox,
Vegard Nossum, Harry Yoo, Nathan Chancellor, Peter Zijlstra,
Nick Desaulniers, Jonathan Corbet, Jakub Kicinski, Yafang Shao,
Tony Ambardar, Alexander Lobakin, Jan Hendrik Farr,
Alexander Potapenko, linux-kernel, linux-hardening, linux-doc,
llvm
Introduce type-aware kmalloc-family helpers to replace the common
idioms for single, array, and flexible object allocations:
ptr = kmalloc(sizeof(*ptr), gfp);
ptr = kmalloc(sizeof(struct some_obj_name), gfp);
ptr = kzalloc(sizeof(*ptr), gfp);
ptr = kmalloc_array(count, sizeof(*ptr), gfp);
ptr = kcalloc(count, sizeof(*ptr), gfp);
ptr = kmalloc(struct_size(ptr, flex_member, count), gfp);
These become, respectively:
ptr = kmalloc_obj(*ptr, gfp);
ptr = kmalloc_obj(*ptr, gfp);
ptr = kzalloc_obj(*ptr, gfp);
ptr = kmalloc_objs(*ptr, count, gfp);
ptr = kzalloc_objs(*ptr, count, gfp);
ptr = kmalloc_flex(*ptr, flex_member, count, gfp);
Beyond the other benefits outlined below, the primary ergonomic benefit
is eliminating the need for "sizeof" and, in most cases, the type name,
while also enforcing assignment types (the macros do not return "void *",
but rather a pointer to the type of the first argument). The type name
_can_ still be used, though, in cases where the assignment is indirect
(e.g. via "return").
These each return the newly allocated pointer to the type (which may be
NULL on failure). For cases where the total size of the allocation is
needed, the kmalloc_obj_sz(), kmalloc_objs_sz(), and kmalloc_flex_sz()
family of macros can be used. For example:
size = struct_size(ptr, flex_member, count);
ptr = kmalloc(size, gfp);
becomes:
ptr = kmalloc_flex_sz(*ptr, flex_member, count, gfp, &size);
With the *_sz() helpers, it becomes possible to bounds-check the final
size to make sure no arithmetic overflow has happened that exceeds the
storage size of the target size variable. E.g. it was previously possible
to end up wrapping an allocation size without noticing, thereby allocating
too small a size. (Most of Linux's exposure on that particular problem is
via newly written code as we already did bulk conversions[1], but we
continue to have a steady stream of patches catching additional cases[2]
that would just go away with this API.)
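As a concrete illustration of that bug class (hypothetical code, not taken
from the tree):
    count = le32_to_cpu(hdr->n_items);      /* attacker-influenced */
    size = count * sizeof(*ptr);            /* may wrap or truncate to a small value */
    ptr = kmalloc(size, gfp);
    for (i = 0; i < count; i++)
        init_item(&ptr[i]);                 /* writes past the undersized buffer */
With kmalloc_objs(*ptr, count, gfp) the multiplication goes through
size_mul(), which saturates instead of wrapping, so the oversized request
simply fails and the caller sees NULL.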
Internal introspection of the allocated type now becomes possible,
allowing for future alignment-aware choices to be made by the allocator
and future hardening work that can be type sensitive. For example,
adding __alignof(*ptr) as an argument to the internal allocators so that
appropriate/efficient alignment choices can be made, or being able to
correctly choose per-allocation offset randomization within a bucket
that does not break alignment requirements.
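Purely as an illustration of the direction (nothing below exists in this
series), the type information could eventually be threaded through to the
internals along the lines of:
    /* Hypothetical: __kmalloc_typed() is made up for this sketch. */
    #define kmalloc_obj(VAR, FLAGS) \
        __alloc_objs(__kmalloc_typed(__obj_size, __alignof(VAR), FLAGS), VAR, 1, NULL)
i.e. once the call site exposes the type, its alignment (and potentially
other properties) becomes available to the allocator without any further
changes to callers.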
For the flexible array helpers, the internal use of __flex_counter()
allows for automatically setting the counter member of a struct's flexible
array member when it has been annotated with __counted_by(), avoiding
any missed early size initializations while __counted_by() annotations
are added to the kernel. Additionally, this checks for "too large"
allocations based on the type of the counter variable. For example:
if (count > type_max(ptr->flex_count))
fail...;
size = struct_size(ptr, flex_member, count);
ptr = kmalloc(size, gfp);
ptr->flex_count = count;
becomes (n.b. unchanged from earlier example):
ptr = kmalloc_flex_sz(*ptr, flex_member, count, gfp, &size);
ptr->flex_count = count;
Note that manual initialization of the flexible array counter is still
required (at some point) after allocation, as not all compiler versions
support the __counted_by annotation yet. But doing it internally makes
sure the initialization cannot be missed when __counted_by _is_ available,
meaning that the bounds checker will not trip due to the lack of an "early
enough" initialization in code that used to work before the stricter
bounds checking was enabled. For example:
ptr = kmalloc_flex(*ptr, flex_member, count, gfp);
fill(ptr->flex_member, count);
ptr->flex_count = count;
This works correctly before adding a __counted_by annotation (since
nothing is checking ptr->flex_member accesses against ptr->flex_count). After
adding the annotation, the bounds sanitizer would trip during fill()
because ptr->flex_count wasn't set yet. But with kmalloc_flex() setting
ptr->flex_count internally at allocation time, the existing code works
without needing to move the ptr->flex_count assignment before the call
to fill(). (This has been a stumbling block for __counted_by adoption.)
Replacing all existing simple code patterns found via Coccinelle[3]
shows what could be replaced immediately (also saving roughly 1000 lines):
7863 files changed, 19639 insertions(+), 20692 deletions(-)
This would take us from 24085 k*alloc assignments to 7467:
$ git grep ' = kv\?[mzcv]alloc\(\|_array\)(' | wc -l
24085
$ git reset --hard HEAD^
HEAD is now at 8bccc91e6cdf treewide: kmalloc_obj conversion
$ git grep ' = kv\?[mzcv]alloc\(\|_array\)(' | wc -l
7467
This treewide change could be done at the end of the merge window just
before -rc1 is released (as is common for treewide changes). Handling
this API change in backports to -stable should be possible without much
hassle by backporting the __flex_counter() patch and this patch, while
taking conversions as-needed.
The impact on my bootable testing image size (with the treewide patch
applied) is tiny. With both GCC 13 (no __counted_by support) and GCC 15
(with __counted_by) the images are actually very slightly smaller:
$ size -G gcc-boot/vmlinux.gcc*
text data bss total filename
29975593 21527689 16601200 68104482 gcc-boot/vmlinux.gcc13-before
29969263 21528663 16601112 68099038 gcc-boot/vmlinux.gcc13-after
30555626 21291299 17086620 68933545 gcc-boot/vmlinux.gcc15-before
30550144 21292039 17086540 68928723 gcc-boot/vmlinux.gcc15-after
Link: https://web.git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=b08fc5277aaa1d8ea15470d38bf36f19dfb0e125 [1]
Link: https://lore.kernel.org/all/?q=s%3Akcalloc+-s%3ARe%3A [2]
Link: https://github.com/kees/kernel-tools/blob/trunk/coccinelle/examples/kmalloc_objs.cocci [3]
Signed-off-by: Kees Cook <kees@kernel.org>
---
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: Gustavo A. R. Silva <gustavoars@kernel.org>
Cc: Bill Wendling <morbo@google.com>
Cc: Justin Stitt <justinstitt@google.com>
Cc: Jann Horn <jannh@google.com>
Cc: Przemek Kitszel <przemyslaw.kitszel@intel.com>
Cc: Marco Elver <elver@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Sasha Levin <sashal@kernel.org>
Cc: linux-mm@kvack.org
---
Documentation/process/deprecated.rst | 42 +++++++
include/linux/slab.h | 172 +++++++++++++++++++++++++++
2 files changed, 214 insertions(+)
diff --git a/Documentation/process/deprecated.rst b/Documentation/process/deprecated.rst
index 1f7f3e6c9cda..eb72b75f5419 100644
--- a/Documentation/process/deprecated.rst
+++ b/Documentation/process/deprecated.rst
@@ -372,3 +372,45 @@ The helper must be used::
DECLARE_FLEX_ARRAY(struct type2, two);
};
};
+
+Open-coded kmalloc assignments for struct objects
+-------------------------------------------------
+Performing open-coded kmalloc()-family allocation assignments prevents
+the kernel (and compiler) from being able to examine the type of the
+variable being assigned, which limits any related introspection that
+may help with alignment, wrap-around, or additional hardening. The
+kmalloc_obj() family of macros provides this introspection, which can be
+used for the common code patterns for single, array, and flexible object
+allocations. For example, these open coded assignments::
+
+ ptr = kmalloc(sizeof(*ptr), gfp);
+ ptr = kmalloc(sizeof(struct the_type_of_ptr_obj), gfp);
+ ptr = kzalloc(sizeof(*ptr), gfp);
+ ptr = kmalloc_array(count, sizeof(*ptr), gfp);
+ ptr = kcalloc(count, sizeof(*ptr), gfp);
+ ptr = kmalloc(struct_size(ptr, flex_member, count), gfp);
+
+become::
+
+ ptr = kmalloc_obj(*ptr, gfp);
+ ptr = kzalloc_obj(*ptr, gfp);
+ ptr = kmalloc_objs(*ptr, count, gfp);
+ ptr = kzalloc_objs(*ptr, count, gfp);
+ ptr = kmalloc_flex(*ptr, flex_member, count, gfp);
+
+For the cases where the total size of the allocation is also needed,
+the kmalloc_obj_sz(), kmalloc_objs_sz(), and kmalloc_flex_sz() family of
+macros can be used. For example, converting these assignments::
+
+ total_size = struct_size(ptr, flex_member, count);
+ ptr = kmalloc(total_size, gfp);
+
+becomes::
+
+ ptr = kmalloc_flex_sz(*ptr, flex_member, count, gfp, &total_size);
+
+If `ptr->flex_member` is annotated with __counted_by(), the allocation
+will automatically fail if `count` is larger than the maximum
+representable value that can be stored in the counter member associated
+with `flex_member`. Similarly, the allocation will fail if the total
+size of the allocation exceeds the maximum value `*total_size` can hold.
diff --git a/include/linux/slab.h b/include/linux/slab.h
index cf443f064a66..1c5219d79cf1 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -12,6 +12,7 @@
#ifndef _LINUX_SLAB_H
#define _LINUX_SLAB_H
+#include <linux/bug.h>
#include <linux/cache.h>
#include <linux/gfp.h>
#include <linux/overflow.h>
@@ -965,6 +966,177 @@ static __always_inline __alloc_size(1) void *kmalloc_noprof(size_t size, gfp_t f
void *kmalloc_nolock_noprof(size_t size, gfp_t gfp_flags, int node);
#define kmalloc_nolock(...) alloc_hooks(kmalloc_nolock_noprof(__VA_ARGS__))
+#define __alloc_objs(ALLOC, VAR, COUNT, SIZE) \
+({ \
+ size_t __obj_size = size_mul(sizeof(VAR), COUNT); \
+ const typeof(_Generic(SIZE, \
+ void *: (size_t *)NULL, \
+ default: SIZE)) __size_ptr = (SIZE); \
+ typeof(VAR) *__obj_ptr = NULL; \
+ /* Does the total size fit in the *SIZE variable? */ \
+ if (!WARN_ON_ONCE(__size_ptr && __obj_size > type_max(*__size_ptr))) \
+ __obj_ptr = ALLOC; \
+ if (!__obj_ptr) \
+ __obj_size = 0; \
+ if (__size_ptr) \
+ *__size_ptr = __obj_size; \
+ __obj_ptr; \
+})
+
+#define __alloc_flex(ALLOC, VAR, FAM, COUNT, SIZE) \
+({ \
+ const size_t __count = (COUNT); \
+ size_t __obj_size = struct_size_t(typeof(VAR), FAM, __count); \
+ /* "*SIZE = ...;" below is unbuildable when SIZE is "NULL" */ \
+ const typeof(_Generic(SIZE, \
+ void *: (size_t *)NULL, \
+ default: SIZE)) __size_ptr = (SIZE); \
+ typeof(VAR) *__obj_ptr = NULL; \
+ if (!WARN_ON_ONCE(!__can_set_flex_counter(__obj_ptr->FAM, __count)) && \
+ !WARN_ON_ONCE(__size_ptr && __obj_size > type_max(*__size_ptr))) \
+ __obj_ptr = ALLOC; \
+ if (__obj_ptr) { \
+ __set_flex_counter(__obj_ptr->FAM, __count); \
+ } else { \
+ __obj_size = 0; \
+ } \
+ if (__size_ptr) \
+ *__size_ptr = __obj_size; \
+ __obj_ptr; \
+})
+
+/**
+ * kmalloc_obj - Allocate a single instance of the given structure
+ * @VAR: Variable or type to allocate.
+ * @FLAGS: GFP flags for the allocation.
+ *
+ * Returns: newly allocated pointer to a @VAR on success, NULL on failure.
+ */
+#define kmalloc_obj(VAR, FLAGS) \
+ __alloc_objs(kmalloc(__obj_size, FLAGS), VAR, 1, NULL)
+
+/**
+ * kmalloc_obj_sz - Allocate a single instance of the given structure and
+ * store total size
+ * @VAR: Variable or type to allocate.
+ * @FLAGS: GFP flags for the allocation.
+ * @SIZE: Pointer to variable to hold the total allocation size.
+ *
+ * Returns: newly allocated pointer to @VAR on success, NULL on failure.
+ * If @SIZE is non-NULL, the allocation will immediately fail if the total
+ * allocation size is larger than what the type of *@SIZE can represent.
+ * If @SIZE is non-NULL, *@SIZE is set to either allocation size on success,
+ * or 0 on failure.
+ */
+#define kmalloc_obj_sz(VAR, FLAGS, SIZE) \
+ __alloc_objs(kmalloc(__obj_size, FLAGS), VAR, 1, SIZE)
+
+/**
+ * kmalloc_objs - Allocate an array of the given structure
+ * @VAR: Variable or type to allocate an array of.
+ * @COUNT: How many elements in the array.
+ * @FLAGS: GFP flags for the allocation.
+ *
+ * Returns: newly allocated pointer to array of @VAR on success, NULL on
+ * failure.
+ */
+#define kmalloc_objs(VAR, COUNT, FLAGS) \
+ __alloc_objs(kmalloc(__obj_size, FLAGS), VAR, COUNT, NULL)
+
+/**
+ * kmalloc_objs_sz - Allocate an array of the given structure and store
+ * total size
+ * @VAR: Variable or type to allocate an array of.
+ * @COUNT: How many elements in the array.
+ * @FLAGS: GFP flags for the allocation.
+ * @SIZE: Pointer to variable to hold the total allocation size.
+ *
+ * Returns: newly allocated pointer to array of @VAR on success, NULL on
+ * failure. If @SIZE is non-NULL, the allocation will immediately fail if
+ * the total allocation size is larger than what the type of *@SIZE can
+ * represent. If @SIZE is non-NULL, *@SIZE is set to either allocation size
+ * on success, or 0 on failure.
+ */
+#define kmalloc_objs_sz(VAR, COUNT, FLAGS, SIZE) \
+ __alloc_objs(kmalloc(__obj_size, FLAGS), VAR, COUNT, SIZE)
+
+/**
+ * kmalloc_flex - Allocate a single instance of the given flexible structure
+ * @VAR: Variable or type to allocate, along with its flexible array member.
+ * @FAM: The name of the flexible array member of the structure.
+ * @COUNT: How many flexible array member elements are desired.
+ * @FLAGS: GFP flags for the allocation.
+ *
+ * Returns: newly allocated pointer to @VAR on success, NULL on failure.
+ * If @FAM has been annotated with __counted_by(), the allocation will
+ * immediately fail if @COUNT is larger than what the type of the struct's
+ * counter variable can represent.
+ */
+#define kmalloc_flex(VAR, FAM, COUNT, FLAGS) \
+ __alloc_flex(kmalloc(__obj_size, FLAGS), VAR, FAM, COUNT, NULL)
+
+/**
+ * kmalloc_flex_sz - Allocate a single instance of the given flexible
+ * structure and store total size
+ * @VAR: Variable or type to allocate, along with its flexible array member.
+ * @FAM: The name of the flexible array member of the structure.
+ * @COUNT: How many flexible array member elements are desired.
+ * @FLAGS: GFP flags for the allocation.
+ * @SIZE: Pointer to variable to hold the total allocation size.
+ *
+ * Returns: newly allocated pointer to @VAR on success, NULL on failure.
+ * If @FAM has been annotated with __counted_by(), the allocation will
+ * immediately fail if @COUNT is larger than what the type of the struct's
+ * counter variable can represent. If @SIZE is non-NULL, the allocation
+ * will immediately fail if the total allocation size is larger than what
+ * the type of *@SIZE can represent. If @SIZE is non-NULL, *@SIZE is set
+ * to either allocation size on success, or 0 on failure.
+ */
+#define kmalloc_flex_sz(VAR, FAM, COUNT, FLAGS, SIZE) \
+ __alloc_flex(kmalloc(__obj_size, FLAGS), VAR, FAM, COUNT, SIZE)
+
+/* All kzalloc aliases for kmalloc_(obj|objs|flex)(|_sz). */
+#define kzalloc_obj(P, FLAGS) \
+ __alloc_objs(kzalloc(__obj_size, FLAGS), P, 1, NULL)
+#define kzalloc_obj_sz(P, FLAGS, SIZE) \
+ __alloc_objs(kzalloc(__obj_size, FLAGS), P, 1, SIZE)
+#define kzalloc_objs(P, COUNT, FLAGS) \
+ __alloc_objs(kzalloc(__obj_size, FLAGS), P, COUNT, NULL)
+#define kzalloc_objs_sz(P, COUNT, FLAGS, SIZE) \
+ __alloc_objs(kzalloc(__obj_size, FLAGS), P, COUNT, SIZE)
+#define kzalloc_flex(P, FAM, COUNT, FLAGS) \
+ __alloc_flex(kzalloc(__obj_size, FLAGS), P, FAM, COUNT, NULL)
+#define kzalloc_flex_sz(P, FAM, COUNT, FLAGS, SIZE) \
+ __alloc_flex(kzalloc(__obj_size, FLAGS), P, FAM, COUNT, SIZE)
+
+/* All kvmalloc aliases for kmalloc_(obj|objs|flex)(|_sz). */
+#define kvmalloc_obj(P, FLAGS) \
+ __alloc_objs(kvmalloc(__obj_size, FLAGS), P, 1, NULL)
+#define kvmalloc_obj_sz(P, FLAGS, SIZE) \
+ __alloc_objs(kvmalloc(__obj_size, FLAGS), P, 1, SIZE)
+#define kvmalloc_objs(P, COUNT, FLAGS) \
+ __alloc_objs(kvmalloc(__obj_size, FLAGS), P, COUNT, NULL)
+#define kvmalloc_objs_sz(P, COUNT, FLAGS, SIZE) \
+ __alloc_objs(kvmalloc(__obj_size, FLAGS), P, COUNT, SIZE)
+#define kvmalloc_flex(P, FAM, COUNT, FLAGS) \
+ __alloc_flex(kvmalloc(__obj_size, FLAGS), P, FAM, COUNT, NULL)
+#define kvmalloc_flex_sz(P, FAM, COUNT, FLAGS, SIZE) \
+ __alloc_flex(kvmalloc(__obj_size, FLAGS), P, FAM, COUNT, SIZE)
+
+/* All kvzalloc aliases for kmalloc_(obj|objs|flex)(|_sz). */
+#define kvzalloc_obj(P, FLAGS) \
+ __alloc_objs(kvzalloc(__obj_size, FLAGS), P, 1, NULL)
+#define kvzalloc_obj_sz(P, FLAGS, SIZE) \
+ __alloc_objs(kvzalloc(__obj_size, FLAGS), P, 1, SIZE)
+#define kvzalloc_objs(P, COUNT, FLAGS) \
+ __alloc_objs(kvzalloc(__obj_size, FLAGS), P, COUNT, NULL)
+#define kvzalloc_objs_sz(P, COUNT, FLAGS, SIZE) \
+ __alloc_objs(kvzalloc(__obj_size, FLAGS), P, COUNT, SIZE)
+#define kvzalloc_flex(P, FAM, COUNT, FLAGS) \
+ __alloc_flex(kvzalloc(__obj_size, FLAGS), P, FAM, COUNT, NULL)
+#define kvzalloc_flex_sz(P, FAM, COUNT, FLAGS, SIZE) \
+ __alloc_flex(kvzalloc(__obj_size, FLAGS), P, FAM, COUNT, SIZE)
+
#define kmem_buckets_alloc(_b, _size, _flags) \
alloc_hooks(__kmalloc_node_noprof(PASS_BUCKET_PARAMS(_size, _b), _flags, NUMA_NO_NODE))
--
2.34.1
* [PATCH v5 3/4] checkpatch: Suggest kmalloc_obj family for sizeof allocations
2025-11-22 1:42 [PATCH v5 0/4] slab: Introduce kmalloc_obj() and family Kees Cook
2025-11-22 1:42 ` [PATCH v5 1/4] compiler_types: Introduce __flex_counter() " Kees Cook
2025-11-22 1:42 ` [PATCH v5 2/4] slab: Introduce kmalloc_obj() " Kees Cook
@ 2025-11-22 1:42 ` Kees Cook
2025-11-22 4:51 ` Joe Perches
2025-11-22 1:43 ` [PATCH v5 4/4] coccinelle: Add kmalloc_objs conversion script Kees Cook
3 siblings, 1 reply; 20+ messages in thread
From: Kees Cook @ 2025-11-22 1:42 UTC (permalink / raw)
To: Vlastimil Babka
Cc: Kees Cook, Andy Whitcroft, Joe Perches, Dwaipayan Ray,
Lukas Bulwahn, Randy Dunlap, Miguel Ojeda, Przemek Kitszel,
Gustavo A. R. Silva, Linus Torvalds, Matthew Wilcox,
Christoph Lameter, Marco Elver, Vegard Nossum, Pekka Enberg,
David Rientjes, Joonsoo Kim, Andrew Morton, Roman Gushchin,
Harry Yoo, Bill Wendling, Justin Stitt, Jann Horn,
Greg Kroah-Hartman, Sasha Levin, linux-mm, Nathan Chancellor,
Peter Zijlstra, Nick Desaulniers, Jonathan Corbet, Jakub Kicinski,
Yafang Shao, Tony Ambardar, Alexander Lobakin, Jan Hendrik Farr,
Alexander Potapenko, linux-kernel, linux-hardening, linux-doc,
llvm
To support shifting away from sized allocations toward typed allocations,
suggest the kmalloc_obj family of macros when a sizeof() is present in the
argument list.
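For example (illustrative), a patch line such as:
    ptr = kzalloc(sizeof(*ptr), GFP_KERNEL);
now gets a warning of the form "Prefer kzalloc_obj over kzalloc with
sizeof", and with --fix it is rewritten to:
    ptr = kzalloc_obj(*ptr, GFP_KERNEL);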
Signed-off-by: Kees Cook <kees@kernel.org>
---
Cc: Andy Whitcroft <apw@canonical.com>
Cc: Joe Perches <joe@perches.com>
Cc: Dwaipayan Ray <dwaipayanray1@gmail.com>
Cc: Lukas Bulwahn <lukas.bulwahn@gmail.com>
---
scripts/checkpatch.pl | 39 +++++++++++++++++++++++++++++++++------
1 file changed, 33 insertions(+), 6 deletions(-)
diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl
index d58ca9655ab7..a8cdfb502ccc 100755
--- a/scripts/checkpatch.pl
+++ b/scripts/checkpatch.pl
@@ -7258,17 +7258,42 @@ sub process {
"Prefer $3(sizeof(*$1)...) over $3($4...)\n" . $herecurr);
}
-# check for (kv|k)[mz]alloc with multiplies that could be kmalloc_array/kvmalloc_array/kvcalloc/kcalloc
+# check for (kv|k)[mz]alloc that could be kmalloc_obj/kvmalloc_obj/kzalloc_obj/kvzalloc_obj
+ if ($perl_version_ok &&
+ defined $stat &&
+ $stat =~ /^\+\s*($Lval)\s*\=\s*(?:$balanced_parens)?\s*((?:kv|k)[mz]alloc)\s*\(\s*($FuncArg)\s*,/) {
+ my $oldfunc = $3;
+ my $a1 = $4;
+ my $newfunc = "kmalloc_obj";
+ $newfunc = "kvmalloc_obj" if ($oldfunc eq "kvmalloc");
+ $newfunc = "kvzalloc_obj" if ($oldfunc eq "kvzalloc");
+ $newfunc = "kzalloc_obj" if ($oldfunc eq "kzalloc");
+
+ if ($a1 =~ s/^sizeof\s*\S\(?([^\)]*)\)?$/$1/) {
+ my $cnt = statement_rawlines($stat);
+ my $herectx = get_stat_here($linenr, $cnt, $here);
+
+ if (WARN("ALLOC_WITH_SIZEOF",
+ "Prefer $newfunc over $oldfunc with sizeof\n" . $herectx) &&
+ $cnt == 1 &&
+ $fix) {
+ $fixed[$fixlinenr] =~ s/\b($Lval)\s*\=\s*(?:$balanced_parens)?\s*((?:kv|k)[mz]alloc)\s*\(\s*($FuncArg)\s*,/$1 = $newfunc($a1,/;
+ }
+ }
+ }
+
+
+# check for (kv|k)[mz]alloc with multiplies that could be kmalloc_objs/kvmalloc_objs/kzalloc_objs/kvzalloc_objs
if ($perl_version_ok &&
defined $stat &&
$stat =~ /^\+\s*($Lval)\s*\=\s*(?:$balanced_parens)?\s*((?:kv|k)[mz]alloc)\s*\(\s*($FuncArg)\s*\*\s*($FuncArg)\s*,/) {
my $oldfunc = $3;
my $a1 = $4;
my $a2 = $10;
- my $newfunc = "kmalloc_array";
- $newfunc = "kvmalloc_array" if ($oldfunc eq "kvmalloc");
- $newfunc = "kvcalloc" if ($oldfunc eq "kvzalloc");
- $newfunc = "kcalloc" if ($oldfunc eq "kzalloc");
+ my $newfunc = "kmalloc_objs";
+ $newfunc = "kvmalloc_objs" if ($oldfunc eq "kvmalloc");
+ $newfunc = "kvzalloc_objs" if ($oldfunc eq "kvzalloc");
+ $newfunc = "kzalloc_objs" if ($oldfunc eq "kzalloc");
my $r1 = $a1;
my $r2 = $a2;
if ($a1 =~ /^sizeof\s*\S/) {
@@ -7284,7 +7309,9 @@ sub process {
"Prefer $newfunc over $oldfunc with multiply\n" . $herectx) &&
$cnt == 1 &&
$fix) {
- $fixed[$fixlinenr] =~ s/\b($Lval)\s*\=\s*(?:$balanced_parens)?\s*((?:kv|k)[mz]alloc)\s*\(\s*($FuncArg)\s*\*\s*($FuncArg)/$1 . ' = ' . "$newfunc(" . trim($r1) . ', ' . trim($r2)/e;
+ my $sized = trim($r2);
+ $sized =~ s/^sizeof\s*\S\(?([^\)]*)\)?$/$1/;
+ $fixed[$fixlinenr] =~ s/\b($Lval)\s*\=\s*(?:$balanced_parens)?\s*((?:kv|k)[mz]alloc)\s*\(\s*($FuncArg)\s*\*\s*($FuncArg)/$1 . ' = ' . "$newfunc(" . $sized . ', ' . trim($r1)/e;
}
}
}
--
2.34.1
* [PATCH v5 4/4] coccinelle: Add kmalloc_objs conversion script
2025-11-22 1:42 [PATCH v5 0/4] slab: Introduce kmalloc_obj() and family Kees Cook
` (2 preceding siblings ...)
2025-11-22 1:42 ` [PATCH v5 3/4] checkpatch: Suggest kmalloc_obj family for sizeof allocations Kees Cook
@ 2025-11-22 1:43 ` Kees Cook
2025-11-24 12:50 ` [cocci] " Markus Elfring
3 siblings, 1 reply; 20+ messages in thread
From: Kees Cook @ 2025-11-22 1:43 UTC (permalink / raw)
To: Vlastimil Babka
Cc: Kees Cook, Julia Lawall, Nicolas Palix, cocci, Randy Dunlap,
Miguel Ojeda, Przemek Kitszel, Gustavo A. R. Silva,
Linus Torvalds, Matthew Wilcox, Christoph Lameter, Marco Elver,
Vegard Nossum, Pekka Enberg, David Rientjes, Joonsoo Kim,
Andrew Morton, Roman Gushchin, Harry Yoo, Bill Wendling,
Justin Stitt, Jann Horn, Greg Kroah-Hartman, Sasha Levin,
linux-mm, Nathan Chancellor, Peter Zijlstra, Nick Desaulniers,
Jonathan Corbet, Jakub Kicinski, Yafang Shao, Tony Ambardar,
Alexander Lobakin, Jan Hendrik Farr, Alexander Potapenko,
linux-kernel, linux-hardening, linux-doc, llvm
Find and convert sized kmalloc-family allocations into the typed
kmalloc_obj family of allocations.
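As with other Coccinelle scripts, it can also be run standalone via the
coccicheck target, e.g.:
    make coccicheck MODE=patch COCCI=scripts/coccinelle/api/kmalloc_objs.cocci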
Signed-off-by: Kees Cook <kees@kernel.org>
---
Cc: Julia Lawall <Julia.Lawall@inria.fr>
Cc: Nicolas Palix <nicolas.palix@imag.fr>
Cc: cocci@inria.fr
---
scripts/coccinelle/api/kmalloc_objs.cocci | 168 ++++++++++++++++++++++
1 file changed, 168 insertions(+)
create mode 100644 scripts/coccinelle/api/kmalloc_objs.cocci
diff --git a/scripts/coccinelle/api/kmalloc_objs.cocci b/scripts/coccinelle/api/kmalloc_objs.cocci
new file mode 100644
index 000000000000..39f82f014b17
--- /dev/null
+++ b/scripts/coccinelle/api/kmalloc_objs.cocci
@@ -0,0 +1,168 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/// Use kmalloc_obj family of macros for allocations
+///
+// Confidence: High
+// Comments:
+// Options: --include-headers-for-types --all-includes --include-headers --keep-comments
+
+virtual patch
+
+@initialize:python@
+@@
+import sys
+
+def alloc_array(name):
+ func = "FAILED_RENAME"
+ if name == "kmalloc_array":
+ func = "kmalloc_objs"
+ elif name == "kvmalloc_array":
+ func = "kvmalloc_objs"
+ elif name == "kcalloc":
+ func = "kzalloc_objs"
+ elif name == "kvcalloc":
+ func = "kvzalloc_objs"
+ else:
+ print(f"Unknown transform for {name}", file=sys.stderr)
+ return func
+
+@assign_sizeof depends on patch && !(file in "tools") && !(file in "samples")@
+type TYPE;
+TYPE *P;
+TYPE INST;
+expression VAR;
+expression GFP;
+expression SIZE;
+identifier ALLOC =~ "^kv?[mz]alloc$";
+fresh identifier ALLOC_OBJ_SZ = ALLOC ## "_obj_sz";
+@@
+
+(
+- SIZE = sizeof(*VAR);
+ ... when != SIZE
+ VAR =
+- ALLOC(SIZE, GFP);
++ ALLOC_OBJ_SZ(*VAR, GFP, &SIZE);
+|
+- SIZE = (sizeof(TYPE));
+ ... when != SIZE
+ P =
+- ALLOC(SIZE, GFP);
++ ALLOC_OBJ_SZ(*P, GFP, &SIZE);
+|
+- SIZE = (sizeof(INST));
+ ... when != SIZE
+ P =
+- ALLOC(SIZE, GFP);
++ ALLOC_OBJ_SZ(*P, GFP, &SIZE);
+)
+
+@assign_struct_size depends on patch && !(file in "tools") && !(file in "samples")@
+type TYPE;
+TYPE *P;
+expression VAR;
+expression GFP;
+expression SIZE;
+expression FLEX;
+expression COUNT;
+identifier ALLOC =~ "^kv?[mz]alloc$";
+fresh identifier ALLOC_FLEX_SZ = ALLOC ## "_flex_sz";
+@@
+
+(
+- SIZE = struct_size(VAR, FLEX, COUNT);
+ ... when != SIZE
+ VAR =
+- ALLOC(SIZE, GFP);
++ ALLOC_FLEX_SZ(*VAR, FLEX, COUNT, GFP, &SIZE);
+|
+- SIZE = struct_size_t(TYPE, FLEX, COUNT);
+ ... when != SIZE
+ P =
+- ALLOC(SIZE, GFP);
++ ALLOC_FLEX_SZ(*P, FLEX, COUNT, GFP, &SIZE);
+)
+
+// This excludes anything that is assigning to or from integral types or
+// string literals. Everything else gets the sizeof() extracted for the
+// kmalloc_obj() type/var argument. sizeof(void *) is also excluded because
+// it will need case-by-case double-checking to make sure the right type is
+// being assigned.
+@direct depends on patch && !(file in "tools") && !(file in "samples")@
+typedef u8, u16, u32, u64;
+typedef __u8, __u16, __u32, __u64;
+typedef uint8_t, uint16_t, uint32_t, uint64_t;
+typedef __le16, __le32, __le64;
+typedef __be16, __be32, __be64;
+type INTEGRAL = {u8,__u8,uint8_t,char,unsigned char,
+ u16,__u16,uint16_t,unsigned short,
+ u32,__u32,uint32_t,unsigned int,
+ u64,__u64,uint64_t,unsigned long,
+ __le16,__le32,__le64,__be16,__be32,__be64};
+char [] STRING;
+INTEGRAL *BYTES;
+type TYPE;
+expression VAR;
+expression GFP;
+expression COUNT;
+expression FLEX;
+expression E;
+identifier ALLOC =~ "^kv?[mz]alloc$";
+fresh identifier ALLOC_OBJ = ALLOC ## "_obj";
+fresh identifier ALLOC_FLEX = ALLOC ## "_flex";
+identifier ALLOC_ARRAY = {kmalloc_array,kvmalloc_array,kcalloc,kvcalloc};
+fresh identifier ALLOC_OBJS = script:python(ALLOC_ARRAY) { alloc_array(ALLOC_ARRAY) };
+@@
+
+(
+- VAR = ALLOC((sizeof(*VAR)), GFP)
++ VAR = ALLOC_OBJ(*VAR, GFP)
+|
+ ALLOC((\(sizeof(STRING)\|sizeof(INTEGRAL)\|sizeof(INTEGRAL *)\)), GFP)
+|
+ BYTES = ALLOC((sizeof(E)), GFP)
+|
+ BYTES = ALLOC((sizeof(TYPE)), GFP)
+|
+ ALLOC((sizeof(void *)), GFP)
+|
+- ALLOC((sizeof(E)), GFP)
++ ALLOC_OBJ(E, GFP)
+|
+- ALLOC((sizeof(TYPE)), GFP)
++ ALLOC_OBJ(TYPE, GFP)
+|
+ ALLOC_ARRAY(COUNT, (\(sizeof(STRING)\|sizeof(INTEGRAL)\|sizeof(INTEGRAL *)\)), GFP)
+|
+ BYTES = ALLOC_ARRAY(COUNT, (sizeof(E)), GFP)
+|
+ BYTES = ALLOC_ARRAY(COUNT, (sizeof(TYPE)), GFP)
+|
+ ALLOC_ARRAY((\(sizeof(STRING)\|sizeof(INTEGRAL)\|sizeof(INTEGRAL *)\)), COUNT, GFP)
+|
+ BYTES = ALLOC_ARRAY((sizeof(E)), COUNT, GFP)
+|
+ BYTES = ALLOC_ARRAY((sizeof(TYPE)), COUNT, GFP)
+|
+ ALLOC_ARRAY(COUNT, (sizeof(void *)), GFP)
+|
+ ALLOC_ARRAY((sizeof(void *)), COUNT, GFP)
+|
+- ALLOC_ARRAY(COUNT, (sizeof(E)), GFP)
++ ALLOC_OBJS(E, COUNT, GFP)
+|
+- ALLOC_ARRAY(COUNT, (sizeof(TYPE)), GFP)
++ ALLOC_OBJS(TYPE, COUNT, GFP)
+|
+- ALLOC_ARRAY((sizeof(E)), COUNT, GFP)
++ ALLOC_OBJS(E, COUNT, GFP)
+|
+- ALLOC_ARRAY((sizeof(TYPE)), COUNT, GFP)
++ ALLOC_OBJS(TYPE, COUNT, GFP)
+|
+- ALLOC(struct_size(VAR, FLEX, COUNT), GFP)
++ ALLOC_FLEX(*VAR, FLEX, COUNT, GFP)
+|
+- ALLOC(struct_size_t(TYPE, FLEX, COUNT), GFP)
++ ALLOC_FLEX(TYPE, FLEX, COUNT, GFP)
+)
+
--
2.34.1
* Re: [PATCH v5 3/4] checkpatch: Suggest kmalloc_obj family for sizeof allocations
2025-11-22 1:42 ` [PATCH v5 3/4] checkpatch: Suggest kmalloc_obj family for sizeof allocations Kees Cook
@ 2025-11-22 4:51 ` Joe Perches
0 siblings, 0 replies; 20+ messages in thread
From: Joe Perches @ 2025-11-22 4:51 UTC (permalink / raw)
To: Kees Cook, Vlastimil Babka
Cc: Andy Whitcroft, Dwaipayan Ray, Lukas Bulwahn, Randy Dunlap,
Miguel Ojeda, Przemek Kitszel, Gustavo A. R. Silva,
Linus Torvalds, Matthew Wilcox, Christoph Lameter, Marco Elver,
Vegard Nossum, Pekka Enberg, David Rientjes, Joonsoo Kim,
Andrew Morton, Roman Gushchin, Harry Yoo, Bill Wendling,
Justin Stitt, Jann Horn, Greg Kroah-Hartman, Sasha Levin,
linux-mm, Nathan Chancellor, Peter Zijlstra, Nick Desaulniers,
Jonathan Corbet, Jakub Kicinski, Yafang Shao, Tony Ambardar,
Alexander Lobakin, Jan Hendrik Farr, Alexander Potapenko,
linux-kernel, linux-hardening, linux-doc, llvm
On Fri, 2025-11-21 at 17:42 -0800, Kees Cook wrote:
> To support shifting away from sized allocation towards typed
> allocations, suggest the kmalloc_obj family of macros when a sizeof() is
> present in the argument lists.
[]
> diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl
[]
> @@ -7258,17 +7258,42 @@ sub process {
> "Prefer $3(sizeof(*$1)...) over $3($4...)\n" . $herecurr);
> }
>
> -# check for (kv|k)[mz]alloc with multiplies that could be kmalloc_array/kvmalloc_array/kvcalloc/kcalloc
> +# check for (kv|k)[mz]alloc that could be kmalloc_obj/kvmalloc_obj/kzalloc_obj/kvzalloc_obj
There are _way_ too many of these existing uses to suggest this change
in existing files so please add '&& !$file' to these tests
> + if ($perl_version_ok &&
> + defined $stat &&
> + $stat =~ /^\+\s*($Lval)\s*\=\s*(?:$balanced_parens)?\s*((?:kv|k)[mz]alloc)\s*\(\s*($FuncArg)\s*,/) {
> + my $oldfunc = $3;
> + my $a1 = $4;
> + my $newfunc = "kmalloc_obj";
> + $newfunc = "kvmalloc_obj" if ($oldfunc eq "kvmalloc");
> + $newfunc = "kvzalloc_obj" if ($oldfunc eq "kvzalloc");
> + $newfunc = "kzalloc_obj" if ($oldfunc eq "kzalloc");
> +
> + if ($a1 =~ s/^sizeof\s*\S\(?([^\)]*)\)?$/$1/) {
> + my $cnt = statement_rawlines($stat);
> + my $herectx = get_stat_here($linenr, $cnt, $here);
> +
> + if (WARN("ALLOC_WITH_SIZEOF",
> + "Prefer $newfunc over $oldfunc with sizeof\n" . $herectx) &&
> + $cnt == 1 &&
> + $fix) {
> + $fixed[$fixlinenr] =~ s/\b($Lval)\s*\=\s*(?:$balanced_parens)?\s*((?:kv|k)[mz]alloc)\s*\(\s*($FuncArg)\s*,/$1 = $newfunc($a1,/;
> + }
> + }
> + }
> +
> +
> +# check for (kv|k)[mz]alloc with multiplies that could be kmalloc_objs/kvmalloc_objs/kzalloc_objs/kvzalloc_objs
> if ($perl_version_ok &&
> defined $stat &&
> $stat =~ /^\+\s*($Lval)\s*\=\s*(?:$balanced_parens)?\s*((?:kv|k)[mz]alloc)\s*\(\s*($FuncArg)\s*\*\s*($FuncArg)\s*,/) {
> my $oldfunc = $3;
> my $a1 = $4;
> my $a2 = $10;
> - my $newfunc = "kmalloc_array";
> - $newfunc = "kvmalloc_array" if ($oldfunc eq "kvmalloc");
> - $newfunc = "kvcalloc" if ($oldfunc eq "kvzalloc");
> - $newfunc = "kcalloc" if ($oldfunc eq "kzalloc");
> + my $newfunc = "kmalloc_objs";
> + $newfunc = "kvmalloc_objs" if ($oldfunc eq "kvmalloc");
> + $newfunc = "kvzalloc_objs" if ($oldfunc eq "kvzalloc");
> + $newfunc = "kzalloc_objs" if ($oldfunc eq "kzalloc");
> my $r1 = $a1;
> my $r2 = $a2;
> if ($a1 =~ /^sizeof\s*\S/) {
> @@ -7284,7 +7309,9 @@ sub process {
> "Prefer $newfunc over $oldfunc with multiply\n" . $herectx) &&
> $cnt == 1 &&
> $fix) {
> - $fixed[$fixlinenr] =~ s/\b($Lval)\s*\=\s*(?:$balanced_parens)?\s*((?:kv|k)[mz]alloc)\s*\(\s*($FuncArg)\s*\*\s*($FuncArg)/$1 . ' = ' . "$newfunc(" . trim($r1) . ', ' . trim($r2)/e;
> + my $sized = trim($r2);
> + $sized =~ s/^sizeof\s*\S\(?([^\)]*)\)?$/$1/;
> + $fixed[$fixlinenr] =~ s/\b($Lval)\s*\=\s*(?:$balanced_parens)?\s*((?:kv|k)[mz]alloc)\s*\(\s*($FuncArg)\s*\*\s*($FuncArg)/$1 . ' = ' . "$newfunc(" . $sized . ', ' . trim($r1)/e;
> }
> }
> }
* Re: [PATCH v5 2/4] slab: Introduce kmalloc_obj() and family
2025-11-22 1:42 ` [PATCH v5 2/4] slab: Introduce kmalloc_obj() " Kees Cook
@ 2025-11-22 19:53 ` Linus Torvalds
2025-11-22 20:54 ` Linus Torvalds
2025-11-24 20:38 ` Kees Cook
0 siblings, 2 replies; 20+ messages in thread
From: Linus Torvalds @ 2025-11-22 19:53 UTC (permalink / raw)
To: Kees Cook
Cc: Vlastimil Babka, Christoph Lameter, Pekka Enberg, David Rientjes,
Joonsoo Kim, Andrew Morton, Roman Gushchin, Hyeonggon Yoo,
Gustavo A . R . Silva, Bill Wendling, Justin Stitt, Jann Horn,
Przemek Kitszel, Marco Elver, Greg Kroah-Hartman, Sasha Levin,
linux-mm, Randy Dunlap, Miguel Ojeda, Matthew Wilcox,
Vegard Nossum, Harry Yoo, Nathan Chancellor, Peter Zijlstra,
Nick Desaulniers, Jonathan Corbet, Jakub Kicinski, Yafang Shao,
Tony Ambardar, Alexander Lobakin, Jan Hendrik Farr,
Alexander Potapenko, linux-kernel, linux-hardening, linux-doc,
llvm
Honestly, I hate this.
In particular, I intensely dislike that horrendous 'SIZE' parameter to
those helper macros, and this just needs to die.
The argument for that horror is also just silly:
On Fri, 21 Nov 2025 at 17:43, Kees Cook <kees@kernel.org> wrote:
>
> These each return the newly allocated pointer to the type (which may be
> NULL on failure). For cases where the total size of the allocation is
> needed, the kmalloc_obj_sz(), kmalloc_objs_sz(), and kmalloc_flex_sz()
> family of macros can be used. For example:
>
> size = struct_size(ptr, flex_member, count);
> ptr = kmalloc(size, gfp);
>
> becomes:
>
> ptr = kmalloc_flex_sz(*ptr, flex_member, count, gfp, &size);
That thing is ACTIVELY WORSE than the code it replaces.
One of them makes sense and is legible to a normal human.
The other does not.
The alleged advantage is apparently that you can do it on one line,
but when that one line is just horrible garbage, that is not an
advantage at all.
And the impact of that crazy SIZE on the macro expansions makes the
whole thing entirely illegible.
I will not merge anything this broken.
The whole "limit to pre-defined size" argument is also just crazy,
because now the SIZE parameter suddenly gets a second meaning. EVEN
WORSE.
Finally, I think the parts of this that aren't wrong are too limited,
and do not go far enough.
Because once you give that "alloc_obj()" an actual type, it should
take the alignment of the type into account too.
I also think this part:
+ typeof(VAR) *__obj_ptr = NULL; \
+ if (!WARN_ON_ONCE(!__can_set_flex_counter(__obj_ptr->FAM, __count)) && \
absolutely needs to die. You just set __obj_ptr to NULL, and then you
use __obj_ptr->FAM. Now, it so happens that __can_set_flex_counter()
only cares about the *type*, but dammit, this kind of code sequence is
simply not acceptable, and it needs to make that *explicit* by using
sane syntax like perhaps just spelling that out, using VAR, not that
NULL value.
IOW. making it use something like "typeof(VAR.FAM)" might work. Not
that crazy garbage.
I never want to see this kind of horrendous patch again. Everything
about it just screamed "disgusting".
Linus
* Re: [PATCH v5 2/4] slab: Introduce kmalloc_obj() and family
2025-11-22 19:53 ` Linus Torvalds
@ 2025-11-22 20:54 ` Linus Torvalds
2025-11-24 20:38 ` Kees Cook
1 sibling, 0 replies; 20+ messages in thread
From: Linus Torvalds @ 2025-11-22 20:54 UTC (permalink / raw)
To: Kees Cook
Cc: Vlastimil Babka, Christoph Lameter, Pekka Enberg, David Rientjes,
Joonsoo Kim, Andrew Morton, Roman Gushchin, Hyeonggon Yoo,
Gustavo A . R . Silva, Bill Wendling, Justin Stitt, Jann Horn,
Przemek Kitszel, Marco Elver, Greg Kroah-Hartman, Sasha Levin,
linux-mm, Randy Dunlap, Miguel Ojeda, Matthew Wilcox,
Vegard Nossum, Harry Yoo, Nathan Chancellor, Peter Zijlstra,
Nick Desaulniers, Jonathan Corbet, Jakub Kicinski, Yafang Shao,
Tony Ambardar, Alexander Lobakin, Jan Hendrik Farr,
Alexander Potapenko, linux-kernel, linux-hardening, linux-doc,
llvm
Btw, I realize that we don't have a good way to do the alignment with
the current kmalloc() interface (we do for some of the vmalloc
interfaces).
So for now, it should just have some static build-time warning if the
type of the object we allocate has a bigger alignment than the
guaranteed slab allocation alignment (ARCH_KMALLOC_MINALIGN or
whatever).
And I really think the first version should do the minimal thing that
actually matters, and strive to deal with the simple cases. The main
things that matter are
- the return type should be a proper pointer type (so that you get
warnings for mis-uses, but also so that you can use automatic typing)
- making the 'sizeof()' match the type
so honestly, I think 99% of the gain would come from something fairly
simple like
#define kmalloc_verify(type) \
BUILD_BUG_ON_ZERO(__alignof__(type) > ARCH_KMALLOC_MINALIGN)
#define kmalloc_size(type) \
(sizeof(type) + kmalloc_verify(type))
#define allocator(name, type, size, ...) \
(typeof(type) *)name(size, __VA_ARGS__)
#define kmalloc_obj(type, gfp) \
allocator(kmalloc, type, kmalloc_size(type), gfp)
#define kzalloc_obj(type, gfp) \
allocator(kzalloc, type, kmalloc_size(type), gfp)
#define kzalloc_struct(type, member, count, gfp) \
allocator(kzalloc, type, struct_size_t(typeof(type), member, count), gfp)
The above macros are entirely untested. But they are simple enough
that even if they are buggy and I miscounted the parentheses or used
the wrong name somewhere, I think the idea is clear. No?
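(So a call site, with some hypothetical "struct foo", would presumably
end up looking like
    struct foo *p = kzalloc_obj(struct foo, GFP_KERNEL);
and would work just as well with __auto_type on the left-hand side.)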
(And I made that "allocator()" macro use __VA_ARGS__ because
kzalloc_node() and friends would want that, but I think it's starting
to hit diminishing returns at that point)
Hmm?
Linus
* Re: [cocci] [PATCH v5 4/4] coccinelle: Add kmalloc_objs conversion script
2025-11-22 1:43 ` [PATCH v5 4/4] coccinelle: Add kmalloc_objs conversion script Kees Cook
@ 2025-11-24 12:50 ` Markus Elfring
0 siblings, 0 replies; 20+ messages in thread
From: Markus Elfring @ 2025-11-24 12:50 UTC (permalink / raw)
To: Kees Cook, cocci, linux-hardening, linux-mm, Julia Lawall,
Nicolas Palix
Cc: LKML, linux-doc, llvm, Alexander Lobakin, Alexander Potapenko,
Andrew Morton, Bill Wendling, Christoph Lameter, David Rientjes,
Greg Kroah-Hartman, Gustavo A. R. Silva, Harry Yoo,
Jakub Kicinski, Jan Hendrik Farr, Jann Horn, Jonathan Corbet,
Joonsoo Kim, Justin Stitt, Linus Torvalds, Marco Elver,
Matthew Wilcox, Miguel Ojeda, Nathan Chancellor, Nick Desaulniers,
Pekka Enberg, Peter Zijlstra, Przemek Kitszel, Randy Dunlap,
Roman Gushchin, Sasha Levin, Tony Ambardar, Vegard Nossum,
Vlastimil Babka, Yafang Shao
> Finds and converts sized kmalloc-family of allocations into the
> typed kmalloc_obj-family of allocations.
See also:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/process/submitting-patches.rst?h=v6.18-rc7#n94
…
> +++ b/scripts/coccinelle/api/kmalloc_objs.cocci
> @@ -0,0 +1,168 @@
…
> +// Comments:
…
* Please omit such an empty information line.
* Would a field like “Keywords” become helpful?
> +virtual patch
Will additional operation modes become relevant after clarification of implementation details?
…
> +def alloc_array(name):
> + func = "FAILED_RENAME"
> + if name == "kmalloc_array":
> + func = "kmalloc_objs"
…
* I suggest to avoid duplicate variable assignments.
* How do you think about to collaborate with the Python data structure “dictionary”?
…
> +type TYPE;
> +TYPE *P;
> +TYPE INST;
> +expression VAR;
> +expression GFP;
…
Such repetition of SmPL key words can eventually be also avoided.
Regards,
Markus
* Re: [PATCH v5 2/4] slab: Introduce kmalloc_obj() and family
2025-11-22 19:53 ` Linus Torvalds
2025-11-22 20:54 ` Linus Torvalds
@ 2025-11-24 20:38 ` Kees Cook
2025-11-24 21:12 ` Matthew Wilcox
2025-11-24 21:35 ` Linus Torvalds
1 sibling, 2 replies; 20+ messages in thread
From: Kees Cook @ 2025-11-24 20:38 UTC (permalink / raw)
To: Linus Torvalds
Cc: Vlastimil Babka, Christoph Lameter, Pekka Enberg, David Rientjes,
Joonsoo Kim, Andrew Morton, Roman Gushchin, Hyeonggon Yoo,
Gustavo A . R . Silva, Bill Wendling, Justin Stitt, Jann Horn,
Przemek Kitszel, Marco Elver, Greg Kroah-Hartman, Sasha Levin,
linux-mm, Randy Dunlap, Miguel Ojeda, Matthew Wilcox,
Vegard Nossum, Harry Yoo, Nathan Chancellor, Peter Zijlstra,
Nick Desaulniers, Jonathan Corbet, Jakub Kicinski, Yafang Shao,
Tony Ambardar, Alexander Lobakin, Jan Hendrik Farr,
Alexander Potapenko, linux-kernel, linux-hardening, linux-doc,
llvm
I extracted -- from your kind and loving feedback :) -- a number of
specific concerns you seem to have about this proposal:
- You don't like the additional ..._sz set of helpers.
- Alignment isn't taken into account.
- You found the ...set_flex_counter() usage unreadable.
To me, it seems like the primary issue with this patch is that it
probably needs to be at least 2 patches if not more, where we can
iterate on each aspect.
The area of _agreement_, I think, is that type-based allocation is
a good idea. You even recently used it as an example in an unrelated
thread[1] discussing variable declarations in the middle of functions,
and used (a form of) this API proposal, which I whole-heartedly agree
with, i.e. that once we have type-based allocators, we can do things like:
__auto_type var = alloc_obj(struct whatever, gfp);
So, taking each issue in turn...
On Sat, Nov 22, 2025 at 11:53:33AM -0800, Linus Torvalds wrote:
> In particular, I intensely dislike that horrendous 'SIZE' parameter to
> those helper macros
> [...]
> The argument for that horror is also just silly:
>
> On Fri, 21 Nov 2025 at 17:43, Kees Cook <kees@kernel.org> wrote:
> >
> > These each return the newly allocated pointer to the type (which may be
> > NULL on failure). For cases where the total size of the allocation is
> > needed, the kmalloc_obj_sz(), kmalloc_objs_sz(), and kmalloc_flex_sz()
> > family of macros can be used. For example:
> >
> > size = struct_size(ptr, flex_member, count);
> > ptr = kmalloc(size, gfp);
> >
> > becomes:
> >
> > ptr = kmalloc_flex_sz(*ptr, flex_member, count, gfp, &size);
>
> That thing is ACTIVELY WORSE than the code it replaces.
>
> One of them makes sense and is legible to a normal human.
>
> The other does not.
>
> The alleged advantage is apparently that you can do it on one line,
> but when that one line is just horrible garbage, that is not an
> advantage at all.
>
> And the impact of that crazy SIZE on the macro expansions makes the
> whole thing entirely illegible.
>
> I will not merge anything this broken.
>
> The whole "limit to pre-defined size" argument is also just crazy,
> because now the SIZE parameter suddenly gets a second meaning. EVEN
> WORSE.
(Size calculation)
The benefit is not that it is done in one line, but rather that the size
calculation (which there is no exception handling for) gets built into
the allocation so that wrapping and truncation get communicated back
to the caller in the only way we have available: returning NULL from
the allocation.
I'm not sure what you mean by "limit to pre-defined size". There's no
such design in those helpers, except from the perspective of "detect
and refuse to truncate overflows into too-small storage". Is that what
you meant?
The persistent problem we have over and over in Linux is the lack of
feedback from overflowed arithmetic. This is being worked on[2] at the
same time, but I'm trying to find a way that we can get at "normal"
error handling, since just exploding if an overflow happens isn't a very
friendly way to deal with such conditions, but C doesn't give us much to
work with. If it's _part_ of the allocation, though, we have a natural
way to say "but you can't store 1024 into a u8".
For code like:
u8 size;
...
size = struct_size(ptr, flex_member, count);
ptr = kmalloc(size, gfp);
While struct_size() is designed to deal with overflows beyond SIZE_MAX,
it can't do anything about truncation of its return value since it has
no visibility into the lvalue type. So this code pattern happily
truncates, allocates too little memory, and then usually does stuff like
runs a for-loop based on "count" instead of "size" and walks right off
the end of the heap allocation, clobbering whatever follows it.
If we have a clean way to build the size into the allocation, then we
can report back a NULL allocation when something goes wrong with the
calculation or the storage of the calculation.
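With the proposed helper, the same broken pattern would look like (sketch,
reusing the u8 example above):
    u8 size;
    ...
    ptr = kmalloc_flex_sz(*ptr, flex_member, count, gfp, &size);
    /* NULL (and size == 0) if struct_size() does not fit in a u8 */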
Now, an alternative could be to keep the allocation and the size reporting
separate with some kind of __must_check "give me the size of this thing"
function, but I don't really like this because it feels much less
ergonomic to me:
u8 size;
...
__auto_type ptr = kmalloc_flex(struct whatever, counter, count, gfp);
if (flex_size(ptr, counter, &size)) {
kfree(ptr);
return -EINVAL;
}
> [...]
> Because once you give that "alloc_obj()" an actual type, it should
> take the alignment of the type into account too.
(Alignment)
Sure, but that's the whole point of trying to switch to type-based
allocation: so that we CAN get at the alignment. That would be a "next
step" approach in my mind, since we could bucketize allocations by type
(or alignment), instead of by size. But that's more of a follow-on, in
my opinion. I'd like to get some agreement on the exposed interface for
this API first, and then move to enhancing the internals to take
advantage of the new information.
> I also think this part:
>
> + typeof(VAR) *__obj_ptr = NULL; \
> + if (!WARN_ON_ONCE(!__can_set_flex_counter(__obj_ptr->FAM, __count)) && \
>
> absolutely needs to die. You just set __obj_ptr to NULL, and then you
> use __obj_ptr->FAM. Now, it so happens that __can_set_flex_counter()
> only cares about the *type*, but dammit, this kind of code sequence is
> simply not acceptable, and it needs to make that *explicit* by using
> sane syntax like perhaps just spelling that out, using VAR, not that
> NULL value.
>
> IOW. making it use something like "typeof(VAR.FAM)" might work. Not
> that crazy garbage.
(set_flex_counter)
I get your objection. We have other places where we do "((TYPE *)NULL)->member"
explicitly to get at types, but I see what you mean that is feels very
wrong to see the "->" after setting something to NULL in that it's
separated from the NULL-ness.
One of the reasons for using __obj_ptr was that "VAR" may be a type
and not a variable, so "VAR.FAM" isn't possible. Doing it the following
way seemed uglier to me than using __obj_ptr->FAM:
if (!WARN_ON_ONCE(!__can_set_flex_counter(((typeof(VAR) *)NULL)->FAM, __count)) &&
since we literally already have "((typeof(VAR) *)NULL)" with __obj_ptr.
But perhaps much of this could be collapsed into the
__can_set_flex_counter() helper instead.
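Maybe something like this (entirely untested sketch), so that at least the
NULL-pointer syntax stays contained in overflow.h:
    /* Hypothetical variant taking the enclosing type explicitly. */
    #define __can_set_flex_counter_t(TYPE, FAM, COUNT) \
        (!overflows_type(COUNT, typeof_flex_counter(((TYPE *)NULL)->FAM)))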
What do you think?
-Kees
[1] https://lore.kernel.org/all/CAHk-=wiCOTW5UftUrAnvJkr6769D29tF7Of79gUjdQHS_TkF5A@mail.gmail.com/
[2] https://github.com/llvm/llvm-project/pull/148914
--
Kees Cook
* Re: [PATCH v5 2/4] slab: Introduce kmalloc_obj() and family
2025-11-24 20:38 ` Kees Cook
@ 2025-11-24 21:12 ` Matthew Wilcox
2025-11-24 21:20 ` Kees Cook
2025-11-24 21:35 ` Linus Torvalds
1 sibling, 1 reply; 20+ messages in thread
From: Matthew Wilcox @ 2025-11-24 21:12 UTC (permalink / raw)
To: Kees Cook
Cc: Linus Torvalds, Vlastimil Babka, Christoph Lameter, Pekka Enberg,
David Rientjes, Joonsoo Kim, Andrew Morton, Roman Gushchin,
Hyeonggon Yoo, Gustavo A . R . Silva, Bill Wendling, Justin Stitt,
Jann Horn, Przemek Kitszel, Marco Elver, Greg Kroah-Hartman,
Sasha Levin, linux-mm, Randy Dunlap, Miguel Ojeda, Vegard Nossum,
Harry Yoo, Nathan Chancellor, Peter Zijlstra, Nick Desaulniers,
Jonathan Corbet, Jakub Kicinski, Yafang Shao, Tony Ambardar,
Alexander Lobakin, Jan Hendrik Farr, Alexander Potapenko,
linux-kernel, linux-hardening, linux-doc, llvm
On Mon, Nov 24, 2025 at 12:38:57PM -0800, Kees Cook wrote:
> For code like:
>
> u8 size;
> ...
> size = struct_size(ptr, flex_member, count);
> ptr = kmalloc(size, gfp);
>
> While struct_size() is designed to deal with overflows beyond SIZE_MAX,
> it can't do anything about truncation of its return value since it has
> no visibility into the lvalue type. So this code pattern happily
> truncates, allocates too little memory, and then usually does stuff like
> runs a for-loop based on "count" instead of "size" and walks right off
> the end of the heap allocation, clobbering whatever follows it.
Have we investigated a compiler warning like
-Wimplicit-arithmetic-truncation that would complain about this kind of
thing and could be shut up by an explicit cast:
size = (u8)struct_size(ptr, flex_member, count);
or arithmetic that can be proven to not overflow:
size = struct_size(ptr, flex_member, count) & 0xff;
Maybe such a warning already exists and it's just too noisy to even
start thinking about turning it on?
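To make the failure mode being discussed concrete, a small userspace sketch
(hypothetical struct and counts, not kernel code) of the silent truncation
that struct_size() itself can never catch, because it never sees the
lvalue's type:

#include <stdint.h>
#include <stdio.h>

struct whatever {
	uint32_t count;
	uint32_t data[];	/* flexible array member */
};

int main(void)
{
	uint8_t size;		/* too-small result variable chosen by the caller */
	size_t full = sizeof(struct whatever) + 100 * sizeof(uint32_t);

	size = full;		/* implicit conversion silently drops the high bits */
	printf("wanted %zu bytes, would allocate only %u\n", full, (unsigned)size);
	return 0;
}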
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v5 2/4] slab: Introduce kmalloc_obj() and family
2025-11-24 21:12 ` Matthew Wilcox
@ 2025-11-24 21:20 ` Kees Cook
2025-11-24 21:33 ` Matthew Wilcox
2025-11-24 21:44 ` Matthew Wilcox
0 siblings, 2 replies; 20+ messages in thread
From: Kees Cook @ 2025-11-24 21:20 UTC (permalink / raw)
To: Matthew Wilcox
Cc: Linus Torvalds, Vlastimil Babka, Christoph Lameter, Pekka Enberg,
David Rientjes, Joonsoo Kim, Andrew Morton, Roman Gushchin,
Hyeonggon Yoo, Gustavo A . R . Silva, Bill Wendling, Justin Stitt,
Jann Horn, Przemek Kitszel, Marco Elver, Greg Kroah-Hartman,
Sasha Levin, linux-mm, Randy Dunlap, Miguel Ojeda, Vegard Nossum,
Harry Yoo, Nathan Chancellor, Peter Zijlstra, Nick Desaulniers,
Jonathan Corbet, Jakub Kicinski, Yafang Shao, Tony Ambardar,
Alexander Lobakin, Jan Hendrik Farr, Alexander Potapenko,
linux-kernel, linux-hardening, linux-doc, llvm
On Mon, Nov 24, 2025 at 09:12:14PM +0000, Matthew Wilcox wrote:
> On Mon, Nov 24, 2025 at 12:38:57PM -0800, Kees Cook wrote:
> > For code like:
> >
> > u8 size;
> > ...
> > size = struct_size(ptr, flex_member, count);
> > ptr = kmalloc(size, gfp);
> >
> > While struct_size() is designed to deal with overflows beyond SIZE_MAX,
> > it can't do anything about truncation of its return value since it has
> > no visibility into the lvalue type. So this code pattern happily
> > truncates, allocates too little memory, and then usually does stuff like
> > runs a for-loop based on "count" instead of "size" and walks right off
> > the end of the heap allocation, clobbering whatever follows it.
>
> Have we investigated a compiler warning like
> -Wimplicit-arithmetic-truncation that would complain about this kind of
> thing and could be shut up by an explicit cast:
>
> size = (u8)struct_size(ptr, flex_member, count);
>
> or arithmetic that can be proven to not overflow:
> size = struct_size(ptr, flex_member, count) & 0xff;
>
> Maybe such a warning already exists and it's just too noisy to even
> start thinking about turning it on?
Yes, -Wconversion (W=3) is mind-blowingly noisy, unfortunately.
--
Kees Cook
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v5 2/4] slab: Introduce kmalloc_obj() and family
2025-11-24 21:20 ` Kees Cook
@ 2025-11-24 21:33 ` Matthew Wilcox
2025-11-24 21:44 ` Matthew Wilcox
1 sibling, 0 replies; 20+ messages in thread
From: Matthew Wilcox @ 2025-11-24 21:33 UTC (permalink / raw)
To: Kees Cook
Cc: Linus Torvalds, Vlastimil Babka, Christoph Lameter, Pekka Enberg,
David Rientjes, Joonsoo Kim, Andrew Morton, Roman Gushchin,
Hyeonggon Yoo, Gustavo A . R . Silva, Bill Wendling, Justin Stitt,
Jann Horn, Przemek Kitszel, Marco Elver, Greg Kroah-Hartman,
Sasha Levin, linux-mm, Randy Dunlap, Miguel Ojeda, Vegard Nossum,
Harry Yoo, Nathan Chancellor, Peter Zijlstra, Nick Desaulniers,
Jonathan Corbet, Jakub Kicinski, Yafang Shao, Tony Ambardar,
Alexander Lobakin, Jan Hendrik Farr, Alexander Potapenko,
linux-kernel, linux-hardening, linux-doc, llvm
On Mon, Nov 24, 2025 at 01:20:21PM -0800, Kees Cook wrote:
> > Maybe such a warning already exists and it's just too noisy to even
> > start thinking about turning it on?
>
> Yes, -Wconversion (W=3) is mind-blowingly noisy, unfortunately.
It looks like GCC isn't smart enough. The first warning I saw was legit
and easy to fix. The second one is bogus:
include/linux/err.h: In function ‘PTR_ERR_OR_ZERO’:
include/linux/err.h:120:24: error: conversion from ‘long int’ to ‘int’ may change value [-Werror=conversion]
120 | return PTR_ERR(ptr);
But GCC can prove that this isn't true; it just chooses not to:
#define IS_ERR_VALUE(x) unlikely((unsigned long)(void *)(x) >= (unsigned long)-MAX_ERRNO)
static inline bool __must_check IS_ERR(__force const void *ptr)
{
	return IS_ERR_VALUE((unsigned long)ptr);
}

static inline int __must_check PTR_ERR_OR_ZERO(__force const void *ptr)
{
	if (IS_ERR(ptr))
		return PTR_ERR(ptr);
So GCC knows in this path that 'ptr' is in the range [-4095..-1] and
the conversion from long to int will not change the value.
I imagine that fixing this is not high on the GCC developer priority
list, but if we filed a bug that might change?
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v5 2/4] slab: Introduce kmalloc_obj() and family
2025-11-24 20:38 ` Kees Cook
2025-11-24 21:12 ` Matthew Wilcox
@ 2025-11-24 21:35 ` Linus Torvalds
2025-11-25 0:29 ` Kees Cook
1 sibling, 1 reply; 20+ messages in thread
From: Linus Torvalds @ 2025-11-24 21:35 UTC (permalink / raw)
To: Kees Cook
Cc: Vlastimil Babka, Christoph Lameter, Pekka Enberg, David Rientjes,
Joonsoo Kim, Andrew Morton, Roman Gushchin, Hyeonggon Yoo,
Gustavo A . R . Silva, Bill Wendling, Justin Stitt, Jann Horn,
Przemek Kitszel, Marco Elver, Greg Kroah-Hartman, Sasha Levin,
linux-mm, Randy Dunlap, Miguel Ojeda, Matthew Wilcox,
Vegard Nossum, Harry Yoo, Nathan Chancellor, Peter Zijlstra,
Nick Desaulniers, Jonathan Corbet, Jakub Kicinski, Yafang Shao,
Tony Ambardar, Alexander Lobakin, Jan Hendrik Farr,
Alexander Potapenko, linux-kernel, linux-hardening, linux-doc,
llvm
On Mon, 24 Nov 2025 at 12:39, Kees Cook <kees@kernel.org> wrote:
>
> I'm not sure what you mean by "limit to pre-defined size". There's no
> such design in those helpers, except from the perspective of "detect
> and refuse to truncate overflows into too-small storage". Is that what
> you meant?
I meant that odd combination of checking both for minimal size and
then assigning to it, but upon re-reading it, I realize that the
"check for minimal size" was actually checking the size of the result
variable.
Those macros are illegible. And 99% of all users DO NOT WANT ANY OF
THAT COMPLEXITY.
Yes, the wrapper macros then pass in NULL, which then - using yet more
complexity - turns into a dummy thing.
Basically, if *I* find those macros unreadable - and I'm actually
fairly good at parsing those things - then they are way too
complicated.
And they aren't even complicated for a good reason. My alternate ones
did *more*, and did it with less code and less confusion.
And you added the complication to make the users less legible.
So no. We're not doing *any* of that. You make it simple and targeted
to the *common* case, or you don't do this at all. Because that
over-designed mess that actually made some users *less* readable,
but one line shorter, was bad.
Linus
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v5 2/4] slab: Introduce kmalloc_obj() and family
2025-11-24 21:20 ` Kees Cook
2025-11-24 21:33 ` Matthew Wilcox
@ 2025-11-24 21:44 ` Matthew Wilcox
2025-11-24 21:50 ` Kees Cook
2025-11-24 23:30 ` Linus Torvalds
1 sibling, 2 replies; 20+ messages in thread
From: Matthew Wilcox @ 2025-11-24 21:44 UTC (permalink / raw)
To: Kees Cook
Cc: Linus Torvalds, Vlastimil Babka, Christoph Lameter, Pekka Enberg,
David Rientjes, Joonsoo Kim, Andrew Morton, Roman Gushchin,
Hyeonggon Yoo, Gustavo A . R . Silva, Bill Wendling, Justin Stitt,
Jann Horn, Przemek Kitszel, Marco Elver, Greg Kroah-Hartman,
Sasha Levin, linux-mm, Randy Dunlap, Miguel Ojeda, Vegard Nossum,
Harry Yoo, Nathan Chancellor, Peter Zijlstra, Nick Desaulniers,
Jonathan Corbet, Jakub Kicinski, Yafang Shao, Tony Ambardar,
Alexander Lobakin, Jan Hendrik Farr, Alexander Potapenko,
linux-kernel, linux-hardening, linux-doc, llvm
On Mon, Nov 24, 2025 at 01:20:21PM -0800, Kees Cook wrote:
> Yes, -Wconversion (W=3) is mind-blowingly noisy, unfortunately.
This third one is interesting.
include/linux/jump_label.h:126:44: error: conversion to ‘long unsigned int’ from ‘s32’ {aka ‘int’} may change the sign of the result [-Werror=sign-conversion]
126 | return (unsigned long)&entry->code + entry->code;
static inline unsigned long jump_entry_code(const struct jump_entry *entry)
{
	return (unsigned long)&entry->code + entry->code;
}
The warning is ... not the best phrased, but in terms of divining the
programmer's intent, I genuinely don't know if this code is supposed
to zero-extend or sign-extend the s32 to unsigned long. I know what it
*does*, but I don't know if it was *supposed to do that*. So I would be
in favour of enabling this warning ... if we have a small army of people
on tap to get the kernel to build. There's 374 lines of errors to fix
from the header files included by scripts/mod/devicetable-offsets.s alone.
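As an aside, the two readings the question is about differ only by one
cast; a tiny sketch (hypothetical values, and it assumes a 64-bit
unsigned long):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	int32_t off = -8;	/* a negative self-relative offset */
	unsigned long base = 0x1000;

	unsigned long sign_ext = base + off;		/* sign-extend: 0xff8 */
	unsigned long zero_ext = base + (uint32_t)off;	/* zero-extend: 0x1000 + 0xfffffff8 */

	printf("sign-extended: %#lx, zero-extended: %#lx\n", sign_ext, zero_ext);
	return 0;
}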
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v5 2/4] slab: Introduce kmalloc_obj() and family
2025-11-24 21:44 ` Matthew Wilcox
@ 2025-11-24 21:50 ` Kees Cook
2025-11-24 23:30 ` Linus Torvalds
1 sibling, 0 replies; 20+ messages in thread
From: Kees Cook @ 2025-11-24 21:50 UTC (permalink / raw)
To: Matthew Wilcox
Cc: Linus Torvalds, Vlastimil Babka, Christoph Lameter, Pekka Enberg,
David Rientjes, Joonsoo Kim, Andrew Morton, Roman Gushchin,
Hyeonggon Yoo, Gustavo A . R . Silva, Bill Wendling, Justin Stitt,
Jann Horn, Przemek Kitszel, Marco Elver, Greg Kroah-Hartman,
Sasha Levin, linux-mm, Randy Dunlap, Miguel Ojeda, Vegard Nossum,
Harry Yoo, Nathan Chancellor, Peter Zijlstra, Nick Desaulniers,
Jonathan Corbet, Jakub Kicinski, Yafang Shao, Tony Ambardar,
Alexander Lobakin, Jan Hendrik Farr, Alexander Potapenko,
linux-kernel, linux-hardening, linux-doc, llvm
On Mon, Nov 24, 2025 at 09:44:16PM +0000, Matthew Wilcox wrote:
> On Mon, Nov 24, 2025 at 01:20:21PM -0800, Kees Cook wrote:
> > Yes, -Wconversion (W=3) is mind-blowingly noisy, unfortunately.
>
> This third one is interesting.
>
> include/linux/jump_label.h:126:44: error: conversion to ‘long unsigned int’ from ‘s32’ {aka ‘int’} may change the sign of the result [-Werror=sign-conversion]
> 126 | return (unsigned long)&entry->code + entry->code;
>
> static inline unsigned long jump_entry_code(const struct jump_entry *entry)
> {
> return (unsigned long)&entry->code + entry->code;
> }
>
> The warning is ... not the best phrased, but in terms of divining the
> programmer's intent, I genuinely don't know if this code is supposed
> to zero-extend or sign-extend the s32 to unsigned long. I know what it
> *does*, but I don't know if it was *supposed to do that*.
This is my core frustration with C: we have SO many things where we have
ambiguous intent. Yes, C may do exactly 1 thing with a given construct,
but it isn't clear that the author's intent matches what actually
happens.
> So I would be
> in favour of enabling this warning ... if we have a small army of people
> on tap to get the kernel to build. There's 374 lines of errors to fix
> from the header files included by scripts/mod/devicetable-offsets.s alone.
I'm for it, but that is a LONG road. I have so many other hills to die
on first. ;)
--
Kees Cook
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v5 2/4] slab: Introduce kmalloc_obj() and family
2025-11-24 21:44 ` Matthew Wilcox
2025-11-24 21:50 ` Kees Cook
@ 2025-11-24 23:30 ` Linus Torvalds
2025-11-25 1:09 ` Matthew Wilcox
1 sibling, 1 reply; 20+ messages in thread
From: Linus Torvalds @ 2025-11-24 23:30 UTC (permalink / raw)
To: Matthew Wilcox
Cc: Kees Cook, Vlastimil Babka, Christoph Lameter, Pekka Enberg,
David Rientjes, Joonsoo Kim, Andrew Morton, Roman Gushchin,
Hyeonggon Yoo, Gustavo A . R . Silva, Bill Wendling, Justin Stitt,
Jann Horn, Przemek Kitszel, Marco Elver, Greg Kroah-Hartman,
Sasha Levin, linux-mm, Randy Dunlap, Miguel Ojeda, Vegard Nossum,
Harry Yoo, Nathan Chancellor, Peter Zijlstra, Nick Desaulniers,
Jonathan Corbet, Jakub Kicinski, Yafang Shao, Tony Ambardar,
Alexander Lobakin, Jan Hendrik Farr, Alexander Potapenko,
linux-kernel, linux-hardening, linux-doc, llvm
On Mon, 24 Nov 2025 at 13:44, Matthew Wilcox <willy@infradead.org> wrote:
>
> This third one is interesting.
Why?
> include/linux/jump_label.h:126:44: error: conversion to ‘long unsigned int’ from ‘s32’ {aka ‘int’} may change the sign of the result [-Werror=sign-conversion]
> 126 | return (unsigned long)&entry->code + entry->code;
>
> static inline unsigned long jump_entry_code(const struct jump_entry *entry)
> {
> return (unsigned long)&entry->code + entry->code;
> }
I'm not seeing the confusion. 'entry->code' is a signed 32-bit entry,
and we're adding it to the address of the entry itself (well, the
address of the 'code' member of the entry, but it's the first thing in
that struct, so same thing in this case).
The next function in that file (which uses "target", which is *not*
the first entry in the struct) then makes it clear that we actually do
all these relative offsets relative to the address of the relative
offset itself, not the base address of the struct.
But that's actually just a random implementation decision.
That's all a very standard thing in assembly programming, which this is
all about. 'entry' is a signed offset from its own address.
And yes, the exact thing you are relative to may differ. When looking
at instruction sets, sometimes the relative address is relative to the
beginning of the current instruction (I think m68k did that), and
sometimes it's relative to the *end* of the instruction (I think x86
does that). It could even be relative to the actual byte *inside* the
instruction, although I can't think of an example of that.
Sometimes it's relative with a shift (basically all fixed-size
instruction CPU's do that since instructions are all mutually
aligned).
So that whole "exactly what is it relative to, what are the sizes
involved, and are there maybe shifts" etc is just a design choice.
The whole "offset is relative to its own address" that this code uses
is probably the simplest form it can have.
> The warning is ... not the best phrased, but in terms of divining the
> programmer's intent, I genuinely don't know if this code is supposed
> to zero-extend or sign-extend the s32 to unsigned long.
What?
A signed value gets sign-extended when cast to a larger type. That's
how all of this always works. Casting a signed value to 'unsigned
long' will set the high bits in the result.
That's pretty much the *definition* of a signed value. It gets
sign-extended when used, and then obviously it becomes a large
unsigned value, but this is how two's complement addition
fundamentally works.
This is bog-standard and happens all over the place. We have things like
unsigned long xyz = -1;
in various places; this is not some kind of unclear area of the standard.
And for all the same reasons, the programmer's intent is obvious too.
'code' is s32 and is signed for a reason - so that you have a +-2GB
relative offset. And this is not some kind of unusual pattern when
we're talking about relative branch targets.
Now, if you don't know about things like relative branch targets in
low-level assembly, maybe this is code you have to look over several
times, but this code is literally ABOUT re-writing branches in
assembly language, and this kind of pattern where you use relative
offsets is traditional.
And yes, the offsets are literally smaller than the address space, and
the signed offsets get sign-extended as part of this all: that's very
traditional too.
Basically no 64-bit CPU has 64-bit branch offsets (and few 32-bit
CPU's have full 32-bit branch offsets - x86 happens to do it, but most
RISC CPU's tend to have branch offsets that are in the ~20 bit range
and if you have big relative branches you need to do those with a
separate constant section or something like that).
So honestly, what's the problem with this code?
The warning makes no sense, and is garbage. Are we not allowed to add
signed integers to unsigned 64-bit values now, because that addition
involves that cast of a signed 32-bit entry to an unsigned 64-bit one?
There is NO WAY that warning is valid, it's not *ever* something we
should enable, and the fact that you people are discussing it as such
is just crazy.
That code would not be improved at all by adding another cast (to
first cast that s32 to 'long', in order to then add it to 'unsigned
long').
Imagine how many other places you add integers to 'unsigned long'.
EVERY SINGLE ONE of those places involves sign-extending the integer
and then doing arithmetic in unsigned.
Linus
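For readers less familiar with the idiom, a minimal userspace sketch of the
self-relative-offset pattern described above (illustrative only; the
kernel's jump_label code differs in detail): a negative s32 sign-extends
and walks the address backwards, which is exactly the intent.

#include <stdint.h>
#include <stdio.h>

struct rel_entry {
	int32_t code;	/* target = address of this field + its (signed) value */
};

static unsigned long entry_target(const struct rel_entry *entry)
{
	return (unsigned long)&entry->code + entry->code;
}

int main(void)
{
	struct rel_entry e = { .code = -16 };	/* target lies 16 bytes before &e.code */

	printf("&e.code = %p, target = %#lx\n",
	       (void *)&e.code, entry_target(&e));
	return 0;
}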
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v5 2/4] slab: Introduce kmalloc_obj() and family
2025-11-24 21:35 ` Linus Torvalds
@ 2025-11-25 0:29 ` Kees Cook
2025-11-25 1:25 ` Linus Torvalds
0 siblings, 1 reply; 20+ messages in thread
From: Kees Cook @ 2025-11-25 0:29 UTC (permalink / raw)
To: Linus Torvalds
Cc: Vlastimil Babka, Christoph Lameter, Pekka Enberg, David Rientjes,
Joonsoo Kim, Andrew Morton, Roman Gushchin, Hyeonggon Yoo,
Gustavo A . R . Silva, Bill Wendling, Justin Stitt, Jann Horn,
Przemek Kitszel, Marco Elver, Greg Kroah-Hartman, Sasha Levin,
linux-mm, Randy Dunlap, Miguel Ojeda, Matthew Wilcox,
Vegard Nossum, Harry Yoo, Nathan Chancellor, Peter Zijlstra,
Nick Desaulniers, Jonathan Corbet, Jakub Kicinski, Yafang Shao,
Tony Ambardar, Alexander Lobakin, Jan Hendrik Farr,
Alexander Potapenko, linux-kernel, linux-hardening, linux-doc,
llvm
On Mon, Nov 24, 2025 at 01:35:12PM -0800, Linus Torvalds wrote:
> Those macros are illegible. And 99% of all users DO NOT WANT ANY OF
> THAT COMPLEXITY.
Okay, I think you're saying "I don't want the common helpers to include
the infrastructure for supporting the ..._sz() variants"?
> So no. We're not doing *any* of that. You make it simple and targeted
> to the *common* case, or you don't do this at all. Because that
> over-designed mess that actually made some users *less* readable,
> but one line shorter, was bad.
Fair enough. Looking at the treewide change I prepared[1], it's less
than 1% of those mechanical replacements:
$ git show f79ee96ad6a3 | grep -E 'k.*alloc_(objs?|flex)\(' | wc -l
17473
$ git show f79ee96ad6a3 | grep '_sz(' | wc -l
114
I'll respin without the _sz variants, and try to improve the flex stuff.
-Kees
[1] https://git.kernel.org/pub/scm/linux/kernel/git/kees/linux.git/commit/?h=dev/v6.18-rc6/alloc_obj/v5
--
Kees Cook
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v5 2/4] slab: Introduce kmalloc_obj() and family
2025-11-24 23:30 ` Linus Torvalds
@ 2025-11-25 1:09 ` Matthew Wilcox
0 siblings, 0 replies; 20+ messages in thread
From: Matthew Wilcox @ 2025-11-25 1:09 UTC (permalink / raw)
To: Linus Torvalds
Cc: Kees Cook, Vlastimil Babka, Christoph Lameter, Pekka Enberg,
David Rientjes, Joonsoo Kim, Andrew Morton, Roman Gushchin,
Hyeonggon Yoo, Gustavo A . R . Silva, Bill Wendling, Justin Stitt,
Jann Horn, Przemek Kitszel, Marco Elver, Greg Kroah-Hartman,
Sasha Levin, linux-mm, Randy Dunlap, Miguel Ojeda, Vegard Nossum,
Harry Yoo, Nathan Chancellor, Peter Zijlstra, Nick Desaulniers,
Jonathan Corbet, Jakub Kicinski, Yafang Shao, Tony Ambardar,
Alexander Lobakin, Jan Hendrik Farr, Alexander Potapenko,
linux-kernel, linux-hardening, linux-doc, llvm
On Mon, Nov 24, 2025 at 03:30:19PM -0800, Linus Torvalds wrote:
> That's all a very standard thing in assembly programming, which this is
> all about. 'entry' is a signed offset from its own address.
I used to be an assembly programmer ... 28 years ago. I've mostly put
that world out of my mind (and being able to write a 20,000 instruction
ARM32 program entirely in assembly is just not that useful an
accomplishment to put on my CV). Anyway, this isn't the point ...
> > The warning is ... not the best phrased, but in terms of divining the
> > programmer's intent, I genuinely don't know if this code is supposed
> > to zero-extend or sign-extend the s32 to unsigned long.
>
> What?
>
> A signed value gets sign-extended when cast to a larger type. That's
> how all of this always works. Casting a signed value to 'unsigned
> long' will set the high bits in the result.
>
> That's pretty much the *definition* of a signed value. It gets
> sign-extended when used, and then obviously it becomes a large
> unsigned value, but this is how two's complement addition
> fundamentally works.
Yes, agreed.
> So honestly, what's the problem with this code?
>
> The warning makes no sense, and is garbage. Are we not allowed to add
> signed integers to unsigned 64-bit values now, because that addition
> involves that cast of a signed 32-bit entry to an unsigned 64-bit one?
>
> There is NO WAY that warning is valid, it's; not *ever* something we
> should enable, and the fact that you people are discussing it as such
> is just crazy.
>
> That code would not be improved at all by adding another cast (to
> first cast that s32 to 'long', in order to then add it to 'unsigned
> long').
>
> Imagine how many other places you add integers to 'unsigned long'.
> EVERY SINGLE ONE of those places involves sign-extending the integer
> and then doing arithmetic in unsigned.
I have bad news. Rust requires it.
fn add(base: u64, off: i32) -> u64 {
    base + off
}
error[E0308]: mismatched types
 --> add.rs:2:12
  |
2 |     base + off
  |            ^^^ expected `u64`, found `i32`

error[E0277]: cannot add `i32` to `u64`
 --> add.rs:2:10
  |
2 |     base + off
  |          ^ no implementation for `u64 + i32`
  |
  = help: the trait `Add<i32>` is not implemented for `u64`
  = help: the following other types implement trait `Add<Rhs>`:
            <u64 as Add>
            <u64 as Add<&u64>>
            <&'a u64 as Add<u64>>
            <&u64 as Add<&u64>>
so the Rust language people have clearly decided that this is too
complicated for your average programmer to figure out, and you need
explicit casts to make it work.
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v5 2/4] slab: Introduce kmalloc_obj() and family
2025-11-25 0:29 ` Kees Cook
@ 2025-11-25 1:25 ` Linus Torvalds
0 siblings, 0 replies; 20+ messages in thread
From: Linus Torvalds @ 2025-11-25 1:25 UTC (permalink / raw)
To: Kees Cook
Cc: Vlastimil Babka, Christoph Lameter, Pekka Enberg, David Rientjes,
Joonsoo Kim, Andrew Morton, Roman Gushchin, Hyeonggon Yoo,
Gustavo A . R . Silva, Bill Wendling, Justin Stitt, Jann Horn,
Przemek Kitszel, Marco Elver, Greg Kroah-Hartman, Sasha Levin,
linux-mm, Randy Dunlap, Miguel Ojeda, Matthew Wilcox,
Vegard Nossum, Harry Yoo, Nathan Chancellor, Peter Zijlstra,
Nick Desaulniers, Jonathan Corbet, Jakub Kicinski, Yafang Shao,
Tony Ambardar, Alexander Lobakin, Jan Hendrik Farr,
Alexander Potapenko, linux-kernel, linux-hardening, linux-doc,
llvm
On Mon, 24 Nov 2025 at 16:29, Kees Cook <kees@kernel.org> wrote:
>
> On Mon, Nov 24, 2025 at 01:35:12PM -0800, Linus Torvalds wrote:
> > Those macros are illegible. And 99% of all users DO NOT WANT ANY OF
> > THAT COMPLEXITY.
>
> Okay, I think you're saying "I don't want the common helpers to include
> the infrastructure for supporting the ..._sz() variants"?
I don't want the macros to expand the code more than necessary,
because that just makes everything worse (including very much my build
times - which have gotten worse over the years, but historically my
build environment got faster at an even higher rate - that is no
longer true).
I also want the macros to be somewhat understandable.
Yes, we have some macros that are works of art because they are *so*
obscure and are fundamentally hard to understand because they play
games with really really subtle language semantics.
And then I enjoy them and appreciate the insanity of them. I still
think our __is_constexpr() macro is truly a wonder, but we do have 40
lines of comment (!) for that one strange macro.
But this was not *that* kind of hard-to-understand macro. This was
just overly complicated.
> Fair enough. Looking at the treewide change I prepared[1], it's less
> than 1% of those mechanical replacements:
Yeah, I did some numbers on my simple version, and just the three
simple ones were the majority of cases by far.
I also believe that when we're doing more complicated size
calculations (ie the whole "struct_size()" patterns), then the
advantage of merging them with the allocation itself is questionable,
simply because it takes two different pieces and makes them one
*complicated* piece.
Doing
size = struct_size(ptr, flex_member, count);
ptr = kmalloc(size, gfp);
is basically two fairly straightforward things and easy to understand.
You can *scan* that code, and each piece is simple enough that it
makes intuitive sense.
No, 'struct_size()' isn't exactly a very intuitive thing, but written
that way it's also not very surprising or complicated to parse at a
glance.
In contrast,
ptr = kmalloc_flex_sz(*ptr, flex_member, count, gfp, &size);
does *not* make intuitive sense, I believe. Now you have five
different arguments, and it's actually somewhat hard to parse, and
there are odd semantics.
In contrast, the simple versions that take
ptr = kmalloc(sizeof(*ptr), gfp);
and turn it into
ptr = kmalloc_obj(*ptr, gfp);
actually become *simpler* to read and understand.
Yes, there are some other advantages to the combined version (ie that
whole thing where 'kmalloc_obj()' now returns a proper _type_ - type
safety is always good, and avoiding void pointers is a real thing),
but I do think that the major advantage is "simpler to read".
Because in the end, simple code that does what you intuitively expect
it to do, is good code, and hopefully less buggy.
Linus
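As a userspace illustration of why the simple typed form reads better (a
sketch of the idea only, not the proposed kernel macro): the pointer
expression supplies both the size and the return type, so they cannot
drift apart.

#include <stdlib.h>

/* Hypothetical userspace analogue of a typed allocator (GNU C typeof). */
#define alloc_obj(obj) \
	((typeof(obj) *)malloc(sizeof(obj)))

struct thing {
	int a;
	long b;
};

int main(void)
{
	struct thing *t = alloc_obj(*t);	/* replaces malloc(sizeof(*t)) */

	if (!t)
		return 1;
	t->a = 1;
	free(t);
	return 0;
}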
^ permalink raw reply [flat|nested] 20+ messages in thread
end of thread
Thread overview: 20+ messages
2025-11-22 1:42 [PATCH v5 0/4] slab: Introduce kmalloc_obj() and family Kees Cook
2025-11-22 1:42 ` [PATCH v5 1/4] compiler_types: Introduce __flex_counter() " Kees Cook
2025-11-22 1:42 ` [PATCH v5 2/4] slab: Introduce kmalloc_obj() " Kees Cook
2025-11-22 19:53 ` Linus Torvalds
2025-11-22 20:54 ` Linus Torvalds
2025-11-24 20:38 ` Kees Cook
2025-11-24 21:12 ` Matthew Wilcox
2025-11-24 21:20 ` Kees Cook
2025-11-24 21:33 ` Matthew Wilcox
2025-11-24 21:44 ` Matthew Wilcox
2025-11-24 21:50 ` Kees Cook
2025-11-24 23:30 ` Linus Torvalds
2025-11-25 1:09 ` Matthew Wilcox
2025-11-24 21:35 ` Linus Torvalds
2025-11-25 0:29 ` Kees Cook
2025-11-25 1:25 ` Linus Torvalds
2025-11-22 1:42 ` [PATCH v5 3/4] checkpatch: Suggest kmalloc_obj family for sizeof allocations Kees Cook
2025-11-22 4:51 ` Joe Perches
2025-11-22 1:43 ` [PATCH v5 4/4] coccinelle: Add kmalloc_objs conversion script Kees Cook
2025-11-24 12:50 ` [cocci] " Markus Elfring