* [PATCH v5 2/4] slab: Introduce kmalloc_obj() and family
2025-11-22 1:42 [PATCH v5 0/4] slab: Introduce kmalloc_obj() and family Kees Cook
2025-11-22 1:42 ` [PATCH v5 1/4] compiler_types: Introduce __flex_counter() " Kees Cook
@ 2025-11-22 1:42 ` Kees Cook
2025-11-22 19:53 ` Linus Torvalds
2025-11-22 1:42 ` [PATCH v5 3/4] checkpatch: Suggest kmalloc_obj family for sizeof allocations Kees Cook
2025-11-22 1:43 ` [PATCH v5 4/4] coccinelle: Add kmalloc_objs conversion script Kees Cook
3 siblings, 1 reply; 8+ messages in thread
From: Kees Cook @ 2025-11-22 1:42 UTC (permalink / raw)
To: Vlastimil Babka
Cc: Kees Cook, Christoph Lameter, Pekka Enberg, David Rientjes,
Joonsoo Kim, Andrew Morton, Roman Gushchin, Hyeonggon Yoo,
Gustavo A . R . Silva, Bill Wendling, Justin Stitt, Jann Horn,
Przemek Kitszel, Marco Elver, Linus Torvalds, Greg Kroah-Hartman,
Sasha Levin, linux-mm, Randy Dunlap, Miguel Ojeda, Matthew Wilcox,
Vegard Nossum, Harry Yoo, Nathan Chancellor, Peter Zijlstra,
Nick Desaulniers, Jonathan Corbet, Jakub Kicinski, Yafang Shao,
Tony Ambardar, Alexander Lobakin, Jan Hendrik Farr,
Alexander Potapenko, linux-kernel, linux-hardening, linux-doc,
llvm

Introduce type-aware kmalloc-family helpers to replace the common
idioms for single, array, and flexible object allocations:

    ptr = kmalloc(sizeof(*ptr), gfp);
    ptr = kmalloc(sizeof(struct some_obj_name), gfp);
    ptr = kzalloc(sizeof(*ptr), gfp);
    ptr = kmalloc_array(count, sizeof(*ptr), gfp);
    ptr = kcalloc(count, sizeof(*ptr), gfp);
    ptr = kmalloc(struct_size(ptr, flex_member, count), gfp);

These become, respectively:

    ptr = kmalloc_obj(*ptr, gfp);
    ptr = kmalloc_obj(*ptr, gfp);
    ptr = kzalloc_obj(*ptr, gfp);
    ptr = kmalloc_objs(*ptr, count, gfp);
    ptr = kzalloc_objs(*ptr, count, gfp);
    ptr = kmalloc_flex(*ptr, flex_member, count, gfp);

Beyond the other benefits outlined below, the primary ergonomic benefit
is that neither "sizeof" nor the type name is needed any more, and that
assignment types are enforced: the macros do not return "void *", but
rather a pointer to the type of their first argument. The type name
_can_ still be used, though, when the assignment is indirect (e.g. via
"return").

These each return the newly allocated pointer to the type (which may be
NULL on failure). For cases where the total size of the allocation is
needed, the kmalloc_obj_sz(), kmalloc_objs_sz(), and kmalloc_flex_sz()
family of macros can be used. For example:

    size = struct_size(ptr, flex_member, count);
    ptr = kmalloc(size, gfp);

becomes:

    ptr = kmalloc_flex_sz(*ptr, flex_member, count, gfp, &size);

With the *_sz() helpers, it becomes possible to bounds-check the final
size and make sure no arithmetic overflow has happened that exceeds the
storage size of the target size variable. Previously it was possible to
wrap an allocation size without noticing, thereby allocating too small
an object. (Most of Linux's exposure to that particular problem is via
newly written code, since the existing cases were already converted in
bulk[1], but we continue to have a steady stream of patches catching
additional cases[2] that would just go away with this API.)
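
As a minimal sketch of that check (the struct and variable names here
are invented for illustration), storing the total size into a u16 is
now validated against what a u16 can hold:

    u16 len;
    struct example_obj *p;

    /*
     * WARNs and returns NULL (with len set to 0) if the total
     * allocation size cannot be represented by a u16, instead of
     * silently truncating it.
     */
    p = kmalloc_objs_sz(*p, count, GFP_KERNEL, &len);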

Internal introspection of the allocated type now becomes possible,
opening the door to future alignment-aware choices by the allocator and
to type-sensitive hardening work. For example, __alignof(*ptr) could be
passed as an argument to the internal allocators so that
appropriate/efficient alignment choices can be made, or per-allocation
offset randomization within a bucket could be chosen in a way that does
not break alignment requirements.

For the flexible array helpers, the internal use of __flex_counter()
allows the counter member associated with a struct's flexible array
member to be set automatically when it has been annotated with
__counted_by(), avoiding any missed early size initializations while
__counted_by() annotations are added to the kernel. It also checks for
"too large" allocations based on the type size of the counter variable.
For example:

    if (count > type_max(ptr->flex_count))
        fail...;
    size = struct_size(ptr, flex_member, count);
    ptr = kmalloc(size, gfp);
    ptr->flex_count = count;

becomes (n.b. unchanged from the earlier example):

    ptr = kmalloc_flex_sz(*ptr, flex_member, count, gfp, &size);
    ptr->flex_count = count;

Note that manual initialization of the flexible array counter is still
required (at some point) after allocation, since not all compiler
versions support the __counted_by annotation yet. But doing it
internally makes sure the initialization cannot be missed when
__counted_by _is_ available, meaning the bounds checker will not trip
over code whose "late" counter assignment used to work before the
stricter bounds checking was enabled. For example:

    ptr = kmalloc_flex(*ptr, flex_member, count, gfp);
    fill(ptr->flex_member, count);
    ptr->flex_count = count;

This works correctly before adding a __counted_by annotation (since
nothing is checking ptr->flex_member accesses against ptr->flex_count).
After adding the annotation, the bounds sanitizer would trip during
fill() because ptr->flex_count wasn't set yet. But with kmalloc_flex()
setting ptr->flex_count internally at allocation time, the existing code
works without needing to move the ptr->flex_count assignment before the
call to fill(). (This has been a stumbling block for __counted_by
adoption.)
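
For reference, a minimal sketch of such an annotated structure (the
member names match the illustrative examples above; the struct name and
element type are made up):

    struct flex_example {
        int flex_count;
        unsigned long flex_member[] __counted_by(flex_count);
    };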

Replacing all existing simple code patterns found via Coccinelle[3]
shows what could be replaced immediately (also saving roughly 1000
lines):

    7863 files changed, 19639 insertions(+), 20692 deletions(-)

This would take us from 24085 k*alloc assignments to 7467:

    $ git grep ' = kv\?[mzcv]alloc\(\|_array\)(' | wc -l
    24085
    $ git reset --hard HEAD^
    HEAD is now at 8bccc91e6cdf treewide: kmalloc_obj conversion
    $ git grep ' = kv\?[mzcv]alloc\(\|_array\)(' | wc -l
    7467

This treewide change could be done at the end of the merge window just
before -rc1 is released (as is common for treewide changes). Handling
this API change in backports to -stable should be possible without much
hassle by backporting the __flex_counter() patch and this patch, and
taking individual conversions as needed.

The impact on my bootable testing image size (with the treewide patch
applied) is tiny. With both GCC 13 (no __counted_by support) and GCC 15
(with __counted_by), the images are actually very slightly smaller:

    $ size -G gcc-boot/vmlinux.gcc*
        text       data        bss      total  filename
    29975593   21527689   16601200   68104482  gcc-boot/vmlinux.gcc13-before
    29969263   21528663   16601112   68099038  gcc-boot/vmlinux.gcc13-after
    30555626   21291299   17086620   68933545  gcc-boot/vmlinux.gcc15-before
    30550144   21292039   17086540   68928723  gcc-boot/vmlinux.gcc15-after

Link: https://web.git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=b08fc5277aaa1d8ea15470d38bf36f19dfb0e125 [1]
Link: https://lore.kernel.org/all/?q=s%3Akcalloc+-s%3ARe%3A [2]
Link: https://github.com/kees/kernel-tools/blob/trunk/coccinelle/examples/kmalloc_objs.cocci [3]
Signed-off-by: Kees Cook <kees@kernel.org>
---
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: Gustavo A. R. Silva <gustavoars@kernel.org>
Cc: Bill Wendling <morbo@google.com>
Cc: Justin Stitt <justinstitt@google.com>
Cc: Jann Horn <jannh@google.com>
Cc: Przemek Kitszel <przemyslaw.kitszel@intel.com>
Cc: Marco Elver <elver@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Sasha Levin <sashal@kernel.org>
Cc: linux-mm@kvack.org
---
Documentation/process/deprecated.rst | 42 +++++++
include/linux/slab.h | 172 +++++++++++++++++++++++++++
2 files changed, 214 insertions(+)
diff --git a/Documentation/process/deprecated.rst b/Documentation/process/deprecated.rst
index 1f7f3e6c9cda..eb72b75f5419 100644
--- a/Documentation/process/deprecated.rst
+++ b/Documentation/process/deprecated.rst
@@ -372,3 +372,45 @@ The helper must be used::
DECLARE_FLEX_ARRAY(struct type2, two);
};
};
+
+Open-coded kmalloc assignments for struct objects
+-------------------------------------------------
+Performing open-coded kmalloc()-family allocation assignments prevents
+the kernel (and compiler) from being able to examine the type of the
+variable being assigned, which limits any related introspection that
+may help with alignment, wrap-around, or additional hardening. The
+kmalloc_obj()-family of macros provide this introspection, which can be
+used for the common code patterns for single, array, and flexible object
+allocations. For example, these open-coded assignments::
+
+ ptr = kmalloc(sizeof(*ptr), gfp);
+ ptr = kmalloc(sizeof(struct the_type_of_ptr_obj), gfp);
+ ptr = kzalloc(sizeof(*ptr), gfp);
+ ptr = kmalloc_array(count, sizeof(*ptr), gfp);
+ ptr = kcalloc(count, sizeof(*ptr), gfp);
+ ptr = kmalloc(struct_size(ptr, flex_member, count), gfp);
+
+become (with the first two collapsing to the same kmalloc_obj() call)::
+
+ ptr = kmalloc_obj(*ptr, gfp);
+ ptr = kzalloc_obj(*ptr, gfp);
+ ptr = kmalloc_objs(*ptr, count, gfp);
+ ptr = kzalloc_objs(*ptr, count, gfp);
+ ptr = kmalloc_flex(*ptr, flex_member, count, gfp);
+
+For the cases where the total size of the allocation is also needed,
+the kmalloc_obj_sz(), kmalloc_objs_sz(), and kmalloc_flex_sz() family of
+macros can be used. For example, this pattern::
+
+ total_size = struct_size(ptr, flex_member, count);
+ ptr = kmalloc(total_size, gfp);
+
+becomes::
+
+ ptr = kmalloc_flex_sz(*ptr, flex_member, count, gfp, &total_size);
+
+If `ptr->flex_member` is annotated with __counted_by(), the allocation
+will automatically fail if `count` is larger than the maximum
+representable value that can be stored in the counter member associated
+with `flex_member`. Similarly, the allocation will fail if the total
+size of the allocation exceeds the maximum value `*total_size` can hold.
diff --git a/include/linux/slab.h b/include/linux/slab.h
index cf443f064a66..1c5219d79cf1 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -12,6 +12,7 @@
#ifndef _LINUX_SLAB_H
#define _LINUX_SLAB_H
+#include <linux/bug.h>
#include <linux/cache.h>
#include <linux/gfp.h>
#include <linux/overflow.h>
@@ -965,6 +966,177 @@ static __always_inline __alloc_size(1) void *kmalloc_noprof(size_t size, gfp_t f
void *kmalloc_nolock_noprof(size_t size, gfp_t gfp_flags, int node);
#define kmalloc_nolock(...) alloc_hooks(kmalloc_nolock_noprof(__VA_ARGS__))
+#define __alloc_objs(ALLOC, VAR, COUNT, SIZE) \
+({ \
+ size_t __obj_size = size_mul(sizeof(VAR), COUNT); \
+ const typeof(_Generic(SIZE, \
+ void *: (size_t *)NULL, \
+ default: SIZE)) __size_ptr = (SIZE); \
+ typeof(VAR) *__obj_ptr = NULL; \
+ /* Does the total size fit in the *SIZE variable? */ \
+ if (!WARN_ON_ONCE(__size_ptr && __obj_size > type_max(*__size_ptr))) \
+ __obj_ptr = ALLOC; \
+ if (!__obj_ptr) \
+ __obj_size = 0; \
+ if (__size_ptr) \
+ *__size_ptr = __obj_size; \
+ __obj_ptr; \
+})
+
+#define __alloc_flex(ALLOC, VAR, FAM, COUNT, SIZE) \
+({ \
+ const size_t __count = (COUNT); \
+ size_t __obj_size = struct_size_t(typeof(VAR), FAM, __count); \
+ /* "*SIZE = ...;" below is unbuildable when SIZE is "NULL" */ \
+ const typeof(_Generic(SIZE, \
+ void *: (size_t *)NULL, \
+ default: SIZE)) __size_ptr = (SIZE); \
+ typeof(VAR) *__obj_ptr = NULL; \
+ if (!WARN_ON_ONCE(!__can_set_flex_counter(__obj_ptr->FAM, __count)) && \
+ !WARN_ON_ONCE(__size_ptr && __obj_size > type_max(*__size_ptr))) \
+ __obj_ptr = ALLOC; \
+ if (__obj_ptr) { \
+ __set_flex_counter(__obj_ptr->FAM, __count); \
+ } else { \
+ __obj_size = 0; \
+ } \
+ if (__size_ptr) \
+ *__size_ptr = __obj_size; \
+ __obj_ptr; \
+})
+
+/**
+ * kmalloc_obj - Allocate a single instance of the given structure
+ * @VAR: Variable or type to allocate.
+ * @FLAGS: GFP flags for the allocation.
+ *
+ * Returns: newly allocated pointer to a @VAR on success, NULL on failure.
+ */
+#define kmalloc_obj(VAR, FLAGS) \
+ __alloc_objs(kmalloc(__obj_size, FLAGS), VAR, 1, NULL)
+
+/**
+ * kmalloc_obj_sz - Allocate a single instance of the given structure and
+ * store total size
+ * @VAR: Variable or type to allocate.
+ * @FLAGS: GFP flags for the allocation.
+ * @SIZE: Pointer to variable to hold the total allocation size.
+ *
+ * Returns: newly allocated pointer to @VAR on success, NULL on failure.
+ * If @SIZE is non-NULL, the allocation will immediately fail if the total
+ * allocation size is larger than what the type of *@SIZE can represent.
+ * If @SIZE is non-NULL, *@SIZE is set to either allocation size on success,
+ * or 0 on failure.
+ */
+#define kmalloc_obj_sz(VAR, FLAGS, SIZE) \
+ __alloc_objs(kmalloc(__obj_size, FLAGS), VAR, 1, SIZE)
+
+/**
+ * kmalloc_objs - Allocate an array of the given structure
+ * @VAR: Variable or type to allocate an array of.
+ * @COUNT: How many elements in the array.
+ * @FLAGS: GFP flags for the allocation.
+ *
+ * Returns: newly allocated pointer to array of @VAR on success, NULL on
+ * failure.
+ */
+#define kmalloc_objs(VAR, COUNT, FLAGS) \
+ __alloc_objs(kmalloc(__obj_size, FLAGS), VAR, COUNT, NULL)
+
+/**
+ * kmalloc_objs_sz - Allocate an array of the given structure and store
+ * total size
+ * @VAR: Variable or type to allocate an array of.
+ * @COUNT: How many elements in the array.
+ * @FLAGS: GFP flags for the allocation.
+ * @SIZE: Pointer to variable to hold the total allocation size.
+ *
+ * Returns: newly allocated pointer to array of @VAR on success, NULL on
+ * failure. If @SIZE is non-NULL, the allocation will immediately fail if
+ * the total allocation size is larger than what the type of *@SIZE can
+ * represent. If @SIZE is non-NULL, *@SIZE is set to either allocation size
+ * on success, or 0 on failure.
+ */
+#define kmalloc_objs_sz(VAR, COUNT, FLAGS, SIZE) \
+ __alloc_objs(kmalloc(__obj_size, FLAGS), VAR, COUNT, SIZE)
+
+/**
+ * kmalloc_flex - Allocate a single instance of the given flexible structure
+ * @VAR: Variable or type to allocate, along with its flexible array member.
+ * @FAM: The name of the flexible array member of the structure.
+ * @COUNT: How many flexible array member elements are desired.
+ * @FLAGS: GFP flags for the allocation.
+ *
+ * Returns: newly allocated pointer to @VAR on success, NULL on failure.
+ * If @FAM has been annotated with __counted_by(), the allocation will
+ * immediately fail if @COUNT is larger than what the type of the struct's
+ * counter variable can represent.
+ */
+#define kmalloc_flex(VAR, FAM, COUNT, FLAGS) \
+ __alloc_flex(kmalloc(__obj_size, FLAGS), VAR, FAM, COUNT, NULL)
+
+/**
+ * kmalloc_flex_sz - Allocate a single instance of the given flexible
+ * structure and store total size
+ * @VAR: Variable or type to allocate, along with its flexible array member.
+ * @FAM: The name of the flexible array member of the structure.
+ * @COUNT: How many flexible array member elements are desired.
+ * @FLAGS: GFP flags for the allocation.
+ * @SIZE: Pointer to variable to hold the total allocation size.
+ *
+ * Returns: newly allocated pointer to @VAR on success, NULL on failure.
+ * If @FAM has been annotated with __counted_by(), the allocation will
+ * immediately fail if @COUNT is larger than what the type of the struct's
+ * counter variable can represent. If @SIZE is non-NULL, the allocation
+ * will immediately fail if the total allocation size is larger than what
+ * the type of *@SIZE can represent. If @SIZE is non-NULL, *@SIZE is set
+ * to either allocation size on success, or 0 on failure.
+ */
+#define kmalloc_flex_sz(VAR, FAM, COUNT, FLAGS, SIZE) \
+ __alloc_flex(kmalloc(__obj_size, FLAGS), VAR, FAM, COUNT, SIZE)
+
+/* All kzalloc aliases for kmalloc_(obj|objs|flex)(|_sz). */
+#define kzalloc_obj(P, FLAGS) \
+ __alloc_objs(kzalloc(__obj_size, FLAGS), P, 1, NULL)
+#define kzalloc_obj_sz(P, FLAGS, SIZE) \
+ __alloc_objs(kzalloc(__obj_size, FLAGS), P, 1, SIZE)
+#define kzalloc_objs(P, COUNT, FLAGS) \
+ __alloc_objs(kzalloc(__obj_size, FLAGS), P, COUNT, NULL)
+#define kzalloc_objs_sz(P, COUNT, FLAGS, SIZE) \
+ __alloc_objs(kzalloc(__obj_size, FLAGS), P, COUNT, SIZE)
+#define kzalloc_flex(P, FAM, COUNT, FLAGS) \
+ __alloc_flex(kzalloc(__obj_size, FLAGS), P, FAM, COUNT, NULL)
+#define kzalloc_flex_sz(P, FAM, COUNT, FLAGS, SIZE) \
+ __alloc_flex(kzalloc(__obj_size, FLAGS), P, FAM, COUNT, SIZE)
+
+/* All kvmalloc aliases for kmalloc_(obj|objs|flex)(|_sz). */
+#define kvmalloc_obj(P, FLAGS) \
+ __alloc_objs(kvmalloc(__obj_size, FLAGS), P, 1, NULL)
+#define kvmalloc_obj_sz(P, FLAGS, SIZE) \
+ __alloc_objs(kvmalloc(__obj_size, FLAGS), P, 1, SIZE)
+#define kvmalloc_objs(P, COUNT, FLAGS) \
+ __alloc_objs(kvmalloc(__obj_size, FLAGS), P, COUNT, NULL)
+#define kvmalloc_objs_sz(P, COUNT, FLAGS, SIZE) \
+ __alloc_objs(kvmalloc(__obj_size, FLAGS), P, COUNT, SIZE)
+#define kvmalloc_flex(P, FAM, COUNT, FLAGS) \
+ __alloc_flex(kvmalloc(__obj_size, FLAGS), P, FAM, COUNT, NULL)
+#define kvmalloc_flex_sz(P, FAM, COUNT, FLAGS, SIZE) \
+ __alloc_flex(kvmalloc(__obj_size, FLAGS), P, FAM, COUNT, SIZE)
+
+/* All kvzalloc aliases for kmalloc_(obj|objs|flex)(|_sz). */
+#define kvzalloc_obj(P, FLAGS) \
+ __alloc_objs(kvzalloc(__obj_size, FLAGS), P, 1, NULL)
+#define kvzalloc_obj_sz(P, FLAGS, SIZE) \
+ __alloc_objs(kvzalloc(__obj_size, FLAGS), P, 1, SIZE)
+#define kvzalloc_objs(P, COUNT, FLAGS) \
+ __alloc_objs(kvzalloc(__obj_size, FLAGS), P, COUNT, NULL)
+#define kvzalloc_objs_sz(P, COUNT, FLAGS, SIZE) \
+ __alloc_objs(kvzalloc(__obj_size, FLAGS), P, COUNT, SIZE)
+#define kvzalloc_flex(P, FAM, COUNT, FLAGS) \
+ __alloc_flex(kvzalloc(__obj_size, FLAGS), P, FAM, COUNT, NULL)
+#define kvzalloc_flex_sz(P, FAM, COUNT, FLAGS, SIZE) \
+ __alloc_flex(kvzalloc(__obj_size, FLAGS), P, FAM, COUNT, SIZE)
+
#define kmem_buckets_alloc(_b, _size, _flags) \
alloc_hooks(__kmalloc_node_noprof(PASS_BUCKET_PARAMS(_size, _b), _flags, NUMA_NO_NODE))
--
2.34.1

* [PATCH v5 4/4] coccinelle: Add kmalloc_objs conversion script
2025-11-22 1:42 [PATCH v5 0/4] slab: Introduce kmalloc_obj() and family Kees Cook
` (2 preceding siblings ...)
2025-11-22 1:42 ` [PATCH v5 3/4] checkpatch: Suggest kmalloc_obj family for sizeof allocations Kees Cook
@ 2025-11-22 1:43 ` Kees Cook
3 siblings, 0 replies; 8+ messages in thread
From: Kees Cook @ 2025-11-22 1:43 UTC (permalink / raw)
To: Vlastimil Babka
Cc: Kees Cook, Julia Lawall, Nicolas Palix, cocci, Randy Dunlap,
Miguel Ojeda, Przemek Kitszel, Gustavo A. R. Silva,
Linus Torvalds, Matthew Wilcox, Christoph Lameter, Marco Elver,
Vegard Nossum, Pekka Enberg, David Rientjes, Joonsoo Kim,
Andrew Morton, Roman Gushchin, Harry Yoo, Bill Wendling,
Justin Stitt, Jann Horn, Greg Kroah-Hartman, Sasha Levin,
linux-mm, Nathan Chancellor, Peter Zijlstra, Nick Desaulniers,
Jonathan Corbet, Jakub Kicinski, Yafang Shao, Tony Ambardar,
Alexander Lobakin, Jan Hendrik Farr, Alexander Potapenko,
linux-kernel, linux-hardening, linux-doc, llvm

Finds and converts size-based kmalloc-family allocations into the typed
kmalloc_obj()-family of allocations.
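
As a usage note (assuming the standard coccicheck flow; this invocation
is not part of the patch itself), a tree-wide conversion can be
generated with something like:

    $ make coccicheck MODE=patch COCCI=scripts/coccinelle/api/kmalloc_objs.cocci
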
Signed-off-by: Kees Cook <kees@kernel.org>
---
Cc: Julia Lawall <Julia.Lawall@inria.fr>
Cc: Nicolas Palix <nicolas.palix@imag.fr>
Cc: cocci@inria.fr
---
scripts/coccinelle/api/kmalloc_objs.cocci | 168 ++++++++++++++++++++++
1 file changed, 168 insertions(+)
create mode 100644 scripts/coccinelle/api/kmalloc_objs.cocci
diff --git a/scripts/coccinelle/api/kmalloc_objs.cocci b/scripts/coccinelle/api/kmalloc_objs.cocci
new file mode 100644
index 000000000000..39f82f014b17
--- /dev/null
+++ b/scripts/coccinelle/api/kmalloc_objs.cocci
@@ -0,0 +1,168 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/// Use kmalloc_obj family of macros for allocations
+///
+// Confidence: High
+// Comments:
+// Options: --include-headers-for-types --all-includes --include-headers --keep-comments
+
+virtual patch
+
+@initialize:python@
+@@
+import sys
+
+def alloc_array(name):
+ func = "FAILED_RENAME"
+ if name == "kmalloc_array":
+ func = "kmalloc_objs"
+ elif name == "kvmalloc_array":
+ func = "kvmalloc_objs"
+ elif name == "kcalloc":
+ func = "kzalloc_objs"
+ elif name == "kvcalloc":
+ func = "kvzalloc_objs"
+ else:
+ print(f"Unknown transform for {name}", file=sys.stderr)
+ return func
+
+@assign_sizeof depends on patch && !(file in "tools") && !(file in "samples")@
+type TYPE;
+TYPE *P;
+TYPE INST;
+expression VAR;
+expression GFP;
+expression SIZE;
+identifier ALLOC =~ "^kv?[mz]alloc$";
+fresh identifier ALLOC_OBJ_SZ = ALLOC ## "_obj_sz";
+@@
+
+(
+- SIZE = sizeof(*VAR);
+ ... when != SIZE
+ VAR =
+- ALLOC(SIZE, GFP);
++ ALLOC_OBJ_SZ(*VAR, GFP, &SIZE);
+|
+- SIZE = (sizeof(TYPE));
+ ... when != SIZE
+ P =
+- ALLOC(SIZE, GFP);
++ ALLOC_OBJ_SZ(*P, GFP, &SIZE);
+|
+- SIZE = (sizeof(INST));
+ ... when != SIZE
+ P =
+- ALLOC(SIZE, GFP);
++ ALLOC_OBJ_SZ(*P, GFP, &SIZE);
+)
+
+@assign_struct_size depends on patch && !(file in "tools") && !(file in "samples")@
+type TYPE;
+TYPE *P;
+expression VAR;
+expression GFP;
+expression SIZE;
+expression FLEX;
+expression COUNT;
+identifier ALLOC =~ "^kv?[mz]alloc$";
+fresh identifier ALLOC_FLEX_SZ = ALLOC ## "_flex_sz";
+@@
+
+(
+- SIZE = struct_size(VAR, FLEX, COUNT);
+ ... when != SIZE
+ VAR =
+- ALLOC(SIZE, GFP);
++ ALLOC_FLEX_SZ(*VAR, FLEX, COUNT, GFP, &SIZE);
+|
+- SIZE = struct_size_t(TYPE, FLEX, COUNT);
+ ... when != SIZE
+ P =
+- ALLOC(SIZE, GFP);
++ ALLOC_FLEX_SZ(*P, FLEX, COUNT, GFP, &SIZE);
+)
+
+// This excludes anything that is assigning to or from integral types or
+// string literals. Everything else gets the sizeof() extracted for the
+// kmalloc_obj() type/var argument. sizeof(void *) is also excluded because
+// it will need case-by-case double-checking to make sure the right type is
+// being assigned.
+@direct depends on patch && !(file in "tools") && !(file in "samples")@
+typedef u8, u16, u32, u64;
+typedef __u8, __u16, __u32, __u64;
+typedef uint8_t, uint16_t, uint32_t, uint64_t;
+typedef __le16, __le32, __le64;
+typedef __be16, __be32, __be64;
+type INTEGRAL = {u8,__u8,uint8_t,char,unsigned char,
+ u16,__u16,uint16_t,unsigned short,
+ u32,__u32,uint32_t,unsigned int,
+ u64,__u64,uint64_t,unsigned long,
+ __le16,__le32,__le64,__be16,__be32,__be64};
+char [] STRING;
+INTEGRAL *BYTES;
+type TYPE;
+expression VAR;
+expression GFP;
+expression COUNT;
+expression FLEX;
+expression E;
+identifier ALLOC =~ "^kv?[mz]alloc$";
+fresh identifier ALLOC_OBJ = ALLOC ## "_obj";
+fresh identifier ALLOC_FLEX = ALLOC ## "_flex";
+identifier ALLOC_ARRAY = {kmalloc_array,kvmalloc_array,kcalloc,kvcalloc};
+fresh identifier ALLOC_OBJS = script:python(ALLOC_ARRAY) { alloc_array(ALLOC_ARRAY) };
+@@
+
+(
+- VAR = ALLOC((sizeof(*VAR)), GFP)
++ VAR = ALLOC_OBJ(*VAR, GFP)
+|
+ ALLOC((\(sizeof(STRING)\|sizeof(INTEGRAL)\|sizeof(INTEGRAL *)\)), GFP)
+|
+ BYTES = ALLOC((sizeof(E)), GFP)
+|
+ BYTES = ALLOC((sizeof(TYPE)), GFP)
+|
+ ALLOC((sizeof(void *)), GFP)
+|
+- ALLOC((sizeof(E)), GFP)
++ ALLOC_OBJ(E, GFP)
+|
+- ALLOC((sizeof(TYPE)), GFP)
++ ALLOC_OBJ(TYPE, GFP)
+|
+ ALLOC_ARRAY(COUNT, (\(sizeof(STRING)\|sizeof(INTEGRAL)\|sizeof(INTEGRAL *)\)), GFP)
+|
+ BYTES = ALLOC_ARRAY(COUNT, (sizeof(E)), GFP)
+|
+ BYTES = ALLOC_ARRAY(COUNT, (sizeof(TYPE)), GFP)
+|
+ ALLOC_ARRAY((\(sizeof(STRING)\|sizeof(INTEGRAL)\|sizeof(INTEGRAL *)\)), COUNT, GFP)
+|
+ BYTES = ALLOC_ARRAY((sizeof(E)), COUNT, GFP)
+|
+ BYTES = ALLOC_ARRAY((sizeof(TYPE)), COUNT, GFP)
+|
+ ALLOC_ARRAY(COUNT, (sizeof(void *)), GFP)
+|
+ ALLOC_ARRAY((sizeof(void *)), COUNT, GFP)
+|
+- ALLOC_ARRAY(COUNT, (sizeof(E)), GFP)
++ ALLOC_OBJS(E, COUNT, GFP)
+|
+- ALLOC_ARRAY(COUNT, (sizeof(TYPE)), GFP)
++ ALLOC_OBJS(TYPE, COUNT, GFP)
+|
+- ALLOC_ARRAY((sizeof(E)), COUNT, GFP)
++ ALLOC_OBJS(E, COUNT, GFP)
+|
+- ALLOC_ARRAY((sizeof(TYPE)), COUNT, GFP)
++ ALLOC_OBJS(TYPE, COUNT, GFP)
+|
+- ALLOC(struct_size(VAR, FLEX, COUNT), GFP)
++ ALLOC_FLEX(*VAR, FLEX, COUNT, GFP)
+|
+- ALLOC(struct_size_t(TYPE, FLEX, COUNT), GFP)
++ ALLOC_FLEX(TYPE, FLEX, COUNT, GFP)
+)
+
--
2.34.1