* [PATCH v3] slab: Introduce kmalloc_obj() and family
@ 2024-08-22 23:13 Kees Cook
2024-08-23 4:27 ` Przemek Kitszel
2024-08-27 21:32 ` Vlastimil Babka
0 siblings, 2 replies; 6+ messages in thread
From: Kees Cook @ 2024-08-22 23:13 UTC (permalink / raw)
To: Vlastimil Babka
Cc: Kees Cook, Christoph Lameter, Pekka Enberg, David Rientjes,
Joonsoo Kim, Andrew Morton, Roman Gushchin, Hyeonggon Yoo,
Gustavo A. R. Silva, Bill Wendling, Justin Stitt, Jann Horn,
Przemek Kitszel, Marco Elver, linux-mm, Nathan Chancellor,
Nick Desaulniers, linux-kernel, llvm, linux-hardening
Introduce type-aware kmalloc-family helpers to replace the common
idioms for single, array, and flexible object allocations:
ptr = kmalloc(sizeof(*ptr), gfp);
ptr = kzalloc(sizeof(*ptr), gfp);
ptr = kmalloc_array(count, sizeof(*ptr), gfp);
ptr = kcalloc(count, sizeof(*ptr), gfp);
ptr = kmalloc(struct_size(ptr, flex_member, count), gfp);
These become, respectively:
kmalloc_obj(ptr, gfp);
kzalloc_obj(ptr, gfp);
kmalloc_objs(ptr, count, gfp);
kzalloc_objs(ptr, count, gfp);
kmalloc_flex(ptr, flex_member, count, gfp);
These each return the assigned value of ptr (which may be NULL on
failure). For cases where the total size of the allocation is needed,
the kmalloc_obj_sz(), kmalloc_objs_sz(), and kmalloc_flex_sz() family
of macros can be used. For example:
info->size = struct_size(ptr, flex_member, count);
ptr = kmalloc(info->size, gfp);
becomes:
kmalloc_flex_sz(ptr, flex_member, count, gfp, &info->size);
Internal introspection of allocated type now becomes possible, allowing
for future alignment-aware choices and hardening work. For example,
adding __alignof(*ptr) as an argument to the internal allocators so that
appropriate/efficient alignment choices can be made, or being able to
correctly choose per-allocation offset randomization within a bucket
that does not break alignment requirements.
Introduces __flex_count() for when __builtin_get_counted_by() is added
by GCC[1] and Clang[2]. The internal use of __flex_count() allows for
automatically setting the counter member of a struct's flexible array
member when it has been annotated with __counted_by(), avoiding any
missed early size initializations while __counted_by() annotations are
added to the kernel. Additionally, this also checks for "too large"
allocations based on the type size of the counter variable. For example:
if (count > type_max(ptr->flex_count))
fail...;
info->size = struct_size(ptr, flex_member, count);
ptr = kmalloc(info->size, gfp);
ptr->flex_count = count;
becomes (i.e. unchanged from earlier example):
kmalloc_flex_sz(ptr, flex_member, count, gfp, &info->size);
Replacing all existing simple code patterns found via Coccinelle[3]
shows what could be replaced immediately (saving roughly 1,500 lines):
7040 files changed, 14128 insertions(+), 15557 deletions(-)
Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=116016 [1]
Link: https://github.com/llvm/llvm-project/issues/99774 [2]
Link: https://github.com/kees/kernel-tools/blob/trunk/coccinelle/examples/kmalloc_obj-assign-size.cocci [3]
Signed-off-by: Kees Cook <kees@kernel.org>
---
Initial testing looks good. Before I write all the self-tests, I just
wanted to validate that the new API is reasonable (i.e. it is no longer
using optional argument counts for choosing the internal API).
v3:
- Add .rst documentation
- Add kern-doc
- Return ptr instead of size by default
- Add *_sz() variants that provide allocation size output
- Implement __flex_counter() logic
v2: https://lore.kernel.org/linux-hardening/20240807235433.work.317-kees@kernel.org/
v1: https://lore.kernel.org/linux-hardening/20240719192744.work.264-kees@kernel.org/
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: Gustavo A. R. Silva <gustavoars@kernel.org>
Cc: Bill Wendling <morbo@google.com>
Cc: Justin Stitt <justinstitt@google.com>
Cc: Jann Horn <jannh@google.com>
Cc: Przemek Kitszel <przemyslaw.kitszel@intel.com>
Cc: Marco Elver <elver@google.com>
Cc: linux-mm@kvack.org
---
Documentation/process/deprecated.rst | 41 +++++++
include/linux/compiler_types.h | 22 ++++
include/linux/slab.h | 174 +++++++++++++++++++++++++++
3 files changed, 237 insertions(+)
diff --git a/Documentation/process/deprecated.rst b/Documentation/process/deprecated.rst
index 1f7f3e6c9cda..b22ec088a044 100644
--- a/Documentation/process/deprecated.rst
+++ b/Documentation/process/deprecated.rst
@@ -372,3 +372,44 @@ The helper must be used::
DECLARE_FLEX_ARRAY(struct type2, two);
};
};
+
+Open-coded kmalloc assignments
+------------------------------
+Performing open-coded kmalloc()-family allocation assignments prevents
+the kernel (and compiler) from being able to examine the type of the
+variable being assigned, which limits any related introspection that
+may help with alignment, wrap-around, or additional hardening. The
+kmalloc_obj()-family of macros provide this introspection, which can be
+used for the common code patterns for single, array, and flexible object
+allocations. For example, these open coded assignments::
+
+ ptr = kmalloc(sizeof(*ptr), gfp);
+ ptr = kzalloc(sizeof(*ptr), gfp);
+ ptr = kmalloc_array(count, sizeof(*ptr), gfp);
+ ptr = kcalloc(count, sizeof(*ptr), gfp);
+ ptr = kmalloc(struct_size(ptr, flex_member, count), gfp);
+
+become, respectively::
+
+ kmalloc_obj(ptr, gfp);
+ kzalloc_obj(ptr, gfp);
+ kmalloc_objs(ptr, count, gfp);
+ kzalloc_objs(ptr, count, gfp);
+ kmalloc_flex(ptr, flex_member, count, gfp);
+
+For the cases where the total size of the allocation is also needed,
+the kmalloc_obj_sz(), kmalloc_objs_sz(), and kmalloc_flex_sz() family of
+macros can be used. For example, converting these assignments::
+
+ total_size = struct_size(ptr, flex_member, count);
+ ptr = kmalloc(total_size, gfp);
+
+becomes::
+
+ kmalloc_flex_sz(ptr, flex_member, count, gfp, &total_size);
+
+If `ptr->flex_member` is annotated with __counted_by(), the allocation
+will automatically fail if `count` is larger than the maximum
+representable value that can be stored in the counter member associated
+with `flex_member`. Similarly, the allocation will fail if the total
+size of the allocation exceeds the maximum value `*total_size` can hold.
diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h
index f14c275950b5..b99deae45210 100644
--- a/include/linux/compiler_types.h
+++ b/include/linux/compiler_types.h
@@ -421,6 +421,28 @@ struct ftrace_likely_data {
#define __member_size(p) __builtin_object_size(p, 1)
#endif
+#if __has_builtin(__builtin_get_counted_by)
+/**
+ * __flex_counter - Get pointer to counter member for the given
+ * flexible array, if it was annotated with __counted_by()
+ * @flex: Pointer to flexible array member of an addressable struct instance
+ *
+ * For example, with:
+ *
+ * struct foo {
+ * int counter;
+ * short array[] __counted_by(counter);
+ * } *p;
+ *
+ * __flex_counter(p->array) will resolve to &p->counter.
+ *
+ * If p->array is unannotated, this returns (void *)NULL.
+ */
+#define __flex_counter(flex) __builtin_get_counted_by(flex)
+#else
+#define __flex_counter(flex) ((void *)NULL)
+#endif
+
/*
* Some versions of gcc do not mark 'asm goto' volatile:
*
diff --git a/include/linux/slab.h b/include/linux/slab.h
index eb2bf4629157..c37606b9e248 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -686,6 +686,180 @@ static __always_inline __alloc_size(1) void *kmalloc_noprof(size_t size, gfp_t f
}
#define kmalloc(...) alloc_hooks(kmalloc_noprof(__VA_ARGS__))
+#define __alloc_objs(ALLOC, P, COUNT, FLAGS, SIZE) \
+({ \
+ size_t __obj_size = size_mul(sizeof(*P), COUNT); \
+ const typeof(_Generic(SIZE, \
+ void *: (size_t *)NULL, \
+ default: SIZE)) __size_ptr = (SIZE); \
+ typeof(P) __obj_ptr = NULL; \
+ /* Does the total size fit in the *SIZE variable? */ \
+ if (!__size_ptr || __obj_size <= type_max(*__size_ptr)) \
+ __obj_ptr = ALLOC(__obj_size, FLAGS); \
+ if (!__obj_ptr) \
+ __obj_size = 0; \
+ if (__size_ptr) \
+ *__size_ptr = __obj_size; \
+ (P) = __obj_ptr; \
+})
+
+#define __alloc_flex(ALLOC, P, FAM, COUNT, FLAGS, SIZE) \
+({ \
+ size_t __count = (COUNT); \
+ size_t __obj_size = struct_size(P, FAM, __count); \
+ const typeof(_Generic(SIZE, \
+ void *: (size_t *)NULL, \
+ default: SIZE)) __size_ptr = (SIZE); \
+ typeof(P) __obj_ptr = NULL; \
+ /* Just query the counter type for type_max checking. */ \
+ typeof(_Generic(__flex_counter(__obj_ptr->FAM), \
+ void *: (size_t *)NULL, \
+ default: __flex_counter(__obj_ptr->FAM))) \
+ __counter_type_ptr = NULL; \
+ /* Does the count fit in the __counted_by counter member? */ \
+ if ((__count <= type_max(*__counter_type_ptr)) && \
+ /* Does the total size fit in the *SIZE variable? */ \
+ (!__size_ptr || __obj_size <= type_max(*__size_ptr))) \
+ __obj_ptr = ALLOC(__obj_size, FLAGS); \
+ if (__obj_ptr) { \
+ /* __obj_ptr now allocated so get real counter ptr. */ \
+ typeof(_Generic(__flex_counter(__obj_ptr->FAM), \
+ void *: (size_t *)NULL, \
+ default: __flex_counter(__obj_ptr->FAM))) \
+ __counter_ptr = __flex_counter(__obj_ptr->FAM); \
+ if (__counter_ptr) \
+ *__counter_ptr = __count; \
+ } else { \
+ __obj_size = 0; \
+ } \
+ if (__size_ptr) \
+ *__size_ptr = __obj_size; \
+ (P) = __obj_ptr; \
+})
+
+/**
+ * kmalloc_obj - Allocate a single instance of the given structure
+ * @P: Pointer to hold allocation of the structure
+ * @FLAGS: GFP flags for the allocation
+ *
+ * Returns the newly allocated value of @P on success, NULL on failure.
+ * @P is assigned the result, either way.
+ */
+#define kmalloc_obj(P, FLAGS) \
+ __alloc_objs(kmalloc, P, 1, FLAGS, NULL)
+/**
+ * kmalloc_obj_sz - Allocate a single instance of the given structure and
+ * store total size
+ * @P: Pointer to hold allocation of the structure
+ * @FLAGS: GFP flags for the allocation
+ * @SIZE: Pointer to variable to hold the total allocation size
+ *
+ * Returns the newly allocated value of @P on success, NULL on failure.
+ * @P is assigned the result, either way. If @SIZE is non-NULL, the
+ * allocation will immediately fail if the total allocation size is larger
+ * than what the type of *@SIZE can represent.
+ */
+#define kmalloc_obj_sz(P, FLAGS, SIZE) \
+ __alloc_objs(kmalloc, P, 1, FLAGS, SIZE)
+/**
+ * kmalloc_objs - Allocate an array of the given structure
+ * @P: Pointer to hold allocation of the structure array
+ * @COUNT: How many elements in the array
+ * @FLAGS: GFP flags for the allocation
+ *
+ * Returns the newly allocated value of @P on success, NULL on failure.
+ * @P is assigned the result, either way.
+ */
+#define kmalloc_objs(P, COUNT, FLAGS) \
+ __alloc_objs(kmalloc, P, COUNT, FLAGS, NULL)
+/**
+ * kmalloc_objs_sz - Allocate an array of the given structure and store
+ * total size
+ * @P: Pointer to hold allocation of the structure array
+ * @COUNT: How many elements in the array
+ * @FLAGS: GFP flags for the allocation
+ * @SIZE: Pointer to variable to hold the total allocation size
+ *
+ * Returns the newly allocated value of @P on success, NULL on failure.
+ * @P is assigned the result, either way. If @SIZE is non-NULL, the
+ * allocation will immediately fail if the total allocation size is larger
+ * than what the type of *@SIZE can represent.
+ */
+#define kmalloc_objs_sz(P, COUNT, FLAGS, SIZE) \
+ __alloc_objs(kmalloc, P, COUNT, FLAGS, SIZE)
+/**
+ * kmalloc_flex - Allocate a single instance of the given flexible structure
+ * @P: Pointer to hold allocation of the structure
+ * @FAM: The name of the flexible array member of the structure
+ * @COUNT: How many flexible array member elements are desired
+ * @FLAGS: GFP flags for the allocation
+ *
+ * Returns the newly allocated value of @P on success, NULL on failure.
+ * @P is assigned the result, either way. If @FAM has been annotated with
+ * __counted_by(), the allocation will immediately fail if @COUNT is larger
+ * than what the type of the struct's counter variable can represent.
+ */
+#define kmalloc_flex(P, FAM, COUNT, FLAGS) \
+ __alloc_flex(kmalloc, P, FAM, COUNT, FLAGS, NULL)
+
+/**
+ * kmalloc_flex_sz - Allocate a single instance of the given flexible
+ * structure and store total size
+ * @P: Pointer to hold allocation of the structure
+ * @FAM: The name of the flexible array member of the structure
+ * @COUNT: How many flexible array member elements are desired
+ * @FLAGS: GFP flags for the allocation
+ * @SIZE: Pointer to variable to hold the total allocation size
+ *
+ * Returns the newly allocated value of @P on success, NULL on failure.
+ * @P is assigned the result, either way. If @FAM has been annotated with
+ * __counted_by(), the allocation will immediately fail if @COUNT is larger
+ * than what the type of the struct's counter variable can represent. If
+ * @SIZE is non-NULL, the allocation will immediately fail if the total
+ * allocation size is larger than what the type of *@SIZE can represent.
+ */
+#define kmalloc_flex_sz(P, FAM, COUNT, FLAGS, SIZE) \
+ __alloc_flex(kmalloc, P, FAM, COUNT, FLAGS, SIZE)
+
+#define kzalloc_obj(P, FLAGS) \
+ __alloc_objs(kzalloc, P, 1, FLAGS, NULL)
+#define kzalloc_obj_sz(P, FLAGS, SIZE) \
+ __alloc_objs(kzalloc, P, 1, FLAGS, SIZE)
+#define kzalloc_objs(P, COUNT, FLAGS) \
+ __alloc_objs(kzalloc, P, COUNT, FLAGS, NULL)
+#define kzalloc_objs_sz(P, COUNT, FLAGS, SIZE) \
+ __alloc_objs(kzalloc, P, COUNT, FLAGS, SIZE)
+#define kzalloc_flex(P, FAM, COUNT, FLAGS) \
+ __alloc_flex(kzalloc, P, FAM, COUNT, FLAGS, NULL)
+#define kzalloc_flex_sz(P, FAM, COUNT, FLAGS, SIZE) \
+ __alloc_flex(kzalloc, P, FAM, COUNT, FLAGS, SIZE)
+
+#define kvmalloc_obj(P, FLAGS) \
+ __alloc_objs(kvmalloc, P, 1, FLAGS, NULL)
+#define kvmalloc_obj_sz(P, FLAGS, SIZE) \
+ __alloc_objs(kvmalloc, P, 1, FLAGS, SIZE)
+#define kvmalloc_objs(P, COUNT, FLAGS) \
+ __alloc_objs(kvmalloc, P, COUNT, FLAGS, NULL)
+#define kvmalloc_objs_sz(P, COUNT, FLAGS, SIZE) \
+ __alloc_objs(kvmalloc, P, COUNT, FLAGS, SIZE)
+#define kvmalloc_flex(P, FAM, COUNT, FLAGS) \
+ __alloc_flex(kvmalloc, P, FAM, COUNT, FLAGS, NULL)
+#define kvmalloc_flex_sz(P, FAM, COUNT, FLAGS, SIZE) \
+ __alloc_flex(kvmalloc, P, FAM, COUNT, FLAGS, SIZE)
+
+#define kvzalloc_obj(P, FLAGS) \
+ __alloc_objs(kvzalloc, P, 1, FLAGS, NULL)
+#define kvzalloc_obj_sz(P, FLAGS, SIZE) \
+ __alloc_objs(kvzalloc, P, 1, FLAGS, SIZE)
+#define kvzalloc_objs(P, COUNT, FLAGS) \
+ __alloc_objs(kvzalloc, P, COUNT, FLAGS, NULL)
+#define kvzalloc_objs_sz(P, COUNT, FLAGS, SIZE) \
+ __alloc_objs(kvzalloc, P, COUNT, FLAGS, SIZE)
+#define kvzalloc_flex(P, FAM, COUNT, FLAGS) \
+ __alloc_flex(kvzalloc, P, FAM, COUNT, FLAGS, NULL)
+#define kvzalloc_flex_sz(P, FAM, COUNT, FLAGS, SIZE) \
+ __alloc_flex(kvzalloc, P, FAM, COUNT, FLAGS, SIZE)
+
#define kmem_buckets_alloc(_b, _size, _flags) \
alloc_hooks(__kmalloc_node_noprof(PASS_BUCKET_PARAMS(_size, _b), _flags, NUMA_NO_NODE))
--
2.34.1
* Re: [PATCH v3] slab: Introduce kmalloc_obj() and family
2024-08-22 23:13 [PATCH v3] slab: Introduce kmalloc_obj() and family Kees Cook
@ 2024-08-23 4:27 ` Przemek Kitszel
2024-10-04 17:23 ` Kees Cook
2024-08-27 21:32 ` Vlastimil Babka
1 sibling, 1 reply; 6+ messages in thread
From: Przemek Kitszel @ 2024-08-23 4:27 UTC (permalink / raw)
To: Kees Cook, Vlastimil Babka
Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
Andrew Morton, Roman Gushchin, Hyeonggon Yoo,
Gustavo A. R. Silva, Bill Wendling, Justin Stitt, Jann Horn,
Marco Elver, linux-mm, Nathan Chancellor, Nick Desaulniers,
linux-kernel, llvm, linux-hardening
On 8/23/24 01:13, Kees Cook wrote:
> (...) For cases where the total size of the allocation is needed,
> the kmalloc_obj_sz(), kmalloc_objs_sz(), and kmalloc_flex_sz() family
> of macros can be used. For example:
>
> info->size = struct_size(ptr, flex_member, count);
> ptr = kmalloc(info->size, gfp);
>
> becomes:
>
> kmalloc_flex_sz(ptr, flex_member, count, gfp, &info->size);
>
> Internal introspection of allocated type now becomes possible, allowing
> for future alignment-aware choices and hardening work. For example,
> adding __alignof(*ptr) as an argument to the internal allocators so that
> appropriate/efficient alignment choices can be made, or being able to
> correctly choose per-allocation offset randomization within a bucket
> that does not break alignment requirements.
>
> Introduces __flex_count() for when __builtin_get_counted_by() is added
> by GCC[1] and Clang[2]. The internal use of __flex_count() allows for
> automatically setting the counter member of a struct's flexible array
> member when it has been annotated with __counted_by(), avoiding any
> missed early size initializations while __counted_by() annotations are
> added to the kernel. Additionally, this also checks for "too large"
> allocations based on the type size of the counter variable. For example:
>
> if (count > type_max(ptr->flex_count))
> fail...;
> info->size = struct_size(ptr, flex_member, count);
> ptr = kmalloc(info->size, gfp);
> ptr->flex_count = count;
>
> becomes (i.e. unchanged from earlier example):
>
> kmalloc_flex_sz(ptr, flex_member, count, gfp, &info->size);
As there could be no __builtin_get_counted_by() available, the caller
still needs to fill the counted-by variable, right? So is it possible
to just pass in the struct member pointer to fill (the last argument
"&f->cnt" of the snippet below)?
struct foo {
	int cnt;
	struct bar bars[] __counted_by(cnt);
};
//...
struct foo *f;
kmalloc_flex_sz(f, bars, 42, gfp, &f->cnt);
* Re: [PATCH v3] slab: Introduce kmalloc_obj() and family
2024-08-23 4:27 ` Przemek Kitszel
@ 2024-10-04 17:23 ` Kees Cook
0 siblings, 0 replies; 6+ messages in thread
From: Kees Cook @ 2024-10-04 17:23 UTC (permalink / raw)
To: Przemek Kitszel
Cc: Vlastimil Babka, Christoph Lameter, Pekka Enberg, David Rientjes,
Joonsoo Kim, Andrew Morton, Roman Gushchin, Hyeonggon Yoo,
Gustavo A. R. Silva, Bill Wendling, Justin Stitt, Jann Horn,
Marco Elver, linux-mm, Nathan Chancellor, Nick Desaulniers,
linux-kernel, llvm, linux-hardening
On Fri, Aug 23, 2024 at 06:27:58AM +0200, Przemek Kitszel wrote:
> On 8/23/24 01:13, Kees Cook wrote:
>
> > (...) For cases where the total size of the allocation is needed,
> > the kmalloc_obj_sz(), kmalloc_objs_sz(), and kmalloc_flex_sz() family
> > of macros can be used. For example:
> >
> > info->size = struct_size(ptr, flex_member, count);
> > ptr = kmalloc(info->size, gfp);
> >
> > becomes:
> >
> > kmalloc_flex_sz(ptr, flex_member, count, gfp, &info->size);
> >
> > Internal introspection of allocated type now becomes possible, allowing
> > for future alignment-aware choices and hardening work. For example,
> > adding __alignof(*ptr) as an argument to the internal allocators so that
> > appropriate/efficient alignment choices can be made, or being able to
> > correctly choose per-allocation offset randomization within a bucket
> > that does not break alignment requirements.
> >
> > Introduces __flex_count() for when __builtin_get_counted_by() is added
> > by GCC[1] and Clang[2]. The internal use of __flex_count() allows for
> > automatically setting the counter member of a struct's flexible array
> > member when it has been annotated with __counted_by(), avoiding any
> > missed early size initializations while __counted_by() annotations are
> > added to the kernel. Additionally, this also checks for "too large"
> > allocations based on the type size of the counter variable. For example:
> >
> > if (count > type_max(ptr->flex_count))
> > fail...;
> > info->size = struct_size(ptr, flex_member, count);
> > ptr = kmalloc(info->size, gfp);
> > ptr->flex_count = count;
> >
> > becomes (i.e. unchanged from earlier example):
> >
> > kmalloc_flex_sz(ptr, flex_member, count, gfp, &info->size);
>
> As there could be no __builtin_get_counted_by() available, the caller
> still needs to fill the counted-by variable, right? So is it possible
> to just pass in the struct member pointer to fill (the last argument
> "&f->cnt" of the snippet below)?
>
> struct foo {
>      int cnt;
>      struct bar bars[] __counted_by(cnt);
> };
>
> //...
> struct foo *f;
>
> kmalloc_flex_sz(f, bars, 42, gfp, &f->cnt);
I specifically want to avoid this because it makes adding the
counted_by attribute more difficult -- requiring manual auditing of
all allocation sites, even if we switch all the alloc macros. But if
allocation macros are all replaced with a treewide change, it becomes
trivial to add counted_by annotations without missing "too late" counter
assignments. (And note that the "too late" counter assignments are only
a problem for code built with compilers that support counted_by, so
there's no problem when __builtin_get_counted_by() isn't available.)
Right now we have two cases in kernel code:
case 1:
- allocate
- assign counter
- access array
case 2:
- allocate
- access array
- assign counter
When we add a counted_by annotation, all "case 2" code must be found and
refactored into "case 1". This has proven error-prone already, and we're
still pretty early in adding annotations. The reason refactoring is
needed is because when the compiler supports counted_by instrumentation,
at run-time, we get:
case 1:
- allocate
- assign counter
- access array // no problem!
case 2:
- allocate
- access array // trap!
- assign counter
I want to change this to be:
case 1:
- allocate & assign counter
- assign counter
- access array
case 2:
- allocate & assign counter
- access array
- assign counter
Once the kernel reaches a minimum compiler version where counted_by is
universally available, we can remove all the "open coded" counter
assignments.
-Kees
--
Kees Cook
* Re: [PATCH v3] slab: Introduce kmalloc_obj() and family
2024-08-22 23:13 [PATCH v3] slab: Introduce kmalloc_obj() and family Kees Cook
2024-08-23 4:27 ` Przemek Kitszel
@ 2024-08-27 21:32 ` Vlastimil Babka
2024-08-28 0:19 ` Kees Cook
1 sibling, 1 reply; 6+ messages in thread
From: Vlastimil Babka @ 2024-08-27 21:32 UTC (permalink / raw)
To: Kees Cook, Linus Torvalds
Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
Andrew Morton, Roman Gushchin, Hyeonggon Yoo,
Gustavo A. R. Silva, Bill Wendling, Justin Stitt, Jann Horn,
Przemek Kitszel, Marco Elver, linux-mm, Nathan Chancellor,
Nick Desaulniers, linux-kernel, llvm, linux-hardening
+Cc Linus
On 8/23/24 01:13, Kees Cook wrote:
> Introduce type-aware kmalloc-family helpers to replace the common
> idioms for single, array, and flexible object allocations:
>
> ptr = kmalloc(sizeof(*ptr), gfp);
> ptr = kzalloc(sizeof(*ptr), gfp);
> ptr = kmalloc_array(count, sizeof(*ptr), gfp);
> ptr = kcalloc(count, sizeof(*ptr), gfp);
> ptr = kmalloc(struct_size(ptr, flex_member, count), gfp);
>
> These become, respectively:
>
> kmalloc_obj(ptr, gfp);
> kzalloc_obj(ptr, gfp);
> kmalloc_objs(ptr, count, gfp);
> kzalloc_objs(ptr, count, gfp);
> kmalloc_flex(ptr, flex_member, count, gfp);
This is indeed better than the previous version. The hidden assignment to
ptr still seems very counter-intuitive, but if it's the only way to do
those validations, the question is just whether it's worth getting used
to, or not.
> These each return the assigned value of ptr (which may be NULL on
> failure). For cases where the total size of the allocation is needed,
> the kmalloc_obj_sz(), kmalloc_objs_sz(), and kmalloc_flex_sz() family
> of macros can be used. For example:
>
> info->size = struct_size(ptr, flex_member, count);
> ptr = kmalloc(info->size, gfp);
>
> becomes:
>
> kmalloc_flex_sz(ptr, flex_member, count, gfp, &info->size);
>
> Internal introspection of allocated type now becomes possible, allowing
> for future alignment-aware choices and hardening work. For example,
> adding __alignof(*ptr) as an argument to the internal allocators so that
> appropriate/efficient alignment choices can be made, or being able to
> correctly choose per-allocation offset randomization within a bucket
> that does not break alignment requirements.
>
> Introduces __flex_count() for when __builtin_get_counted_by() is added
"Also introduce __flex_counter() ..."?
> by GCC[1] and Clang[2]. The internal use of __flex_count() allows for
> automatically setting the counter member of a struct's flexible array
But if it's a to-be-implemented feature, perhaps it would be too early to
include it here? Were you able to even test that part right now?
> member when it has been annotated with __counted_by(), avoiding any
> missed early size initializations while __counted_by() annotations are
> added to the kernel. Additionally, this also checks for "too large"
> allocations based on the type size of the counter variable. For example:
>
> if (count > type_max(ptr->flex_count))
> fail...;
> info->size = struct_size(ptr, flex_member, count);
> ptr = kmalloc(info->size, gfp);
> ptr->flex_count = count;
>
> becomes (i.e. unchanged from earlier example):
>
> kmalloc_flex_sz(ptr, flex_member, count, gfp, &info->size);
>
> Replacing all existing simple code patterns found via Coccinelle[3]
> shows what could be replaced immediately (saving roughly 1,500 lines):
>
> 7040 files changed, 14128 insertions(+), 15557 deletions(-)
Since that would be feasible to apply only if Linus ran it directly
himself, I'm including him now. Doing it any other way would leave us
semi-converted forever and not bring the full benefits, wouldn't it?
> Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=116016 [1]
> Link: https://github.com/llvm/llvm-project/issues/99774 [2]
> Link: https://github.com/kees/kernel-tools/blob/trunk/coccinelle/examples/kmalloc_obj-assign-size.cocci [3]
> Signed-off-by: Kees Cook <kees@kernel.org>
> ---
> Initial testing looks good. Before I write all the self-tests, I just
> wanted to validate that the new API is reasonable (i.e. it is no longer
> using optional argument counts for choosing the internal API).
>
> v3:
> - Add .rst documentation
> - Add kern-doc
> - Return ptr instead of size by default
> - Add *_sz() variants that provide allocation size output
> - Implement __flex_counter() logic
> v2: https://lore.kernel.org/linux-hardening/20240807235433.work.317-kees@kernel.org/
> v1: https://lore.kernel.org/linux-hardening/20240719192744.work.264-kees@kernel.org/
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Christoph Lameter <cl@linux.com>
> Cc: Pekka Enberg <penberg@kernel.org>
> Cc: David Rientjes <rientjes@google.com>
> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Roman Gushchin <roman.gushchin@linux.dev>
> Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> Cc: Gustavo A. R. Silva <gustavoars@kernel.org>
> Cc: Bill Wendling <morbo@google.com>
> Cc: Justin Stitt <justinstitt@google.com>
> Cc: Jann Horn <jannh@google.com>
> Cc: Przemek Kitszel <przemyslaw.kitszel@intel.com>
> Cc: Marco Elver <elver@google.com>
> Cc: linux-mm@kvack.org
> ---
> Documentation/process/deprecated.rst | 41 +++++++
> include/linux/compiler_types.h | 22 ++++
> include/linux/slab.h | 174 +++++++++++++++++++++++++++
> 3 files changed, 237 insertions(+)
>
> diff --git a/Documentation/process/deprecated.rst b/Documentation/process/deprecated.rst
> index 1f7f3e6c9cda..b22ec088a044 100644
> --- a/Documentation/process/deprecated.rst
> +++ b/Documentation/process/deprecated.rst
> @@ -372,3 +372,44 @@ The helper must be used::
> DECLARE_FLEX_ARRAY(struct type2, two);
> };
> };
> +
> +Open-coded kmalloc assignments
> +------------------------------
> +Performing open-coded kmalloc()-family allocation assignments prevents
> +the kernel (and compiler) from being able to examine the type of the
> +variable being assigned, which limits any related introspection that
> +may help with alignment, wrap-around, or additional hardening. The
> +kmalloc_obj()-family of macros provide this introspection, which can be
> +used for the common code patterns for single, array, and flexible object
> +allocations. For example, these open coded assignments::
> +
> + ptr = kmalloc(sizeof(*ptr), gfp);
> + ptr = kzalloc(sizeof(*ptr), gfp);
> + ptr = kmalloc_array(count, sizeof(*ptr), gfp);
> + ptr = kcalloc(count, sizeof(*ptr), gfp);
> + ptr = kmalloc(struct_size(ptr, flex_member, count), gfp);
> +
> +become, respectively::
> +
> + kmalloc_obj(ptr, gfp);
> + kzalloc_obj(ptr, gfp);
> + kmalloc_objs(ptr, count, gfp);
> + kzalloc_objs(ptr, count, gfp);
> + kmalloc_flex(ptr, flex_member, count, gfp);
> +
> +For the cases where the total size of the allocation is also needed,
> +the kmalloc_obj_sz(), kmalloc_objs_sz(), and kmalloc_flex_sz() family of
> +macros can be used. For example, converting these assignments::
> +
> + total_size = struct_size(ptr, flex_member, count);
> + ptr = kmalloc(total_size, gfp);
> +
> +becomes::
> +
> + kmalloc_flex_sz(ptr, flex_member, count, gfp, &total_size);
> +
> +If `ptr->flex_member` is annotated with __counted_by(), the allocation
> +will automatically fail if `count` is larger than the maximum
> +representable value that can be stored in the counter member associated
> +with `flex_member`. Similarly, the allocation will fail if the total
> +size of the allocation exceeds the maximum value `*total_size` can hold.
> diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h
> index f14c275950b5..b99deae45210 100644
> --- a/include/linux/compiler_types.h
> +++ b/include/linux/compiler_types.h
> @@ -421,6 +421,28 @@ struct ftrace_likely_data {
> #define __member_size(p) __builtin_object_size(p, 1)
> #endif
>
> +#if __has_builtin(__builtin_get_counted_by)
> +/**
> + * __flex_counter - Get pointer to counter member for the given
> + * flexible array, if it was annotated with __counted_by()
> + * @flex: Pointer to flexible array member of an addressable struct instance
> + *
> + * For example, with:
> + *
> + * struct foo {
> + * int counter;
> + * short array[] __counted_by(counter);
> + * } *p;
> + *
> + * __flex_counter(p->array) will resolve to &p->counter.
> + *
> + * If p->array is unannotated, this returns (void *)NULL.
> + */
> +#define __flex_counter(flex) __builtin_get_counted_by(flex)
> +#else
> +#define __flex_counter(flex) ((void *)NULL)
> +#endif
> +
> /*
> * Some versions of gcc do not mark 'asm goto' volatile:
> *
> diff --git a/include/linux/slab.h b/include/linux/slab.h
> index eb2bf4629157..c37606b9e248 100644
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -686,6 +686,180 @@ static __always_inline __alloc_size(1) void *kmalloc_noprof(size_t size, gfp_t f
> }
> #define kmalloc(...) alloc_hooks(kmalloc_noprof(__VA_ARGS__))
>
> +#define __alloc_objs(ALLOC, P, COUNT, FLAGS, SIZE) \
> +({ \
> + size_t __obj_size = size_mul(sizeof(*P), COUNT); \
> + const typeof(_Generic(SIZE, \
> + void *: (size_t *)NULL, \
> + default: SIZE)) __size_ptr = (SIZE); \
> + typeof(P) __obj_ptr = NULL; \
> + /* Does the total size fit in the *SIZE variable? */ \
> + if (!__size_ptr || __obj_size <= type_max(*__size_ptr)) \
> + __obj_ptr = ALLOC(__obj_size, FLAGS); \
> + if (!__obj_ptr) \
> + __obj_size = 0; \
> + if (__size_ptr) \
> + *__size_ptr = __obj_size; \
> + (P) = __obj_ptr; \
> +})
> +
> +#define __alloc_flex(ALLOC, P, FAM, COUNT, FLAGS, SIZE) \
> +({ \
> + size_t __count = (COUNT); \
> + size_t __obj_size = struct_size(P, FAM, __count); \
> + const typeof(_Generic(SIZE, \
> + void *: (size_t *)NULL, \
> + default: SIZE)) __size_ptr = (SIZE); \
> + typeof(P) __obj_ptr = NULL; \
> + /* Just query the counter type for type_max checking. */ \
> + typeof(_Generic(__flex_counter(__obj_ptr->FAM), \
> + void *: (size_t *)NULL, \
> + default: __flex_counter(__obj_ptr->FAM))) \
> + __counter_type_ptr = NULL; \
> + /* Does the count fit in the __counted_by counter member? */ \
> + if ((__count <= type_max(*__counter_type_ptr)) && \
> + /* Does the total size fit in the *SIZE variable? */ \
> + (!__size_ptr || __obj_size <= type_max(*__size_ptr))) \
> + __obj_ptr = ALLOC(__obj_size, FLAGS); \
> + if (__obj_ptr) { \
> + /* __obj_ptr now allocated so get real counter ptr. */ \
> + typeof(_Generic(__flex_counter(__obj_ptr->FAM), \
> + void *: (size_t *)NULL, \
> + default: __flex_counter(__obj_ptr->FAM))) \
> + __counter_ptr = __flex_counter(__obj_ptr->FAM); \
> + if (__counter_ptr) \
> + *__counter_ptr = __count; \
> + } else { \
> + __obj_size = 0; \
> + } \
> + if (__size_ptr) \
> + *__size_ptr = __obj_size; \
> + (P) = __obj_ptr; \
> +})
> +
> +/**
> + * kmalloc_obj - Allocate a single instance of the given structure
> + * @P: Pointer to hold allocation of the structure
> + * @FLAGS: GFP flags for the allocation
> + *
> + * Returns the newly allocated value of @P on success, NULL on failure.
> + * @P is assigned the result, either way.
> + */
> +#define kmalloc_obj(P, FLAGS) \
> + __alloc_objs(kmalloc, P, 1, FLAGS, NULL)
> +/**
> + * kmalloc_obj_sz - Allocate a single instance of the given structure and
> + * store total size
> + * @P: Pointer to hold allocation of the structure
> + * @FLAGS: GFP flags for the allocation
> + * @SIZE: Pointer to variable to hold the total allocation size
> + *
> + * Returns the newly allocated value of @P on success, NULL on failure.
> + * @P is assigned the result, either way. If @SIZE is non-NULL, the
> + * allocation will immediately fail if the total allocation size is larger
> + * than what the type of *@SIZE can represent.
> + */
> +#define kmalloc_obj_sz(P, FLAGS, SIZE) \
> + __alloc_objs(kmalloc, P, 1, FLAGS, SIZE)
> +/**
> + * kmalloc_objs - Allocate an array of the given structure
> + * @P: Pointer to hold allocation of the structure array
> + * @COUNT: How many elements in the array
> + * @FLAGS: GFP flags for the allocation
> + *
> + * Returns the newly allocated value of @P on success, NULL on failure.
> + * @P is assigned the result, either way.
> + */
> +#define kmalloc_objs(P, COUNT, FLAGS) \
> + __alloc_objs(kmalloc, P, COUNT, FLAGS, NULL)
> +/**
> + * kmalloc_objs_sz - Allocate an array of the given structure and store
> + * total size
> + * @P: Pointer to hold allocation of the structure array
> + * @COUNT: How many elements in the array
> + * @FLAGS: GFP flags for the allocation
> + * @SIZE: Pointer to variable to hold the total allocation size
> + *
> + * Returns the newly allocated value of @P on success, NULL on failure.
> + * @P is assigned the result, either way. If @SIZE is non-NULL, the
> + * allocation will immediately fail if the total allocation size is larger
> + * than what the type of *@SIZE can represent.
> + */
> +#define kmalloc_objs_sz(P, COUNT, FLAGS, SIZE) \
> + __alloc_objs(kmalloc, P, COUNT, FLAGS, SIZE)
> +/**
> + * kmalloc_flex - Allocate a single instance of the given flexible structure
> + * @P: Pointer to hold allocation of the structure
> + * @FAM: The name of the flexible array member of the structure
> + * @COUNT: How many flexible array member elements are desired
> + * @FLAGS: GFP flags for the allocation
> + *
> + * Returns the newly allocated value of @P on success, NULL on failure.
> + * @P is assigned the result, either way. If @FAM has been annotated with
> + * __counted_by(), the allocation will immediately fail if @COUNT is larger
> + * than what the type of the struct's counter variable can represent.
> + */
> +#define kmalloc_flex(P, FAM, COUNT, FLAGS) \
> + __alloc_flex(kmalloc, P, FAM, COUNT, FLAGS, NULL)
> +
> +/**
> + * kmalloc_flex_sz - Allocate a single instance of the given flexible
> + * structure and store total size
> + * @P: Pointer to hold allocation of the structure
> + * @FAM: The name of the flexible array member of the structure
> + * @COUNT: How many flexible array member elements are desired
> + * @FLAGS: GFP flags for the allocation
> + * @SIZE: Pointer to variable to hold the total allocation size
> + *
> + * Returns the newly allocated value of @P on success, NULL on failure.
> + * @P is assigned the result, either way. If @FAM has been annotated with
> + * __counted_by(), the allocation will immediately fail if @COUNT is larger
> + * than what the type of the struct's counter variable can represent. If
> + * @SIZE is non-NULL, the allocation will immediately fail if the total
> + * allocation size is larger than what the type of *@SIZE can represent.
> + */
> +#define kmalloc_flex_sz(P, FAM, COUNT, FLAGS, SIZE) \
> + __alloc_flex(kmalloc, P, FAM, COUNT, FLAGS, SIZE)
> +
> +#define kzalloc_obj(P, FLAGS) \
> + __alloc_objs(kzalloc, P, 1, FLAGS, NULL)
> +#define kzalloc_obj_sz(P, FLAGS, SIZE) \
> + __alloc_objs(kzalloc, P, 1, FLAGS, SIZE)
> +#define kzalloc_objs(P, COUNT, FLAGS) \
> + __alloc_objs(kzalloc, P, COUNT, FLAGS, NULL)
> +#define kzalloc_objs_sz(P, COUNT, FLAGS, SIZE) \
> + __alloc_objs(kzalloc, P, COUNT, FLAGS, SIZE)
> +#define kzalloc_flex(P, FAM, COUNT, FLAGS) \
> + __alloc_flex(kzalloc, P, FAM, COUNT, FLAGS, NULL)
> +#define kzalloc_flex_sz(P, FAM, COUNT, FLAGS, SIZE) \
> + __alloc_flex(kzalloc, P, FAM, COUNT, FLAGS, SIZE)
> +
> +#define kvmalloc_obj(P, FLAGS) \
> + __alloc_objs(kvmalloc, P, 1, FLAGS, NULL)
Wonder if there is really a single struct (not array) with no flex array
that could need kvmalloc? :)
> +#define kvmalloc_obj_sz(P, FLAGS, SIZE) \
> + __alloc_objs(kvmalloc, P, 1, FLAGS, SIZE)
> +#define kvmalloc_objs(P, COUNT, FLAGS) \
> + __alloc_objs(kvmalloc, P, COUNT, FLAGS, NULL)
> +#define kvmalloc_objs_sz(P, COUNT, FLAGS, SIZE) \
> + __alloc_objs(kvmalloc, P, COUNT, FLAGS, SIZE)
> +#define kvmalloc_flex(P, FAM, COUNT, FLAGS) \
> + __alloc_flex(kvmalloc, P, FAM, COUNT, FLAGS, NULL)
> +#define kvmalloc_flex_sz(P, FAM, COUNT, FLAGS, SIZE) \
> + __alloc_flex(kvmalloc, P, FAM, COUNT, FLAGS, SIZE)
> +
> +#define kvzalloc_obj(P, FLAGS) \
> + __alloc_objs(kvzalloc, P, 1, FLAGS, NULL)
> +#define kvzalloc_obj_sz(P, FLAGS, SIZE) \
> + __alloc_objs(kvzalloc, P, 1, FLAGS, SIZE)
> +#define kvzalloc_objs(P, COUNT, FLAGS) \
> + __alloc_objs(kvzalloc, P, COUNT, FLAGS, NULL)
> +#define kvzalloc_objs_sz(P, COUNT, FLAGS, SIZE) \
> + __alloc_objs(kvzalloc, P, COUNT, FLAGS, SIZE)
> +#define kvzalloc_flex(P, FAM, COUNT, FLAGS) \
> + __alloc_flex(kvzalloc, P, FAM, COUNT, FLAGS, NULL)
> +#define kvzalloc_flex_sz(P, FAM, COUNT, FLAGS, SIZE) \
> + __alloc_flex(kvzalloc, P, FAM, COUNT, FLAGS, SIZE)
> +
> #define kmem_buckets_alloc(_b, _size, _flags) \
> alloc_hooks(__kmalloc_node_noprof(PASS_BUCKET_PARAMS(_size, _b), _flags, NUMA_NO_NODE))
>
^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: [PATCH v3] slab: Introduce kmalloc_obj() and family
2024-08-27 21:32 ` Vlastimil Babka
@ 2024-08-28 0:19 ` Kees Cook
2024-08-28 15:37 ` Christoph Lameter (Ampere)
0 siblings, 1 reply; 6+ messages in thread
From: Kees Cook @ 2024-08-28 0:19 UTC (permalink / raw)
To: Vlastimil Babka
Cc: Linus Torvalds, Christoph Lameter, Pekka Enberg, David Rientjes,
Joonsoo Kim, Andrew Morton, Roman Gushchin, Hyeonggon Yoo,
Gustavo A . R . Silva, Bill Wendling, Justin Stitt, Jann Horn,
Przemek Kitszel, Marco Elver, linux-mm, Nathan Chancellor,
Nick Desaulniers, linux-kernel, llvm, linux-hardening
On Tue, Aug 27, 2024 at 11:32:14PM +0200, Vlastimil Babka wrote:
> +Cc Linus
>
> On 8/23/24 01:13, Kees Cook wrote:
> > Introduce type-aware kmalloc-family helpers to replace the common
> > idioms for single, array, and flexible object allocations:
> >
> > ptr = kmalloc(sizeof(*ptr), gfp);
> > ptr = kzalloc(sizeof(*ptr), gfp);
> > ptr = kmalloc_array(count, sizeof(*ptr), gfp);
> > ptr = kcalloc(count, sizeof(*ptr), gfp);
> > ptr = kmalloc(struct_size(ptr, flex_member, count), gfp);
> >
> > These become, respectively:
> >
> > kmalloc_obj(ptr, gfp);
> > kzalloc_obj(ptr, gfp);
> > kmalloc_objs(ptr, count, gfp);
> > kzalloc_objs(ptr, count, gfp);
> > kmalloc_flex(ptr, flex_member, count, gfp);
>
> This is indeed better than the previous version. The hidden assignment to
> ptr seems still very counter-intuitive, but if it's the only way to do those
> validations, the question is then just whether it's worth the getting used
> to it, or not.
We could make the syntax require "&ptr"?
As for alternatives, one thing I investigated for a while that made
several compiler people unhappy was to introduce a builtin named
something like __builtin_lvalue() which could be used on the RHS of an
assignment to discover the lvalue in some way. Then we could, for
example, add alignment discovery like so:
#define kmalloc(_size, _gfp) \
__kmalloc(_size, __alignof(typeof(__builtin_lvalue())), _gfp)
or do the FAM struct allocations:
#define kmalloc_flex(_member, _count, _gfp) \
	__kmalloc(sizeof(*__builtin_lvalue()) + \
		  sizeof(*__builtin_lvalue()->_member) * (_count), _gfp)
Compiler folks seem very unhappy with this, though, and I can see that
it creates problems for constructs like:
	return kmalloc(...)
Of course, the proposed macros have the same problem, and both
approaches need a temporary variable to deal with it.
So, really it's a question of "how best to introspect the lvalue?"
> [...]
> > by GCC[1] and Clang[2]. The internal use of __flex_count() allows for
> > automatically setting the counter member of a struct's flexible array
>
> But if it's a to-be-implemented feature, perhaps it would be too early to
> include it here? Were you able to even test that part right now?
There are RFC patches for both GCC and Clang that I tested against.
However, yes, it is still pretty early. I just wanted to show that it
can work, etc. (i.e. not propose a macro with no "real" benefit over the
existing assignments).
> [...]
> > Replacing all existing simple code patterns found via Coccinelle[3]
> > shows what could be replaced immediately (saving roughly 1,500 lines):
> >
> > 7040 files changed, 14128 insertions(+), 15557 deletions(-)
>
> Since that could be feasible to apply only if Linus ran that directly
> himself, including him now. Because doing it any other way would leave us
> semi-converted forever and not bring the full benefits?
Right -- I'd want to do a mass conversion and follow it up with any
remaining ones. There are a lot in the style of "return k*alloc(...)"
for example.
> [...]
> > +#define kvmalloc_obj(P, FLAGS) \
> > + __alloc_objs(kvmalloc, P, 1, FLAGS, NULL)
>
> Wonder if there is really a single struct (not array) with no flex array
> that could need kvmalloc? :)
Ah, yes, good point. I was going for "full" macro coverage. :P
Thanks for looking at this!
-Kees
--
Kees Cook
* Re: [PATCH v3] slab: Introduce kmalloc_obj() and family
2024-08-28 0:19 ` Kees Cook
@ 2024-08-28 15:37 ` Christoph Lameter (Ampere)
0 siblings, 0 replies; 6+ messages in thread
From: Christoph Lameter (Ampere) @ 2024-08-28 15:37 UTC (permalink / raw)
To: Kees Cook
Cc: Vlastimil Babka, Linus Torvalds, Pekka Enberg, David Rientjes,
Joonsoo Kim, Andrew Morton, Roman Gushchin, Hyeonggon Yoo,
Gustavo A . R . Silva, Bill Wendling, Justin Stitt, Jann Horn,
Przemek Kitszel, Marco Elver, linux-mm, Nathan Chancellor,
Nick Desaulniers, linux-kernel, llvm, linux-hardening
On Tue, 27 Aug 2024, Kees Cook wrote:
> > Since that could be feasible to apply only if Linus ran that directly
> > himself, including him now. Because doing it any other way would leave us
> > semi-converted forever and not bring the full benefits?
>
> Right -- I'd want to do a mass conversion and follow it up with any
> remaining ones. There are a lot in the style of "return k*alloc(...)"
> for example.
I believe Andrew has dealt with these issues in the past?
end of thread (newest: 2024-10-04 17:23 UTC)
Thread overview: 6+ messages
2024-08-22 23:13 [PATCH v3] slab: Introduce kmalloc_obj() and family Kees Cook
2024-08-23 4:27 ` Przemek Kitszel
2024-10-04 17:23 ` Kees Cook
2024-08-27 21:32 ` Vlastimil Babka
2024-08-28 0:19 ` Kees Cook
2024-08-28 15:37 ` Christoph Lameter (Ampere)