linux-mm.kvack.org archive mirror
* [PATCH v11 0/4] support large align and nid in Rust allocators
@ 2025-07-07 16:47 Vitaly Wool
  2025-07-07 16:48 ` [PATCH v11 1/4] mm/vmalloc: allow to set node and align in vrealloc Vitaly Wool
                   ` (4 more replies)
  0 siblings, 5 replies; 25+ messages in thread
From: Vitaly Wool @ 2025-07-07 16:47 UTC (permalink / raw)
  To: linux-mm
  Cc: akpm, linux-kernel, Uladzislau Rezki, Danilo Krummrich,
	Alice Ryhl, Vlastimil Babka, rust-for-linux, Vitaly Wool

This patch series provides the ability for Rust allocators to set
the NUMA node and large alignments for their allocations.

Changelog:
v2 -> v3:
* fixed the build breakage for non-MMU configs
v3 -> v4:
* added NUMA node support for k[v]realloc (patch #2)
* removed extra logic in Rust helpers
* patch for Rust allocators split into 2 (align: patch #3 and
  NUMA ids: patch #4)
v4 -> v5:
* reworked NUMA node support for k[v]realloc for all 3 <alloc>_node
  functions to have the same signature
* all 3 <alloc>_node slab/vmalloc functions now support alignment
  specification
* Rust helpers are extended with new functions, the old ones are left
  intact
* Rust support for NUMA nodes comes first now (as patch #3)
v5 -> v6:
* added <alloc>_node_align functions to keep the existing interfaces
  intact
* clearer separation for Rust support of NUMA ids and large alignments
v6 -> v7:
* NUMA identifier as a new Rust type (NumaNode)
* better documentation for changed and new functions and constants
v7 -> v8:
* removed NumaError
* small cleanups per reviewers' comments
v8 -> v9:
* realloc functions can now reallocate memory for a different NUMA
  node
* better comments/explanations in the Rust part
v9 -> v10:
* refined behavior when memory is being reallocated for a different
  NUMA node, comments added
* cleanups in the Rust part, rustfmt ran
* typos corrected
v10 -> v11:
* added documentation for the NO_NODE constant
* added node parameter to Allocator's alloc/realloc instead of adding
  separate alloc_node and realloc_node functions; modified the users of
  alloc/realloc accordingly
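The v10 -> v11 interface change above (threading a node id through alloc/realloc instead of adding separate alloc_node variants) can be sketched in userspace Rust as follows. This is a simplified illustration: `NumaNode`, the trait shape, and the mock allocator are stand-ins, not the kernel definitions.

```rust
// Sketch of the extended allocator interface: every allocation takes a
// `nid` parameter, with NO_NODE meaning "no node preference", instead of
// having a parallel `alloc_node` entry point.

#[derive(Clone, Copy, PartialEq, Debug)]
struct NumaNode(i32);

impl NumaNode {
    // The kernel's NUMA_NO_NODE is -1.
    const NO_NODE: NumaNode = NumaNode(-1);
}

trait Allocator {
    // `nid` is part of the one and only allocation entry point.
    fn alloc(size: usize, nid: NumaNode) -> Vec<u8>;
}

struct Mock;
impl Allocator for Mock {
    fn alloc(size: usize, _nid: NumaNode) -> Vec<u8> {
        // The node hint is ignored in this userspace mock.
        vec![0u8; size]
    }
}

fn main() {
    // Callers that do not care about locality pass NO_NODE explicitly.
    let buf = Mock::alloc(16, NumaNode::NO_NODE);
    assert_eq!(buf.len(), 16);
}
```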

Signed-off-by: Vitaly Wool <vitaly.wool@konsulko.se>


^ permalink raw reply	[flat|nested] 25+ messages in thread

* [PATCH v11 1/4] mm/vmalloc: allow to set node and align in vrealloc
  2025-07-07 16:47 [PATCH v11 0/4] support large align and nid in Rust allocators Vitaly Wool
@ 2025-07-07 16:48 ` Vitaly Wool
  2025-07-08 12:12   ` Vlastimil Babka
  2025-07-07 16:49 ` [PATCH v11 2/4] mm/slub: allow to set node and align in k[v]realloc Vitaly Wool
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 25+ messages in thread
From: Vitaly Wool @ 2025-07-07 16:48 UTC (permalink / raw)
  To: linux-mm
  Cc: akpm, linux-kernel, Uladzislau Rezki, Danilo Krummrich,
	Alice Ryhl, Vlastimil Babka, rust-for-linux, Vitaly Wool

Reimplement vrealloc() so that a user can set the node and alignment
when needed. Rename the function to vrealloc_node_align() to better
match what it now does, and introduce macros for vrealloc() and
friends for backward compatibility.

With that change we also provide the ability for the Rust part of
the kernel to set node and alignment in its allocations.

Signed-off-by: Vitaly Wool <vitaly.wool@konsulko.se>
Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
 include/linux/vmalloc.h | 12 +++++++++---
 mm/nommu.c              |  3 ++-
 mm/vmalloc.c            | 28 +++++++++++++++++++++++-----
 3 files changed, 34 insertions(+), 9 deletions(-)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index fdc9aeb74a44..68791f7cb3ba 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -197,9 +197,15 @@ extern void *__vcalloc_noprof(size_t n, size_t size, gfp_t flags) __alloc_size(1
 extern void *vcalloc_noprof(size_t n, size_t size) __alloc_size(1, 2);
 #define vcalloc(...)		alloc_hooks(vcalloc_noprof(__VA_ARGS__))
 
-void * __must_check vrealloc_noprof(const void *p, size_t size, gfp_t flags)
-		__realloc_size(2);
-#define vrealloc(...)		alloc_hooks(vrealloc_noprof(__VA_ARGS__))
+void *__must_check vrealloc_node_align_noprof(const void *p, size_t size,
+		unsigned long align, gfp_t flags, int nid) __realloc_size(2);
+#define vrealloc_node_noprof(_p, _s, _f, _nid)	\
+	vrealloc_node_align_noprof(_p, _s, 1, _f, _nid)
+#define vrealloc_noprof(_p, _s, _f)		\
+	vrealloc_node_align_noprof(_p, _s, 1, _f, NUMA_NO_NODE)
+#define vrealloc_node_align(...)		alloc_hooks(vrealloc_node_align_noprof(__VA_ARGS__))
+#define vrealloc_node(...)			alloc_hooks(vrealloc_node_noprof(__VA_ARGS__))
+#define vrealloc(...)				alloc_hooks(vrealloc_noprof(__VA_ARGS__))
 
 extern void vfree(const void *addr);
 extern void vfree_atomic(const void *addr);
diff --git a/mm/nommu.c b/mm/nommu.c
index 87e1acab0d64..8359b2025b9f 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -119,7 +119,8 @@ void *__vmalloc_noprof(unsigned long size, gfp_t gfp_mask)
 }
 EXPORT_SYMBOL(__vmalloc_noprof);
 
-void *vrealloc_noprof(const void *p, size_t size, gfp_t flags)
+void *vrealloc_node_align_noprof(const void *p, size_t size, unsigned long align,
+				 gfp_t flags, int node)
 {
 	return krealloc_noprof(p, size, (flags | __GFP_COMP) & ~__GFP_HIGHMEM);
 }
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 6dbcdceecae1..412664656870 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -4089,13 +4089,22 @@ void *vzalloc_node_noprof(unsigned long size, int node)
 EXPORT_SYMBOL(vzalloc_node_noprof);
 
 /**
- * vrealloc - reallocate virtually contiguous memory; contents remain unchanged
+ * vrealloc_node_align_noprof - reallocate virtually contiguous memory; contents
+ * remain unchanged
  * @p: object to reallocate memory for
  * @size: the size to reallocate
+ * @align: requested alignment
  * @flags: the flags for the page level allocator
+ * @nid: node number of the target node
+ *
+ * If @p is %NULL, vrealloc_XXX() behaves exactly like vmalloc(). If @size is
+ * 0 and @p is not a %NULL pointer, the object pointed to is freed.
  *
- * If @p is %NULL, vrealloc() behaves exactly like vmalloc(). If @size is 0 and
- * @p is not a %NULL pointer, the object pointed to is freed.
+ * If @nid is not NUMA_NO_NODE, this function will try to allocate memory on
+ * the given node. If reallocation is not necessary (e.g. the new size is less
+ * than the currently allocated size), the current allocation will be preserved
+ * unless __GFP_THISNODE is set. In the latter case a new allocation on the
+ * requested node will be attempted.
  *
  * If __GFP_ZERO logic is requested, callers must ensure that, starting with the
  * initial memory allocation, every subsequent call to this API for the same
@@ -4111,7 +4120,8 @@ EXPORT_SYMBOL(vzalloc_node_noprof);
  * Return: pointer to the allocated memory; %NULL if @size is zero or in case of
  *         failure
  */
-void *vrealloc_noprof(const void *p, size_t size, gfp_t flags)
+void *vrealloc_node_align_noprof(const void *p, size_t size, unsigned long align,
+				 gfp_t flags, int nid)
 {
 	struct vm_struct *vm = NULL;
 	size_t alloced_size = 0;
@@ -4135,6 +4145,12 @@ void *vrealloc_noprof(const void *p, size_t size, gfp_t flags)
 		if (WARN(alloced_size < old_size,
 			 "vrealloc() has mismatched area vs requested sizes (%p)\n", p))
 			return NULL;
+		if (WARN(!IS_ALIGNED((unsigned long)p, align),
+			 "will not reallocate with a bigger alignment (0x%lx)\n", align))
+			return NULL;
+		if (unlikely(flags & __GFP_THISNODE) && nid != NUMA_NO_NODE &&
+			     nid != page_to_nid(vmalloc_to_page(p)))
+			goto need_realloc;
 	}
 
 	/*
@@ -4165,8 +4181,10 @@ void *vrealloc_noprof(const void *p, size_t size, gfp_t flags)
 		return (void *)p;
 	}
 
+need_realloc:
 	/* TODO: Grow the vm_area, i.e. allocate and map additional pages. */
-	n = __vmalloc_noprof(size, flags);
+	n = __vmalloc_node_noprof(size, align, flags, nid, __builtin_return_address(0));
+
 	if (!n)
 		return NULL;
 
-- 
2.39.2




* [PATCH v11 2/4] mm/slub: allow to set node and align in k[v]realloc
  2025-07-07 16:47 [PATCH v11 0/4] support large align and nid in Rust allocators Vitaly Wool
  2025-07-07 16:48 ` [PATCH v11 1/4] mm/vmalloc: allow to set node and align in vrealloc Vitaly Wool
@ 2025-07-07 16:49 ` Vitaly Wool
  2025-07-08 12:52   ` Vlastimil Babka
  2025-07-07 16:49 ` [PATCH v11 3/4] rust: add support for NUMA ids in allocations Vitaly Wool
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 25+ messages in thread
From: Vitaly Wool @ 2025-07-07 16:49 UTC (permalink / raw)
  To: linux-mm
  Cc: akpm, linux-kernel, Uladzislau Rezki, Danilo Krummrich,
	Alice Ryhl, Vlastimil Babka, rust-for-linux, Vitaly Wool

Reimplement k[v]realloc_node() so that a user can set the node and
alignment when needed. To do that while retaining maximal backward
compatibility, add k[v]realloc_node_align() functions and redefine
the rest of the API using these new ones.

With this change we also provide the ability for the Rust part of
the kernel to set the node and alignment in its k[v]xxx
[re]allocations.

Signed-off-by: Vitaly Wool <vitaly.wool@konsulko.se>
---
 include/linux/slab.h | 40 ++++++++++++++++++---------
 mm/slub.c            | 64 ++++++++++++++++++++++++++++++--------------
 2 files changed, 71 insertions(+), 33 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index d5a8ab98035c..13abcf4ada22 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -465,9 +465,15 @@ int kmem_cache_shrink(struct kmem_cache *s);
 /*
  * Common kmalloc functions provided by all allocators
  */
-void * __must_check krealloc_noprof(const void *objp, size_t new_size,
-				    gfp_t flags) __realloc_size(2);
-#define krealloc(...)				alloc_hooks(krealloc_noprof(__VA_ARGS__))
+void * __must_check krealloc_node_align_noprof(const void *objp, size_t new_size,
+					       unsigned long align,
+					       gfp_t flags, int nid) __realloc_size(2);
+#define krealloc_node_noprof(_p, _s, _f, _n) \
+	krealloc_node_align_noprof(_p, _s, 1, _f, _n)
+#define krealloc_noprof(...)		krealloc_node_noprof(__VA_ARGS__, NUMA_NO_NODE)
+#define krealloc_node_align(...)	alloc_hooks(krealloc_node_align_noprof(__VA_ARGS__))
+#define krealloc_node(...)		alloc_hooks(krealloc_node_noprof(__VA_ARGS__))
+#define krealloc(...)			alloc_hooks(krealloc_noprof(__VA_ARGS__))
 
 void kfree(const void *objp);
 void kfree_sensitive(const void *objp);
@@ -1041,18 +1047,23 @@ static inline __alloc_size(1) void *kzalloc_noprof(size_t size, gfp_t flags)
 #define kzalloc(...)				alloc_hooks(kzalloc_noprof(__VA_ARGS__))
 #define kzalloc_node(_size, _flags, _node)	kmalloc_node(_size, (_flags)|__GFP_ZERO, _node)
 
-void *__kvmalloc_node_noprof(DECL_BUCKET_PARAMS(size, b), gfp_t flags, int node) __alloc_size(1);
-#define kvmalloc_node_noprof(size, flags, node)	\
-	__kvmalloc_node_noprof(PASS_BUCKET_PARAMS(size, NULL), flags, node)
+void *__kvmalloc_node_noprof(DECL_BUCKET_PARAMS(size, b), unsigned long align,
+			     gfp_t flags, int node) __alloc_size(1);
+#define kvmalloc_node_align_noprof(_size, _align, _flags, _node)	\
+	__kvmalloc_node_noprof(PASS_BUCKET_PARAMS(_size, NULL), _align, _flags, _node)
+#define kvmalloc_node_noprof(_size, _flags, _node)	\
+	kvmalloc_node_align_noprof(_size, 1, _flags, _node)
+#define kvmalloc_node_align(...)		\
+	alloc_hooks(kvmalloc_node_align_noprof(__VA_ARGS__))
 #define kvmalloc_node(...)			alloc_hooks(kvmalloc_node_noprof(__VA_ARGS__))
 
-#define kvmalloc(_size, _flags)			kvmalloc_node(_size, _flags, NUMA_NO_NODE)
-#define kvmalloc_noprof(_size, _flags)		kvmalloc_node_noprof(_size, _flags, NUMA_NO_NODE)
+#define kvmalloc_noprof(...)			kvmalloc_node_noprof(__VA_ARGS__, NUMA_NO_NODE)
+#define kvmalloc(...)				alloc_hooks(kvmalloc_noprof(__VA_ARGS__))
 #define kvzalloc(_size, _flags)			kvmalloc(_size, (_flags)|__GFP_ZERO)
 
-#define kvzalloc_node(_size, _flags, _node)	kvmalloc_node(_size, (_flags)|__GFP_ZERO, _node)
+#define kvzalloc_node(_s, _f, _n)		kvmalloc_node(_s, (_f)|__GFP_ZERO, _n)
 #define kmem_buckets_valloc(_b, _size, _flags)	\
-	alloc_hooks(__kvmalloc_node_noprof(PASS_BUCKET_PARAMS(_size, _b), _flags, NUMA_NO_NODE))
+	alloc_hooks(__kvmalloc_node_noprof(PASS_BUCKET_PARAMS(_size, _b), 1, _flags, NUMA_NO_NODE))
 
 static inline __alloc_size(1, 2) void *
 kvmalloc_array_node_noprof(size_t n, size_t size, gfp_t flags, int node)
@@ -1068,13 +1079,16 @@ kvmalloc_array_node_noprof(size_t n, size_t size, gfp_t flags, int node)
 #define kvmalloc_array_noprof(...)		kvmalloc_array_node_noprof(__VA_ARGS__, NUMA_NO_NODE)
 #define kvcalloc_node_noprof(_n,_s,_f,_node)	kvmalloc_array_node_noprof(_n,_s,(_f)|__GFP_ZERO,_node)
 #define kvcalloc_noprof(...)			kvcalloc_node_noprof(__VA_ARGS__, NUMA_NO_NODE)
-
 #define kvmalloc_array(...)			alloc_hooks(kvmalloc_array_noprof(__VA_ARGS__))
 #define kvcalloc_node(...)			alloc_hooks(kvcalloc_node_noprof(__VA_ARGS__))
 #define kvcalloc(...)				alloc_hooks(kvcalloc_noprof(__VA_ARGS__))
 
-void *kvrealloc_noprof(const void *p, size_t size, gfp_t flags)
-		__realloc_size(2);
+void *kvrealloc_node_align_noprof(const void *p, size_t size, unsigned long align,
+				  gfp_t flags, int nid) __realloc_size(2);
+#define kvrealloc_node_align(...)		kvrealloc_node_align_noprof(__VA_ARGS__)
+#define kvrealloc_node_noprof(_p, _s, _f, _n)	kvrealloc_node_align_noprof(_p, _s, 1, _f, _n)
+#define kvrealloc_node(...)			alloc_hooks(kvrealloc_node_noprof(__VA_ARGS__))
+#define kvrealloc_noprof(...)			kvrealloc_node_noprof(__VA_ARGS__, NUMA_NO_NODE)
 #define kvrealloc(...)				alloc_hooks(kvrealloc_noprof(__VA_ARGS__))
 
 extern void kvfree(const void *addr);
diff --git a/mm/slub.c b/mm/slub.c
index c4b64821e680..881244c357dd 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4845,7 +4845,7 @@ void kfree(const void *object)
 EXPORT_SYMBOL(kfree);
 
 static __always_inline __realloc_size(2) void *
-__do_krealloc(const void *p, size_t new_size, gfp_t flags)
+__do_krealloc(const void *p, size_t new_size, unsigned long align, gfp_t flags, int nid)
 {
 	void *ret;
 	size_t ks = 0;
@@ -4859,6 +4859,20 @@ __do_krealloc(const void *p, size_t new_size, gfp_t flags)
 	if (!kasan_check_byte(p))
 		return NULL;
 
+	/* refuse to proceed if alignment is bigger than what kmalloc() provides */
+	if (!IS_ALIGNED((unsigned long)p, align) || new_size < align)
+		return NULL;
+
+	/*
+	 * If reallocation is not necessary (e.g. the new size is less
+	 * than the currently allocated size), the current allocation will be
+	 * preserved unless __GFP_THISNODE is set. In the latter case a new
+	 * allocation on the requested node will be attempted.
+	 */
+	if (unlikely(flags & __GFP_THISNODE) && nid != NUMA_NO_NODE &&
+		     nid != page_to_nid(virt_to_page(p)))
+		goto alloc_new;
+
 	if (is_kfence_address(p)) {
 		ks = orig_size = kfence_ksize(p);
 	} else {
@@ -4903,7 +4917,7 @@ __do_krealloc(const void *p, size_t new_size, gfp_t flags)
 	return (void *)p;
 
 alloc_new:
-	ret = kmalloc_node_track_caller_noprof(new_size, flags, NUMA_NO_NODE, _RET_IP_);
+	ret = kmalloc_node_track_caller_noprof(new_size, flags, nid, _RET_IP_);
 	if (ret && p) {
 		/* Disable KASAN checks as the object's redzone is accessed. */
 		kasan_disable_current();
@@ -4915,10 +4929,12 @@ __do_krealloc(const void *p, size_t new_size, gfp_t flags)
 }
 
 /**
- * krealloc - reallocate memory. The contents will remain unchanged.
+ * krealloc_node_align - reallocate memory. The contents will remain unchanged.
  * @p: object to reallocate memory for.
  * @new_size: how many bytes of memory are required.
+ * @align: desired alignment.
  * @flags: the type of memory to allocate.
+ * @nid: NUMA node or NUMA_NO_NODE
  *
  * If @p is %NULL, krealloc() behaves exactly like kmalloc().  If @new_size
  * is 0 and @p is not a %NULL pointer, the object pointed to is freed.
@@ -4947,7 +4963,8 @@ __do_krealloc(const void *p, size_t new_size, gfp_t flags)
  *
  * Return: pointer to the allocated memory or %NULL in case of error
  */
-void *krealloc_noprof(const void *p, size_t new_size, gfp_t flags)
+void *krealloc_node_align_noprof(const void *p, size_t new_size, unsigned long align,
+				 gfp_t flags, int nid)
 {
 	void *ret;
 
@@ -4956,13 +4973,13 @@ void *krealloc_noprof(const void *p, size_t new_size, gfp_t flags)
 		return ZERO_SIZE_PTR;
 	}
 
-	ret = __do_krealloc(p, new_size, flags);
+	ret = __do_krealloc(p, new_size, align, flags, nid);
 	if (ret && kasan_reset_tag(p) != kasan_reset_tag(ret))
 		kfree(p);
 
 	return ret;
 }
-EXPORT_SYMBOL(krealloc_noprof);
+EXPORT_SYMBOL(krealloc_node_align_noprof);
 
 static gfp_t kmalloc_gfp_adjust(gfp_t flags, size_t size)
 {
@@ -4993,6 +5010,7 @@ static gfp_t kmalloc_gfp_adjust(gfp_t flags, size_t size)
  * failure, fall back to non-contiguous (vmalloc) allocation.
  * @size: size of the request.
  * @b: which set of kmalloc buckets to allocate from.
+ * @align: desired alignment.
  * @flags: gfp mask for the allocation - must be compatible (superset) with GFP_KERNEL.
  * @node: numa node to allocate from
  *
@@ -5005,19 +5023,22 @@ static gfp_t kmalloc_gfp_adjust(gfp_t flags, size_t size)
  *
  * Return: pointer to the allocated memory of %NULL in case of failure
  */
-void *__kvmalloc_node_noprof(DECL_BUCKET_PARAMS(size, b), gfp_t flags, int node)
+void *__kvmalloc_node_noprof(DECL_BUCKET_PARAMS(size, b), unsigned long align,
+			     gfp_t flags, int node)
 {
 	void *ret;
 
 	/*
 	 * It doesn't really make sense to fallback to vmalloc for sub page
-	 * requests
+	 * requests and small alignments
 	 */
-	ret = __do_kmalloc_node(size, PASS_BUCKET_PARAM(b),
-				kmalloc_gfp_adjust(flags, size),
-				node, _RET_IP_);
-	if (ret || size <= PAGE_SIZE)
-		return ret;
+	if (size >= align) {
+		ret = __do_kmalloc_node(size, PASS_BUCKET_PARAM(b),
+					kmalloc_gfp_adjust(flags, size),
+					node, _RET_IP_);
+		if (ret || size <= PAGE_SIZE)
+			return ret;
+	}
 
 	/* non-sleeping allocations are not supported by vmalloc */
 	if (!gfpflags_allow_blocking(flags))
@@ -5035,7 +5056,7 @@ void *__kvmalloc_node_noprof(DECL_BUCKET_PARAMS(size, b), gfp_t flags, int node)
 	 * about the resulting pointer, and cannot play
 	 * protection games.
 	 */
-	return __vmalloc_node_range_noprof(size, 1, VMALLOC_START, VMALLOC_END,
+	return __vmalloc_node_range_noprof(size, align, VMALLOC_START, VMALLOC_END,
 			flags, PAGE_KERNEL, VM_ALLOW_HUGE_VMAP,
 			node, __builtin_return_address(0));
 }
@@ -5079,10 +5100,12 @@ void kvfree_sensitive(const void *addr, size_t len)
 EXPORT_SYMBOL(kvfree_sensitive);
 
 /**
- * kvrealloc - reallocate memory; contents remain unchanged
+ * kvrealloc_node_align - reallocate memory; contents remain unchanged
  * @p: object to reallocate memory for
  * @size: the size to reallocate
+ * @align: desired alignment
  * @flags: the flags for the page level allocator
+ * @nid: NUMA node id
  *
  * If @p is %NULL, kvrealloc() behaves exactly like kvmalloc(). If @size is 0
  * and @p is not a %NULL pointer, the object pointed to is freed.
@@ -5100,17 +5123,18 @@ EXPORT_SYMBOL(kvfree_sensitive);
  *
  * Return: pointer to the allocated memory or %NULL in case of error
  */
-void *kvrealloc_noprof(const void *p, size_t size, gfp_t flags)
+void *kvrealloc_node_align_noprof(const void *p, size_t size, unsigned long align,
+				  gfp_t flags, int nid)
 {
 	void *n;
 
 	if (is_vmalloc_addr(p))
-		return vrealloc_noprof(p, size, flags);
+		return vrealloc_node_align_noprof(p, size, align, flags, nid);
 
-	n = krealloc_noprof(p, size, kmalloc_gfp_adjust(flags, size));
+	n = krealloc_node_align_noprof(p, size, align, kmalloc_gfp_adjust(flags, size), nid);
 	if (!n) {
 		/* We failed to krealloc(), fall back to kvmalloc(). */
-		n = kvmalloc_noprof(size, flags);
+		n = kvmalloc_node_align_noprof(size, align, flags, nid);
 		if (!n)
 			return NULL;
 
@@ -5126,7 +5150,7 @@ void *kvrealloc_noprof(const void *p, size_t size, gfp_t flags)
 
 	return n;
 }
-EXPORT_SYMBOL(kvrealloc_noprof);
+EXPORT_SYMBOL(kvrealloc_node_align_noprof);
 
 struct detached_freelist {
 	struct slab *slab;
-- 
2.39.2




* [PATCH v11 3/4] rust: add support for NUMA ids in allocations
  2025-07-07 16:47 [PATCH v11 0/4] support large align and nid in Rust allocators Vitaly Wool
  2025-07-07 16:48 ` [PATCH v11 1/4] mm/vmalloc: allow to set node and align in vrealloc Vitaly Wool
  2025-07-07 16:49 ` [PATCH v11 2/4] mm/slub: allow to set node and align in k[v]realloc Vitaly Wool
@ 2025-07-07 16:49 ` Vitaly Wool
  2025-07-08 12:15   ` Danilo Krummrich
  2025-07-07 16:49 ` [PATCH v11 4/4] rust: support large alignments " Vitaly Wool
  2025-07-08 10:58 ` [PATCH v11 0/4] support large align and nid in Rust allocators Lorenzo Stoakes
  4 siblings, 1 reply; 25+ messages in thread
From: Vitaly Wool @ 2025-07-07 16:49 UTC (permalink / raw)
  To: linux-mm
  Cc: akpm, linux-kernel, Uladzislau Rezki, Danilo Krummrich,
	Alice Ryhl, Vlastimil Babka, rust-for-linux, Vitaly Wool

Add a new type to support specifying NUMA identifiers in Rust
allocators and extend the allocators to take a NUMA id as a
parameter. To that end, modify ReallocFunc to use the new extended
realloc primitives from the C side of the kernel (i.e.
k[v]realloc_node_align/vrealloc_node_align) and add a NUMA node id
parameter to the Allocator trait's alloc/realloc functions.

This will allow the node to be specified for allocations of
e.g. {KV}Box, as well as for future NUMA-aware users of the API.

Signed-off-by: Vitaly Wool <vitaly.wool@konsulko.se>
---
 rust/helpers/slab.c            |  8 +++---
 rust/helpers/vmalloc.c         |  4 +--
 rust/kernel/alloc.rs           | 52 ++++++++++++++++++++++++++++++----
 rust/kernel/alloc/allocator.rs | 33 +++++++++++++--------
 rust/kernel/alloc/kbox.rs      |  4 +--
 rust/kernel/alloc/kvec.rs      | 11 +++++--
 6 files changed, 85 insertions(+), 27 deletions(-)

diff --git a/rust/helpers/slab.c b/rust/helpers/slab.c
index a842bfbddcba..8472370a4338 100644
--- a/rust/helpers/slab.c
+++ b/rust/helpers/slab.c
@@ -3,13 +3,13 @@
 #include <linux/slab.h>
 
 void * __must_check __realloc_size(2)
-rust_helper_krealloc(const void *objp, size_t new_size, gfp_t flags)
+rust_helper_krealloc_node(const void *objp, size_t new_size, gfp_t flags, int node)
 {
-	return krealloc(objp, new_size, flags);
+	return krealloc_node(objp, new_size, flags, node);
 }
 
 void * __must_check __realloc_size(2)
-rust_helper_kvrealloc(const void *p, size_t size, gfp_t flags)
+rust_helper_kvrealloc_node(const void *p, size_t size, gfp_t flags, int node)
 {
-	return kvrealloc(p, size, flags);
+	return kvrealloc_node(p, size, flags, node);
 }
diff --git a/rust/helpers/vmalloc.c b/rust/helpers/vmalloc.c
index 80d34501bbc0..62d30db9a1a6 100644
--- a/rust/helpers/vmalloc.c
+++ b/rust/helpers/vmalloc.c
@@ -3,7 +3,7 @@
 #include <linux/vmalloc.h>
 
 void * __must_check __realloc_size(2)
-rust_helper_vrealloc(const void *p, size_t size, gfp_t flags)
+rust_helper_vrealloc_node(const void *p, size_t size, gfp_t flags, int node)
 {
-	return vrealloc(p, size, flags);
+	return vrealloc_node(p, size, flags, node);
 }
diff --git a/rust/kernel/alloc.rs b/rust/kernel/alloc.rs
index a2c49e5494d3..69cf0c5f7702 100644
--- a/rust/kernel/alloc.rs
+++ b/rust/kernel/alloc.rs
@@ -28,6 +28,8 @@
 /// Indicates an allocation error.
 #[derive(Copy, Clone, PartialEq, Eq, Debug)]
 pub struct AllocError;
+
+use crate::error::{code::EINVAL, Result};
 use core::{alloc::Layout, ptr::NonNull};
 
 /// Flags to be used when allocating memory.
@@ -115,6 +117,29 @@ pub mod flags {
     pub const __GFP_NOWARN: Flags = Flags(bindings::__GFP_NOWARN);
 }
 
+/// Non Uniform Memory Access (NUMA) node identifier
+#[derive(Clone, Copy, PartialEq)]
+pub struct NumaNode(i32);
+
+impl NumaNode {
+    /// Create a new NUMA node identifier (non-negative integer);
+    /// returns EINVAL if a negative id or an id of MAX_NUMNODES or above is specified.
+    pub fn new(node: i32) -> Result<Self> {
+        // MAX_NUMNODES never exceeds 2**10 because NODES_SHIFT is in 0..10
+        if node < 0 || node >= bindings::MAX_NUMNODES as i32 {
+            return Err(EINVAL);
+        }
+        Ok(Self(node))
+    }
+}
+
+/// A constant that tells the Allocator that the caller does not care
+/// which NUMA node the memory is allocated on.
+impl NumaNode {
+    /// No node preference.
+    pub const NO_NODE: NumaNode = NumaNode(bindings::NUMA_NO_NODE);
+}
+
 /// The kernel's [`Allocator`] trait.
 ///
 /// An implementation of [`Allocator`] can allocate, re-allocate and free memory buffers described
@@ -137,7 +162,7 @@ pub mod flags {
 /// - Implementers must ensure that all trait functions abide by the guarantees documented in the
 ///   `# Guarantees` sections.
 pub unsafe trait Allocator {
-    /// Allocate memory based on `layout` and `flags`.
+    /// Allocate memory based on `layout`, `flags` and `nid`.
     ///
     /// On success, returns a buffer represented as `NonNull<[u8]>` that satisfies the layout
     /// constraints (i.e. minimum size and alignment as specified by `layout`).
@@ -153,13 +178,21 @@ pub unsafe trait Allocator {
     ///
     /// Additionally, `Flags` are honored as documented in
     /// <https://docs.kernel.org/core-api/mm-api.html#mm-api-gfp-flags>.
-    fn alloc(layout: Layout, flags: Flags) -> Result<NonNull<[u8]>, AllocError> {
+    fn alloc(layout: Layout, flags: Flags, nid: NumaNode) -> Result<NonNull<[u8]>, AllocError> {
         // SAFETY: Passing `None` to `realloc` is valid by its safety requirements and asks for a
         // new memory allocation.
-        unsafe { Self::realloc(None, layout, Layout::new::<()>(), flags) }
+        unsafe { Self::realloc(None, layout, Layout::new::<()>(), flags, nid) }
     }
 
-    /// Re-allocate an existing memory allocation to satisfy the requested `layout`.
+    /// Re-allocate an existing memory allocation to satisfy the requested `layout` and
+    /// a specific NUMA node request to allocate the memory for.
+    ///
+    /// Systems employing a Non Uniform Memory Access (NUMA) architecture contain collections of
+    /// hardware resources including processors, memory, and I/O buses, that comprise what is
+    /// commonly known as a NUMA node.
+    ///
+    /// `nid` stands for NUMA id, i.e. NUMA node identifier, which is a non-negative
+    /// integer if a node needs to be specified, or NO_NODE if the caller doesn't care.
     ///
     /// If the requested size is zero, `realloc` behaves equivalent to `free`.
     ///
@@ -196,6 +229,7 @@ unsafe fn realloc(
         layout: Layout,
         old_layout: Layout,
         flags: Flags,
+        nid: NumaNode,
     ) -> Result<NonNull<[u8]>, AllocError>;
 
     /// Free an existing memory allocation.
@@ -211,7 +245,15 @@ unsafe fn free(ptr: NonNull<u8>, layout: Layout) {
         // SAFETY: The caller guarantees that `ptr` points at a valid allocation created by this
         // allocator. We are passing a `Layout` with the smallest possible alignment, so it is
         // smaller than or equal to the alignment previously used with this allocation.
-        let _ = unsafe { Self::realloc(Some(ptr), Layout::new::<()>(), layout, Flags(0)) };
+        let _ = unsafe {
+            Self::realloc(
+                Some(ptr),
+                Layout::new::<()>(),
+                layout,
+                Flags(0),
+                NumaNode::NO_NODE,
+            )
+        };
     }
 }
 
diff --git a/rust/kernel/alloc/allocator.rs b/rust/kernel/alloc/allocator.rs
index aa2dfa9dca4c..7f9f4900da33 100644
--- a/rust/kernel/alloc/allocator.rs
+++ b/rust/kernel/alloc/allocator.rs
@@ -13,7 +13,7 @@
 use core::ptr;
 use core::ptr::NonNull;
 
-use crate::alloc::{AllocError, Allocator};
+use crate::alloc::{AllocError, Allocator, NumaNode};
 use crate::bindings;
 use crate::pr_warn;
 
@@ -58,18 +58,23 @@ fn aligned_size(new_layout: Layout) -> usize {
 ///
 /// One of the following: `krealloc`, `vrealloc`, `kvrealloc`.
 struct ReallocFunc(
-    unsafe extern "C" fn(*const crate::ffi::c_void, usize, u32) -> *mut crate::ffi::c_void,
+    unsafe extern "C" fn(
+        *const crate::ffi::c_void,
+        usize,
+        u32,
+        crate::ffi::c_int,
+    ) -> *mut crate::ffi::c_void,
 );
 
 impl ReallocFunc {
-    // INVARIANT: `krealloc` satisfies the type invariants.
-    const KREALLOC: Self = Self(bindings::krealloc);
+    // INVARIANT: `krealloc_node` satisfies the type invariants.
+    const KREALLOC: Self = Self(bindings::krealloc_node);
 
-    // INVARIANT: `vrealloc` satisfies the type invariants.
-    const VREALLOC: Self = Self(bindings::vrealloc);
+    // INVARIANT: `vrealloc_node` satisfies the type invariants.
+    const VREALLOC: Self = Self(bindings::vrealloc_node);
 
-    // INVARIANT: `kvrealloc` satisfies the type invariants.
-    const KVREALLOC: Self = Self(bindings::kvrealloc);
+    // INVARIANT: `kvrealloc_node` satisfies the type invariants.
+    const KVREALLOC: Self = Self(bindings::kvrealloc_node);
 
     /// # Safety
     ///
@@ -87,6 +92,7 @@ unsafe fn call(
         layout: Layout,
         old_layout: Layout,
         flags: Flags,
+        nid: NumaNode,
     ) -> Result<NonNull<[u8]>, AllocError> {
         let size = aligned_size(layout);
         let ptr = match ptr {
@@ -110,7 +116,7 @@ unsafe fn call(
         // - Those functions provide the guarantees of this function.
         let raw_ptr = unsafe {
             // If `size == 0` and `ptr != NULL` the memory behind the pointer is freed.
-            self.0(ptr.cast(), size, flags.0).cast()
+            self.0(ptr.cast(), size, flags.0, nid.0).cast()
         };
 
         let ptr = if size == 0 {
@@ -134,9 +140,10 @@ unsafe fn realloc(
         layout: Layout,
         old_layout: Layout,
         flags: Flags,
+        nid: NumaNode,
     ) -> Result<NonNull<[u8]>, AllocError> {
         // SAFETY: `ReallocFunc::call` has the same safety requirements as `Allocator::realloc`.
-        unsafe { ReallocFunc::KREALLOC.call(ptr, layout, old_layout, flags) }
+        unsafe { ReallocFunc::KREALLOC.call(ptr, layout, old_layout, flags, nid) }
     }
 }
 
@@ -151,6 +158,7 @@ unsafe fn realloc(
         layout: Layout,
         old_layout: Layout,
         flags: Flags,
+        nid: NumaNode,
     ) -> Result<NonNull<[u8]>, AllocError> {
         // TODO: Support alignments larger than PAGE_SIZE.
         if layout.align() > bindings::PAGE_SIZE {
@@ -160,7 +168,7 @@ unsafe fn realloc(
 
         // SAFETY: If not `None`, `ptr` is guaranteed to point to valid memory, which was previously
         // allocated with this `Allocator`.
-        unsafe { ReallocFunc::VREALLOC.call(ptr, layout, old_layout, flags) }
+        unsafe { ReallocFunc::VREALLOC.call(ptr, layout, old_layout, flags, nid) }
     }
 }
 
@@ -175,6 +183,7 @@ unsafe fn realloc(
         layout: Layout,
         old_layout: Layout,
         flags: Flags,
+        nid: NumaNode,
     ) -> Result<NonNull<[u8]>, AllocError> {
         // TODO: Support alignments larger than PAGE_SIZE.
         if layout.align() > bindings::PAGE_SIZE {
@@ -184,6 +193,6 @@ unsafe fn realloc(
 
         // SAFETY: If not `None`, `ptr` is guaranteed to point to valid memory, which was previously
         // allocated with this `Allocator`.
-        unsafe { ReallocFunc::KVREALLOC.call(ptr, layout, old_layout, flags) }
+        unsafe { ReallocFunc::KVREALLOC.call(ptr, layout, old_layout, flags, nid) }
     }
 }
diff --git a/rust/kernel/alloc/kbox.rs b/rust/kernel/alloc/kbox.rs
index c386ff771d50..5c0b020fb2a4 100644
--- a/rust/kernel/alloc/kbox.rs
+++ b/rust/kernel/alloc/kbox.rs
@@ -4,7 +4,7 @@
 
 #[allow(unused_imports)] // Used in doc comments.
 use super::allocator::{KVmalloc, Kmalloc, Vmalloc};
-use super::{AllocError, Allocator, Flags};
+use super::{AllocError, Allocator, Flags, NumaNode};
 use core::alloc::Layout;
 use core::fmt;
 use core::marker::PhantomData;
@@ -271,7 +271,7 @@ pub fn new(x: T, flags: Flags) -> Result<Self, AllocError> {
     /// ```
     pub fn new_uninit(flags: Flags) -> Result<Box<MaybeUninit<T>, A>, AllocError> {
         let layout = Layout::new::<MaybeUninit<T>>();
-        let ptr = A::alloc(layout, flags)?;
+        let ptr = A::alloc(layout, flags, NumaNode::NO_NODE)?;
 
         // INVARIANT: `ptr` is either a dangling pointer or points to memory allocated with `A`,
         // which is sufficient in size and alignment for storing a `T`.
diff --git a/rust/kernel/alloc/kvec.rs b/rust/kernel/alloc/kvec.rs
index 1a0dd852a468..aa5d27176d9c 100644
--- a/rust/kernel/alloc/kvec.rs
+++ b/rust/kernel/alloc/kvec.rs
@@ -5,7 +5,7 @@
 use super::{
     allocator::{KVmalloc, Kmalloc, Vmalloc},
     layout::ArrayLayout,
-    AllocError, Allocator, Box, Flags,
+    AllocError, Allocator, Box, Flags, NumaNode,
 };
 use core::{
     fmt,
@@ -633,6 +633,7 @@ pub fn reserve(&mut self, additional: usize, flags: Flags) -> Result<(), AllocEr
                 layout.into(),
                 self.layout.into(),
                 flags,
+                NumaNode::NO_NODE,
             )?
         };
 
@@ -1058,7 +1059,13 @@ pub fn collect(self, flags: Flags) -> Vec<T, A> {
             // the type invariant to be smaller than `cap`. Depending on `realloc` this operation
             // may shrink the buffer or leave it as it is.
             ptr = match unsafe {
-                A::realloc(Some(buf.cast()), layout.into(), old_layout.into(), flags)
+                A::realloc(
+                    Some(buf.cast()),
+                    layout.into(),
+                    old_layout.into(),
+                    flags,
+                    NumaNode::NO_NODE,
+                )
             } {
                 // If we fail to shrink, which likely can't even happen, continue with the existing
                 // buffer.
-- 
2.39.2



^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH v11 4/4] rust: support large alignments in allocations
  2025-07-07 16:47 [PATCH v11 0/4] support large align and nid in Rust allocators Vitaly Wool
                   ` (2 preceding siblings ...)
  2025-07-07 16:49 ` [PATCH v11 3/4] rust: add support for NUMA ids in allocations Vitaly Wool
@ 2025-07-07 16:49 ` Vitaly Wool
  2025-07-08 12:16   ` Danilo Krummrich
  2025-07-08 10:58 ` [PATCH v11 0/4] support large align and nid in Rust allocators Lorenzo Stoakes
  4 siblings, 1 reply; 25+ messages in thread
From: Vitaly Wool @ 2025-07-07 16:49 UTC (permalink / raw)
  To: linux-mm
  Cc: akpm, linux-kernel, Uladzislau Rezki, Danilo Krummrich,
	Alice Ryhl, Vlastimil Babka, rust-for-linux, Vitaly Wool

Add support for large (> PAGE_SIZE) alignments in Rust allocators.
All the preparations on the C side are already done; we just need
to add bindings for the <alloc>_node_align() functions and start
using them.

Signed-off-by: Vitaly Wool <vitaly.wool@konsulko.se>
---
 rust/helpers/slab.c            | 10 ++++++----
 rust/helpers/vmalloc.c         |  5 +++--
 rust/kernel/alloc/allocator.rs | 27 ++++++++-------------------
 3 files changed, 17 insertions(+), 25 deletions(-)

diff --git a/rust/helpers/slab.c b/rust/helpers/slab.c
index 8472370a4338..d729be798f31 100644
--- a/rust/helpers/slab.c
+++ b/rust/helpers/slab.c
@@ -3,13 +3,15 @@
 #include <linux/slab.h>
 
 void * __must_check __realloc_size(2)
-rust_helper_krealloc_node(const void *objp, size_t new_size, gfp_t flags, int node)
+rust_helper_krealloc_node_align(const void *objp, size_t new_size, unsigned long align,
+				gfp_t flags, int node)
 {
-	return krealloc_node(objp, new_size, flags, node);
+	return krealloc_node_align(objp, new_size, align, flags, node);
 }
 
 void * __must_check __realloc_size(2)
-rust_helper_kvrealloc_node(const void *p, size_t size, gfp_t flags, int node)
+rust_helper_kvrealloc_node_align(const void *p, size_t size, unsigned long align,
+				gfp_t flags, int node)
 {
-	return kvrealloc_node(p, size, flags, node);
+	return kvrealloc_node_align(p, size, align, flags, node);
 }
diff --git a/rust/helpers/vmalloc.c b/rust/helpers/vmalloc.c
index 62d30db9a1a6..7d7f7336b3d2 100644
--- a/rust/helpers/vmalloc.c
+++ b/rust/helpers/vmalloc.c
@@ -3,7 +3,8 @@
 #include <linux/vmalloc.h>
 
 void * __must_check __realloc_size(2)
-rust_helper_vrealloc_node(const void *p, size_t size, gfp_t flags, int node)
+rust_helper_vrealloc_node_align(const void *p, size_t size, unsigned long align,
+				gfp_t flags, int node)
 {
-	return vrealloc_node(p, size, flags, node);
+	return vrealloc_node_align(p, size, align, flags, node);
 }
diff --git a/rust/kernel/alloc/allocator.rs b/rust/kernel/alloc/allocator.rs
index 7f9f4900da33..2a61372d878d 100644
--- a/rust/kernel/alloc/allocator.rs
+++ b/rust/kernel/alloc/allocator.rs
@@ -61,20 +61,21 @@ struct ReallocFunc(
     unsafe extern "C" fn(
         *const crate::ffi::c_void,
         usize,
+        crate::ffi::c_ulong,
         u32,
         crate::ffi::c_int,
     ) -> *mut crate::ffi::c_void,
 );
 
 impl ReallocFunc {
-    // INVARIANT: `krealloc_node` satisfies the type invariants.
-    const KREALLOC: Self = Self(bindings::krealloc_node);
+    // INVARIANT: `krealloc_node_align` satisfies the type invariants.
+    const KREALLOC: Self = Self(bindings::krealloc_node_align);
 
-    // INVARIANT: `vrealloc_node` satisfies the type invariants.
-    const VREALLOC: Self = Self(bindings::vrealloc_node);
+    // INVARIANT: `vrealloc_node_align` satisfies the type invariants.
+    const VREALLOC: Self = Self(bindings::vrealloc_node_align);
 
-    // INVARIANT: `kvrealloc_node` satisfies the type invariants.
-    const KVREALLOC: Self = Self(bindings::kvrealloc_node);
+    // INVARIANT: `kvrealloc_node_align` satisfies the type invariants.
+    const KVREALLOC: Self = Self(bindings::kvrealloc_node_align);
 
     /// # Safety
     ///
@@ -116,7 +117,7 @@ unsafe fn call(
         // - Those functions provide the guarantees of this function.
         let raw_ptr = unsafe {
             // If `size == 0` and `ptr != NULL` the memory behind the pointer is freed.
-            self.0(ptr.cast(), size, flags.0, nid.0).cast()
+            self.0(ptr.cast(), size, layout.align(), flags.0, nid.0).cast()
         };
 
         let ptr = if size == 0 {
@@ -160,12 +161,6 @@ unsafe fn realloc(
         flags: Flags,
         nid: NumaNode,
     ) -> Result<NonNull<[u8]>, AllocError> {
-        // TODO: Support alignments larger than PAGE_SIZE.
-        if layout.align() > bindings::PAGE_SIZE {
-            pr_warn!("Vmalloc does not support alignments larger than PAGE_SIZE yet.\n");
-            return Err(AllocError);
-        }
-
         // SAFETY: If not `None`, `ptr` is guaranteed to point to valid memory, which was previously
         // allocated with this `Allocator`.
         unsafe { ReallocFunc::VREALLOC.call(ptr, layout, old_layout, flags, nid) }
@@ -185,12 +180,6 @@ unsafe fn realloc(
         flags: Flags,
         nid: NumaNode,
     ) -> Result<NonNull<[u8]>, AllocError> {
-        // TODO: Support alignments larger than PAGE_SIZE.
-        if layout.align() > bindings::PAGE_SIZE {
-            pr_warn!("KVmalloc does not support alignments larger than PAGE_SIZE yet.\n");
-            return Err(AllocError);
-        }
-
         // SAFETY: If not `None`, `ptr` is guaranteed to point to valid memory, which was previously
         // allocated with this `Allocator`.
         unsafe { ReallocFunc::KVREALLOC.call(ptr, layout, old_layout, flags, nid) }
-- 
2.39.2




* Re: [PATCH v11 0/4] support large align and nid in Rust allocators
  2025-07-07 16:47 [PATCH v11 0/4] support large align and nid in Rust allocators Vitaly Wool
                   ` (3 preceding siblings ...)
  2025-07-07 16:49 ` [PATCH v11 4/4] rust: support large alignments " Vitaly Wool
@ 2025-07-08 10:58 ` Lorenzo Stoakes
  2025-07-08 11:12   ` Lorenzo Stoakes
  2025-07-08 11:55   ` Danilo Krummrich
  4 siblings, 2 replies; 25+ messages in thread
From: Lorenzo Stoakes @ 2025-07-08 10:58 UTC (permalink / raw)
  To: Vitaly Wool
  Cc: linux-mm, akpm, linux-kernel, Uladzislau Rezki, Danilo Krummrich,
	Alice Ryhl, Vlastimil Babka, rust-for-linux, Liam Howlett

+cc Liam

Hi guys,

We have a section in MAINTAINERS for mm rust (MEMORY MANAGEMENT - RUST), so
it's slightly concerning to find a series (at v11!) like this that changes
mm-related stuff and it involves files not listed there and nobody bothered
to cc- the people listed there.

I can fully understand there being some process fail here meaning you
missed it - fine if so - but let's fix it please moving forwards.

It's really important to me that the rust efforts in mm are collaborative -
I really believe in your mission (well - for me it's about the compiler
_helping_ me not shooting me in the foot :) - and have put substantial
effort in assisting initial work there. So let's make sure we're
collaborative in both directions please.

We have rust/kernel/mm/ under MEMORY MANAGEMENT - RUST too, I'm not au fait
with your approach to structuring in these folders but seems to me these
helpers should be there? I may be unaware of some rust aspect of this
however.

Can we please add these files to this section and in future cc people
listed there? We're here to help!

A side-note I wonder if we also need to put specific files also in relevant
mm sections? E.g. the slab helper should also be put under the slab section
perhaps?

You can have files in multiple entries in MAINTAINERS so it's flexible
enough to allow it to be both in the mm rust section and also the slab
section for instance.

Thanks, Lorenzo

On Mon, Jul 07, 2025 at 06:47:55PM +0200, Vitaly Wool wrote:
> The coming patches provide the ability for Rust allocators to set
> NUMA node and large alignment.
>
> Changelog:
> v2 -> v3:
> * fixed the build breakage for non-MMU configs
> v3 -> v4:
> * added NUMA node support for k[v]realloc (patch #2)
> * removed extra logic in Rust helpers
> * patch for Rust allocators split into 2 (align: patch #3 and
>   NUMA ids: patch #4)
> v4 -> v5:
> * reworked NUMA node support for k[v]realloc for all 3 <alloc>_node
>   functions to have the same signature
> * all 3 <alloc>_node slab/vmalloc functions now support alignment
>   specification
> * Rust helpers are extended with new functions, the old ones are left
>   intact
> * Rust support for NUMA nodes comes first now (as patch #3)
> v5 -> v6:
> * added <alloc>_node_align functions to keep the existing interfaces
>   intact
> * clearer separation for Rust support of NUMA ids and large alignments
> v6 -> v7:
> * NUMA identifier as a new Rust type (NumaNode)
> * better documentation for changed and new functions and constants
> v7 -> v8:
> * removed NumaError
> * small cleanups per reviewers' comments
> v8 -> v9:
> * realloc functions can now reallocate memory for a different NUMA
>   node
> * better comments/explanations in the Rust part
> v9 -> v10:
> * refined behavior when memory is being reallocated for a different
>   NUMA node, comments added
> * cleanups in the Rust part, rustfmt ran
> * typos corrected
> v10 -> v11:
> * added documentation for the NO_NODE constant
> * added node parameter to Allocator's alloc/realloc instead of adding
>   separate alloc_node resp. realloc_node functions, modified users of
>   alloc/realloc in accordance with that
>
> Signed-off-by: Vitaly Wool <vitaly.wool@konsulko.se>
>
>

Odd you don't have a diffstat here?



* Re: [PATCH v11 0/4] support large align and nid in Rust allocators
  2025-07-08 10:58 ` [PATCH v11 0/4] support large align and nid in Rust allocators Lorenzo Stoakes
@ 2025-07-08 11:12   ` Lorenzo Stoakes
  2025-07-08 11:55   ` Danilo Krummrich
  1 sibling, 0 replies; 25+ messages in thread
From: Lorenzo Stoakes @ 2025-07-08 11:12 UTC (permalink / raw)
  To: Vitaly Wool
  Cc: linux-mm, akpm, linux-kernel, Uladzislau Rezki, Danilo Krummrich,
	Alice Ryhl, Vlastimil Babka, rust-for-linux, Liam Howlett

FYI I ran scripts/get_maintainers.pl on your series and there are 22 people you
should have cc'd and you cc'd 5.

While we'd all love to read all of linux-mm, for the majority of us that isn't
really practical, so it's vital you follow correct kernel procedure here.

So I think you need to resend this with correct CC's at this point.

Thanks.



* Re: [PATCH v11 0/4] support large align and nid in Rust allocators
  2025-07-08 10:58 ` [PATCH v11 0/4] support large align and nid in Rust allocators Lorenzo Stoakes
  2025-07-08 11:12   ` Lorenzo Stoakes
@ 2025-07-08 11:55   ` Danilo Krummrich
  2025-07-08 12:36     ` Lorenzo Stoakes
  2025-07-08 13:19     ` Lorenzo Stoakes
  1 sibling, 2 replies; 25+ messages in thread
From: Danilo Krummrich @ 2025-07-08 11:55 UTC (permalink / raw)
  To: Lorenzo Stoakes
  Cc: Vitaly Wool, linux-mm, akpm, linux-kernel, Uladzislau Rezki,
	Alice Ryhl, Vlastimil Babka, rust-for-linux, Liam Howlett

On Tue, Jul 08, 2025 at 11:58:06AM +0100, Lorenzo Stoakes wrote:
> +cc Liam
> 
> Hi guys,
> 
> We have a section in MAINTAINERS for mm rust (MEMORY MANAGEMENT - RUST), so
> it's slightly concerning to find a series (at v11!) like this that changes
> mm-related stuff and it involves files not listed there and nobody bothered
> to cc- the people listed there.

What files are you referring to? Are you referring to:

	rust/kernel/alloc.rs
	rust/kernel/alloc/*

If so, they're indeed not under the "MEMORY MANAGEMENT - RUST" entry, which
so far seems correct.

Please also note that we had "RUST [ALLOC]" before "MEMORY MANAGEMENT - RUST"
did exist.

> I can fully understand there being some process fail here meaning you
> missed it - fine if so - but let's fix it please moving forwards.

I agree that this series should have a couple more people in Cc.

Given the existing entries in the MAINTAINERS file the Rust parts seems to be
correct though.

> It's really important to me that the rust efforts in mm are collaborative -
> I really believe in your mission (well - for me it's about the compiler
> _helping_ me not shooting me in the foot :) - and have put substantial
> effort in assisting initial work there. So let's make sure we're
> collaborative in both directions please.

AFAICT, those efforts are collaborative.

Back then I sent patches to introduce vrealloc() and improve and align
kvrealloc() and krealloc() [1]; it was also mentioned that this was, besides the
other advantages, prerequisite work for the Rust allocator patch series [2].

The subsequent Rust allocator patch series [2] was also sent to Andrew and the
-mm mailing list; the previous code replaced by this series was maintained under
the "RUST" entry in the maintainers file.

With the introduction of the new Rust allocator code I took over maintainership.

So, Andrew is aware of the Rust allocator tree, please see also [3].

[1] https://lore.kernel.org/all/20240722163111.4766-1-dakr@kernel.org/
[2] https://lore.kernel.org/all/20241004154149.93856-1-dakr@kernel.org/
[3] https://lore.kernel.org/all/20250625143450.2afc473fc0e7124a5108c187@linux-foundation.org/

> We have rust/kernel/mm/ under MEMORY MANAGEMENT - RUST too, I'm not au fait
> with your approach to structuring in these folders but seems to me these
> helpers should be there? I may be unaware of some rust aspect of this
> however.

The Rust allocator module is a user of exactly three functions of mm, i.e.
krealloc(), vrealloc(), kvrealloc(), with a thin abstraction layer for those
three allocator backends. Everything else is rather Rust core infrastructure
than mm infrastructure.

> Can we please add these files to this section and in future cc people
> listed there? We're here to help!

What's your proposal regarding maintainership? Are you asking me to drop it to
"MEMORY MANAGEMENT - RUST"?

> A side-note I wonder if we also need to put specific files also in relevant
> mm sections? E.g. the slab helper should also be put under the slab section
> perhaps?

Yes, we could. But in the end all Rust helper functions are transparent
wrappers, simply forwarding a function call *without* any additional logic.
They don't really require maintenance effort, and, in the end, are just
trivial boilerplate.



* Re: [PATCH v11 1/4] mm/vmalloc: allow to set node and align in vrealloc
  2025-07-07 16:48 ` [PATCH v11 1/4] mm/vmalloc: allow to set node and align in vrealloc Vitaly Wool
@ 2025-07-08 12:12   ` Vlastimil Babka
  0 siblings, 0 replies; 25+ messages in thread
From: Vlastimil Babka @ 2025-07-08 12:12 UTC (permalink / raw)
  To: Vitaly Wool, linux-mm
  Cc: akpm, linux-kernel, Uladzislau Rezki, Danilo Krummrich,
	Alice Ryhl, rust-for-linux

On 7/7/25 18:48, Vitaly Wool wrote:
> Reimplement vrealloc() to be able to set node and alignment should
> a user need to do so. Rename the function to vrealloc_node_align()
> to better match what it actually does now and introduce macros for
> vrealloc() and friends for backward compatibility.
> 
> With that change we also provide the ability for the Rust part of
> the kernel to set node and alignment in its allocations.
> 
> Signed-off-by: Vitaly Wool <vitaly.wool@konsulko.se>
> Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>

Reviewed-by: Vlastimil Babka <vbabka@suse.cz>

Nit:

> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -4089,13 +4089,22 @@ void *vzalloc_node_noprof(unsigned long size, int node)
>  EXPORT_SYMBOL(vzalloc_node_noprof);
>  
>  /**
> - * vrealloc - reallocate virtually contiguous memory; contents remain unchanged
> + * vrealloc_node_align_noprof - reallocate virtually contiguous memory; contents
> + * remain unchanged
>   * @p: object to reallocate memory for
>   * @size: the size to reallocate
> + * @align: requested alignment
>   * @flags: the flags for the page level allocator
> + * @nid: node number of the target node
> + *
> + * If @p is %NULL, vrealloc_XXX() behaves exactly like vmalloc(). If @size is
> + * 0 and @p is not a %NULL pointer, the object pointed to is freed.
>   *
> - * If @p is %NULL, vrealloc() behaves exactly like vmalloc(). If @size is 0 and
> - * @p is not a %NULL pointer, the object pointed to is freed.
> + * If @nid is not NUMA_NO_NODE, this function will try to allocate memory on
> + * the given node. If reallocation is not necessary (e.g. the new size is less
> + * than the current allocated size), the current allocation will be preserved
> + * unless __GFP_THISNODE is set. In the latter case a new allocation on the
> + * requested node will be attempted.
>   *
>   * If __GFP_ZERO logic is requested, callers must ensure that, starting with the
>   * initial memory allocation, every subsequent call to this API for the same
> @@ -4111,7 +4120,8 @@ EXPORT_SYMBOL(vzalloc_node_noprof);
>   * Return: pointer to the allocated memory; %NULL if @size is zero or in case of
>   *         failure
>   */
> -void *vrealloc_noprof(const void *p, size_t size, gfp_t flags)
> +void *vrealloc_node_align_noprof(const void *p, size_t size, unsigned long align,
> +				 gfp_t flags, int nid)
>  {
>  	struct vm_struct *vm = NULL;
>  	size_t alloced_size = 0;
> @@ -4135,6 +4145,12 @@ void *vrealloc_noprof(const void *p, size_t size, gfp_t flags)
>  		if (WARN(alloced_size < old_size,
>  			 "vrealloc() has mismatched area vs requested sizes (%p)\n", p))
>  			return NULL;
> +		if (WARN(!IS_ALIGNED((unsigned long)p, align),
> +			 "will not reallocate with a bigger alignment (0x%lx)\n", align))
> +			return NULL;

Maybe this should be mentioned in the doc comment above?

> +		if (unlikely(flags & __GFP_THISNODE) && nid != NUMA_NO_NODE &&
> +			     nid != page_to_nid(vmalloc_to_page(p)))
> +			goto need_realloc;
>  	}
>  
>  	/*
> @@ -4165,8 +4181,10 @@ void *vrealloc_noprof(const void *p, size_t size, gfp_t flags)
>  		return (void *)p;
>  	}
>  
> +need_realloc:
>  	/* TODO: Grow the vm_area, i.e. allocate and map additional pages. */
> -	n = __vmalloc_noprof(size, flags);
> +	n = __vmalloc_node_noprof(size, align, flags, nid, __builtin_return_address(0));
> +
>  	if (!n)
>  		return NULL;
>  




* Re: [PATCH v11 3/4] rust: add support for NUMA ids in allocations
  2025-07-07 16:49 ` [PATCH v11 3/4] rust: add support for NUMA ids in allocations Vitaly Wool
@ 2025-07-08 12:15   ` Danilo Krummrich
  0 siblings, 0 replies; 25+ messages in thread
From: Danilo Krummrich @ 2025-07-08 12:15 UTC (permalink / raw)
  To: Vitaly Wool
  Cc: linux-mm, akpm, linux-kernel, Uladzislau Rezki, Alice Ryhl,
	Vlastimil Babka, rust-for-linux

On Mon Jul 7, 2025 at 6:49 PM CEST, Vitaly Wool wrote:
> +/// Specify necessary constant to pass the information to Allocator that the caller doesn't care
> +/// about the NUMA node to allocate memory from.
> +impl NumaNode {
> +    /// No node preference

Please end the comment with a period.

> +    pub const NO_NODE: NumaNode = NumaNode(bindings::NUMA_NO_NODE);
> +}
> +
>  /// The kernel's [`Allocator`] trait.
>  ///
>  /// An implementation of [`Allocator`] can allocate, re-allocate and free memory buffers described
> @@ -137,7 +162,7 @@ pub mod flags {
>  /// - Implementers must ensure that all trait functions abide by the guarantees documented in the
>  ///   `# Guarantees` sections.
>  pub unsafe trait Allocator {
> -    /// Allocate memory based on `layout` and `flags`.
> +    /// Allocate memory based on `layout`, `flags` and `nid`.
>      ///
>      /// On success, returns a buffer represented as `NonNull<[u8]>` that satisfies the layout
>      /// constraints (i.e. minimum size and alignment as specified by `layout`).
> @@ -153,13 +178,21 @@ pub unsafe trait Allocator {
>      ///
>      /// Additionally, `Flags` are honored as documented in
>      /// <https://docs.kernel.org/core-api/mm-api.html#mm-api-gfp-flags>.
> -    fn alloc(layout: Layout, flags: Flags) -> Result<NonNull<[u8]>, AllocError> {
> +    fn alloc(layout: Layout, flags: Flags, nid: NumaNode) -> Result<NonNull<[u8]>, AllocError> {
>          // SAFETY: Passing `None` to `realloc` is valid by its safety requirements and asks for a
>          // new memory allocation.
> -        unsafe { Self::realloc(None, layout, Layout::new::<()>(), flags) }
> +        unsafe { Self::realloc(None, layout, Layout::new::<()>(), flags, nid) }
>      }
>  
> -    /// Re-allocate an existing memory allocation to satisfy the requested `layout`.
> +    /// Re-allocate an existing memory allocation to satisfy the requested `layout` and
> +    /// a specific NUMA node request to allocate the memory for.
> +    ///
> +    /// Systems employing a Non Uniform Memory Access (NUMA) architecture contain collections of
> +    /// hardware resources including processors, memory, and I/O buses, that comprise what is
> +    /// commonly known as a NUMA node.
> +    ///
> +    /// `nid` stands for NUMA id, i. e. NUMA node identifier, which is a non-negative

s/i. e./i.e./

> +    /// integer if a node needs to be specified, or NO_NODE if the caller doesn't care.

s/NO_NODE/[`NumaNode::NO_NODE`]/

>      ///
>      /// If the requested size is zero, `realloc` behaves equivalent to `free`.
>      ///
> @@ -196,6 +229,7 @@ unsafe fn realloc(

<snip>

> @@ -58,18 +58,23 @@ fn aligned_size(new_layout: Layout) -> usize {
>  ///
>  /// One of the following: `krealloc`, `vrealloc`, `kvrealloc`.

You have to adjust those as well, given that you refer to this invariant below.

>  struct ReallocFunc(
> -    unsafe extern "C" fn(*const crate::ffi::c_void, usize, u32) -> *mut crate::ffi::c_void,
> +    unsafe extern "C" fn(
> +        *const crate::ffi::c_void,
> +        usize,
> +        u32,
> +        crate::ffi::c_int,
> +    ) -> *mut crate::ffi::c_void,
>  );
>  
>  impl ReallocFunc {
> -    // INVARIANT: `krealloc` satisfies the type invariants.
> -    const KREALLOC: Self = Self(bindings::krealloc);
> +    // INVARIANT: `krealloc_node` satisfies the type invariants.
> +    const KREALLOC: Self = Self(bindings::krealloc_node);
>  
> -    // INVARIANT: `vrealloc` satisfies the type invariants.
> -    const VREALLOC: Self = Self(bindings::vrealloc);
> +    // INVARIANT: `vrealloc_node` satisfies the type invariants.
> +    const VREALLOC: Self = Self(bindings::vrealloc_node);
>  
> -    // INVARIANT: `kvrealloc` satisfies the type invariants.
> -    const KVREALLOC: Self = Self(bindings::kvrealloc);
> +    // INVARIANT: `kvrealloc_node` satisfies the type invariants.
> +    const KVREALLOC: Self = Self(bindings::kvrealloc_node);



* Re: [PATCH v11 4/4] rust: support large alignments in allocations
  2025-07-07 16:49 ` [PATCH v11 4/4] rust: support large alignments " Vitaly Wool
@ 2025-07-08 12:16   ` Danilo Krummrich
  0 siblings, 0 replies; 25+ messages in thread
From: Danilo Krummrich @ 2025-07-08 12:16 UTC (permalink / raw)
  To: Vitaly Wool
  Cc: linux-mm, akpm, linux-kernel, Uladzislau Rezki, Alice Ryhl,
	Vlastimil Babka, rust-for-linux

On Mon Jul 7, 2025 at 6:49 PM CEST, Vitaly Wool wrote:
>  impl ReallocFunc {
> -    // INVARIANT: `krealloc_node` satisfies the type invariants.
> -    const KREALLOC: Self = Self(bindings::krealloc_node);
> +    // INVARIANT: `krealloc_node_align` satisfies the type invariants.
> +    const KREALLOC: Self = Self(bindings::krealloc_node_align);
>  
> -    // INVARIANT: `vrealloc_node` satisfies the type invariants.
> -    const VREALLOC: Self = Self(bindings::vrealloc_node);
> +    // INVARIANT: `vrealloc_node_align` satisfies the type invariants.
> +    const VREALLOC: Self = Self(bindings::vrealloc_node_align);
>  
> -    // INVARIANT: `kvrealloc_node` satisfies the type invariants.
> -    const KVREALLOC: Self = Self(bindings::kvrealloc_node);
> +    // INVARIANT: `kvrealloc_node_align` satisfies the type invariants.
> +    const KVREALLOC: Self = Self(bindings::kvrealloc_node_align);

Please also adjust the corresponding type invariant. With that,

	Acked-by: Danilo Krummrich <dakr@kernel.org>



* Re: [PATCH v11 0/4] support large align and nid in Rust allocators
  2025-07-08 11:55   ` Danilo Krummrich
@ 2025-07-08 12:36     ` Lorenzo Stoakes
  2025-07-08 13:41       ` Danilo Krummrich
  2025-07-08 13:19     ` Lorenzo Stoakes
  1 sibling, 1 reply; 25+ messages in thread
From: Lorenzo Stoakes @ 2025-07-08 12:36 UTC (permalink / raw)
  To: Danilo Krummrich
  Cc: Vitaly Wool, linux-mm, akpm, linux-kernel, Uladzislau Rezki,
	Alice Ryhl, Vlastimil Babka, rust-for-linux, Liam Howlett

TL;DR - the real issue here is not cc'ing the right people (Vlastimil was
not cc'd until v11, for instance). Beyond that there are some process things
to think about re: the rust/mm section.

On Tue, Jul 08, 2025 at 01:55:18PM +0200, Danilo Krummrich wrote:
> On Tue, Jul 08, 2025 at 11:58:06AM +0100, Lorenzo Stoakes wrote:
> > +cc Liam
> >
> > Hi guys,
> >
> > We have a section in MAINTAINERS for mm rust (MEMORY MANAGEMENT - RUST), so
> > it's slightly concerning to find a series (at v11!) like this that changes
> > mm-related stuff and it involves files not listed there and nobody bothered
> > to cc- the people listed there.
>
> What files are you referring to? Are you referring to:
>
> 	rust/kernel/alloc.rs
> 	rust/kernel/alloc/*

 include/linux/slab.h           | 40 ++++++++++++++-------
 include/linux/vmalloc.h        | 12 +++++--
 mm/nommu.c                     |  3 +-
 mm/slub.c                      | 64 +++++++++++++++++++++++-----------
 mm/vmalloc.c                   | 28 ++++++++++++---
this ---> rust/helpers/slab.c            | 10 +++---
this ---> rust/helpers/vmalloc.c         |  5 +--
 rust/kernel/alloc.rs           | 52 ++++++++++++++++++++++++---
 rust/kernel/alloc/allocator.rs | 46 ++++++++++++------------
 rust/kernel/alloc/kbox.rs      |  4 +--
 rust/kernel/alloc/kvec.rs      | 11 ++++--
 11 files changed, 194 insertions(+), 81 deletions(-)

These are clearly specifically related to mm no?

Apologies with comment re rust/kernel/mm/... I was misreading the changes here
(lack of diffstat unhelpful).

>
> If so, they're indeed not under the "MEMORY MANAGEMENT - RUST" entry, which
> so far seems correct.

I think the sticking point here is that these helpers are considered
trivial wrappers around mm bits. More below.

>
> Please also note that we had "RUST [ALLOC]" before "MEMORY MANAGEMENT - RUST"
> did exist.

I'm talking about the mm-specific bits. See above.

>
> > I can fully understand there being some process fail here meaning you
> > missed it - fine if so - but let's fix it please moving forwards.
>
> I agree that this series should have a couple more people in Cc.

There were 17 people missing. So more than a couple.

Until v11 the slab maintainer wasn't even cc'd for changes to slab :)

v10 at https://lore.kernel.org/linux-mm/20250702160758.3609992-1-vitaly.wool@konsulko.se/

This definitely isn't ok.

>
> Given the existing entries in the MAINTAINERS file the Rust parts seems to be
> correct though.

scripts/get_maintainers.pl says:

Alex Gaynor <alex.gaynor@gmail.com> (maintainer:RUST)
Boqun Feng <boqun.feng@gmail.com> (reviewer:RUST)
Gary Guo <gary@garyguo.net> (reviewer:RUST,commit_signer:3/5=60%,authored:1/5=20%,removed_lines:1/9=11%,commit_signer:1/3=33%)
"Björn Roy Baron" <bjorn3_gh@protonmail.com> (reviewer:RUST)
Benno Lossin <lossin@kernel.org> (reviewer:RUST,commit_signer:2/5=40%)
Andreas Hindborg <a.hindborg@kernel.org> (reviewer:RUST,authored:1/5=20%,added_lines:10/26=38%)
Alice Ryhl <aliceryhl@google.com> (reviewer:RUST,commit_signer:2/5=40%,commit_signer:1/3=33%)
Trevor Gross <tmgross@umich.edu> (reviewer:RUST)
Danilo Krummrich <dakr@kernel.org> (reviewer:RUST,authored:1/5=20%,added_lines:6/26=23%,commit_signer:1/3=33%,authored:1/3=33%,added_lines:9/14=64%)

Most of whom aren't cc'd.

This is based on mm-new's MAINTAINERS though so it may not be up-to-date.

>
> > It's really important to me that the rust efforts in mm are collaborative -
> > I really believe in your mission (well - for me it's about the compiler
> > _helping_ me not shooting me in the foot :) - and have put substantial
> > effort in assisting initial work there. So let's make sure we're
> > collaborative in both directions please.
>
> AFAICT, those efforts are collaborative.
>
> Back then I sent patches to introduce vrealloc() and improve and align
> kvrealloc() and krealloc() [1]; it was also mentioned that this was, besides the
> other advantages, prerequisite work for the Rust allocator patch series [2].
>
> The subsequent Rust allocator patch series [2] was also sent to Andrew and the
> -mm mailing list; the previous code replaced by this series was maintained under
> the "RUST" entry in the maintainers file.
>
> With the introduction of the new Rust allocator code I took over maintainership.
>
> So, Andrew is aware of the Rust allocator tree, please see also [3].

I mean there are process issues here too. Ideally it's best to cc the mm
rust people as well; sending to linux-mm is usually not enough, since we are
all so busy it's hard to keep up.

I'm making real efforts to improve this by adding explicit MAINTAINERS entries
for things as best I can so everyone's life is easier - and absolutely this is a
bit in flux atm - so forgivable to not be aware/miss entries that were only
added recently.

Anyway, that series appears to me to be more of an _internal_ one.

The important stuff to have mm input on are things that _interface_ with
mm. Even trivial wrappers should be at least tracked so people can at least be
aware of things that might change.

And absolutely I couldn't agree more with this going through the mm tree to be
sync'd up with the mm changes - there was broad agreement on this at LSF/MM.

>
> [1] https://lore.kernel.org/all/20240722163111.4766-1-dakr@kernel.org/
> [2] https://lore.kernel.org/all/20241004154149.93856-1-dakr@kernel.org/
> [3] https://lore.kernel.org/all/20250625143450.2afc473fc0e7124a5108c187@linux-foundation.org/
>
> > We have rust/kernel/mm/ under MEMORY MANAGEMENT - RUST too, I'm not au fait
> > with your approach to structuring in these folders but seems to me these
> > helpers should be there? I may be unaware of some rust aspect of this
> > however.
>
> The Rust allocator module is a user of exactly three functions of mm, i.e.
> krealloc(), vrealloc(), kvrealloc(), with a thin abstraction layer for those
> three allocator backends. Everything else is rather Rust core infrastructure
> than mm infrastructure.

I would argue that making use of mm interfaces would make it important to cc
relevant maintainers.

>
> > Can we please add these files to this section and in future cc people
> > listed there? We're here to help!
>
> What's your proposal regarding maintainership? Are you asking me to drop it to
> "MEMORY MANAGEMENT - RUST"?

I'm not making any suggestions re: maintainership, I'm suggesting mm-related
rust files should belong in the mm rust section and that people who've
volunteered to review mm-related rust code should be cc'd on series relating to
rust + mm.

>
> > A side-note I wonder if we also need to put specific files also in relevant
> > mm sections? E.g. the slab helper should also be put under the slab section
> > perhaps?
>
> Yes, we could. But in the end all Rust helper functions are transparent
> wrappers, simply forwarding a function call *without* any additional logic.
> They don't really require maintenance effort, and, in the end, are just
> trivial boilerplate.

It'd be good to keep track of files like this and to know who to cc when
you change them.


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v11 2/4] mm/slub: allow to set node and align in k[v]realloc
  2025-07-07 16:49 ` [PATCH v11 2/4] mm/slub: allow to set node and align in k[v]realloc Vitaly Wool
@ 2025-07-08 12:52   ` Vlastimil Babka
  2025-07-08 14:03     ` Vitaly Wool
  0 siblings, 1 reply; 25+ messages in thread
From: Vlastimil Babka @ 2025-07-08 12:52 UTC (permalink / raw)
  To: Vitaly Wool, linux-mm
  Cc: akpm, linux-kernel, Uladzislau Rezki, Danilo Krummrich,
	Alice Ryhl, rust-for-linux, Hyeonggon Yoo, Roman Gushchin,
	Lorenzo Stoakes, Liam R. Howlett

On 7/7/25 18:49, Vitaly Wool wrote:
> Reimplement k[v]realloc_node() to be able to set node and
> alignment should a user need to do so. In order to do that while
> retaining the maximal backward compatibility, add
> k[v]realloc_node_align() functions and redefine the rest of API
> using these new ones.
> 
> With that change we also provide the ability for the Rust part of
> the kernel to set node and alignment in its K[v]xxx
> [re]allocations.
> 
> Signed-off-by: Vitaly Wool <vitaly.wool@konsulko.se>
> ---
>  include/linux/slab.h | 40 ++++++++++++++++++---------
>  mm/slub.c            | 64 ++++++++++++++++++++++++++++++--------------
>  2 files changed, 71 insertions(+), 33 deletions(-)
> 
> diff --git a/include/linux/slab.h b/include/linux/slab.h
> index d5a8ab98035c..13abcf4ada22 100644
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -465,9 +465,15 @@ int kmem_cache_shrink(struct kmem_cache *s);
>  /*
>   * Common kmalloc functions provided by all allocators
>   */
> -void * __must_check krealloc_noprof(const void *objp, size_t new_size,
> -				    gfp_t flags) __realloc_size(2);
> -#define krealloc(...)				alloc_hooks(krealloc_noprof(__VA_ARGS__))
> +void * __must_check krealloc_node_align_noprof(const void *objp, size_t new_size,
> +					       unsigned long align,
> +					       gfp_t flags, int nid) __realloc_size(2);
> +#define krealloc_node_noprof(_p, _s, _f, _n) \
> +	krealloc_node_align_noprof(_p, _s, 1, _f, _n)
> +#define krealloc_noprof(...)		krealloc_node_noprof(__VA_ARGS__, NUMA_NO_NODE)
> +#define krealloc_node_align(...)	alloc_hooks(krealloc_node_align_noprof(__VA_ARGS__))
> +#define krealloc_node(...)		alloc_hooks(krealloc_node_noprof(__VA_ARGS__))
> +#define krealloc(...)			alloc_hooks(krealloc_noprof(__VA_ARGS__))

Hm wonder if krealloc() and krealloc_node_align() would be enough. Is
krealloc_node() only used between patch 3 and 4?
Also perhaps it would be more concise to only have
krealloc_node_align_noprof() with alloc_hooks wrappers filling the
NUMA_NO_NODE (and 1), so we don't need to #define the _noprof variant of
everything. The _noprof callers are rare so they can just always use
krealloc_node_align_noprof() directly and also fill in the NUMA_NO_NODE (and 1).
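To illustrate the layering suggested here, a minimal standalone sketch could look as follows. Note that alloc_hooks(), gfp_t and NUMA_NO_NODE are stubbed out for illustration and the stub is backed by plain realloc(); these are not the actual kernel definitions.

```c
/*
 * Standalone sketch (stubs, not kernel code) of the suggested macro
 * layering: a single krealloc_node_align_noprof() entry point, with the
 * short-name wrappers filling in the defaults (align = 1, NUMA_NO_NODE)
 * directly inside the alloc_hooks() wrapper, so no per-variant _noprof
 * macros are needed.
 */
#include <stdlib.h>

typedef unsigned int gfp_t;
#define NUMA_NO_NODE (-1)
/* stub: the real macro adds allocation profiling around the call */
#define alloc_hooks(expr) (expr)

/* Stub backed by realloc(); the kernel version honours align/flags/nid. */
static void *krealloc_node_align_noprof(void *p, size_t new_size,
					unsigned long align, gfp_t flags,
					int nid)
{
	(void)align; (void)flags; (void)nid;
	return realloc(p, new_size);
}

/* Only two public spellings, per the review comment: */
#define krealloc_node_align(...) \
	alloc_hooks(krealloc_node_align_noprof(__VA_ARGS__))
#define krealloc(_p, _s, _f) \
	alloc_hooks(krealloc_node_align_noprof(_p, _s, 1, _f, NUMA_NO_NODE))
```

With this shape, the rare _noprof callers would call krealloc_node_align_noprof() directly and pass the defaults themselves.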

>  void kfree(const void *objp);
>  void kfree_sensitive(const void *objp);
> @@ -1041,18 +1047,23 @@ static inline __alloc_size(1) void *kzalloc_noprof(size_t size, gfp_t flags)
>  #define kzalloc(...)				alloc_hooks(kzalloc_noprof(__VA_ARGS__))
>  #define kzalloc_node(_size, _flags, _node)	kmalloc_node(_size, (_flags)|__GFP_ZERO, _node)
>  
> -void *__kvmalloc_node_noprof(DECL_BUCKET_PARAMS(size, b), gfp_t flags, int node) __alloc_size(1);
> -#define kvmalloc_node_noprof(size, flags, node)	\
> -	__kvmalloc_node_noprof(PASS_BUCKET_PARAMS(size, NULL), flags, node)
> +void *__kvmalloc_node_noprof(DECL_BUCKET_PARAMS(size, b), unsigned long align,
> +			     gfp_t flags, int node) __alloc_size(1);
> +#define kvmalloc_node_align_noprof(_size, _align, _flags, _node)	\
> +	__kvmalloc_node_noprof(PASS_BUCKET_PARAMS(_size, NULL), _align, _flags, _node)
> +#define kvmalloc_node_noprof(_size, _flags, _node)	\
> +	kvmalloc_node_align_noprof(_size, 1, _flags, _node)
> +#define kvmalloc_node_align(...)		\
> +	alloc_hooks(kvmalloc_node_align_noprof(__VA_ARGS__))
>  #define kvmalloc_node(...)			alloc_hooks(kvmalloc_node_noprof(__VA_ARGS__))

Ditto.

>  
> -#define kvmalloc(_size, _flags)			kvmalloc_node(_size, _flags, NUMA_NO_NODE)
> -#define kvmalloc_noprof(_size, _flags)		kvmalloc_node_noprof(_size, _flags, NUMA_NO_NODE)
> +#define kvmalloc_noprof(...)			kvmalloc_node_noprof(__VA_ARGS__, NUMA_NO_NODE)
> +#define kvmalloc(...)				alloc_hooks(kvmalloc_noprof(__VA_ARGS__))
>  #define kvzalloc(_size, _flags)			kvmalloc(_size, (_flags)|__GFP_ZERO)
>  
> -#define kvzalloc_node(_size, _flags, _node)	kvmalloc_node(_size, (_flags)|__GFP_ZERO, _node)
> +#define kvzalloc_node(_s, _f, _n)		kvmalloc_node(_s, (_f)|__GFP_ZERO, _n)
>  #define kmem_buckets_valloc(_b, _size, _flags)	\
> -	alloc_hooks(__kvmalloc_node_noprof(PASS_BUCKET_PARAMS(_size, _b), _flags, NUMA_NO_NODE))
> +	alloc_hooks(__kvmalloc_node_noprof(PASS_BUCKET_PARAMS(_size, _b), 1, _flags, NUMA_NO_NODE))
>  
>  static inline __alloc_size(1, 2) void *
>  kvmalloc_array_node_noprof(size_t n, size_t size, gfp_t flags, int node)
> @@ -1068,13 +1079,16 @@ kvmalloc_array_node_noprof(size_t n, size_t size, gfp_t flags, int node)
>  #define kvmalloc_array_noprof(...)		kvmalloc_array_node_noprof(__VA_ARGS__, NUMA_NO_NODE)
>  #define kvcalloc_node_noprof(_n,_s,_f,_node)	kvmalloc_array_node_noprof(_n,_s,(_f)|__GFP_ZERO,_node)
>  #define kvcalloc_noprof(...)			kvcalloc_node_noprof(__VA_ARGS__, NUMA_NO_NODE)
> -
>  #define kvmalloc_array(...)			alloc_hooks(kvmalloc_array_noprof(__VA_ARGS__))
>  #define kvcalloc_node(...)			alloc_hooks(kvcalloc_node_noprof(__VA_ARGS__))
>  #define kvcalloc(...)				alloc_hooks(kvcalloc_noprof(__VA_ARGS__))
>  
> -void *kvrealloc_noprof(const void *p, size_t size, gfp_t flags)
> -		__realloc_size(2);
> +void *kvrealloc_node_align_noprof(const void *p, size_t size, unsigned long align,
> +				  gfp_t flags, int nid) __realloc_size(2);
> +#define kvrealloc_node_align(...)		kvrealloc_node_align_noprof(__VA_ARGS__)
> +#define kvrealloc_node_noprof(_p, _s, _f, _n)	kvrealloc_node_align_noprof(_p, _s, 1, _f, _n)
> +#define kvrealloc_node(...)			alloc_hooks(kvrealloc_node_noprof(__VA_ARGS__))
> +#define kvrealloc_noprof(...)			kvrealloc_node_noprof(__VA_ARGS__, NUMA_NO_NODE)
>  #define kvrealloc(...)				alloc_hooks(kvrealloc_noprof(__VA_ARGS__))

Ditto.

>  extern void kvfree(const void *addr);
> diff --git a/mm/slub.c b/mm/slub.c
> index c4b64821e680..881244c357dd 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -4845,7 +4845,7 @@ void kfree(const void *object)
>  EXPORT_SYMBOL(kfree);
>  
>  static __always_inline __realloc_size(2) void *
> -__do_krealloc(const void *p, size_t new_size, gfp_t flags)
> +__do_krealloc(const void *p, size_t new_size, unsigned long align, gfp_t flags, int nid)
>  {
>  	void *ret;
>  	size_t ks = 0;
> @@ -4859,6 +4859,20 @@ __do_krealloc(const void *p, size_t new_size, gfp_t flags)
>  	if (!kasan_check_byte(p))
>  		return NULL;
>  
> +	/* refuse to proceed if alignment is bigger than what kmalloc() provides */
> +	if (!IS_ALIGNED((unsigned long)p, align) || new_size < align)
> +		return NULL;
> +
> +	/*
> +	 * If reallocation is not necessary (e. g. the new size is less
> +	 * than the current allocated size), the current allocation will be
> +	 * preserved unless __GFP_THISNODE is set. In the latter case a new
> +	 * allocation on the requested node will be attempted.
> +	 */
> +	if (unlikely(flags & __GFP_THISNODE) && nid != NUMA_NO_NODE &&
> +		     nid != page_to_nid(vmalloc_to_page(p)))

We need virt_to_page() here not vmalloc_to_page().

> +		goto alloc_new;
> +
>  	if (is_kfence_address(p)) {
>  		ks = orig_size = kfence_ksize(p);
>  	} else {



* Re: [PATCH v11 0/4] support large align and nid in Rust allocators
  2025-07-08 11:55   ` Danilo Krummrich
  2025-07-08 12:36     ` Lorenzo Stoakes
@ 2025-07-08 13:19     ` Lorenzo Stoakes
  2025-07-08 14:16       ` Danilo Krummrich
  2025-07-09 11:31       ` Alice Ryhl
  1 sibling, 2 replies; 25+ messages in thread
From: Lorenzo Stoakes @ 2025-07-08 13:19 UTC (permalink / raw)
  To: Danilo Krummrich
  Cc: Vitaly Wool, linux-mm, akpm, linux-kernel, Uladzislau Rezki,
	Alice Ryhl, Vlastimil Babka, rust-for-linux, Liam Howlett

On Tue, Jul 08, 2025 at 01:55:18PM +0200, Danilo Krummrich wrote:
> On Tue, Jul 08, 2025 at 11:58:06AM +0100, Lorenzo Stoakes wrote:
> > +cc Liam
> >
> > Hi guys,
> >
> > We have a section in MAINTAINERS for mm rust (MEMORY MANAGEMENT - RUST), so
> > it's slightly concerning to find a series (at v11!) like this that changes
> > mm-related stuff and it involves files not listed there and nobody bothered
> > to cc- the people listed there.
>
> What files are you referring to? Are you referring to:
>
> 	rust/kernel/alloc.rs
> 	rust/kernel/alloc/*
>
> If so, they're indeed not under the "MEMORY MANAGEMENT - RUST" entry, which
> so far seems correct.

Looking at these, they seem to be intended to be the primary means by which
slab/vmalloc allocations will be managed in rust kernel code, correct?

There's also stuff relating to NUMA etc.

I really do wonder where the line between this and the mm stuff is. Because
if the idea is 'well this is just a wrapper around slab/vmalloc' surely the
same can be said of what's in rust/kernel/mm.rs re: VMAs?

So if this is the rust equivalent of include/linux/slab.h and mm/slub.c
then that does seem to me to suggest this should be considered an mm/rust
thing right?

It'd be good to know exactly what is considered mm rust and should go
through the mm tree and what isn't.

Maybe Alice has some insights on this?



* Re: [PATCH v11 0/4] support large align and nid in Rust allocators
  2025-07-08 12:36     ` Lorenzo Stoakes
@ 2025-07-08 13:41       ` Danilo Krummrich
  2025-07-08 14:06         ` Lorenzo Stoakes
  0 siblings, 1 reply; 25+ messages in thread
From: Danilo Krummrich @ 2025-07-08 13:41 UTC (permalink / raw)
  To: Lorenzo Stoakes
  Cc: Vitaly Wool, linux-mm, akpm, linux-kernel, Uladzislau Rezki,
	Alice Ryhl, Vlastimil Babka, rust-for-linux, Liam Howlett

On Tue Jul 8, 2025 at 2:36 PM CEST, Lorenzo Stoakes wrote:
> TL;DR - the real issue here is not cc'ing the right people (Vlastimil was
> not cc'd until v11 for instance).

Since Andrew was Cc'd and also did reply, but didn't mention anything about
> missing recipients on the -mm side of things, I did not see a reason to bring
anything up regarding this from my end.

Thanks for bringing this up.

> On Tue, Jul 08, 2025 at 01:55:18PM +0200, Danilo Krummrich wrote:
>> On Tue, Jul 08, 2025 at 11:58:06AM +0100, Lorenzo Stoakes wrote:
>> > +cc Liam
>> >
>> > Hi guys,
>> >
>> > We have a section in MAINTAINERS for mm rust (MEMORY MANAGEMENT - RUST), so
>> > it's slightly concerning to find a series (at v11!) like this that changes
>> > mm-related stuff and it involves files not listed there and nobody bothered
>> > to cc- the people listed there.
>>
>> What files are you referring to? Are you referring to:
>>
>> 	rust/kernel/alloc.rs
>> 	rust/kernel/alloc/*
>
> this ---> rust/helpers/slab.c            | 10 +++---
> this ---> rust/helpers/vmalloc.c         |  5 +--

So, your concern is about those?

> These are clearly specifically related to mm no?

Yes, and if the maintainers of slab and vmalloc agree we can add them there.

But again, they're just re-exporting inline functions and macros from header
files, which bindgen does not pick up automatically. They do not carry any logic
and purely are a workaround for bindgen.

For instance,

void * __must_check __realloc_size(2)
	rust_helper_vrealloc(const void *p, size_t size, gfp_t flags)
	{
	        return vrealloc(p, size, flags);
	}

works around bindgen not picking up the vrealloc() macro.



* Re: [PATCH v11 2/4] mm/slub: allow to set node and align in k[v]realloc
  2025-07-08 12:52   ` Vlastimil Babka
@ 2025-07-08 14:03     ` Vitaly Wool
  2025-07-09 13:40       ` Vitaly Wool
  0 siblings, 1 reply; 25+ messages in thread
From: Vitaly Wool @ 2025-07-08 14:03 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: linux-mm, akpm, linux-kernel, Uladzislau Rezki, Danilo Krummrich,
	Alice Ryhl, rust-for-linux, Hyeonggon Yoo, Roman Gushchin,
	Lorenzo Stoakes, Liam R. Howlett



> On Jul 8, 2025, at 2:52 PM, Vlastimil Babka <vbabka@suse.cz> wrote:
> 
> On 7/7/25 18:49, Vitaly Wool wrote:
>> Reimplement k[v]realloc_node() to be able to set node and
>> alignment should a user need to do so. In order to do that while
>> retaining the maximal backward compatibility, add
>> k[v]realloc_node_align() functions and redefine the rest of API
>> using these new ones.
>> 
>> With that change we also provide the ability for the Rust part of
>> the kernel to set node and alignment in its K[v]xxx
>> [re]allocations.
>> 
>> Signed-off-by: Vitaly Wool <vitaly.wool@konsulko.se>
>> ---
>> include/linux/slab.h | 40 ++++++++++++++++++---------
>> mm/slub.c            | 64 ++++++++++++++++++++++++++++++--------------
>> 2 files changed, 71 insertions(+), 33 deletions(-)
>> 
>> diff --git a/include/linux/slab.h b/include/linux/slab.h
>> index d5a8ab98035c..13abcf4ada22 100644
>> --- a/include/linux/slab.h
>> +++ b/include/linux/slab.h
>> @@ -465,9 +465,15 @@ int kmem_cache_shrink(struct kmem_cache *s);
>> /*
>>  * Common kmalloc functions provided by all allocators
>>  */
>> -void * __must_check krealloc_noprof(const void *objp, size_t new_size,
>> -     gfp_t flags) __realloc_size(2);
>> -#define krealloc(...) alloc_hooks(krealloc_noprof(__VA_ARGS__))
>> +void * __must_check krealloc_node_align_noprof(const void *objp, size_t new_size,
>> +        unsigned long align,
>> +        gfp_t flags, int nid) __realloc_size(2);
>> +#define krealloc_node_noprof(_p, _s, _f, _n) \
>> + krealloc_node_align_noprof(_p, _s, 1, _f, _n)
>> +#define krealloc_noprof(...) krealloc_node_noprof(__VA_ARGS__, NUMA_NO_NODE)
>> +#define krealloc_node_align(...) alloc_hooks(krealloc_node_align_noprof(__VA_ARGS__))
>> +#define krealloc_node(...) alloc_hooks(krealloc_node_noprof(__VA_ARGS__))
>> +#define krealloc(...) alloc_hooks(krealloc_noprof(__VA_ARGS__))
> 
> Hm wonder if krealloc() and krealloc_node_align() would be enough. Is
> krealloc_node() only used between patch 3 and 4?
> Also perhaps it would be more concise to only have
> krealloc_node_align_noprof() with alloc_hooks wrappers filling the
> NUMA_NO_NODE (and 1), so we don't need to #define the _noprof variant of
> everything. The _noprof callers are rare so they can just always use
> krealloc_node_align_noprof() directly and also fill in the NUMA_NO_NODE (and 1).

I don’t think that krealloc_node() is used at all at the moment. I thought
I’d define these to be symmetrical to vmalloc() but if you believe these are
redundant, I’m all for removing them.

> 
>> void kfree(const void *objp);
>> void kfree_sensitive(const void *objp);
>> @@ -1041,18 +1047,23 @@ static inline __alloc_size(1) void *kzalloc_noprof(size_t size, gfp_t flags)
>> #define kzalloc(...) alloc_hooks(kzalloc_noprof(__VA_ARGS__))
>> #define kzalloc_node(_size, _flags, _node) kmalloc_node(_size, (_flags)|__GFP_ZERO, _node)
>> 
>> -void *__kvmalloc_node_noprof(DECL_BUCKET_PARAMS(size, b), gfp_t flags, int node) __alloc_size(1);
>> -#define kvmalloc_node_noprof(size, flags, node) \
>> - __kvmalloc_node_noprof(PASS_BUCKET_PARAMS(size, NULL), flags, node)
>> +void *__kvmalloc_node_noprof(DECL_BUCKET_PARAMS(size, b), unsigned long align,
>> +      gfp_t flags, int node) __alloc_size(1);
>> +#define kvmalloc_node_align_noprof(_size, _align, _flags, _node) \
>> + __kvmalloc_node_noprof(PASS_BUCKET_PARAMS(_size, NULL), _align, _flags, _node)
>> +#define kvmalloc_node_noprof(_size, _flags, _node) \
>> + kvmalloc_node_align_noprof(_size, 1, _flags, _node)
>> +#define kvmalloc_node_align(...) \
>> + alloc_hooks(kvmalloc_node_align_noprof(__VA_ARGS__))
>> #define kvmalloc_node(...) alloc_hooks(kvmalloc_node_noprof(__VA_ARGS__))
> 
> Ditto.
> 
>> 
>> -#define kvmalloc(_size, _flags) kvmalloc_node(_size, _flags, NUMA_NO_NODE)
>> -#define kvmalloc_noprof(_size, _flags) kvmalloc_node_noprof(_size, _flags, NUMA_NO_NODE)
>> +#define kvmalloc_noprof(...) kvmalloc_node_noprof(__VA_ARGS__, NUMA_NO_NODE)
>> +#define kvmalloc(...) alloc_hooks(kvmalloc_noprof(__VA_ARGS__))
>> #define kvzalloc(_size, _flags) kvmalloc(_size, (_flags)|__GFP_ZERO)
>> 
>> -#define kvzalloc_node(_size, _flags, _node) kvmalloc_node(_size, (_flags)|__GFP_ZERO, _node)
>> +#define kvzalloc_node(_s, _f, _n) kvmalloc_node(_s, (_f)|__GFP_ZERO, _n)
>> #define kmem_buckets_valloc(_b, _size, _flags) \
>> - alloc_hooks(__kvmalloc_node_noprof(PASS_BUCKET_PARAMS(_size, _b), _flags, NUMA_NO_NODE))
>> + alloc_hooks(__kvmalloc_node_noprof(PASS_BUCKET_PARAMS(_size, _b), 1, _flags, NUMA_NO_NODE))
>> 
>> static inline __alloc_size(1, 2) void *
>> kvmalloc_array_node_noprof(size_t n, size_t size, gfp_t flags, int node)
>> @@ -1068,13 +1079,16 @@ kvmalloc_array_node_noprof(size_t n, size_t size, gfp_t flags, int node)
>> #define kvmalloc_array_noprof(...) kvmalloc_array_node_noprof(__VA_ARGS__, NUMA_NO_NODE)
>> #define kvcalloc_node_noprof(_n,_s,_f,_node) kvmalloc_array_node_noprof(_n,_s,(_f)|__GFP_ZERO,_node)
>> #define kvcalloc_noprof(...) kvcalloc_node_noprof(__VA_ARGS__, NUMA_NO_NODE)
>> -
>> #define kvmalloc_array(...) alloc_hooks(kvmalloc_array_noprof(__VA_ARGS__))
>> #define kvcalloc_node(...) alloc_hooks(kvcalloc_node_noprof(__VA_ARGS__))
>> #define kvcalloc(...) alloc_hooks(kvcalloc_noprof(__VA_ARGS__))
>> 
>> -void *kvrealloc_noprof(const void *p, size_t size, gfp_t flags)
>> - __realloc_size(2);
>> +void *kvrealloc_node_align_noprof(const void *p, size_t size, unsigned long align,
>> +   gfp_t flags, int nid) __realloc_size(2);
>> +#define kvrealloc_node_align(...) kvrealloc_node_align_noprof(__VA_ARGS__)
>> +#define kvrealloc_node_noprof(_p, _s, _f, _n) kvrealloc_node_align_noprof(_p, _s, 1, _f, _n)
>> +#define kvrealloc_node(...) alloc_hooks(kvrealloc_node_noprof(__VA_ARGS__))
>> +#define kvrealloc_noprof(...) kvrealloc_node_noprof(__VA_ARGS__, NUMA_NO_NODE)
>> #define kvrealloc(...) alloc_hooks(kvrealloc_noprof(__VA_ARGS__))
> 
> Ditto.
> 
>> extern void kvfree(const void *addr);
>> diff --git a/mm/slub.c b/mm/slub.c
>> index c4b64821e680..881244c357dd 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -4845,7 +4845,7 @@ void kfree(const void *object)
>> EXPORT_SYMBOL(kfree);
>> 
>> static __always_inline __realloc_size(2) void *
>> -__do_krealloc(const void *p, size_t new_size, gfp_t flags)
>> +__do_krealloc(const void *p, size_t new_size, unsigned long align, gfp_t flags, int nid)
>> {
>> void *ret;
>> size_t ks = 0;
>> @@ -4859,6 +4859,20 @@ __do_krealloc(const void *p, size_t new_size, gfp_t flags)
>> if (!kasan_check_byte(p))
>> return NULL;
>> 
>> + /* refuse to proceed if alignment is bigger than what kmalloc() provides */
>> + if (!IS_ALIGNED((unsigned long)p, align) || new_size < align)
>> + return NULL;
>> +
>> + /*
>> +  * If reallocation is not necessary (e. g. the new size is less
>> +  * than the current allocated size), the current allocation will be
>> +  * preserved unless __GFP_THISNODE is set. In the latter case a new
>> +  * allocation on the requested node will be attempted.
>> +  */
>> + if (unlikely(flags & __GFP_THISNODE) && nid != NUMA_NO_NODE &&
>> +      nid != page_to_nid(vmalloc_to_page(p)))
> 
> We need virt_to_page() here not vmalloc_to_page().

Indeed, thanks. It is a copy'n'paste error; we had virt_to_page() in earlier
patchsets (i.e. up until v10).

> 
>> + goto alloc_new;
>> +
>> if (is_kfence_address(p)) {
>> ks = orig_size = kfence_ksize(p);
>> } else {





* Re: [PATCH v11 0/4] support large align and nid in Rust allocators
  2025-07-08 13:41       ` Danilo Krummrich
@ 2025-07-08 14:06         ` Lorenzo Stoakes
  0 siblings, 0 replies; 25+ messages in thread
From: Lorenzo Stoakes @ 2025-07-08 14:06 UTC (permalink / raw)
  To: Danilo Krummrich
  Cc: Vitaly Wool, linux-mm, akpm, linux-kernel, Uladzislau Rezki,
	Alice Ryhl, Vlastimil Babka, rust-for-linux, Liam Howlett

On Tue, Jul 08, 2025 at 03:41:31PM +0200, Danilo Krummrich wrote:
> On Tue Jul 8, 2025 at 2:36 PM CEST, Lorenzo Stoakes wrote:
> > TL;DR - the real issue here is not cc'ing the right people (Vlastimil was
> > not cc'd until v11 for instance).
>
> Since Andrew was Cc'd and also did reply, but didn't mention anything about
> > missing recipients on the -mm side of things, I did not see a reason to bring
> anything up regarding this from my end.
>
> Thanks for bringing this up.

No problem - and it's understandable that it wouldn't quite be clear how
important it is to cc as many people, as things have recently changed a lot
in mm re: having a good + specific response from get_maintainers.pl.

At any rate, it's important to always include M's/R's from
get_maintainers.pl as a matter of course when submitting series.

>
> > On Tue, Jul 08, 2025 at 01:55:18PM +0200, Danilo Krummrich wrote:
> >> On Tue, Jul 08, 2025 at 11:58:06AM +0100, Lorenzo Stoakes wrote:
> >> > +cc Liam
> >> >
> >> > Hi guys,
> >> >
> >> > We have a section in MAINTAINERS for mm rust (MEMORY MANAGEMENT - RUST), so
> >> > it's slightly concerning to find a series (at v11!) like this that changes
> >> > mm-related stuff and it involves files not listed there and nobody bothered
> >> > to cc- the people listed there.
> >>
> >> What files are you referring to? Are you referring to:
> >>
> >> 	rust/kernel/alloc.rs
> >> 	rust/kernel/alloc/*
> >
> > this ---> rust/helpers/slab.c            | 10 +++---
> > this ---> rust/helpers/vmalloc.c         |  5 +--
>
> So, your concern is about those?
>
> > These are clearly specifically related to mm no?
>
> Yes, and if the maintainers of slab and vmalloc agree we can add them there.

And the mm/rust section. Because if that's not where we put files that
relate to mm/rust, what is it for?

I think it turns out the larger question here is really about the alloc
stuff. I raised that in a separate thread.

>
> But again, they're just re-exporting inline functions and macros from header
> files, which bindgen does not pick up automatically. They do not carry any logic
> and purely are a workaround for bindgen.
>
> For instance,
>
> void * __must_check __realloc_size(2)
> 	rust_helper_vrealloc(const void *p, size_t size, gfp_t flags)
> 	{
> 	        return vrealloc(p, size, flags);
> 	}
>
> works around bindgen not picking up the vrealloc() macro.

Thanks for the explanation - that's useful.

So the fact these came up by accident more or less I think underlines the
broader need for a conversation about what does/doesn't constitute
mm/rust.

But perhaps best to keep that to the thread where I raise the alloc stuff.



* Re: [PATCH v11 0/4] support large align and nid in Rust allocators
  2025-07-08 13:19     ` Lorenzo Stoakes
@ 2025-07-08 14:16       ` Danilo Krummrich
  2025-07-08 14:39         ` Lorenzo Stoakes
  2025-07-09 11:31       ` Alice Ryhl
  1 sibling, 1 reply; 25+ messages in thread
From: Danilo Krummrich @ 2025-07-08 14:16 UTC (permalink / raw)
  To: Lorenzo Stoakes
  Cc: Vitaly Wool, linux-mm, akpm, linux-kernel, Uladzislau Rezki,
	Alice Ryhl, Vlastimil Babka, rust-for-linux, Liam Howlett

On Tue Jul 8, 2025 at 3:19 PM CEST, Lorenzo Stoakes wrote:
> On Tue, Jul 08, 2025 at 01:55:18PM +0200, Danilo Krummrich wrote:
>> On Tue, Jul 08, 2025 at 11:58:06AM +0100, Lorenzo Stoakes wrote:
>> > +cc Liam
>> >
>> > Hi guys,
>> >
>> > We have a section in MAINTAINERS for mm rust (MEMORY MANAGEMENT - RUST), so
>> > it's slightly concerning to find a series (at v11!) like this that changes
>> > mm-related stuff and it involves files not listed there and nobody bothered
>> > to cc- the people listed there.
>>
>> What files are you referring to? Are you referring to:
>>
>> 	rust/kernel/alloc.rs
>> 	rust/kernel/alloc/*
>>
>> If so, they're indeed not under the "MEMORY MANAGEMENT - RUST" entry, which
>> so far seems correct.
>
> Looking at these, they seem to be intended to be the primary means by which
> slab/vmalloc allocations will be managed in rust kernel code correct?
>
> There's also stuff relating to NUMA etc.
>
> I really do wonder where the line between this and the mm stuff is. Because
> if the idea is 'well this is just a wrapper around slab/vmalloc' surely the
> same can be said of what's in rust/kernel/mm.rs re: VMAs?
>
> So if this is the rust equivalent of include/linux/slab.h and mm/slub.c
> then that does seem to me to suggest this should be considered an mm/rust
> thing right?
>
> It'd be good to know exactly what is considered mm rust and should go
> through the mm tree and what isn't.

(Please also see the explanation in [1].)

There's a thin abstraction layer for allocators in Rust, represented by the
kernel's Allocator trait [2] (which differs in a few ways from the Allocator
trait in upstream Rust; the upstream trait, for instance, can't deal with GFP
flags).

This allocator trait is implemented by three backends, one for each of
krealloc(), vrealloc() and kvrealloc() [3].

Otherwise, the alloc module implements Rust's core allocation primitives Box
and Vec, each of which has type aliases for the allocator backends. For
instance, there are KBox, VBox and KVBox [4].
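As a rough standalone illustration of that shape (one trait, several backends, Box-like aliases selecting a backend), a sketch could look as follows. The names and types here are hypothetical simplifications, not the in-kernel API; in the kernel the backends call krealloc()/vrealloc() and Flags is the real gfp_t abstraction.

```rust
// Standalone sketch of the kernel's allocator-trait pattern (not the
// actual kernel::alloc API): a trait parameterised over GFP-style flags,
// two backends, and type aliases analogous to KBox/VBox.
use std::marker::PhantomData;

#[derive(Clone, Copy, Debug)]
pub struct Flags(u32); // stand-in for the kernel's allocation flags
pub const GFP_KERNEL: Flags = Flags(0);

pub trait Allocator {
    // Unlike upstream Rust's Allocator, the kernel trait takes flags
    // on each allocation; here we only expose the backend name.
    fn name() -> &'static str;
}

pub struct Kmalloc; // would be backed by krealloc() in the kernel
pub struct Vmalloc; // would be backed by vrealloc() in the kernel

impl Allocator for Kmalloc { fn name() -> &'static str { "kmalloc" } }
impl Allocator for Vmalloc { fn name() -> &'static str { "vmalloc" } }

pub struct KernelBox<T, A: Allocator> {
    inner: Box<T>, // real code would allocate via backend A with the flags
    _alloc: PhantomData<A>,
}

impl<T, A: Allocator> KernelBox<T, A> {
    pub fn new(value: T, _flags: Flags) -> Self {
        KernelBox { inner: Box::new(value), _alloc: PhantomData }
    }
    pub fn backend() -> &'static str { A::name() }
}

impl<T, A: Allocator> std::ops::Deref for KernelBox<T, A> {
    type Target = T;
    fn deref(&self) -> &T { &self.inner }
}

// Type aliases pick the backend, analogous to KBox / VBox:
pub type KBox<T> = KernelBox<T, Kmalloc>;
pub type VBox<T> = KernelBox<T, Vmalloc>;
```

The point of the aliases is that generic code can be written once against the trait while callers choose the backend by type.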

This was also mentioned in the mm rework in [5] and in the subsequent patch
series reworking the Rust allocator module [6].

Before [6], the Rust allocator module only covered the kmalloc allocator (i.e.
krealloc()) and was maintained under the "RUST" entry.

Since [6], this is maintained under the "RUST [ALLOC]" entry by me.

Given that, there is a clear and documented responsibility, which Andrew is
also aware of.

To me the current setup looks reasonable, but feel free to take a look at the
code and its relationship to mm and Rust core infrastructure and let me know
what you think -- I'm happy to discuss other proposals.

[1] https://lore.kernel.org/all/aG0HJte0Xw55z_y4@pollux/
[2] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/rust/kernel/alloc.rs#n139
[3] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/rust/kernel/alloc/allocator.rs#n130
[4] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/rust/kernel/alloc/kbox.rs
[5] https://lore.kernel.org/all/20240722163111.4766-1-dakr@kernel.org/
[6] https://lore.kernel.org/all/20241004154149.93856-1-dakr@kernel.org/



* Re: [PATCH v11 0/4] support large align and nid in Rust allocators
  2025-07-08 14:16       ` Danilo Krummrich
@ 2025-07-08 14:39         ` Lorenzo Stoakes
  2025-07-08 15:11           ` Danilo Krummrich
  0 siblings, 1 reply; 25+ messages in thread
From: Lorenzo Stoakes @ 2025-07-08 14:39 UTC (permalink / raw)
  To: Danilo Krummrich
  Cc: Vitaly Wool, linux-mm, akpm, linux-kernel, Uladzislau Rezki,
	Alice Ryhl, Vlastimil Babka, rust-for-linux, Liam Howlett

On Tue, Jul 08, 2025 at 04:16:48PM +0200, Danilo Krummrich wrote:
> On Tue Jul 8, 2025 at 3:19 PM CEST, Lorenzo Stoakes wrote:
> > On Tue, Jul 08, 2025 at 01:55:18PM +0200, Danilo Krummrich wrote:
> >> On Tue, Jul 08, 2025 at 11:58:06AM +0100, Lorenzo Stoakes wrote:
> >> > +cc Liam
> >> >
> >> > Hi guys,
> >> >
> >> > We have a section in MAINTAINERS for mm rust (MEMORY MANAGEMENT - RUST), so
> >> > it's slightly concerning to find a series (at v11!) like this that changes
> >> > mm-related stuff and it involves files not listed there and nobody bothered
> >> > to cc- the people listed there.
> >>
> >> What files are you referring to? Are you referring to:
> >>
> >> 	rust/kernel/alloc.rs
> >> 	rust/kernel/alloc/*
> >>
> >> If so, they're indeed not under the "MEMORY MANAGEMENT - RUST" entry, which
> >> so far seems correct.
> >
> > Looking at these, they seem to be intended to be the primary means by which
> > slab/vmalloc allocations will be managed in rust kernel code correct?
> >
> > There's also stuff relating to NUMA etc.
> >
> > I really do wonder where the line between this and the mm stuff is. Because
> > if the idea is 'well this is just a wrapper around slab/vmalloc' surely the
> > same can be said of what's in rust/kernel/mm.rs re: VMAs?
> >
> > So if this is the rust equivalent of include/linux/slab.h and mm/slub.c
> > then that does seem to me to suggest this should be considered an mm/rust
> > thing right?
> >
> > It'd be good to know exactly what is considered mm rust and should go
> > through the mm tree and what isn't.
>
> (Please also see the explanation in [1].)
>
> There's a thin abstraction layer for allocators in Rust, represented by the
> kernel's Allocator trait [2] (which has a few differences to the Allocator trait in
> upstream Rust, which, for instance, can't deal with GFP flags).
>
> This allocator trait is implemented by three backends, one for each of
> krealloc(), vrealloc() and kvrealloc() [3].
>
> Otherwise, the alloc module implements Rust's core allocation primitives Box and
> Vec, which each of them have a type alias for allocator backends. For instance,
> there is KBox, VBox and KVBox [4].
>
> This was also mentioned in the mm rework in [5] and in the subsequent patch
> series reworking the Rust allocator module [6].
>
> Before [6], the Rust allocator module only covered the kmalloc allocator (i.e.
> krealloc()) and was maintained under the "RUST" entry.
>
> Since [6], this is maintained under the "RUST [ALLOC]" entry by me.
>
> Given that, there is a clear and documented responsibility, which also Andrew is
> aware of.
>
> To me the current setup looks reasonable, but feel free to take a look at the
> code and its relationship to mm and Rust core infrastructure and let me know
> what you think -- I'm happy to discuss other proposals.

Thanks for the explanation.

To me this is clearly mm rust code. This is an abstraction over mm bits to
provide slab or vmalloc allocations for rust bits.

To be clear - I'm not suggesting anything dramatic here, nor in any way
suggesting you ought not maintain this (apologies if this wasn't clear :)

It's really a couple points:

1. Purely pragmatically - how can we make sure relevant people are pinged?

2. Having clarity on what does/does not constitute 'MEMORY MANAGEMENT - RUST'
   (again, perhaps Alice best placed to give some input here from her point of
   view).

We could solve 1 very simply by just using the fact we can have files in
multiple sections in MAINTAINERS.

Doing a scripts/get_maintainers.pl invocation will very clearly tell you
who's in charge of what so there'd be no lack of clarity on this.

It's a bit messy, obviously. But it'd solve the issue of mm people not
getting notified when things change.

However, at this stage you might want to just limit this to people who have
_opted in_ to look at mm/rust stuff. In which case then it'd make sense to
add only to the "MEMORY MANAGEMENT - RUST" section (but here we need to
address point 2 obviously).

Alternatively we could just add reviewers to the rust alloc bit.

>
> [1] https://lore.kernel.org/all/aG0HJte0Xw55z_y4@pollux/
> [2] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/rust/kernel/alloc.rs#n139
> [3] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/rust/kernel/alloc/allocator.rs#n130
> [4] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/rust/kernel/alloc/kbox.rs
> [5] https://lore.kernel.org/all/20240722163111.4766-1-dakr@kernel.org/
> [6] https://lore.kernel.org/all/20241004154149.93856-1-dakr@kernel.org/


I feel it's really important to not separate rust _too much_ from the
subsystems it utilises - if we intend to have rust be used more and more
and integrated further in the kernel (something I'd like to see, more so
when I learn it :P) - the only practical way forward is for the rust stuff
to be considered a first class citizen and managed hand-in-glove with the
not-rust stuff.

Cheers, Lorenzo


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v11 0/4] support large align and nid in Rust allocators
  2025-07-08 14:39         ` Lorenzo Stoakes
@ 2025-07-08 15:11           ` Danilo Krummrich
  2025-07-08 15:40             ` Lorenzo Stoakes
  0 siblings, 1 reply; 25+ messages in thread
From: Danilo Krummrich @ 2025-07-08 15:11 UTC (permalink / raw)
  To: Lorenzo Stoakes
  Cc: Vitaly Wool, linux-mm, akpm, linux-kernel, Uladzislau Rezki,
	Alice Ryhl, Vlastimil Babka, rust-for-linux, Liam Howlett

On Tue Jul 8, 2025 at 4:39 PM CEST, Lorenzo Stoakes wrote:
> On Tue, Jul 08, 2025 at 04:16:48PM +0200, Danilo Krummrich wrote:
>> (Please also see the explanation in [1].)
>>
>> There's a thin abstraction layer for allocators in Rust, represented by the
>> kernel's Allocator trait [2] (which differs in a few ways from the Allocator trait
>> in upstream Rust; the upstream trait, for instance, can't deal with GFP flags).
>>
>> This allocator trait is implemented by three backends, one for each of
>> krealloc(), vrealloc() and kvrealloc() [3].
>>
>> Otherwise, the alloc module implements Rust's core allocation primitives Box and
>> Vec, each of which has a type alias for each allocator backend. For instance,
>> there are KBox, VBox and KVBox [4].
>>
>> This was also mentioned in the mm rework in [5] and in the subsequent patch
>> series reworking the Rust allocator module [6].
>>
>> Before [6], the Rust allocator module only covered the kmalloc allocator (i.e.
>> krealloc()) and was maintained under the "RUST" entry.
>>
>> Since [6], this is maintained under the "RUST [ALLOC]" entry by me.
>>
>> Given that, there is a clear and documented responsibility, which also Andrew is
>> aware of.
>>
>> To me the current setup looks reasonable, but feel free to take a look at the
>> code and its relationship to mm and Rust core infrastructure and let me know
>> what you think -- I'm happy to discuss other proposals.
>
> Thanks for the explanation.
>
> To me this is clearly mm rust code. This is an abstraction over mm bits to
> provide slab or vmalloc allocations for rust bits.
>
> To be clear - I'm not suggesting anything dramatic here, nor in any way
> suggesting you ought not maintain this (apologies if this wasn't clear :)
>
> It's really a couple points:
>
> 1. Purely pragmatically - how can we make sure relevant people are pinged?

I'm very happy to add more reviewers to the RUST [ALLOC] entry for this purpose.

Can you please send a patch for this?

> 2. Having clarity on what does/does not constitute 'MEMORY MANAGEMENT - RUST'
>    (again, perhaps Alice best placed to give some input here from her point of
>    view).

In the end that's a question of definition.

The reason RUST [ALLOC] is a thing is because it is very closely related to Rust
core infrastructure with only a thin backend using {k,v}realloc().

Compared to other abstractions, the main purpose is not to expose a Rust
interface for an existing kernel-specific API, but rather to implement a very
Rust-specific concept while being a user of an existing kernel C API.

> We could solve 1 very simply by just using the fact we can have files in
> multiple sections in MAINTAINERS.

Please not -- I don't want to have files in multiple entries in MAINTAINERS,
especially when there are different maintainers and different trees. I prefer
clear responsibility.

> Doing a scripts/get_maintainers.pl invocation will very clearly tell you
> who's in charge of what so there'd be no lack of clarity on this.

How's that when a file is in multiple entries?

> It's a bit messy, obviously. But it'd solve the issue of mm people not
> getting notified when things change.
>
> However, at this stage you might want to just limit this to people who have
> _opted in_ to look at mm/rust stuff. In which case then it'd make sense to
> add only to the "MEMORY MANAGEMENT - RUST" section (but here we need to
> address point 2 obviously).
>
> Alternatively we could just add reviewers to the rust alloc bit.

Yeah, let's do that instead please.

>>
>> [1] https://lore.kernel.org/all/aG0HJte0Xw55z_y4@pollux/
>> [2] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/rust/kernel/alloc.rs#n139
>> [3] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/rust/kernel/alloc/allocator.rs#n130
>> [4] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/rust/kernel/alloc/kbox.rs
>> [5] https://lore.kernel.org/all/20240722163111.4766-1-dakr@kernel.org/
>> [6] https://lore.kernel.org/all/20241004154149.93856-1-dakr@kernel.org/
>
>
> I feel it's really important to not separate rust _too much_ from the
> subsystems it utilises - if we intend to have rust be used more and more
> and integrated further in the kernel (something I'd like to see, more so
> when I learn it :P) - the only practical way forward is for the rust stuff
> to be considered a first class citizen and managed hand-in-glove with the
> not-rust stuff.

You're preaching to the choir with this on my end. I'm always recommending
that subsystems receiving their first Rust bits get involved in one way or
another, even if it's only by reading along. :)



* Re: [PATCH v11 0/4] support large align and nid in Rust allocators
  2025-07-08 15:11           ` Danilo Krummrich
@ 2025-07-08 15:40             ` Lorenzo Stoakes
  0 siblings, 0 replies; 25+ messages in thread
From: Lorenzo Stoakes @ 2025-07-08 15:40 UTC (permalink / raw)
  To: Danilo Krummrich
  Cc: Vitaly Wool, linux-mm, akpm, linux-kernel, Uladzislau Rezki,
	Alice Ryhl, Vlastimil Babka, rust-for-linux, Liam Howlett

On Tue, Jul 08, 2025 at 05:11:28PM +0200, Danilo Krummrich wrote:
> On Tue Jul 8, 2025 at 4:39 PM CEST, Lorenzo Stoakes wrote:
> > On Tue, Jul 08, 2025 at 04:16:48PM +0200, Danilo Krummrich wrote:
> >> (Please also see the explanation in [1].)
> >>
> >> There's a thin abstraction layer for allocators in Rust, represented by the
> >> kernel's Allocator trait [2] (which differs in a few ways from the Allocator trait
> >> in upstream Rust; the upstream trait, for instance, can't deal with GFP flags).
> >>
> >> This allocator trait is implemented by three backends, one for each of
> >> krealloc(), vrealloc() and kvrealloc() [3].
> >>
> >> Otherwise, the alloc module implements Rust's core allocation primitives Box and
> >> Vec, each of which has a type alias for each allocator backend. For instance,
> >> there are KBox, VBox and KVBox [4].
> >>
> >> This was also mentioned in the mm rework in [5] and in the subsequent patch
> >> series reworking the Rust allocator module [6].
> >>
> >> Before [6], the Rust allocator module only covered the kmalloc allocator (i.e.
> >> krealloc()) and was maintained under the "RUST" entry.
> >>
> >> Since [6], this is maintained under the "RUST [ALLOC]" entry by me.
> >>
> >> Given that, there is a clear and documented responsibility, which also Andrew is
> >> aware of.
> >>
> >> To me the current setup looks reasonable, but feel free to take a look at the
> >> code and its relationship to mm and Rust core infrastructure and let me know
> >> what you think -- I'm happy to discuss other proposals.
> >
> > Thanks for the explanation.
> >
> > To me this is clearly mm rust code. This is an abstraction over mm bits to
> > provide slab or vmalloc allocations for rust bits.
> >
> > To be clear - I'm not suggesting anything dramatic here, nor in any way
> > suggesting you ought not maintain this (apologies if this wasn't clear :)
> >
> > It's really a couple points:
> >
> > 1. Purely pragmatically - how can we make sure relevant people are pinged?
>
> I'm very happy to add more reviewers to the RUST [ALLOC] entry for this purpose.

Thanks!

>
> Can you please send a patch for this?

Ack will do.

>
> > 2. Having clarity on what does/does not constitute 'MEMORY MANAGEMENT - RUST'
> >    (again, perhaps Alice best placed to give some input here from her point of
> >    view).
>
> In the end that's a question of definition.
>
> The reason RUST [ALLOC] is a thing is because it is very closely related to Rust
> core infrastructure with only a thin backend using {k,v}realloc().
>
> Compared to other abstractions, the main purpose is not to expose a Rust
> interface for an existing kernel-specific API, but rather to implement a very
> Rust-specific concept while being a user of an existing kernel C API.

Right, sure. This stuff gets blurry...

>
> > We could solve 1 very simply by just using the fact we can have files in
> > multiple sections in MAINTAINERS.
>
> Please not -- I don't want to have files in multiple entries in MAINTAINERS,
> especially when there are different maintainers and different trees. I prefer
> clear responsibility.

Ack :) No probs.

>
> > Doing a scripts/get_maintainers.pl invocation will very clearly tell you
> > who's in charge of what so there'd be no lack of clarity on this.
>
> How's that when a file is in multiple entries?

Well, you can see which is directly related and which isn't. But I get this is
not what you want! :)

>
> > It's a bit messy, obviously. But it'd solve the issue of mm people not
> > getting notified when things change.
> >
> > However, at this stage you might want to just limit this to people who have
> > _opted in_ to look at mm/rust stuff. In which case then it'd make sense to
> > add only to the "MEMORY MANAGEMENT - RUST" section (but here we need to
> > address point 2 obviously).
> >
> > Alternatively we could just add reviewers to the rust alloc bit.
>
> Yeah, let's do that instead please.

Sure.

>
> >>
> >> [1] https://lore.kernel.org/all/aG0HJte0Xw55z_y4@pollux/
> >> [2] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/rust/kernel/alloc.rs#n139
> >> [3] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/rust/kernel/alloc/allocator.rs#n130
> >> [4] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/rust/kernel/alloc/kbox.rs
> >> [5] https://lore.kernel.org/all/20240722163111.4766-1-dakr@kernel.org/
> >> [6] https://lore.kernel.org/all/20241004154149.93856-1-dakr@kernel.org/
> >
> >
> > I feel it's really important to not separate rust _too much_ from the
> > subsystems it utilises - if we intend to have rust be used more and more
> > and integrated further in the kernel (something I'd like to see, more so
> > when I learn it :P) - the only practical way forward is for the rust stuff
> > to be considered a first class citizen and managed hand-in-glove with the
> > not-rust stuff.
>
> You're preaching to the choir with this on my end. I'm always recommending
> that subsystems receiving their first Rust bits get involved in one way or
> another, even if it's only by reading along. :)

:)))

Yes well I like to think we in mm are forward thinking about this!

In the end rust is here to stay and it's important to embrace it and find the
best means of collaborating!

Cheers, Lorenzo



* Re: [PATCH v11 0/4] support large align and nid in Rust allocators
  2025-07-08 13:19     ` Lorenzo Stoakes
  2025-07-08 14:16       ` Danilo Krummrich
@ 2025-07-09 11:31       ` Alice Ryhl
  2025-07-09 12:24         ` Lorenzo Stoakes
  1 sibling, 1 reply; 25+ messages in thread
From: Alice Ryhl @ 2025-07-09 11:31 UTC (permalink / raw)
  To: Lorenzo Stoakes
  Cc: Danilo Krummrich, Vitaly Wool, linux-mm, akpm, linux-kernel,
	Uladzislau Rezki, Vlastimil Babka, rust-for-linux, Liam Howlett

On Tue, Jul 08, 2025 at 02:19:38PM +0100, Lorenzo Stoakes wrote:
> On Tue, Jul 08, 2025 at 01:55:18PM +0200, Danilo Krummrich wrote:
> > On Tue, Jul 08, 2025 at 11:58:06AM +0100, Lorenzo Stoakes wrote:
> > > +cc Liam
> > >
> > > Hi guys,
> > >
> > > We have a section in MAINTAINERS for mm rust (MEMORY MANAGEMENT - RUST), so
> > > it's slightly concerning to find a series (at v11!) like this that changes
> > > mm-related stuff and it involves files not listed there and nobody bothered
> > > to cc the people listed there.
> >
> > What files are you referring to? Are you referring to:
> >
> > 	rust/kernel/alloc.rs
> > 	rust/kernel/alloc/*
> >
> > If so, they're indeed not under the "MEMORY MANAGEMENT - RUST" entry, which
> > so far seems correct.
> 
> Looking at these, they seem to be intended to be the primary means by which
> slab/vmalloc allocations will be managed in rust kernel code correct?
> 
> There's also stuff relating to NUMA etc.
> 
> I really do wonder where the line between this and the mm stuff is. Because
> if the idea is 'well this is just a wrapper around slab/vmalloc' surely the
> same can be said of what's in rust/kernel/mm.rs re: VMAs?
> 
> So if this is the rust equivalent of include/linux/slab.h and mm/slub.c
> then that does seem to me to suggest this should be considered an mm/rust
> thing right?
> 
> It'd be good to know exactly what is considered mm rust and should go
> through the mm tree and what isn't.
> 
> Maybe Alice has some insights on this?

The Rust standard library has three pieces:

- core. Defines standard types that can work anywhere. (such as ints)
- alloc. Defines standard types that require an allocator. (such as vectors)
- std. Defines standard types that require an OS. (such as File or TcpStream)

In the kernel we used to use both core and alloc, but we switched away
from alloc because it doesn't support GFP flags well. The 'RUST [ALLOC]'
subsystem originates from that transition from the Rust stdlib alloc to
our own implementation. It contains essentially three pieces:

- Two data structures Vec and Box.
  - The Box data structure is the simplest possible user of allocation:
    A Box<T> stores a single instance of the struct T in its own
    allocation.
  - The Vec data structure stores a resizable array and maintains a
    pointer, length, capacity triplet. There is a bunch of logic to
    manipulate these to correctly keep track of which parts of the
    vector are in use.
- The Allocator trait.
  - This trait defines what functions an allocator must provide.
  - The data structures Box and Vec require you to specify an allocator,
    and internally they call into the allocator to manage the backing
    memory for their data.
- Three concrete implementations of the Allocator trait.
  - These are kmalloc, vmalloc, and kvmalloc respectively.

In my eyes, the further down this list you get, the more likely it is
that the patch needs to go through the MM tree.

Alice



* Re: [PATCH v11 0/4] support large align and nid in Rust allocators
  2025-07-09 11:31       ` Alice Ryhl
@ 2025-07-09 12:24         ` Lorenzo Stoakes
  0 siblings, 0 replies; 25+ messages in thread
From: Lorenzo Stoakes @ 2025-07-09 12:24 UTC (permalink / raw)
  To: Alice Ryhl
  Cc: Danilo Krummrich, Vitaly Wool, linux-mm, akpm, linux-kernel,
	Uladzislau Rezki, Vlastimil Babka, rust-for-linux, Liam Howlett

On Wed, Jul 09, 2025 at 11:31:31AM +0000, Alice Ryhl wrote:
> On Tue, Jul 08, 2025 at 02:19:38PM +0100, Lorenzo Stoakes wrote:
> > On Tue, Jul 08, 2025 at 01:55:18PM +0200, Danilo Krummrich wrote:
> > > On Tue, Jul 08, 2025 at 11:58:06AM +0100, Lorenzo Stoakes wrote:
> > > > +cc Liam
> > > >
> > > > Hi guys,
> > > >
> > > > We have a section in MAINTAINERS for mm rust (MEMORY MANAGEMENT - RUST), so
> > > > it's slightly concerning to find a series (at v11!) like this that changes
> > > > mm-related stuff and it involves files not listed there and nobody bothered
> > > > to cc the people listed there.
> > >
> > > What files are you referring to? Are you referring to:
> > >
> > > 	rust/kernel/alloc.rs
> > > 	rust/kernel/alloc/*
> > >
> > > If so, they're indeed not under the "MEMORY MANAGEMENT - RUST" entry, which
> > > so far seems correct.
> >
> > Looking at these, they seem to be intended to be the primary means by which
> > slab/vmalloc allocations will be managed in rust kernel code correct?
> >
> > There's also stuff relating to NUMA etc.
> >
> > I really do wonder where the line between this and the mm stuff is. Because
> > if the idea is 'well this is just a wrapper around slab/vmalloc' surely the
> > same can be said of what's in rust/kernel/mm.rs re: VMAs?
> >
> > So if this is the rust equivalent of include/linux/slab.h and mm/slub.c
> > then that does seem to me to suggest this should be considered an mm/rust
> > thing right?
> >
> > It'd be good to know exactly what is considered mm rust and should go
> > through the mm tree and what isn't.
> >
> > Maybe Alice has some insights on this?
>
> The Rust standard library has three pieces:
>
> - core. Defines standard types that can work anywhere. (such as ints)
> - alloc. Defines standard types that require an allocator. (such as vectors)
> - std. Defines standard types that require an OS. (such as File or TcpStream)

Ahh this makes a lot of sense.

>
> In the kernel we used to use both core and alloc, but we switched away
> from alloc because it doesn't support GFP flags well. The 'RUST [ALLOC]'
> subsystem originates from that transition from the Rust stdlib alloc to
> our own implementation. It contains essentially three pieces:
>
> - Two data structures Vec and Box.
>   - The Box data structure is the simplest possible user of allocation:
>     A Box<T> stores a single instance of the struct T in its own
>     allocation.
>   - The Vec data structure stores a resizable array and maintains a
>     pointer, length, capacity triplet. There is a bunch of logic to
>     manipulate these to correctly keep track of which parts of the
>     vector are in use.
> - The Allocator trait.
>   - This trait defines what functions an allocator must provide.
>   - The data structures Box and Vec require you to specify an allocator,
>     and internally they call into the allocator to manage the backing
>     memory for their data.
> - Three concrete implementations of the Allocator trait.
>   - These are kmalloc, vmalloc, and kvmalloc respectively.
>
> In my eyes, the further down this list you get, the more likely it is
> that the patch needs to go through the MM tree.

Thanks that's a really useful explanation.

And makes sense that the closer we get to the underlying bits that provide
the actual memory used by all of the above naturally we get closer to core
mm stuff.

I think the implementation details of all of this, looking far into the
future with rust doing a lot more, do end up constituting essentially mm
semantics. If, say (for the sake of argument), a decision were made to -
rather than use kvmalloc to back an allocation - introduce a new heuristic
that determined whether to use vmalloc or kmalloc, then of course this would
have a very big impact on how slab/vmalloc allocations were done in the
kernel.

Overall I think the approach we're taking - simply adding some mm folks as
reviewers to 'RUST [ALLOC]' - solves the issue wrt being aware of changes
in this area without too much fuss.

>
> Alice

Thanks, Lorenzo



* Re: [PATCH v11 2/4] mm/slub: allow to set node and align in k[v]realloc
  2025-07-08 14:03     ` Vitaly Wool
@ 2025-07-09 13:40       ` Vitaly Wool
  2025-07-09 14:13         ` Vlastimil Babka
  0 siblings, 1 reply; 25+ messages in thread
From: Vitaly Wool @ 2025-07-09 13:40 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: linux-mm, akpm, linux-kernel, Uladzislau Rezki, Danilo Krummrich,
	Alice Ryhl, rust-for-linux, Hyeonggon Yoo, Roman Gushchin,
	Lorenzo Stoakes, Liam R. Howlett



> On Jul 8, 2025, at 4:03 PM, Vitaly Wool <vitaly.wool@konsulko.se> wrote:
> 
> 
> 
>> On Jul 8, 2025, at 2:52 PM, Vlastimil Babka <vbabka@suse.cz> wrote:
>> 
>> On 7/7/25 18:49, Vitaly Wool wrote:
>>> Reimplement k[v]realloc_node() to be able to set node and
>>> alignment should a user need to do so. In order to do that while
>>> retaining the maximal backward compatibility, add
>>> k[v]realloc_node_align() functions and redefine the rest of API
>>> using these new ones.
>>> 
>>> With that change we also provide the ability for the Rust part of
>>> the kernel to set node and alignment in its K[v]xxx
>>> [re]allocations.
>>> 
>>> Signed-off-by: Vitaly Wool <vitaly.wool@konsulko.se>
>>> ---
>>> include/linux/slab.h | 40 ++++++++++++++++++---------
>>> mm/slub.c            | 64 ++++++++++++++++++++++++++++++--------------
>>> 2 files changed, 71 insertions(+), 33 deletions(-)
>>> 
>>> diff --git a/include/linux/slab.h b/include/linux/slab.h
>>> index d5a8ab98035c..13abcf4ada22 100644
>>> --- a/include/linux/slab.h
>>> +++ b/include/linux/slab.h
>>> @@ -465,9 +465,15 @@ int kmem_cache_shrink(struct kmem_cache *s);
>>> /*
>>> * Common kmalloc functions provided by all allocators
>>> */
>>> -void * __must_check krealloc_noprof(const void *objp, size_t new_size,
>>> -     gfp_t flags) __realloc_size(2);
>>> -#define krealloc(...) alloc_hooks(krealloc_noprof(__VA_ARGS__))
>>> +void * __must_check krealloc_node_align_noprof(const void *objp, size_t new_size,
>>> +        unsigned long align,
>>> +        gfp_t flags, int nid) __realloc_size(2);
>>> +#define krealloc_node_noprof(_p, _s, _f, _n) \
>>> + krealloc_node_align_noprof(_p, _s, 1, _f, _n)
>>> +#define krealloc_noprof(...) krealloc_node_noprof(__VA_ARGS__, NUMA_NO_NODE)
>>> +#define krealloc_node_align(...) alloc_hooks(krealloc_node_align_noprof(__VA_ARGS__))
>>> +#define krealloc_node(...) alloc_hooks(krealloc_node_noprof(__VA_ARGS__))
>>> +#define krealloc(...) alloc_hooks(krealloc_noprof(__VA_ARGS__))
>> 
>> Hm wonder if krealloc() and krealloc_node_align() would be enough. Is
>> krealloc_node() only used between patch 3 and 4?
>> Also perhaps it would be more concise to only have
>> krealloc_node_align_noprof() with alloc_hooks wrappers filling the
>> NUMA_NO_NODE (and 1), so we don't need to #define the _noprof variant of
>> everything. The _noprof callers are rare so they can just always use
>> krealloc_node_align_noprof() directly and also fill in the NUMA_NO_NODE (and 1).
> 
> I don’t think that krealloc_node() is used at all at the moment. I thought I’d define these to be symmetrical to vmalloc() but if you believe these are redundant, I’m all for removing them.
> 
Well, krealloc_noprof() appears to be used by nommu.c and it feels a bit weird to make nommu code deal with NUMA nodes. So unless you feel strongly about it, I would keep krealloc_noprof().

~Vitaly

>> 
>>> void kfree(const void *objp);
>>> void kfree_sensitive(const void *objp);
>>> @@ -1041,18 +1047,23 @@ static inline __alloc_size(1) void *kzalloc_noprof(size_t size, gfp_t flags)
>>> #define kzalloc(...) alloc_hooks(kzalloc_noprof(__VA_ARGS__))
>>> #define kzalloc_node(_size, _flags, _node) kmalloc_node(_size, (_flags)|__GFP_ZERO, _node)
>>> 
>>> -void *__kvmalloc_node_noprof(DECL_BUCKET_PARAMS(size, b), gfp_t flags, int node) __alloc_size(1);
>>> -#define kvmalloc_node_noprof(size, flags, node) \
>>> - __kvmalloc_node_noprof(PASS_BUCKET_PARAMS(size, NULL), flags, node)
>>> +void *__kvmalloc_node_noprof(DECL_BUCKET_PARAMS(size, b), unsigned long align,
>>> +      gfp_t flags, int node) __alloc_size(1);
>>> +#define kvmalloc_node_align_noprof(_size, _align, _flags, _node) \
>>> + __kvmalloc_node_noprof(PASS_BUCKET_PARAMS(_size, NULL), _align, _flags, _node)
>>> +#define kvmalloc_node_noprof(_size, _flags, _node) \
>>> + kvmalloc_node_align_noprof(_size, 1, _flags, _node)
>>> +#define kvmalloc_node_align(...) \
>>> + alloc_hooks(kvmalloc_node_align_noprof(__VA_ARGS__))
>>> #define kvmalloc_node(...) alloc_hooks(kvmalloc_node_noprof(__VA_ARGS__))
>> 
>> Ditto.
>> 
>>> 
>>> -#define kvmalloc(_size, _flags) kvmalloc_node(_size, _flags, NUMA_NO_NODE)
>>> -#define kvmalloc_noprof(_size, _flags) kvmalloc_node_noprof(_size, _flags, NUMA_NO_NODE)
>>> +#define kvmalloc_noprof(...) kvmalloc_node_noprof(__VA_ARGS__, NUMA_NO_NODE)
>>> +#define kvmalloc(...) alloc_hooks(kvmalloc_noprof(__VA_ARGS__))
>>> #define kvzalloc(_size, _flags) kvmalloc(_size, (_flags)|__GFP_ZERO)
>>> 
>>> -#define kvzalloc_node(_size, _flags, _node) kvmalloc_node(_size, (_flags)|__GFP_ZERO, _node)
>>> +#define kvzalloc_node(_s, _f, _n) kvmalloc_node(_s, (_f)|__GFP_ZERO, _n)
>>> #define kmem_buckets_valloc(_b, _size, _flags) \
>>> - alloc_hooks(__kvmalloc_node_noprof(PASS_BUCKET_PARAMS(_size, _b), _flags, NUMA_NO_NODE))
>>> + alloc_hooks(__kvmalloc_node_noprof(PASS_BUCKET_PARAMS(_size, _b), 1, _flags, NUMA_NO_NODE))
>>> 
>>> static inline __alloc_size(1, 2) void *
>>> kvmalloc_array_node_noprof(size_t n, size_t size, gfp_t flags, int node)
>>> @@ -1068,13 +1079,16 @@ kvmalloc_array_node_noprof(size_t n, size_t size, gfp_t flags, int node)
>>> #define kvmalloc_array_noprof(...) kvmalloc_array_node_noprof(__VA_ARGS__, NUMA_NO_NODE)
>>> #define kvcalloc_node_noprof(_n,_s,_f,_node) kvmalloc_array_node_noprof(_n,_s,(_f)|__GFP_ZERO,_node)
>>> #define kvcalloc_noprof(...) kvcalloc_node_noprof(__VA_ARGS__, NUMA_NO_NODE)
>>> -
>>> #define kvmalloc_array(...) alloc_hooks(kvmalloc_array_noprof(__VA_ARGS__))
>>> #define kvcalloc_node(...) alloc_hooks(kvcalloc_node_noprof(__VA_ARGS__))
>>> #define kvcalloc(...) alloc_hooks(kvcalloc_noprof(__VA_ARGS__))
>>> 
>>> -void *kvrealloc_noprof(const void *p, size_t size, gfp_t flags)
>>> - __realloc_size(2);
>>> +void *kvrealloc_node_align_noprof(const void *p, size_t size, unsigned long align,
>>> +   gfp_t flags, int nid) __realloc_size(2);
>>> +#define kvrealloc_node_align(...) kvrealloc_node_align_noprof(__VA_ARGS__)
>>> +#define kvrealloc_node_noprof(_p, _s, _f, _n) kvrealloc_node_align_noprof(_p, _s, 1, _f, _n)
>>> +#define kvrealloc_node(...) alloc_hooks(kvrealloc_node_noprof(__VA_ARGS__))
>>> +#define kvrealloc_noprof(...) kvrealloc_node_noprof(__VA_ARGS__, NUMA_NO_NODE)
>>> #define kvrealloc(...) alloc_hooks(kvrealloc_noprof(__VA_ARGS__))
>> 
>> Ditto.
>> 
>>> extern void kvfree(const void *addr);
>>> diff --git a/mm/slub.c b/mm/slub.c
>>> index c4b64821e680..881244c357dd 100644
>>> --- a/mm/slub.c
>>> +++ b/mm/slub.c
>>> @@ -4845,7 +4845,7 @@ void kfree(const void *object)
>>> EXPORT_SYMBOL(kfree);
>>> 
>>> static __always_inline __realloc_size(2) void *
>>> -__do_krealloc(const void *p, size_t new_size, gfp_t flags)
>>> +__do_krealloc(const void *p, size_t new_size, unsigned long align, gfp_t flags, int nid)
>>> {
>>> void *ret;
>>> size_t ks = 0;
>>> @@ -4859,6 +4859,20 @@ __do_krealloc(const void *p, size_t new_size, gfp_t flags)
>>> if (!kasan_check_byte(p))
>>> return NULL;
>>> 
>>> + /* refuse to proceed if alignment is bigger than what kmalloc() provides */
>>> + if (!IS_ALIGNED((unsigned long)p, align) || new_size < align)
>>> + return NULL;
>>> +
>>> + /*
>>> +  * If reallocation is not necessary (e. g. the new size is less
>>> +  * than the current allocated size), the current allocation will be
>>> +  * preserved unless __GFP_THISNODE is set. In the latter case a new
>>> +  * allocation on the requested node will be attempted.
>>> +  */
>>> + if (unlikely(flags & __GFP_THISNODE) && nid != NUMA_NO_NODE &&
>>> +      nid != page_to_nid(vmalloc_to_page(p)))
>> 
>> We need virt_to_page() here not vmalloc_to_page().
> 
> Indeed, thanks. It is a c’n’p error, we had virt_to_page() in earlier patchsets (i. e. up until v10).
> 
>> 
>>> + goto alloc_new;
>>> +
>>> if (is_kfence_address(p)) {
>>> ks = orig_size = kfence_ksize(p);
>>> } else {





* Re: [PATCH v11 2/4] mm/slub: allow to set node and align in k[v]realloc
  2025-07-09 13:40       ` Vitaly Wool
@ 2025-07-09 14:13         ` Vlastimil Babka
  0 siblings, 0 replies; 25+ messages in thread
From: Vlastimil Babka @ 2025-07-09 14:13 UTC (permalink / raw)
  To: Vitaly Wool
  Cc: linux-mm, akpm, linux-kernel, Uladzislau Rezki, Danilo Krummrich,
	Alice Ryhl, rust-for-linux, Hyeonggon Yoo, Roman Gushchin,
	Lorenzo Stoakes, Liam R. Howlett

On 7/9/25 15:40, Vitaly Wool wrote:
>> 
>> I don’t think that krealloc_node() is used at all at the moment. I thought I’d define these to be symmetrical to vmalloc() but if you believe these are redundant, I’m all for removing them.
>> 
> Well, krealloc_noprof() appears to be used by nommu.c and it feels a bit weird to make nommu code deal with NUMA nodes. So unless you feel strongly about it, I would keep krealloc_noprof().

Oh well, let's keep it then.




end of thread, other threads:[~2025-07-09 14:13 UTC | newest]

Thread overview: 25+ messages
2025-07-07 16:47 [PATCH v11 0/4] support large align and nid in Rust allocators Vitaly Wool
2025-07-07 16:48 ` [PATCH v11 1/4] mm/vmalloc: allow to set node and align in vrealloc Vitaly Wool
2025-07-08 12:12   ` Vlastimil Babka
2025-07-07 16:49 ` [PATCH v11 2/4] mm/slub: allow to set node and align in k[v]realloc Vitaly Wool
2025-07-08 12:52   ` Vlastimil Babka
2025-07-08 14:03     ` Vitaly Wool
2025-07-09 13:40       ` Vitaly Wool
2025-07-09 14:13         ` Vlastimil Babka
2025-07-07 16:49 ` [PATCH v11 3/4] rust: add support for NUMA ids in allocations Vitaly Wool
2025-07-08 12:15   ` Danilo Krummrich
2025-07-07 16:49 ` [PATCH v11 4/4] rust: support large alignments " Vitaly Wool
2025-07-08 12:16   ` Danilo Krummrich
2025-07-08 10:58 ` [PATCH v11 0/4] support large align and nid in Rust allocators Lorenzo Stoakes
2025-07-08 11:12   ` Lorenzo Stoakes
2025-07-08 11:55   ` Danilo Krummrich
2025-07-08 12:36     ` Lorenzo Stoakes
2025-07-08 13:41       ` Danilo Krummrich
2025-07-08 14:06         ` Lorenzo Stoakes
2025-07-08 13:19     ` Lorenzo Stoakes
2025-07-08 14:16       ` Danilo Krummrich
2025-07-08 14:39         ` Lorenzo Stoakes
2025-07-08 15:11           ` Danilo Krummrich
2025-07-08 15:40             ` Lorenzo Stoakes
2025-07-09 11:31       ` Alice Ryhl
2025-07-09 12:24         ` Lorenzo Stoakes
