linux-mm.kvack.org archive mirror
* [PATCH v8 0/4] support large align and nid in Rust allocators
@ 2025-06-28 10:23 Vitaly Wool
  2025-06-28 10:25 ` [PATCH v8 1/4] mm/vmalloc: allow to set node and align in vrealloc Vitaly Wool
                   ` (3 more replies)
  0 siblings, 4 replies; 11+ messages in thread
From: Vitaly Wool @ 2025-06-28 10:23 UTC (permalink / raw)
  To: linux-mm
  Cc: akpm, linux-kernel, Uladzislau Rezki, Danilo Krummrich,
	Alice Ryhl, rust-for-linux, Vitaly Wool

The following patches provide the ability for Rust allocators to set
the NUMA node and a large (> PAGE_SIZE) alignment for their allocations.

Changelog:
v2 -> v3:
* fixed the build breakage for non-MMU configs
v3 -> v4:
* added NUMA node support for k[v]realloc (patch #2)
* removed extra logic in Rust helpers
* patch for Rust allocators split into 2 (align: patch #3 and
  NUMA ids: patch #4)
v4 -> v5:
* reworked NUMA node support for k[v]realloc for all 3 <alloc>_node
  functions to have the same signature
* all 3 <alloc>_node slab/vmalloc functions now support alignment
  specification
* Rust helpers are extended with new functions, the old ones are left
  intact
* Rust support for NUMA nodes comes first now (as patch #3)
v5 -> v6:
* added <alloc>_node_align functions to keep the existing interfaces
  intact
* clearer separation for Rust support of NUMA ids and large alignments
v6 -> v7:
* NUMA identifier as a new Rust type (NumaNode)
* better documentation for changed and new functions and constants
v7 -> v8:
* removed NumaError
* small cleanups per reviewers' comments

Signed-off-by: Vitaly Wool <vitaly.wool@konsulko.se>


^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH v8 1/4] mm/vmalloc: allow to set node and align in vrealloc
  2025-06-28 10:23 [PATCH v8 0/4] support large align and nid in Rust allocators Vitaly Wool
@ 2025-06-28 10:25 ` Vitaly Wool
  2025-06-30 10:30   ` Uladzislau Rezki
  2025-06-28 10:25 ` [PATCH v8 2/4] mm/slub: allow to set node and align in k[v]realloc Vitaly Wool
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 11+ messages in thread
From: Vitaly Wool @ 2025-06-28 10:25 UTC (permalink / raw)
  To: linux-mm
  Cc: akpm, linux-kernel, Uladzislau Rezki, Danilo Krummrich,
	Alice Ryhl, rust-for-linux, Vitaly Wool

Reimplement vrealloc() to be able to set node and alignment should
a user need to do so. Rename the function to vrealloc_node_align()
to better match what it actually does now and introduce macros for
vrealloc() and friends for backward compatibility.

With that change we also provide the ability for the Rust part of
the kernel to set node and alignment in its allocations.
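
As a usage sketch (the buffer, node id and sizes below are invented
for illustration and are not part of this patch):

	/* 2 MiB buffer, 2 MiB aligned, allocated on node 1 */
	void *buf = vrealloc_node_align(NULL, SZ_2M, SZ_2M, GFP_KERNEL, 1);

	/* existing callers keep compiling unchanged */
	buf = vrealloc(buf, 2 * SZ_2M, GFP_KERNEL);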

Signed-off-by: Vitaly Wool <vitaly.wool@konsulko.se>
---
 include/linux/vmalloc.h | 12 +++++++++---
 mm/vmalloc.c            | 20 ++++++++++++++++----
 2 files changed, 25 insertions(+), 7 deletions(-)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index fdc9aeb74a44..68791f7cb3ba 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -197,9 +197,15 @@ extern void *__vcalloc_noprof(size_t n, size_t size, gfp_t flags) __alloc_size(1
 extern void *vcalloc_noprof(size_t n, size_t size) __alloc_size(1, 2);
 #define vcalloc(...)		alloc_hooks(vcalloc_noprof(__VA_ARGS__))
 
-void * __must_check vrealloc_noprof(const void *p, size_t size, gfp_t flags)
-		__realloc_size(2);
-#define vrealloc(...)		alloc_hooks(vrealloc_noprof(__VA_ARGS__))
+void *__must_check vrealloc_node_align_noprof(const void *p, size_t size,
+		unsigned long align, gfp_t flags, int nid) __realloc_size(2);
+#define vrealloc_node_noprof(_p, _s, _f, _nid)	\
+	vrealloc_node_align_noprof(_p, _s, 1, _f, _nid)
+#define vrealloc_noprof(_p, _s, _f)		\
+	vrealloc_node_align_noprof(_p, _s, 1, _f, NUMA_NO_NODE)
+#define vrealloc_node_align(...)		alloc_hooks(vrealloc_node_align_noprof(__VA_ARGS__))
+#define vrealloc_node(...)			alloc_hooks(vrealloc_node_noprof(__VA_ARGS__))
+#define vrealloc(...)				alloc_hooks(vrealloc_noprof(__VA_ARGS__))
 
 extern void vfree(const void *addr);
 extern void vfree_atomic(const void *addr);
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 6dbcdceecae1..d633ac0ff977 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -4089,12 +4089,15 @@ void *vzalloc_node_noprof(unsigned long size, int node)
 EXPORT_SYMBOL(vzalloc_node_noprof);
 
 /**
- * vrealloc - reallocate virtually contiguous memory; contents remain unchanged
+ * vrealloc_node_align_noprof - reallocate virtually contiguous memory; contents
+ * remain unchanged
  * @p: object to reallocate memory for
  * @size: the size to reallocate
+ * @align: requested alignment
  * @flags: the flags for the page level allocator
+ * @nid: node id
  *
- * If @p is %NULL, vrealloc() behaves exactly like vmalloc(). If @size is 0 and
+ * If @p is %NULL, vrealloc_XXX() behaves exactly like vmalloc(). If @size is 0 and
  * @p is not a %NULL pointer, the object pointed to is freed.
  *
  * If __GFP_ZERO logic is requested, callers must ensure that, starting with the
@@ -4111,7 +4114,8 @@ EXPORT_SYMBOL(vzalloc_node_noprof);
  * Return: pointer to the allocated memory; %NULL if @size is zero or in case of
  *         failure
  */
-void *vrealloc_noprof(const void *p, size_t size, gfp_t flags)
+void *vrealloc_node_align_noprof(const void *p, size_t size, unsigned long align,
+				 gfp_t flags, int nid)
 {
 	struct vm_struct *vm = NULL;
 	size_t alloced_size = 0;
@@ -4135,6 +4139,13 @@ void *vrealloc_noprof(const void *p, size_t size, gfp_t flags)
 		if (WARN(alloced_size < old_size,
 			 "vrealloc() has mismatched area vs requested sizes (%p)\n", p))
 			return NULL;
+		if (WARN(nid != NUMA_NO_NODE && nid != page_to_nid(vmalloc_to_page(p)),
+			 "vrealloc() has mismatched nids\n"))
+			return NULL;
+		if (WARN((uintptr_t)p & (align - 1),
+			 "will not reallocate with a bigger alignment (0x%lx)\n",
+			 align))
+			return NULL;
 	}
 
 	/*
@@ -4166,7 +4177,8 @@ void *vrealloc_noprof(const void *p, size_t size, gfp_t flags)
 	}
 
 	/* TODO: Grow the vm_area, i.e. allocate and map additional pages. */
-	n = __vmalloc_noprof(size, flags);
+	n = __vmalloc_node_noprof(size, align, flags, nid, __builtin_return_address(0));
+
 	if (!n)
 		return NULL;
 
-- 
2.39.2



^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH v8 2/4] mm/slub: allow to set node and align in k[v]realloc
  2025-06-28 10:23 [PATCH v8 0/4] support large align and nid in Rust allocators Vitaly Wool
  2025-06-28 10:25 ` [PATCH v8 1/4] mm/vmalloc: allow to set node and align in vrealloc Vitaly Wool
@ 2025-06-28 10:25 ` Vitaly Wool
  2025-06-28 10:26 ` [PATCH v8 3/4] rust: add support for NUMA ids in allocations Vitaly Wool
  2025-06-28 10:26 ` [PATCH v8 4/4] rust: support large alignments " Vitaly Wool
  3 siblings, 0 replies; 11+ messages in thread
From: Vitaly Wool @ 2025-06-28 10:25 UTC (permalink / raw)
  To: linux-mm
  Cc: akpm, linux-kernel, Uladzislau Rezki, Danilo Krummrich,
	Alice Ryhl, rust-for-linux, Vitaly Wool

Reimplement k[v]realloc_node() to be able to set node and
alignment should a user need to do so. In order to do that while
retaining the maximal backward compatibility, add
k[v]realloc_node_align() functions and redefine the rest of API
using these new ones.

With that change we also provide the ability for the Rust part of
the kernel to set node and alignment in its K[v]xxx
[re]allocations.
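
As a sketch of the extended API (sizes, alignment and node id below
are arbitrary illustration values):

	/* slab-backed reallocation, 128-byte aligned, on node 0 */
	void *obj = krealloc_node_align(NULL, 512, 128, GFP_KERNEL, 0);

	/* may fall back to vmalloc; no node preference here */
	obj = kvrealloc_node_align(obj, 4 * PAGE_SIZE, PAGE_SIZE,
				   GFP_KERNEL, NUMA_NO_NODE);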

Signed-off-by: Vitaly Wool <vitaly.wool@konsulko.se>
---
 include/linux/slab.h | 40 +++++++++++++++++++---------
 mm/nommu.c           |  3 ++-
 mm/slub.c            | 63 ++++++++++++++++++++++++++++++--------------
 3 files changed, 72 insertions(+), 34 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index d5a8ab98035c..13abcf4ada22 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -465,9 +465,15 @@ int kmem_cache_shrink(struct kmem_cache *s);
 /*
  * Common kmalloc functions provided by all allocators
  */
-void * __must_check krealloc_noprof(const void *objp, size_t new_size,
-				    gfp_t flags) __realloc_size(2);
-#define krealloc(...)				alloc_hooks(krealloc_noprof(__VA_ARGS__))
+void * __must_check krealloc_node_align_noprof(const void *objp, size_t new_size,
+					       unsigned long align,
+					       gfp_t flags, int nid) __realloc_size(2);
+#define krealloc_node_noprof(_p, _s, _f, _n) \
+	krealloc_node_align_noprof(_p, _s, 1, _f, _n)
+#define krealloc_noprof(...)		krealloc_node_noprof(__VA_ARGS__, NUMA_NO_NODE)
+#define krealloc_node_align(...)	alloc_hooks(krealloc_node_align_noprof(__VA_ARGS__))
+#define krealloc_node(...)		alloc_hooks(krealloc_node_noprof(__VA_ARGS__))
+#define krealloc(...)			alloc_hooks(krealloc_noprof(__VA_ARGS__))
 
 void kfree(const void *objp);
 void kfree_sensitive(const void *objp);
@@ -1041,18 +1047,23 @@ static inline __alloc_size(1) void *kzalloc_noprof(size_t size, gfp_t flags)
 #define kzalloc(...)				alloc_hooks(kzalloc_noprof(__VA_ARGS__))
 #define kzalloc_node(_size, _flags, _node)	kmalloc_node(_size, (_flags)|__GFP_ZERO, _node)
 
-void *__kvmalloc_node_noprof(DECL_BUCKET_PARAMS(size, b), gfp_t flags, int node) __alloc_size(1);
-#define kvmalloc_node_noprof(size, flags, node)	\
-	__kvmalloc_node_noprof(PASS_BUCKET_PARAMS(size, NULL), flags, node)
+void *__kvmalloc_node_noprof(DECL_BUCKET_PARAMS(size, b), unsigned long align,
+			     gfp_t flags, int node) __alloc_size(1);
+#define kvmalloc_node_align_noprof(_size, _align, _flags, _node)	\
+	__kvmalloc_node_noprof(PASS_BUCKET_PARAMS(_size, NULL), _align, _flags, _node)
+#define kvmalloc_node_noprof(_size, _flags, _node)	\
+	kvmalloc_node_align_noprof(_size, 1, _flags, _node)
+#define kvmalloc_node_align(...)		\
+	alloc_hooks(kvmalloc_node_align_noprof(__VA_ARGS__))
 #define kvmalloc_node(...)			alloc_hooks(kvmalloc_node_noprof(__VA_ARGS__))
 
-#define kvmalloc(_size, _flags)			kvmalloc_node(_size, _flags, NUMA_NO_NODE)
-#define kvmalloc_noprof(_size, _flags)		kvmalloc_node_noprof(_size, _flags, NUMA_NO_NODE)
+#define kvmalloc_noprof(...)			kvmalloc_node_noprof(__VA_ARGS__, NUMA_NO_NODE)
+#define kvmalloc(...)				alloc_hooks(kvmalloc_noprof(__VA_ARGS__))
 #define kvzalloc(_size, _flags)			kvmalloc(_size, (_flags)|__GFP_ZERO)
 
-#define kvzalloc_node(_size, _flags, _node)	kvmalloc_node(_size, (_flags)|__GFP_ZERO, _node)
+#define kvzalloc_node(_s, _f, _n)		kvmalloc_node(_s, (_f)|__GFP_ZERO, _n)
 #define kmem_buckets_valloc(_b, _size, _flags)	\
-	alloc_hooks(__kvmalloc_node_noprof(PASS_BUCKET_PARAMS(_size, _b), _flags, NUMA_NO_NODE))
+	alloc_hooks(__kvmalloc_node_noprof(PASS_BUCKET_PARAMS(_size, _b), 1, _flags, NUMA_NO_NODE))
 
 static inline __alloc_size(1, 2) void *
 kvmalloc_array_node_noprof(size_t n, size_t size, gfp_t flags, int node)
@@ -1068,13 +1079,16 @@ kvmalloc_array_node_noprof(size_t n, size_t size, gfp_t flags, int node)
 #define kvmalloc_array_noprof(...)		kvmalloc_array_node_noprof(__VA_ARGS__, NUMA_NO_NODE)
 #define kvcalloc_node_noprof(_n,_s,_f,_node)	kvmalloc_array_node_noprof(_n,_s,(_f)|__GFP_ZERO,_node)
 #define kvcalloc_noprof(...)			kvcalloc_node_noprof(__VA_ARGS__, NUMA_NO_NODE)
-
 #define kvmalloc_array(...)			alloc_hooks(kvmalloc_array_noprof(__VA_ARGS__))
 #define kvcalloc_node(...)			alloc_hooks(kvcalloc_node_noprof(__VA_ARGS__))
 #define kvcalloc(...)				alloc_hooks(kvcalloc_noprof(__VA_ARGS__))
 
-void *kvrealloc_noprof(const void *p, size_t size, gfp_t flags)
-		__realloc_size(2);
+void *kvrealloc_node_align_noprof(const void *p, size_t size, unsigned long align,
+				  gfp_t flags, int nid) __realloc_size(2);
+#define kvrealloc_node_align(...)		kvrealloc_node_align_noprof(__VA_ARGS__)
+#define kvrealloc_node_noprof(_p, _s, _f, _n)	kvrealloc_node_align_noprof(_p, _s, 1, _f, _n)
+#define kvrealloc_node(...)			alloc_hooks(kvrealloc_node_noprof(__VA_ARGS__))
+#define kvrealloc_noprof(...)			kvrealloc_node_noprof(__VA_ARGS__, NUMA_NO_NODE)
 #define kvrealloc(...)				alloc_hooks(kvrealloc_noprof(__VA_ARGS__))
 
 extern void kvfree(const void *addr);
diff --git a/mm/nommu.c b/mm/nommu.c
index 87e1acab0d64..8359b2025b9f 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -119,7 +119,8 @@ void *__vmalloc_noprof(unsigned long size, gfp_t gfp_mask)
 }
 EXPORT_SYMBOL(__vmalloc_noprof);
 
-void *vrealloc_noprof(const void *p, size_t size, gfp_t flags)
+void *vrealloc_node_align_noprof(const void *p, size_t size, unsigned long align,
+				 gfp_t flags, int node)
 {
 	return krealloc_noprof(p, size, (flags | __GFP_COMP) & ~__GFP_HIGHMEM);
 }
diff --git a/mm/slub.c b/mm/slub.c
index c4b64821e680..ec355ce31965 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4845,7 +4845,7 @@ void kfree(const void *object)
 EXPORT_SYMBOL(kfree);
 
 static __always_inline __realloc_size(2) void *
-__do_krealloc(const void *p, size_t new_size, gfp_t flags)
+__do_krealloc(const void *p, size_t new_size, unsigned long align, gfp_t flags, int nid)
 {
 	void *ret;
 	size_t ks = 0;
@@ -4859,6 +4859,19 @@ __do_krealloc(const void *p, size_t new_size, gfp_t flags)
 	if (!kasan_check_byte(p))
 		return NULL;
 
+	/* refuse to proceed if alignment is bigger than what kmalloc() provides */
+	if (((uintptr_t)p & (align - 1)) || new_size < align)
+		return NULL;
+
+	/*
+	 * it is possible to support reallocation with a different nid, but
+	 * it doesn't go well with the concept of krealloc(). Such
+	 * reallocation should be done explicitly instead.
+	 */
+	if (WARN(nid != NUMA_NO_NODE && nid != page_to_nid(virt_to_page(p)),
+				"krealloc() has mismatched nids\n"))
+		return NULL;
+
 	if (is_kfence_address(p)) {
 		ks = orig_size = kfence_ksize(p);
 	} else {
@@ -4903,7 +4916,7 @@ __do_krealloc(const void *p, size_t new_size, gfp_t flags)
 	return (void *)p;
 
 alloc_new:
-	ret = kmalloc_node_track_caller_noprof(new_size, flags, NUMA_NO_NODE, _RET_IP_);
+	ret = kmalloc_node_track_caller_noprof(new_size, flags, nid, _RET_IP_);
 	if (ret && p) {
 		/* Disable KASAN checks as the object's redzone is accessed. */
 		kasan_disable_current();
@@ -4915,10 +4928,12 @@ __do_krealloc(const void *p, size_t new_size, gfp_t flags)
 }
 
 /**
- * krealloc - reallocate memory. The contents will remain unchanged.
+ * krealloc_node_align - reallocate memory. The contents will remain unchanged.
  * @p: object to reallocate memory for.
  * @new_size: how many bytes of memory are required.
+ * @align: desired alignment.
  * @flags: the type of memory to allocate.
+ * @nid: NUMA node or NUMA_NO_NODE
  *
  * If @p is %NULL, krealloc() behaves exactly like kmalloc().  If @new_size
  * is 0 and @p is not a %NULL pointer, the object pointed to is freed.
@@ -4947,7 +4962,8 @@ __do_krealloc(const void *p, size_t new_size, gfp_t flags)
  *
  * Return: pointer to the allocated memory or %NULL in case of error
  */
-void *krealloc_noprof(const void *p, size_t new_size, gfp_t flags)
+void *krealloc_node_align_noprof(const void *p, size_t new_size, unsigned long align,
+				 gfp_t flags, int nid)
 {
 	void *ret;
 
@@ -4956,13 +4972,13 @@ void *krealloc_noprof(const void *p, size_t new_size, gfp_t flags)
 		return ZERO_SIZE_PTR;
 	}
 
-	ret = __do_krealloc(p, new_size, flags);
+	ret = __do_krealloc(p, new_size, align, flags, nid);
 	if (ret && kasan_reset_tag(p) != kasan_reset_tag(ret))
 		kfree(p);
 
 	return ret;
 }
-EXPORT_SYMBOL(krealloc_noprof);
+EXPORT_SYMBOL(krealloc_node_align_noprof);
 
 static gfp_t kmalloc_gfp_adjust(gfp_t flags, size_t size)
 {
@@ -4993,6 +5009,7 @@ static gfp_t kmalloc_gfp_adjust(gfp_t flags, size_t size)
  * failure, fall back to non-contiguous (vmalloc) allocation.
  * @size: size of the request.
  * @b: which set of kmalloc buckets to allocate from.
+ * @align: desired alignment.
  * @flags: gfp mask for the allocation - must be compatible (superset) with GFP_KERNEL.
  * @node: numa node to allocate from
  *
@@ -5005,19 +5022,22 @@ static gfp_t kmalloc_gfp_adjust(gfp_t flags, size_t size)
  *
  * Return: pointer to the allocated memory of %NULL in case of failure
  */
-void *__kvmalloc_node_noprof(DECL_BUCKET_PARAMS(size, b), gfp_t flags, int node)
+void *__kvmalloc_node_noprof(DECL_BUCKET_PARAMS(size, b), unsigned long align,
+			     gfp_t flags, int node)
 {
 	void *ret;
 
 	/*
 	 * It doesn't really make sense to fallback to vmalloc for sub page
-	 * requests
+	 * requests and small alignments
 	 */
-	ret = __do_kmalloc_node(size, PASS_BUCKET_PARAM(b),
-				kmalloc_gfp_adjust(flags, size),
-				node, _RET_IP_);
-	if (ret || size <= PAGE_SIZE)
-		return ret;
+	if (size >= align) {
+		ret = __do_kmalloc_node(size, PASS_BUCKET_PARAM(b),
+					kmalloc_gfp_adjust(flags, size),
+					node, _RET_IP_);
+		if (ret || size <= PAGE_SIZE)
+			return ret;
+	}
 
 	/* non-sleeping allocations are not supported by vmalloc */
 	if (!gfpflags_allow_blocking(flags))
@@ -5035,7 +5055,7 @@ void *__kvmalloc_node_noprof(DECL_BUCKET_PARAMS(size, b), gfp_t flags, int node)
 	 * about the resulting pointer, and cannot play
 	 * protection games.
 	 */
-	return __vmalloc_node_range_noprof(size, 1, VMALLOC_START, VMALLOC_END,
+	return __vmalloc_node_range_noprof(size, align, VMALLOC_START, VMALLOC_END,
 			flags, PAGE_KERNEL, VM_ALLOW_HUGE_VMAP,
 			node, __builtin_return_address(0));
 }
@@ -5079,10 +5099,12 @@ void kvfree_sensitive(const void *addr, size_t len)
 EXPORT_SYMBOL(kvfree_sensitive);
 
 /**
- * kvrealloc - reallocate memory; contents remain unchanged
+ * kvrealloc_node_align - reallocate memory; contents remain unchanged
  * @p: object to reallocate memory for
  * @size: the size to reallocate
+ * @align: desired alignment
  * @flags: the flags for the page level allocator
+ * @nid: NUMA node id
  *
  * If @p is %NULL, kvrealloc() behaves exactly like kvmalloc(). If @size is 0
  * and @p is not a %NULL pointer, the object pointed to is freed.
@@ -5100,17 +5122,18 @@ EXPORT_SYMBOL(kvfree_sensitive);
  *
  * Return: pointer to the allocated memory or %NULL in case of error
  */
-void *kvrealloc_noprof(const void *p, size_t size, gfp_t flags)
+void *kvrealloc_node_align_noprof(const void *p, size_t size, unsigned long align,
+				  gfp_t flags, int nid)
 {
 	void *n;
 
 	if (is_vmalloc_addr(p))
-		return vrealloc_noprof(p, size, flags);
+		return vrealloc_node_align_noprof(p, size, align, flags, nid);
 
-	n = krealloc_noprof(p, size, kmalloc_gfp_adjust(flags, size));
+	n = krealloc_node_align_noprof(p, size, align, kmalloc_gfp_adjust(flags, size), nid);
 	if (!n) {
 		/* We failed to krealloc(), fall back to kvmalloc(). */
-		n = kvmalloc_noprof(size, flags);
+		n = kvmalloc_node_align_noprof(size, align, flags, nid);
 		if (!n)
 			return NULL;
 
@@ -5126,7 +5149,7 @@ void *kvrealloc_noprof(const void *p, size_t size, gfp_t flags)
 
 	return n;
 }
-EXPORT_SYMBOL(kvrealloc_noprof);
+EXPORT_SYMBOL(kvrealloc_node_align_noprof);
 
 struct detached_freelist {
 	struct slab *slab;
-- 
2.39.2



^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH v8 3/4] rust: add support for NUMA ids in allocations
  2025-06-28 10:23 [PATCH v8 0/4] support large align and nid in Rust allocators Vitaly Wool
  2025-06-28 10:25 ` [PATCH v8 1/4] mm/vmalloc: allow to set node and align in vrealloc Vitaly Wool
  2025-06-28 10:25 ` [PATCH v8 2/4] mm/slub: allow to set node and align in k[v]realloc Vitaly Wool
@ 2025-06-28 10:26 ` Vitaly Wool
  2025-06-28 12:21   ` Danilo Krummrich
  2025-06-28 10:26 ` [PATCH v8 4/4] rust: support large alignments " Vitaly Wool
  3 siblings, 1 reply; 11+ messages in thread
From: Vitaly Wool @ 2025-06-28 10:26 UTC (permalink / raw)
  To: linux-mm
  Cc: akpm, linux-kernel, Uladzislau Rezki, Danilo Krummrich,
	Alice Ryhl, rust-for-linux, Vitaly Wool

Add a new type to support specifying NUMA identifiers in Rust
allocators and extend the allocators to have NUMA id as a
parameter. To that end, modify ReallocFunc to use the new extended
realloc primitives from the C side of the kernel (i.e.
k[v]realloc_node/vrealloc_node) and add the new function
alloc_node to the Allocator trait while keeping the existing one
(alloc) for backward compatibility.

This will allow specifying the node to use for allocations of e.g.
{KV}Box, as well as for future NUMA-aware users of the API.
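
An illustrative sketch of the resulting API (the function name,
module paths and error plumbing here are assumptions made for the
example, not part of this patch):

	use core::alloc::Layout;
	use kernel::alloc::{allocator::Kmalloc, flags::GFP_KERNEL, Allocator, NumaNode};
	use kernel::error::{code::EINVAL, Result};

	fn numa_alloc_sketch() -> Result {
	    // node 0 is arbitrary; NumaNode::new() rejects negative ids
	    let nid = NumaNode::new(0)?;
	    let layout = Layout::from_size_align(1024, 8).map_err(|_| EINVAL)?;
	    let buf = Kmalloc::alloc_node(layout, GFP_KERNEL, nid)?;
	    // SAFETY: `buf` was allocated with `Kmalloc` and `layout` above.
	    unsafe { Kmalloc::free(buf.cast(), layout) };
	    Ok(())
	}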

Signed-off-by: Vitaly Wool <vitaly.wool@konsulko.se>
---
 rust/helpers/slab.c            |  8 ++--
 rust/helpers/vmalloc.c         |  4 +-
 rust/kernel/alloc.rs           | 77 ++++++++++++++++++++++++++++++++--
 rust/kernel/alloc/allocator.rs | 42 +++++++++++--------
 4 files changed, 104 insertions(+), 27 deletions(-)

diff --git a/rust/helpers/slab.c b/rust/helpers/slab.c
index a842bfbddcba..8472370a4338 100644
--- a/rust/helpers/slab.c
+++ b/rust/helpers/slab.c
@@ -3,13 +3,13 @@
 #include <linux/slab.h>
 
 void * __must_check __realloc_size(2)
-rust_helper_krealloc(const void *objp, size_t new_size, gfp_t flags)
+rust_helper_krealloc_node(const void *objp, size_t new_size, gfp_t flags, int node)
 {
-	return krealloc(objp, new_size, flags);
+	return krealloc_node(objp, new_size, flags, node);
 }
 
 void * __must_check __realloc_size(2)
-rust_helper_kvrealloc(const void *p, size_t size, gfp_t flags)
+rust_helper_kvrealloc_node(const void *p, size_t size, gfp_t flags, int node)
 {
-	return kvrealloc(p, size, flags);
+	return kvrealloc_node(p, size, flags, node);
 }
diff --git a/rust/helpers/vmalloc.c b/rust/helpers/vmalloc.c
index 80d34501bbc0..62d30db9a1a6 100644
--- a/rust/helpers/vmalloc.c
+++ b/rust/helpers/vmalloc.c
@@ -3,7 +3,7 @@
 #include <linux/vmalloc.h>
 
 void * __must_check __realloc_size(2)
-rust_helper_vrealloc(const void *p, size_t size, gfp_t flags)
+rust_helper_vrealloc_node(const void *p, size_t size, gfp_t flags, int node)
 {
-	return vrealloc(p, size, flags);
+	return vrealloc_node(p, size, flags, node);
 }
diff --git a/rust/kernel/alloc.rs b/rust/kernel/alloc.rs
index a2c49e5494d3..8d2b046bf947 100644
--- a/rust/kernel/alloc.rs
+++ b/rust/kernel/alloc.rs
@@ -28,7 +28,9 @@
 /// Indicates an allocation error.
 #[derive(Copy, Clone, PartialEq, Eq, Debug)]
 pub struct AllocError;
+
 use core::{alloc::Layout, ptr::NonNull};
+use crate::error::{code::EINVAL, Result};
 
 /// Flags to be used when allocating memory.
 ///
@@ -115,6 +117,30 @@ pub mod flags {
     pub const __GFP_NOWARN: Flags = Flags(bindings::__GFP_NOWARN);
 }
 
+/// Non Uniform Memory Access (NUMA) node identifier
+#[derive(Clone, Copy, PartialEq)]
+pub struct NumaNode(i32);
+
+impl NumaNode {
+    /// create a new NUMA node identifier (non-negative integer)
+    /// returns EINVAL if a negative id is specified
+    pub fn new(node: i32) -> Result<Self> {
+        if node < 0 {
+            return Err(EINVAL);
+        }
+        Ok(Self(node))
+    }
+}
+
+/// Constant to inform the Allocator that the caller does not care which
+/// NUMA node the memory is allocated from
+pub mod numa {
+    use super::NumaNode;
+
+    /// No preference for NUMA node
+    pub const NUMA_NO_NODE: NumaNode = NumaNode(bindings::NUMA_NO_NODE);
+}
+
 /// The kernel's [`Allocator`] trait.
 ///
 /// An implementation of [`Allocator`] can allocate, re-allocate and free memory buffers described
@@ -148,7 +174,7 @@ pub unsafe trait Allocator {
     ///
     /// When the return value is `Ok(ptr)`, then `ptr` is
     /// - valid for reads and writes for `layout.size()` bytes, until it is passed to
-    ///   [`Allocator::free`] or [`Allocator::realloc`],
+    ///   [`Allocator::free`], [`Allocator::realloc`] or [`Allocator::realloc_node`],
     /// - aligned to `layout.align()`,
     ///
     /// Additionally, `Flags` are honored as documented in
@@ -159,7 +185,36 @@ fn alloc(layout: Layout, flags: Flags) -> Result<NonNull<[u8]>, AllocError> {
         unsafe { Self::realloc(None, layout, Layout::new::<()>(), flags) }
     }
 
-    /// Re-allocate an existing memory allocation to satisfy the requested `layout`.
+    /// Allocate memory based on `layout`, `flags` and `nid`.
+    ///
+    /// On success, returns a buffer represented as `NonNull<[u8]>` that satisfies the layout
+    /// constraints (i.e. minimum size and alignment as specified by `layout`).
+    ///
+    /// This function is equivalent to `realloc_node` when called with `None`.
+    ///
+    /// # Guarantees
+    ///
+    /// When the return value is `Ok(ptr)`, then `ptr` is
+    /// - valid for reads and writes for `layout.size()` bytes, until it is passed to
+    ///   [`Allocator::free`], [`Allocator::realloc`] or [`Allocator::realloc_node`],
+    /// - aligned to `layout.align()`,
+    ///
+    /// Additionally, `Flags` are honored as documented in
+    /// <https://docs.kernel.org/core-api/mm-api.html#mm-api-gfp-flags>.
+    fn alloc_node(layout: Layout, flags: Flags, nid: NumaNode)
+                -> Result<NonNull<[u8]>, AllocError> {
+        // SAFETY: Passing `None` to `realloc_node` is valid by its safety requirements and
+        // asks for a new memory allocation.
+        unsafe { Self::realloc_node(None, layout, Layout::new::<()>(), flags, nid) }
+    }
+
+    /// Re-allocate an existing memory allocation to satisfy the requested `layout` and
+    /// optionally a specific NUMA node request to allocate the memory for.
+    /// Systems employing a Non Uniform Memory Access (NUMA) architecture contain
+    /// collections of hardware resources including processors, memory, and I/O buses,
+    /// that comprise what is commonly known as a NUMA node.
+    /// `nid` stands for NUMA id, i. e. NUMA node identifier, which is a non-negative
+    /// integer if a node needs to be specified, or NUMA_NO_NODE if the caller doesn't care.
     ///
     /// If the requested size is zero, `realloc` behaves equivalent to `free`.
     ///
@@ -191,13 +246,29 @@ fn alloc(layout: Layout, flags: Flags) -> Result<NonNull<[u8]>, AllocError> {
     ///   and old size, i.e. `ret_ptr[0..min(layout.size(), old_layout.size())] ==
     ///   p[0..min(layout.size(), old_layout.size())]`.
     /// - when the return value is `Err(AllocError)`, then `ptr` is still valid.
-    unsafe fn realloc(
+    unsafe fn realloc_node(
         ptr: Option<NonNull<u8>>,
         layout: Layout,
         old_layout: Layout,
         flags: Flags,
+        nid: NumaNode,
     ) -> Result<NonNull<[u8]>, AllocError>;
 
+
+    /// Re-allocate an existing memory allocation to satisfy the requested `layout`. This
+    /// function works exactly as realloc_node() but it doesn't give the ability to specify
+    /// the NUMA node in the call.
+    unsafe fn realloc(
+        ptr: Option<NonNull<u8>>,
+        layout: Layout,
+        old_layout: Layout,
+        flags: Flags,
+    ) -> Result<NonNull<[u8]>, AllocError> {
+        // SAFETY: guaranteed by realloc_node()
+        unsafe { Self::realloc_node(ptr, layout, old_layout, flags, numa::NUMA_NO_NODE) }
+    }
+
+
     /// Free an existing memory allocation.
     ///
     /// # Safety
diff --git a/rust/kernel/alloc/allocator.rs b/rust/kernel/alloc/allocator.rs
index aa2dfa9dca4c..2e86e9839a1b 100644
--- a/rust/kernel/alloc/allocator.rs
+++ b/rust/kernel/alloc/allocator.rs
@@ -13,7 +13,7 @@
 use core::ptr;
 use core::ptr::NonNull;
 
-use crate::alloc::{AllocError, Allocator};
+use crate::alloc::{AllocError, Allocator, NumaNode};
 use crate::bindings;
 use crate::pr_warn;
 
@@ -58,18 +58,20 @@ fn aligned_size(new_layout: Layout) -> usize {
 ///
 /// One of the following: `krealloc`, `vrealloc`, `kvrealloc`.
 struct ReallocFunc(
-    unsafe extern "C" fn(*const crate::ffi::c_void, usize, u32) -> *mut crate::ffi::c_void,
+    unsafe extern "C" fn(
+        *const crate::ffi::c_void, usize,  u32, crate::ffi::c_int,
+    ) -> *mut crate::ffi::c_void,
 );
 
 impl ReallocFunc {
-    // INVARIANT: `krealloc` satisfies the type invariants.
-    const KREALLOC: Self = Self(bindings::krealloc);
+    // INVARIANT: `krealloc_node` satisfies the type invariants.
+    const KREALLOC: Self = Self(bindings::krealloc_node);
 
-    // INVARIANT: `vrealloc` satisfies the type invariants.
-    const VREALLOC: Self = Self(bindings::vrealloc);
+    // INVARIANT: `vrealloc_node` satisfies the type invariants.
+    const VREALLOC: Self = Self(bindings::vrealloc_node);
 
-    // INVARIANT: `kvrealloc` satisfies the type invariants.
-    const KVREALLOC: Self = Self(bindings::kvrealloc);
+    // INVARIANT: `kvrealloc_node` satisfies the type invariants.
+    const KVREALLOC: Self = Self(bindings::kvrealloc_node);
 
     /// # Safety
     ///
@@ -87,6 +89,7 @@ unsafe fn call(
         layout: Layout,
         old_layout: Layout,
         flags: Flags,
+        nid: NumaNode,
     ) -> Result<NonNull<[u8]>, AllocError> {
         let size = aligned_size(layout);
         let ptr = match ptr {
@@ -110,7 +113,7 @@ unsafe fn call(
         // - Those functions provide the guarantees of this function.
         let raw_ptr = unsafe {
             // If `size == 0` and `ptr != NULL` the memory behind the pointer is freed.
-            self.0(ptr.cast(), size, flags.0).cast()
+            self.0(ptr.cast(), size, flags.0, nid.0).cast()
         };
 
         let ptr = if size == 0 {
@@ -123,34 +126,36 @@ unsafe fn call(
     }
 }
 
-// SAFETY: `realloc` delegates to `ReallocFunc::call`, which guarantees that
+// SAFETY: `realloc_node` delegates to `ReallocFunc::call`, which guarantees that
 // - memory remains valid until it is explicitly freed,
 // - passing a pointer to a valid memory allocation is OK,
 // - `realloc` satisfies the guarantees, since `ReallocFunc::call` has the same.
 unsafe impl Allocator for Kmalloc {
     #[inline]
-    unsafe fn realloc(
+    unsafe fn realloc_node(
         ptr: Option<NonNull<u8>>,
         layout: Layout,
         old_layout: Layout,
         flags: Flags,
+        nid: NumaNode,
     ) -> Result<NonNull<[u8]>, AllocError> {
         // SAFETY: `ReallocFunc::call` has the same safety requirements as `Allocator::realloc`.
-        unsafe { ReallocFunc::KREALLOC.call(ptr, layout, old_layout, flags) }
+        unsafe { ReallocFunc::KREALLOC.call(ptr, layout, old_layout, flags, nid) }
     }
 }
 
-// SAFETY: `realloc` delegates to `ReallocFunc::call`, which guarantees that
+// SAFETY: `realloc_node` delegates to `ReallocFunc::call`, which guarantees that
 // - memory remains valid until it is explicitly freed,
 // - passing a pointer to a valid memory allocation is OK,
 // - `realloc` satisfies the guarantees, since `ReallocFunc::call` has the same.
 unsafe impl Allocator for Vmalloc {
     #[inline]
-    unsafe fn realloc(
+    unsafe fn realloc_node(
         ptr: Option<NonNull<u8>>,
         layout: Layout,
         old_layout: Layout,
         flags: Flags,
+        nid: NumaNode,
     ) -> Result<NonNull<[u8]>, AllocError> {
         // TODO: Support alignments larger than PAGE_SIZE.
         if layout.align() > bindings::PAGE_SIZE {
@@ -160,21 +165,22 @@ unsafe fn realloc(
 
         // SAFETY: If not `None`, `ptr` is guaranteed to point to valid memory, which was previously
         // allocated with this `Allocator`.
-        unsafe { ReallocFunc::VREALLOC.call(ptr, layout, old_layout, flags) }
+        unsafe { ReallocFunc::VREALLOC.call(ptr, layout, old_layout, flags, nid) }
     }
 }
 
-// SAFETY: `realloc` delegates to `ReallocFunc::call`, which guarantees that
+// SAFETY: `realloc_node` delegates to `ReallocFunc::call`, which guarantees that
 // - memory remains valid until it is explicitly freed,
 // - passing a pointer to a valid memory allocation is OK,
 // - `realloc` satisfies the guarantees, since `ReallocFunc::call` has the same.
 unsafe impl Allocator for KVmalloc {
     #[inline]
-    unsafe fn realloc(
+    unsafe fn realloc_node(
         ptr: Option<NonNull<u8>>,
         layout: Layout,
         old_layout: Layout,
         flags: Flags,
+        nid: NumaNode,
     ) -> Result<NonNull<[u8]>, AllocError> {
         // TODO: Support alignments larger than PAGE_SIZE.
         if layout.align() > bindings::PAGE_SIZE {
@@ -184,6 +190,6 @@ unsafe fn realloc(
 
         // SAFETY: If not `None`, `ptr` is guaranteed to point to valid memory, which was previously
         // allocated with this `Allocator`.
-        unsafe { ReallocFunc::KVREALLOC.call(ptr, layout, old_layout, flags) }
+        unsafe { ReallocFunc::KVREALLOC.call(ptr, layout, old_layout, flags, nid) }
     }
 }
-- 
2.39.2



^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH v8 4/4] rust: support large alignments in allocations
  2025-06-28 10:23 [PATCH v8 0/4] support large align and nid in Rust allocators Vitaly Wool
                   ` (2 preceding siblings ...)
  2025-06-28 10:26 ` [PATCH v8 3/4] rust: add support for NUMA ids in allocations Vitaly Wool
@ 2025-06-28 10:26 ` Vitaly Wool
  3 siblings, 0 replies; 11+ messages in thread
From: Vitaly Wool @ 2025-06-28 10:26 UTC (permalink / raw)
  To: linux-mm
  Cc: akpm, linux-kernel, Uladzislau Rezki, Danilo Krummrich,
	Alice Ryhl, rust-for-linux, Vitaly Wool

Add support for large (> PAGE_SIZE) alignments in Rust allocators.
All the preparations on the C side are already done; we just need
to add bindings for the <alloc>_node_align() functions and start
using those.
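
With that, an allocation with a larger-than-page alignment, which the
removed checks used to reject, goes through (a sketch with made-up
size and alignment values):

	use core::alloc::Layout;
	use kernel::alloc::{allocator::Vmalloc, flags::GFP_KERNEL, numa, Allocator};
	use kernel::error::{code::EINVAL, Result};

	fn large_align_sketch() -> Result {
	    // 64 KiB object aligned to 64 KiB (> PAGE_SIZE on most configs)
	    let layout = Layout::from_size_align(64 << 10, 64 << 10)
	        .map_err(|_| EINVAL)?;
	    let buf = Vmalloc::alloc_node(layout, GFP_KERNEL, numa::NUMA_NO_NODE)?;
	    // SAFETY: `buf` was allocated with `Vmalloc` and `layout` above.
	    unsafe { Vmalloc::free(buf.cast(), layout) };
	    Ok(())
	}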

Signed-off-by: Vitaly Wool <vitaly.wool@konsulko.se>
---
 rust/helpers/slab.c            | 10 ++++++----
 rust/helpers/vmalloc.c         |  5 +++--
 rust/kernel/alloc/allocator.rs | 28 ++++++++--------------------
 3 files changed, 17 insertions(+), 26 deletions(-)

diff --git a/rust/helpers/slab.c b/rust/helpers/slab.c
index 8472370a4338..d729be798f31 100644
--- a/rust/helpers/slab.c
+++ b/rust/helpers/slab.c
@@ -3,13 +3,15 @@
 #include <linux/slab.h>
 
 void * __must_check __realloc_size(2)
-rust_helper_krealloc_node(const void *objp, size_t new_size, gfp_t flags, int node)
+rust_helper_krealloc_node_align(const void *objp, size_t new_size, unsigned long align,
+				gfp_t flags, int node)
 {
-	return krealloc_node(objp, new_size, flags, node);
+	return krealloc_node_align(objp, new_size, align, flags, node);
 }
 
 void * __must_check __realloc_size(2)
-rust_helper_kvrealloc_node(const void *p, size_t size, gfp_t flags, int node)
+rust_helper_kvrealloc_node_align(const void *p, size_t size, unsigned long align,
+				gfp_t flags, int node)
 {
-	return kvrealloc_node(p, size, flags, node);
+	return kvrealloc_node_align(p, size, align, flags, node);
 }
diff --git a/rust/helpers/vmalloc.c b/rust/helpers/vmalloc.c
index 62d30db9a1a6..7d7f7336b3d2 100644
--- a/rust/helpers/vmalloc.c
+++ b/rust/helpers/vmalloc.c
@@ -3,7 +3,8 @@
 #include <linux/vmalloc.h>
 
 void * __must_check __realloc_size(2)
-rust_helper_vrealloc_node(const void *p, size_t size, gfp_t flags, int node)
+rust_helper_vrealloc_node_align(const void *p, size_t size, unsigned long align,
+				gfp_t flags, int node)
 {
-	return vrealloc_node(p, size, flags, node);
+	return vrealloc_node_align(p, size, align, flags, node);
 }
diff --git a/rust/kernel/alloc/allocator.rs b/rust/kernel/alloc/allocator.rs
index 2e86e9839a1b..58e5bf78c159 100644
--- a/rust/kernel/alloc/allocator.rs
+++ b/rust/kernel/alloc/allocator.rs
@@ -59,19 +59,19 @@ fn aligned_size(new_layout: Layout) -> usize {
 /// One of the following: `krealloc`, `vrealloc`, `kvrealloc`.
 struct ReallocFunc(
     unsafe extern "C" fn(
-        *const crate::ffi::c_void, usize,  u32, crate::ffi::c_int,
+        *const crate::ffi::c_void, usize, crate::ffi::c_ulong, u32, crate::ffi::c_int,
     ) -> *mut crate::ffi::c_void,
 );
 
 impl ReallocFunc {
-    // INVARIANT: `krealloc_node` satisfies the type invariants.
-    const KREALLOC: Self = Self(bindings::krealloc_node);
+    // INVARIANT: `krealloc_node_align` satisfies the type invariants.
+    const KREALLOC: Self = Self(bindings::krealloc_node_align);
 
-    // INVARIANT: `vrealloc_node` satisfies the type invariants.
-    const VREALLOC: Self = Self(bindings::vrealloc_node);
+    // INVARIANT: `vrealloc_node_align` satisfies the type invariants.
+    const VREALLOC: Self = Self(bindings::vrealloc_node_align);
 
-    // INVARIANT: `kvrealloc_node` satisfies the type invariants.
-    const KVREALLOC: Self = Self(bindings::kvrealloc_node);
+    // INVARIANT: `kvrealloc_node_align` satisfies the type invariants.
+    const KVREALLOC: Self = Self(bindings::kvrealloc_node_align);
 
     /// # Safety
     ///
@@ -113,7 +113,7 @@ unsafe fn call(
         // - Those functions provide the guarantees of this function.
         let raw_ptr = unsafe {
             // If `size == 0` and `ptr != NULL` the memory behind the pointer is freed.
-            self.0(ptr.cast(), size, flags.0, nid.0).cast()
+            self.0(ptr.cast(), size, layout.align(), flags.0, nid.0).cast()
         };
 
         let ptr = if size == 0 {
@@ -157,12 +157,6 @@ unsafe fn realloc_node(
         flags: Flags,
         nid: NumaNode,
     ) -> Result<NonNull<[u8]>, AllocError> {
-        // TODO: Support alignments larger than PAGE_SIZE.
-        if layout.align() > bindings::PAGE_SIZE {
-            pr_warn!("Vmalloc does not support alignments larger than PAGE_SIZE yet.\n");
-            return Err(AllocError);
-        }
-
         // SAFETY: If not `None`, `ptr` is guaranteed to point to valid memory, which was previously
         // allocated with this `Allocator`.
         unsafe { ReallocFunc::VREALLOC.call(ptr, layout, old_layout, flags, nid) }
@@ -182,12 +176,6 @@ unsafe fn realloc_node(
         flags: Flags,
         nid: NumaNode,
     ) -> Result<NonNull<[u8]>, AllocError> {
-        // TODO: Support alignments larger than PAGE_SIZE.
-        if layout.align() > bindings::PAGE_SIZE {
-            pr_warn!("KVmalloc does not support alignments larger than PAGE_SIZE yet.\n");
-            return Err(AllocError);
-        }
-
         // SAFETY: If not `None`, `ptr` is guaranteed to point to valid memory, which was previously
         // allocated with this `Allocator`.
         unsafe { ReallocFunc::KVREALLOC.call(ptr, layout, old_layout, flags, nid) }
-- 
2.39.2



^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: [PATCH v8 3/4] rust: add support for NUMA ids in allocations
  2025-06-28 10:26 ` [PATCH v8 3/4] rust: add support for NUMA ids in allocations Vitaly Wool
@ 2025-06-28 12:21   ` Danilo Krummrich
  2025-06-28 15:25     ` Vitaly Wool
  0 siblings, 1 reply; 11+ messages in thread
From: Danilo Krummrich @ 2025-06-28 12:21 UTC (permalink / raw)
  To: Vitaly Wool
  Cc: linux-mm, akpm, linux-kernel, Uladzislau Rezki, Alice Ryhl,
	rust-for-linux

On Sat, Jun 28, 2025 at 12:26:11PM +0200, Vitaly Wool wrote:
> +/// Non Uniform Memory Access (NUMA) node identifier
> +#[derive(Clone, Copy, PartialEq)]
> +pub struct NumaNode(i32);
> +
> +impl NumaNode {
> +    /// create a new NUMA node identifier (non-negative integer)
> +    /// returns EINVAL if a negative id is specified
> +    pub fn new(node: i32) -> Result<Self> {
> +        if node < 0 {
> +            return Err(EINVAL);
> +        }

Should we also check for MAX_NUMNODES?

> +        Ok(Self(node))
> +    }
> +}
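
(For illustration, the suggested extra check might look roughly like
this, assuming bindgen exposes MAX_NUMNODES as a constant:)

    if node < 0 || node as u64 >= bindings::MAX_NUMNODES as u64 {
        return Err(EINVAL);
    }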

<snip>

> +    /// Re-allocate an existing memory allocation to satisfy the requested `layout` and
> +    /// optionally a specific NUMA node request to allocate the memory for.

It's not an Option anymore, so we may want to drop 'optionally'. Also please
leave an empty line here.

> +    /// Systems employing a Non Uniform Memory Access (NUMA) architecture contain
> +    /// collections of hardware resources including processors, memory, and I/O buses,
> +    /// that comprise what is commonly known as a NUMA node.
> +    /// `nid` stands for NUMA id, i. e. NUMA node identifier, which is a non-negative
> +    /// integer if a node needs to be specified, or NUMA_NO_NODE if the caller doesn't care.

Please also explain what happens when the NumaNode changes between calls to
realloc_node().

Does it have to remain the same NumaNode? Do we need a safety requirement for
that?

(Btw. no need to send a new version right away, leave a few days for people to
catch up and comment on this one or the other patches before resending.)

>      ///
>      /// If the requested size is zero, `realloc` behaves equivalent to `free`.
>      ///
> @@ -191,13 +246,29 @@ fn alloc(layout: Layout, flags: Flags) -> Result<NonNull<[u8]>, AllocError> {
>      ///   and old size, i.e. `ret_ptr[0..min(layout.size(), old_layout.size())] ==
>      ///   p[0..min(layout.size(), old_layout.size())]`.
>      /// - when the return value is `Err(AllocError)`, then `ptr` is still valid.
> -    unsafe fn realloc(
> +    unsafe fn realloc_node(
>          ptr: Option<NonNull<u8>>,
>          layout: Layout,
>          old_layout: Layout,
>          flags: Flags,
> +        nid: NumaNode,
>      ) -> Result<NonNull<[u8]>, AllocError>;


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v8 3/4] rust: add support for NUMA ids in allocations
  2025-06-28 12:21   ` Danilo Krummrich
@ 2025-06-28 15:25     ` Vitaly Wool
  2025-06-28 15:33       ` Danilo Krummrich
  0 siblings, 1 reply; 11+ messages in thread
From: Vitaly Wool @ 2025-06-28 15:25 UTC (permalink / raw)
  To: Danilo Krummrich
  Cc: linux-mm, akpm, linux-kernel, Uladzislau Rezki, Alice Ryhl,
	rust-for-linux



> On Jun 28, 2025, at 2:21 PM, Danilo Krummrich <dakr@kernel.org> wrote:
> 
> On Sat, Jun 28, 2025 at 12:26:11PM +0200, Vitaly Wool wrote:
>> +/// Non Uniform Memory Access (NUMA) node identifier
>> +#[derive(Clone, Copy, PartialEq)]
>> +pub struct NumaNode(i32);
>> +
>> +impl NumaNode {
>> +    /// create a new NUMA node identifier (non-negative integer)
>> +    /// returns EINVAL if a negative id is specified
>> +    pub fn new(node: i32) -> Result<Self> {
>> +        if node < 0 {
>> +            return Err(EINVAL);
>> +        }
> 
> Should we also check for MAX_NUMNODES?

Good point, thanks.

> 
>> +        Ok(Self(node))
>> +    }
>> +}
> 
> <snip>
> 
>> +    /// Re-allocate an existing memory allocation to satisfy the requested `layout` and
>> +    /// optionally a specific NUMA node request to allocate the memory for.
> 
> It's not an Option anymore, so we may want to drop 'optionally'. Also please
> leave an empty line here.
> 
>> +    /// Systems employing a Non Uniform Memory Access (NUMA) architecture contain
>> +    /// collections of hardware resources including processors, memory, and I/O buses,
>> +    /// that comprise what is commonly known as a NUMA node.
>> +    /// `nid` stands for NUMA id, i. e. NUMA node identifier, which is a non-negative
>> +    /// integer if a node needs to be specified, or NUMA_NO_NODE if the caller doesn't care.
> 
> Please also explain what happens when the NumaNode changes between calls to
> realloc_node().
> 
> Does it have to remain the same NumaNode? Do we need a safety requirement for
> that?

Since we don’t implement that logic, we trust the C part. The current implementation will refuse to realloc for a different node, and I believe that is the right thing to do because transferring an allocation to a different node doesn’t go well with the concept of simple adjustment of the allocation size.

Do you believe it is necessary to explicitly state it here in the comments?

<snip>

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v8 3/4] rust: add support for NUMA ids in allocations
  2025-06-28 15:25     ` Vitaly Wool
@ 2025-06-28 15:33       ` Danilo Krummrich
  0 siblings, 0 replies; 11+ messages in thread
From: Danilo Krummrich @ 2025-06-28 15:33 UTC (permalink / raw)
  To: Vitaly Wool
  Cc: linux-mm, akpm, linux-kernel, Uladzislau Rezki, Alice Ryhl,
	rust-for-linux

On Sat, Jun 28, 2025 at 05:25:52PM +0200, Vitaly Wool wrote:
> 
> 
> > On Jun 28, 2025, at 2:21 PM, Danilo Krummrich <dakr@kernel.org> wrote:
> > 
> > On Sat, Jun 28, 2025 at 12:26:11PM +0200, Vitaly Wool wrote:
> >> +/// Non Uniform Memory Access (NUMA) node identifier
> >> +#[derive(Clone, Copy, PartialEq)]
> >> +pub struct NumaNode(i32);
> >> +
> >> +impl NumaNode {
> >> +    /// create a new NUMA node identifier (non-negative integer)
> >> +    /// returns EINVAL if a negative id is specified
> >> +    pub fn new(node: i32) -> Result<Self> {
> >> +        if node < 0 {
> >> +            return Err(EINVAL);
> >> +        }
> > 
> > Should we also check for MAX_NUMNODES?
> 
> Good point, thanks.
> 
> > 
> >> +        Ok(Self(node))
> >> +    }
> >> +}
> > 
> > <snip>
> > 
> >> +    /// Re-allocate an existing memory allocation to satisfy the requested `layout` and
> >> +    /// optionally a specific NUMA node request to allocate the memory for.
> > 
> > It's not an Option anymore, so we may want to drop 'optionally'. Also please
> > leave an empty line here.
> > 
> >> +    /// Systems employing a Non Uniform Memory Access (NUMA) architecture contain
> >> +    /// collections of hardware resources including processors, memory, and I/O buses,
> >> +    /// that comprise what is commonly known as a NUMA node.
> >> +    /// `nid` stands for NUMA id, i. e. NUMA node identifier, which is a non-negative
> >> +    /// integer if a node needs to be specified, or NUMA_NO_NODE if the caller doesn't care.
> > 
> > Please also explain what happens when the NumaNode changes between calls to
> > realloc_node().
> > 
> > Does it have to remain the same NumaNode? Do we need a safety requirement for
> > that?
> 
> Since we don’t implement that logic, we trust the C part. The current implementation will refuse to realloc for a different node, and I believe that is the right thing to do because transferring an allocation to a different node doesn’t go well with the concept of simple adjustment of the allocation size.
> 
> Do you believe it is necessary to explicitly state it here in the comments?

Yes, we should document what can be expected to happen in this case, i.e. that
it will cause an AllocError.
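
(E.g. a doc line along these lines, as a sketch:)

    /// If `ptr` refers to an existing allocation and `nid` differs from
    /// the node that allocation was made on, the request is refused and
    /// `Err(AllocError)` is returned.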


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v8 1/4] mm/vmalloc: allow to set node and align in vrealloc
  2025-06-28 10:25 ` [PATCH v8 1/4] mm/vmalloc: allow to set node and align in vrealloc Vitaly Wool
@ 2025-06-30 10:30   ` Uladzislau Rezki
  2025-06-30 11:50     ` Vitaly Wool
  0 siblings, 1 reply; 11+ messages in thread
From: Uladzislau Rezki @ 2025-06-30 10:30 UTC (permalink / raw)
  To: Vitaly Wool
  Cc: linux-mm, akpm, linux-kernel, Uladzislau Rezki, Danilo Krummrich,
	Alice Ryhl, rust-for-linux

On Sat, Jun 28, 2025 at 12:25:37PM +0200, Vitaly Wool wrote:
> Reimplement vrealloc() to be able to set node and alignment should
> a user need to do so. Rename the function to vrealloc_node_align()
> to better match what it actually does now and introduce macros for
> vrealloc() and friends for backward compatibility.
> 
> With that change we also provide the ability for the Rust part of
> the kernel to set node and alignment in its allocations.
> 
> Signed-off-by: Vitaly Wool <vitaly.wool@konsulko.se>
> ---
>  include/linux/vmalloc.h | 12 +++++++++---
>  mm/vmalloc.c            | 20 ++++++++++++++++----
>  2 files changed, 25 insertions(+), 7 deletions(-)
> 
> diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
> index fdc9aeb74a44..68791f7cb3ba 100644
> --- a/include/linux/vmalloc.h
> +++ b/include/linux/vmalloc.h
> @@ -197,9 +197,15 @@ extern void *__vcalloc_noprof(size_t n, size_t size, gfp_t flags) __alloc_size(1
>  extern void *vcalloc_noprof(size_t n, size_t size) __alloc_size(1, 2);
>  #define vcalloc(...)		alloc_hooks(vcalloc_noprof(__VA_ARGS__))
>  
> -void * __must_check vrealloc_noprof(const void *p, size_t size, gfp_t flags)
> -		__realloc_size(2);
> -#define vrealloc(...)		alloc_hooks(vrealloc_noprof(__VA_ARGS__))
> +void *__must_check vrealloc_node_align_noprof(const void *p, size_t size,
> +		unsigned long align, gfp_t flags, int nid) __realloc_size(2);
> +#define vrealloc_node_noprof(_p, _s, _f, _nid)	\
> +	vrealloc_node_align_noprof(_p, _s, 1, _f, _nid)
> +#define vrealloc_noprof(_p, _s, _f)		\
> +	vrealloc_node_align_noprof(_p, _s, 1, _f, NUMA_NO_NODE)
> +#define vrealloc_node_align(...)		alloc_hooks(vrealloc_node_align_noprof(__VA_ARGS__))
> +#define vrealloc_node(...)			alloc_hooks(vrealloc_node_noprof(__VA_ARGS__))
> +#define vrealloc(...)				alloc_hooks(vrealloc_noprof(__VA_ARGS__))
>  
>  extern void vfree(const void *addr);
>  extern void vfree_atomic(const void *addr);
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 6dbcdceecae1..d633ac0ff977 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -4089,12 +4089,15 @@ void *vzalloc_node_noprof(unsigned long size, int node)
>  EXPORT_SYMBOL(vzalloc_node_noprof);
>  
>  /**
> - * vrealloc - reallocate virtually contiguous memory; contents remain unchanged
> + * vrealloc_node_align_noprof - reallocate virtually contiguous memory; contents
> + * remain unchanged
>   * @p: object to reallocate memory for
>   * @size: the size to reallocate
> + * @align: requested alignment
>   * @flags: the flags for the page level allocator
> + * @nid: node id
>   *
> - * If @p is %NULL, vrealloc() behaves exactly like vmalloc(). If @size is 0 and
> + * If @p is %NULL, vrealloc_XXX() behaves exactly like vmalloc(). If @size is 0 and
>   * @p is not a %NULL pointer, the object pointed to is freed.
>   *
>   * If __GFP_ZERO logic is requested, callers must ensure that, starting with the
> @@ -4111,7 +4114,8 @@ EXPORT_SYMBOL(vzalloc_node_noprof);
>   * Return: pointer to the allocated memory; %NULL if @size is zero or in case of
>   *         failure
>   */
> -void *vrealloc_noprof(const void *p, size_t size, gfp_t flags)
> +void *vrealloc_node_align_noprof(const void *p, size_t size, unsigned long align,
> +				 gfp_t flags, int nid)
>  {
>  	struct vm_struct *vm = NULL;
>  	size_t alloced_size = 0;
> @@ -4135,6 +4139,13 @@ void *vrealloc_noprof(const void *p, size_t size, gfp_t flags)
>  		if (WARN(alloced_size < old_size,
>  			 "vrealloc() has mismatched area vs requested sizes (%p)\n", p))
>  			return NULL;
> +		if (WARN(nid != NUMA_NO_NODE && nid != page_to_nid(vmalloc_to_page(p)),
> +			 "vrealloc() has mismatched nids\n"))
> +			return NULL;
> +		if (WARN((uintptr_t)p & (align - 1),
> +			 "will not reallocate with a bigger alignment (0x%lx)\n",
> +			 align))
> +			return NULL;
>
IMO, IS_ALIGNED() should be used instead. We already have a macro for
this purpose, i.e. the idea is just to check that "p" is aligned to the
"align" request.

Can you replace the (uintptr_t) cast with (ulong) or (unsigned long)?
This is how we mostly cast in vmalloc code.

WARN() is probably worth replacing. Use WARN_ON_ONCE() to prevent
flooding.
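
(I.e., roughly, as a sketch:)

	if (WARN_ON_ONCE(!IS_ALIGNED((unsigned long)p, align)))
		return NULL;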

--
Uladzislau Rezki


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v8 1/4] mm/vmalloc: allow to set node and align in vrealloc
  2025-06-30 10:30   ` Uladzislau Rezki
@ 2025-06-30 11:50     ` Vitaly Wool
  2025-06-30 16:39       ` Uladzislau Rezki
  0 siblings, 1 reply; 11+ messages in thread
From: Vitaly Wool @ 2025-06-30 11:50 UTC (permalink / raw)
  To: Uladzislau Rezki
  Cc: linux-mm, akpm, linux-kernel, Danilo Krummrich, Alice Ryhl,
	rust-for-linux




> On Jun 30, 2025, at 12:30 PM, Uladzislau Rezki <urezki@gmail.com> wrote:
> 
> On Sat, Jun 28, 2025 at 12:25:37PM +0200, Vitaly Wool wrote:
>> Reimplement vrealloc() to be able to set node and alignment should
>> a user need to do so. Rename the function to vrealloc_node_align()
>> to better match what it actually does now and introduce macros for
>> vrealloc() and friends for backward compatibility.
>> 
>> With that change we also provide the ability for the Rust part of
>> the kernel to set node and alignment in its allocations.
>> 
>> Signed-off-by: Vitaly Wool <vitaly.wool@konsulko.se>
>> ---
>> include/linux/vmalloc.h | 12 +++++++++---
>> mm/vmalloc.c            | 20 ++++++++++++++++----
>> 2 files changed, 25 insertions(+), 7 deletions(-)
>> 
>> diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
>> index fdc9aeb74a44..68791f7cb3ba 100644
>> --- a/include/linux/vmalloc.h
>> +++ b/include/linux/vmalloc.h
>> @@ -197,9 +197,15 @@ extern void *__vcalloc_noprof(size_t n, size_t size, gfp_t flags) __alloc_size(1
>> extern void *vcalloc_noprof(size_t n, size_t size) __alloc_size(1, 2);
>> #define vcalloc(...)		alloc_hooks(vcalloc_noprof(__VA_ARGS__))
>> 
>> -void * __must_check vrealloc_noprof(const void *p, size_t size, gfp_t flags)
>> -		__realloc_size(2);
>> -#define vrealloc(...)		alloc_hooks(vrealloc_noprof(__VA_ARGS__))
>> +void *__must_check vrealloc_node_align_noprof(const void *p, size_t size,
>> +		unsigned long align, gfp_t flags, int nid) __realloc_size(2);
>> +#define vrealloc_node_noprof(_p, _s, _f, _nid)	\
>> +	vrealloc_node_align_noprof(_p, _s, 1, _f, _nid)
>> +#define vrealloc_noprof(_p, _s, _f)		\
>> +	vrealloc_node_align_noprof(_p, _s, 1, _f, NUMA_NO_NODE)
>> +#define vrealloc_node_align(...)		alloc_hooks(vrealloc_node_align_noprof(__VA_ARGS__))
>> +#define vrealloc_node(...)			alloc_hooks(vrealloc_node_noprof(__VA_ARGS__))
>> +#define vrealloc(...)				alloc_hooks(vrealloc_noprof(__VA_ARGS__))
>> 
>> extern void vfree(const void *addr);
>> extern void vfree_atomic(const void *addr);
>> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
>> index 6dbcdceecae1..d633ac0ff977 100644
>> --- a/mm/vmalloc.c
>> +++ b/mm/vmalloc.c
>> @@ -4089,12 +4089,15 @@ void *vzalloc_node_noprof(unsigned long size, int node)
>> EXPORT_SYMBOL(vzalloc_node_noprof);
>> 
>> /**
>> - * vrealloc - reallocate virtually contiguous memory; contents remain unchanged
>> + * vrealloc_node_align_noprof - reallocate virtually contiguous memory; contents
>> + * remain unchanged
>>  * @p: object to reallocate memory for
>>  * @size: the size to reallocate
>> + * @align: requested alignment
>>  * @flags: the flags for the page level allocator
>> + * @nid: node id
>>  *
>> - * If @p is %NULL, vrealloc() behaves exactly like vmalloc(). If @size is 0 and
>> + * If @p is %NULL, vrealloc_XXX() behaves exactly like vmalloc(). If @size is 0 and
>>  * @p is not a %NULL pointer, the object pointed to is freed.
>>  *
>>  * If __GFP_ZERO logic is requested, callers must ensure that, starting with the
>> @@ -4111,7 +4114,8 @@ EXPORT_SYMBOL(vzalloc_node_noprof);
>>  * Return: pointer to the allocated memory; %NULL if @size is zero or in case of
>>  *         failure
>>  */
>> -void *vrealloc_noprof(const void *p, size_t size, gfp_t flags)
>> +void *vrealloc_node_align_noprof(const void *p, size_t size, unsigned long align,
>> +				 gfp_t flags, int nid)
>> {
>> 	struct vm_struct *vm = NULL;
>> 	size_t alloced_size = 0;
>> @@ -4135,6 +4139,13 @@ void *vrealloc_noprof(const void *p, size_t size, gfp_t flags)
>> 		if (WARN(alloced_size < old_size,
>> 			 "vrealloc() has mismatched area vs requested sizes (%p)\n", p))
>> 			return NULL;
>> +		if (WARN(nid != NUMA_NO_NODE && nid != page_to_nid(vmalloc_to_page(p)),
>> +			 "vrealloc() has mismatched nids\n"))
>> +			return NULL;
>> +		if (WARN((uintptr_t)p & (align - 1),
>> +			 "will not reallocate with a bigger alignment (0x%lx)\n",
>> +			 align))
>> +			return NULL;
>> 
> IMO, IS_ALIGNED() should be used instead. We have already a macro for this
> purpose, i.e. the idea is just to check that "p" is aligned with "align"
> request.
> 
> Can you replace the (uintptr_t) casting to (ulong) or (unsigned long)
> this is how we mostly cast in vmalloc code?

Thanks, noted.
> 
> WARN() probably is worth to replace. Use WARN_ON_ONCE() to prevent
> flooding.

I am not sure I totally agree, because:
a) there’s already one WARN() in that block and I’m just following the pattern;
b) I don’t think this will be a frequent error.

~Vitaly



^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v8 1/4] mm/vmalloc: allow to set node and align in vrealloc
  2025-06-30 11:50     ` Vitaly Wool
@ 2025-06-30 16:39       ` Uladzislau Rezki
  0 siblings, 0 replies; 11+ messages in thread
From: Uladzislau Rezki @ 2025-06-30 16:39 UTC (permalink / raw)
  To: Vitaly Wool
  Cc: Uladzislau Rezki, linux-mm, akpm, linux-kernel, Danilo Krummrich,
	Alice Ryhl, rust-for-linux

> 
>     On Jun 30, 2025, at 12:30 PM, Uladzislau Rezki <urezki@gmail.com> wrote:
> 
>     On Sat, Jun 28, 2025 at 12:25:37PM +0200, Vitaly Wool wrote:
> 
>         Reimplement vrealloc() to be able to set node and alignment should
>         a user need to do so. Rename the function to vrealloc_node_align()
>         to better match what it actually does now and introduce macros for
>         vrealloc() and friends for backward compatibility.
> 
>         With that change we also provide the ability for the Rust part of
>         the kernel to set node and alignment in its allocations.
> 
>         Signed-off-by: Vitaly Wool <vitaly.wool@konsulko.se>
>         ---
>         include/linux/vmalloc.h | 12 +++++++++---
>         mm/vmalloc.c            | 20 ++++++++++++++++----
>         2 files changed, 25 insertions(+), 7 deletions(-)
> 
>         diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
>         index fdc9aeb74a44..68791f7cb3ba 100644
>         --- a/include/linux/vmalloc.h
>         +++ b/include/linux/vmalloc.h
>         @@ -197,9 +197,15 @@ extern void *__vcalloc_noprof(size_t n, size_t size, gfp_t flags) __alloc_size(1
>          extern void *vcalloc_noprof(size_t n, size_t size) __alloc_size(1, 2);
>          #define vcalloc(...)		alloc_hooks(vcalloc_noprof(__VA_ARGS__))
> 
>         -void * __must_check vrealloc_noprof(const void *p, size_t size, gfp_t flags)
>         -		__realloc_size(2);
>         -#define vrealloc(...)		alloc_hooks(vrealloc_noprof(__VA_ARGS__))
>         +void *__must_check vrealloc_node_align_noprof(const void *p, size_t size,
>         +		unsigned long align, gfp_t flags, int nid) __realloc_size(2);
>         +#define vrealloc_node_noprof(_p, _s, _f, _nid)	\
>         +	vrealloc_node_align_noprof(_p, _s, 1, _f, _nid)
>         +#define vrealloc_noprof(_p, _s, _f)	\
>         +	vrealloc_node_align_noprof(_p, _s, 1, _f, NUMA_NO_NODE)
>         +#define vrealloc_node_align(...)	alloc_hooks(vrealloc_node_align_noprof(__VA_ARGS__))
>         +#define vrealloc_node(...)		alloc_hooks(vrealloc_node_noprof(__VA_ARGS__))
>         +#define vrealloc(...)		alloc_hooks(vrealloc_noprof(__VA_ARGS__))
> 
>          extern void vfree(const void *addr);
>          extern void vfree_atomic(const void *addr);
>         diff --git a/mm/vmalloc.c b/mm/vmalloc.c
>         index 6dbcdceecae1..d633ac0ff977 100644
>         --- a/mm/vmalloc.c
>         +++ b/mm/vmalloc.c
>         @@ -4089,12 +4089,15 @@ void *vzalloc_node_noprof(unsigned long size, int node)
>          EXPORT_SYMBOL(vzalloc_node_noprof);
> 
>          /**
>         - * vrealloc - reallocate virtually contiguous memory; contents remain unchanged
>         + * vrealloc_node_align_noprof - reallocate virtually contiguous memory; contents
>         + * remain unchanged
>           * @p: object to reallocate memory for
>           * @size: the size to reallocate
>         + * @align: requested alignment
>           * @flags: the flags for the page level allocator
>         + * @nid: node id
>           *
>         - * If @p is %NULL, vrealloc() behaves exactly like vmalloc(). If @size is 0 and
>         + * If @p is %NULL, vrealloc_XXX() behaves exactly like vmalloc(). If @size is 0 and
>           * @p is not a %NULL pointer, the object pointed to is freed.
>           *
>           * If __GFP_ZERO logic is requested, callers must ensure that, starting with the
>         @@ -4111,7 +4114,8 @@ EXPORT_SYMBOL(vzalloc_node_noprof);
>           * Return: pointer to the allocated memory; %NULL if @size is zero or in case of
>           *         failure
>           */
>         -void *vrealloc_noprof(const void *p, size_t size, gfp_t flags)
>         +void *vrealloc_node_align_noprof(const void *p, size_t size, unsigned long align,
>         +				 gfp_t flags, int nid)
>          {
>          	struct vm_struct *vm = NULL;
>          	size_t alloced_size = 0;
>         @@ -4135,6 +4139,13 @@ void *vrealloc_noprof(const void *p, size_t size, gfp_t flags)
>          	if (WARN(alloced_size < old_size,
>          		 "vrealloc() has mismatched area vs requested sizes (%p)\n", p))
>          		return NULL;
>         +	if (WARN(nid != NUMA_NO_NODE && nid != page_to_nid(vmalloc_to_page(p)),
>         +		 "vrealloc() has mismatched nids\n"))
>         +		return NULL;
>         +	if (WARN((uintptr_t)p & (align - 1),
>         +		 "will not reallocate with a bigger alignment (0x%lx)\n",
>         +		 align))
>         +		return NULL;
> 
>     IMO, IS_ALIGNED() should be used instead. We already have a macro for this
>     purpose; the idea is just to check that "p" is aligned to the requested
>     "align".
> 
>     Can you replace the (uintptr_t) cast with (ulong) or (unsigned long)?
>     That is how we mostly cast in vmalloc code.
> 
> 
> Thanks, noted.
> 
> 
>     WARN() is probably worth replacing. Use WARN_ON_ONCE() to prevent
>     flooding.
> 
> 
> I am not sure I totally agree, because:
> a) there’s already one WARN() in that block and I’m just following the pattern;
> b) I don’t think this will be a frequent error.
> 
Could we just drop assumption (b)? With WARN_ON_ONCE() we eliminate the
risk entirely and thus do not spam the kernel buffer :)
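
For the alignment check that could be as simple as (just a sketch,
untested):

	if (WARN_ON_ONCE(!IS_ALIGNED((unsigned long)p, align)))
		return NULL;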

Also, there is another problem:

>
> +	if (WARN(nid != NUMA_NO_NODE && nid != page_to_nid(vmalloc_to_page(p)),
> +		 "vrealloc() has mismatched nids\n"))
> +		return NULL;
>
I can easily trigger this, with continuous kernel splats, after adding a
vrealloc_alloc_test case to the vmalloc test-suite.
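
The test is roughly the following (a simplified sketch, not the literal
code I ran; it only assumes the vrealloc_node_align() interface added by
this patch):

<snip>
static int vrealloc_alloc_test(void)
{
	void *p, *tmp;

	p = vmalloc(PAGE_SIZE);
	if (!p)
		return -ENOMEM;

	/*
	 * Grow the buffer, passing the node this kthread currently runs
	 * on. The test kthread may have migrated since the vmalloc()
	 * above, so the nid check in vrealloc_node_align_noprof() fires.
	 */
	tmp = vrealloc_node_align(p, 10 * PAGE_SIZE, 1, GFP_KERNEL,
				  numa_node_id());
	if (!tmp) {
		vfree(p);
		return -ENOMEM;
	}

	vfree(tmp);
	return 0;
}
<snip>

The resulting splat: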

<snip>
[   53.517781] ------------[ cut here ]------------
[   53.517787] vrealloc() has mismatched nids
[   53.517817] WARNING: CPU: 46 PID: 2213 at mm/vmalloc.c:4198 vrealloc_node_align_noprof+0x11b/0x230
[   53.517829] Modules linked in: test_vmalloc(E+) binfmt_misc(E) ppdev(E) parport_pc(E) parport(E) bochs(E) snd_pcm(E) sg(E) drm_client_lib(E) snd_timer(E) drm_shmem_helper(E) evdev(E) joydev(E) snd(E) drm_kms_helper(E) vga16fb(E) soundcore(E) serio_raw(E) button(E) pcspkr(E) vgastate(E) drm(E) dm_mod(E) fuse(E) loop(E) configfs(E) efi_pstore(E) qemu_fw_cfg(E) ip_tables(E) x_tables(E) autofs4(E) ext4(E) crc16(E) mbcache(E) jbd2(E) sr_mod(E) cdrom(E) sd_mod(E) ata_generic(E) ata_piix(E) libata(E) i2c_piix4(E) scsi_mod(E) psmouse(E) floppy(E) e1000(E) i2c_smbus(E) scsi_common(E)
[   53.517879] CPU: 46 UID: 0 PID: 2213 Comm: vmalloc_test/10 Kdump: loaded Tainted: G        W   E       6.16.0-rc1+ #263 PREEMPT(undef)
[   53.517886] Tainted: [W]=WARN, [E]=UNSIGNED_MODULE
[   53.517887] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
[   53.517889] RIP: 0010:vrealloc_node_align_noprof+0x11b/0x230
[   53.517894] Code: 89 4c 24 08 e8 76 b0 ff ff 4c 8b 4c 24 08 48 8b 00 48 c1 e8 36 41 39 c4 0f 84 64 ff ff ff 48 c7 c7 90 c4 28 a2 e8 25 a8 d3 ff <0f> 0b 31 ed eb 95 65 8b 05 f8 cf 90 01 a9 00 ff ff 00 0f 85 dd 00
[   53.517897] RSP: 0018:ffffa6db87f27e08 EFLAGS: 00010282
[   53.517900] RAX: 0000000000000000 RBX: ffffa6db9a315000 RCX: 0000000000000000
[   53.517902] RDX: 0000000000000002 RSI: 0000000000000001 RDI: 00000000ffffffff
[   53.517904] RBP: 000000000000a000 R08: 0000000000000000 R09: 0000000000000003
[   53.517905] R10: ffffa6db87f27ca0 R11: ffff98c5fff0a368 R12: 0000000000000002
[   53.517908] R13: ffff98c201d06a80 R14: 0000000000009000 R15: 0000000000000001
[   53.517912] FS:  0000000000000000(0000) GS:ffff98c24cf17000(0000) knlGS:0000000000000000
[   53.517914] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[   53.517916] CR2: 00007fe515c11390 CR3: 000000084bf03000 CR4: 00000000000006f0
[   53.517920] Call Trace:
[   53.517923]  <TASK>
[   53.517928]  ? __pfx_vrealloc_alloc_test+0x10/0x10 [test_vmalloc]
[   53.517937]  vrealloc_alloc_test+0x22/0x60 [test_vmalloc]
[   53.517941]  test_func+0xd5/0x1d0 [test_vmalloc]
[   53.517946]  ? __pfx_test_func+0x10/0x10 [test_vmalloc]
[   53.517949]  kthread+0x109/0x240
[   53.517955]  ? finish_task_switch.isra.0+0x85/0x2a0
[   53.517960]  ? __pfx_kthread+0x10/0x10
[   53.517963]  ? __pfx_kthread+0x10/0x10
[   53.517966]  ret_from_fork+0x87/0xf0
[   53.517971]  ? __pfx_kthread+0x10/0x10
[   53.517974]  ret_from_fork_asm+0x1a/0x30
[   53.517980]  </TASK>
[   53.517981] ---[ end trace 0000000000000000 ]---
<snip>

Please drop that WARN(). The motivation is that we should still serve the
memory request: processes can migrate between NUMA nodes and still have to
be able to allocate memory.

Moreover, in the current vrealloc() implementation the grow path fully
reallocates the memory anyway, releasing the old allocation after copying
the data over. So it does not matter whether the NUMA node has changed.
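
That grow path is, roughly (a simplified sketch of the existing code,
adapted to the new signature; not the literal implementation):

	/* grow: allocate a fresh area, honouring align and nid */
	n = __vmalloc_node_noprof(size, align, flags, nid,
				  __builtin_return_address(0));
	if (!n)
		return NULL;
	if (p) {
		/* copy the payload over and drop the old area */
		memcpy(n, p, old_size);
		vfree(p);
	}
	return n;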

--
Uladzislau Rezki


^ permalink raw reply	[flat|nested] 11+ messages in thread

end of thread, other threads:[~2025-06-30 16:40 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-06-28 10:23 [PATCH v8 0/4] support large align and nid in Rust allocators Vitaly Wool
2025-06-28 10:25 ` [PATCH v8 1/4] mm/vmalloc: allow to set node and align in vrealloc Vitaly Wool
2025-06-30 10:30   ` Uladzislau Rezki
2025-06-30 11:50     ` Vitaly Wool
2025-06-30 16:39       ` Uladzislau Rezki
2025-06-28 10:25 ` [PATCH v8 2/4] mm/slub: allow to set node and align in k[v]realloc Vitaly Wool
2025-06-28 10:26 ` [PATCH v8 3/4] rust: add support for NUMA ids in allocations Vitaly Wool
2025-06-28 12:21   ` Danilo Krummrich
2025-06-28 15:25     ` Vitaly Wool
2025-06-28 15:33       ` Danilo Krummrich
2025-06-28 10:26 ` [PATCH v8 4/4] rust: support large alignments " Vitaly Wool

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).