rust-for-linux.vger.kernel.org archive mirror
From: Uladzislau Rezki <urezki@gmail.com>
To: Hui Zhu <hui.zhu@linux.dev>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Uladzislau Rezki <urezki@gmail.com>,
	Miguel Ojeda <ojeda@kernel.org>,
	Alex Gaynor <alex.gaynor@gmail.com>,
	Boqun Feng <boqun.feng@gmail.com>, Gary Guo <gary@garyguo.net>,
	bjorn3_gh@protonmail.com, Benno Lossin <lossin@kernel.org>,
	Andreas Hindborg <a.hindborg@kernel.org>,
	Alice Ryhl <aliceryhl@google.com>,
	Trevor Gross <tmgross@umich.edu>,
	Danilo Krummrich <dakr@kernel.org>,
	Geliang Tang <geliang@kernel.org>, Hui Zhu <zhuhui@kylinos.cn>,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	rust-for-linux@vger.kernel.org
Subject: Re: [PATCH 1/3] vmalloc: Add vrealloc_align to support allocation of aligned vmap pages
Date: Wed, 16 Jul 2025 09:02:22 +0200	[thread overview]
Message-ID: <aHdOfv1QyisOiAXL@pc636> (raw)
In-Reply-To: <81647cce3b8e7139af47f20dbeba184b7a89b0cc.1752573305.git.zhuhui@kylinos.cn>

On Tue, Jul 15, 2025 at 05:59:46PM +0800, Hui Zhu wrote:
> From: Hui Zhu <zhuhui@kylinos.cn>
> 
> This commit adds a new function, vrealloc_align, which supports
> allocation of aligned vmap pages via __vmalloc_node_noprof.
> vrealloc_align also checks the old address: if it does not meet
> the requested alignment, the old vmap pages are released and new
> vmap pages satisfying the alignment requirement are allocated.
> 
> Co-developed-by: Geliang Tang <geliang@kernel.org>
> Signed-off-by: Geliang Tang <geliang@kernel.org>
> Signed-off-by: Hui Zhu <zhuhui@kylinos.cn>
> ---
>  include/linux/vmalloc.h |  5 +++
>  mm/vmalloc.c            | 80 ++++++++++++++++++++++++++---------------
>  2 files changed, 57 insertions(+), 28 deletions(-)
> 
> diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
> index fdc9aeb74a44..0ce0c1ea2427 100644
> --- a/include/linux/vmalloc.h
> +++ b/include/linux/vmalloc.h
> @@ -201,6 +201,11 @@ void * __must_check vrealloc_noprof(const void *p, size_t size, gfp_t flags)
>  		__realloc_size(2);
>  #define vrealloc(...)		alloc_hooks(vrealloc_noprof(__VA_ARGS__))
>  
> +void * __must_check vrealloc_align_noprof(const void *p, size_t size,
> +					  size_t align, gfp_t flags)
> +		__realloc_size(2);
> +#define vrealloc_align(...)	alloc_hooks(vrealloc_align_noprof(__VA_ARGS__))
> +
>  extern void vfree(const void *addr);
>  extern void vfree_atomic(const void *addr);
>  
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index ab986dd09b6a..41cb3603b3cc 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -4081,9 +4081,11 @@ void *vzalloc_node_noprof(unsigned long size, int node)
>  EXPORT_SYMBOL(vzalloc_node_noprof);
>  
>  /**
> - * vrealloc - reallocate virtually contiguous memory; contents remain unchanged
> + * vrealloc_align - reallocate virtually contiguous memory;
> + *                  contents remain unchanged
>   * @p: object to reallocate memory for
>   * @size: the size to reallocate
> + * @align: requested alignment
>   * @flags: the flags for the page level allocator
>   *
>   * If @p is %NULL, vrealloc() behaves exactly like vmalloc(). If @size is 0 and
> @@ -4103,7 +4105,8 @@ EXPORT_SYMBOL(vzalloc_node_noprof);
>   * Return: pointer to the allocated memory; %NULL if @size is zero or in case of
>   *         failure
>   */
> -void *vrealloc_noprof(const void *p, size_t size, gfp_t flags)
> +void *vrealloc_align_noprof(const void *p, size_t size, size_t align,
> +			    gfp_t flags)
>  {
>  	struct vm_struct *vm = NULL;
>  	size_t alloced_size = 0;
> @@ -4116,49 +4119,65 @@ void *vrealloc_noprof(const void *p, size_t size, gfp_t flags)
>  	}
>  
>  	if (p) {
> +		if (!is_power_of_2(align)) {
> +			WARN(1, "Trying to vrealloc_align() align is not power of 2 (%zu)\n",
> +			     align);
> +			return NULL;
> +		}
> +
>  		vm = find_vm_area(p);
>  		if (unlikely(!vm)) {
> -			WARN(1, "Trying to vrealloc() nonexistent vm area (%p)\n", p);
> +			WARN(1, "Trying to vrealloc_align() nonexistent vm area (%p)\n", p);
>  			return NULL;
>  		}
>  
>  		alloced_size = get_vm_area_size(vm);
>  		old_size = vm->requested_size;
>  		if (WARN(alloced_size < old_size,
> -			 "vrealloc() has mismatched area vs requested sizes (%p)\n", p))
> +			 "vrealloc_align() has mismatched area vs requested sizes (%p)\n", p))
>  			return NULL;
>  	}
>  
> -	/*
> -	 * TODO: Shrink the vm_area, i.e. unmap and free unused pages. What
> -	 * would be a good heuristic for when to shrink the vm_area?
> -	 */
> -	if (size <= old_size) {
> -		/* Zero out "freed" memory, potentially for future realloc. */
> -		if (want_init_on_free() || want_init_on_alloc(flags))
> -			memset((void *)p + size, 0, old_size - size);
> -		vm->requested_size = size;
> -		kasan_poison_vmalloc(p + size, old_size - size);
> -		return (void *)p;
> -	}
> +	if (IS_ALIGNED((unsigned long)p, align)) {
> +		/*
> +		 * TODO: Shrink the vm_area, i.e. unmap and free unused pages. What
> +		 * would be a good heuristic for when to shrink the vm_area?
> +		 */
> +		if (size <= old_size) {
> +			/* Zero out "freed" memory, potentially for future realloc. */
> +			if (want_init_on_free() || want_init_on_alloc(flags))
> +				memset((void *)p + size, 0, old_size - size);
> +			vm->requested_size = size;
> +			kasan_poison_vmalloc(p + size, old_size - size);
> +			return (void *)p;
> +		}
>  
> -	/*
> -	 * We already have the bytes available in the allocation; use them.
> -	 */
> -	if (size <= alloced_size) {
> -		kasan_unpoison_vmalloc(p + old_size, size - old_size,
> -				       KASAN_VMALLOC_PROT_NORMAL);
>  		/*
> -		 * No need to zero memory here, as unused memory will have
> -		 * already been zeroed at initial allocation time or during
> -		 * realloc shrink time.
> +		 * We already have the bytes available in the allocation; use them.
> +		 */
> +		if (size <= alloced_size) {
> +			kasan_unpoison_vmalloc(p + old_size, size - old_size,
> +					KASAN_VMALLOC_PROT_NORMAL);
> +			/*
> +			 * No need to zero memory here, as unused memory will have
> +			 * already been zeroed at initial allocation time or during
> +			 * realloc shrink time.
> +			 */
> +			vm->requested_size = size;
> +			return (void *)p;
> +		}
> +	} else {
> +		/*
> +		 * p is not aligned with align.
> +		 * Allocate a new address to handle it.
>  		 */
> -		vm->requested_size = size;
> -		return (void *)p;
> +		if (size < old_size)
> +			old_size = size;
>  	}
>  
>  	/* TODO: Grow the vm_area, i.e. allocate and map additional pages. */
> -	n = __vmalloc_noprof(size, flags);
> +	n = __vmalloc_node_noprof(size, align, flags, NUMA_NO_NODE,
> +				  __builtin_return_address(0));
>  	if (!n)
>  		return NULL;
>  
> @@ -4170,6 +4189,11 @@ void *vrealloc_noprof(const void *p, size_t size, gfp_t flags)
>  	return n;
>  }
>  
> +void *vrealloc_noprof(const void *p, size_t size, gfp_t flags)
> +{
> +	return vrealloc_align_noprof(p, size, 1, flags);
> +}
> +
>  #if defined(CONFIG_64BIT) && defined(CONFIG_ZONE_DMA32)
>  #define GFP_VMALLOC32 (GFP_DMA32 | GFP_KERNEL)
>  #elif defined(CONFIG_64BIT) && defined(CONFIG_ZONE_DMA)
> -- 
> 2.43.0
> 
This is similar to what Vitaly is doing. There is already a v14,
but as an example see it here: https://lkml.org/lkml/2025/7/9/1583

--
Uladzislau Rezki

Thread overview: 9+ messages
2025-07-15  9:59 [PATCH 0/3] rust: allocator: Vmalloc: Support alignments larger than PAGE_SIZE Hui Zhu
2025-07-15  9:59 ` [PATCH 1/3] vmalloc: Add vrealloc_align to support allocation of aligned vmap pages Hui Zhu
2025-07-15 23:19   ` kernel test robot
2025-07-16  7:02   ` Uladzislau Rezki [this message]
2025-07-15  9:59 ` [PATCH 2/3] rust: allocator: Vmalloc: Support alignments larger than PAGE_SIZE Hui Zhu
2025-07-15  9:59 ` [PATCH 3/3] rust: add a sample allocator usage Hui Zhu
2025-07-15 10:37   ` Danilo Krummrich
2025-07-17 10:02     ` Your Name
2025-07-15 10:21 ` [PATCH 0/3] rust: allocator: Vmalloc: Support alignments larger than PAGE_SIZE Danilo Krummrich
