From: Hui Zhu
To: Andrew Morton, Uladzislau Rezki, Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo, bjorn3_gh@protonmail.com, Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich, Geliang Tang, Hui Zhu, linux-kernel@vger.kernel.org, linux-mm@kvack.org, rust-for-linux@vger.kernel.org
Subject: [PATCH 1/3] vmalloc: Add vrealloc_align to support allocation of aligned vmap pages
Date: Tue, 15 Jul 2025 17:59:46 +0800
Message-Id: <81647cce3b8e7139af47f20dbeba184b7a89b0cc.1752573305.git.zhuhui@kylinos.cn>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
From: Hui Zhu

Add a new function, vrealloc_align(), which supports allocation of
aligned vmap pages via __vmalloc_node_noprof().

vrealloc_align() also checks the alignment of the old address.
If the old address does not satisfy the requested alignment,
vrealloc_align() releases the old vmap pages and allocates new vmap
pages that satisfy the alignment requirement.

Co-developed-by: Geliang Tang
Signed-off-by: Geliang Tang
Signed-off-by: Hui Zhu
---
 include/linux/vmalloc.h |  5 +++
 mm/vmalloc.c            | 80 ++++++++++++++++++++++++++---------------
 2 files changed, 57 insertions(+), 28 deletions(-)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index fdc9aeb74a44..0ce0c1ea2427 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -201,6 +201,11 @@ void * __must_check vrealloc_noprof(const void *p, size_t size, gfp_t flags)
 		__realloc_size(2);
 #define vrealloc(...)		alloc_hooks(vrealloc_noprof(__VA_ARGS__))
 
+void * __must_check vrealloc_align_noprof(const void *p, size_t size,
+					  size_t align, gfp_t flags)
+		__realloc_size(2);
+#define vrealloc_align(...)	alloc_hooks(vrealloc_align_noprof(__VA_ARGS__))
+
 extern void vfree(const void *addr);
 extern void vfree_atomic(const void *addr);
 
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index ab986dd09b6a..41cb3603b3cc 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -4081,9 +4081,11 @@ void *vzalloc_node_noprof(unsigned long size, int node)
 EXPORT_SYMBOL(vzalloc_node_noprof);
 
 /**
- * vrealloc - reallocate virtually contiguous memory; contents remain unchanged
+ * vrealloc_align - reallocate virtually contiguous memory;
+ * contents remain unchanged
  * @p: object to reallocate memory for
  * @size: the size to reallocate
+ * @align: requested alignment
  * @flags: the flags for the page level allocator
  *
  * If @p is %NULL, vrealloc() behaves exactly like vmalloc(). If @size is 0 and
@@ -4103,7 +4105,8 @@ EXPORT_SYMBOL(vzalloc_node_noprof);
 * Return: pointer to the allocated memory; %NULL if @size is zero or in case of
 *         failure
 */
-void *vrealloc_noprof(const void *p, size_t size, gfp_t flags)
+void *vrealloc_align_noprof(const void *p, size_t size, size_t align,
+			    gfp_t flags)
 {
 	struct vm_struct *vm = NULL;
 	size_t alloced_size = 0;
@@ -4116,49 +4119,65 @@ void *vrealloc_noprof(const void *p, size_t size, gfp_t flags)
 	}
 
 	if (p) {
+		if (!is_power_of_2(align)) {
+			WARN(1, "Trying to vrealloc_align() align is not power of 2 (%ld)\n",
+			     align);
+			return NULL;
+		}
+
 		vm = find_vm_area(p);
 		if (unlikely(!vm)) {
-			WARN(1, "Trying to vrealloc() nonexistent vm area (%p)\n", p);
+			WARN(1, "Trying to vrealloc_align() nonexistent vm area (%p)\n", p);
 			return NULL;
 		}
 
 		alloced_size = get_vm_area_size(vm);
 		old_size = vm->requested_size;
 		if (WARN(alloced_size < old_size,
-			 "vrealloc() has mismatched area vs requested sizes (%p)\n", p))
+			 "vrealloc_align() has mismatched area vs requested sizes (%p)\n", p))
 			return NULL;
 	}
 
-	/*
-	 * TODO: Shrink the vm_area, i.e. unmap and free unused pages. What
-	 * would be a good heuristic for when to shrink the vm_area?
-	 */
-	if (size <= old_size) {
-		/* Zero out "freed" memory, potentially for future realloc. */
-		if (want_init_on_free() || want_init_on_alloc(flags))
-			memset((void *)p + size, 0, old_size - size);
-		vm->requested_size = size;
-		kasan_poison_vmalloc(p + size, old_size - size);
-		return (void *)p;
-	}
+	if (IS_ALIGNED((unsigned long)p, align)) {
+		/*
+		 * TODO: Shrink the vm_area, i.e. unmap and free unused pages. What
+		 * would be a good heuristic for when to shrink the vm_area?
+		 */
+		if (size <= old_size) {
+			/* Zero out "freed" memory, potentially for future realloc. */
+			if (want_init_on_free() || want_init_on_alloc(flags))
+				memset((void *)p + size, 0, old_size - size);
+			vm->requested_size = size;
+			kasan_poison_vmalloc(p + size, old_size - size);
+			return (void *)p;
+		}
 
-	/*
-	 * We already have the bytes available in the allocation; use them.
-	 */
-	if (size <= alloced_size) {
-		kasan_unpoison_vmalloc(p + old_size, size - old_size,
-				       KASAN_VMALLOC_PROT_NORMAL);
 		/*
-		 * No need to zero memory here, as unused memory will have
-		 * already been zeroed at initial allocation time or during
-		 * realloc shrink time.
+		 * We already have the bytes available in the allocation; use them.
+		 */
+		if (size <= alloced_size) {
+			kasan_unpoison_vmalloc(p + old_size, size - old_size,
+					       KASAN_VMALLOC_PROT_NORMAL);
+			/*
+			 * No need to zero memory here, as unused memory will have
+			 * already been zeroed at initial allocation time or during
+			 * realloc shrink time.
+			 */
+			vm->requested_size = size;
+			return (void *)p;
+		}
+	} else {
+		/*
+		 * p is not aligned with align.
+		 * Allocate a new address to handle it.
 		 */
-		vm->requested_size = size;
-		return (void *)p;
+		if (size < old_size)
+			old_size = size;
 	}
 
 	/* TODO: Grow the vm_area, i.e. allocate and map additional pages. */
-	n = __vmalloc_noprof(size, flags);
+	n = __vmalloc_node_noprof(size, align, flags, NUMA_NO_NODE,
+				  __builtin_return_address(0));
 	if (!n)
 		return NULL;
 
@@ -4170,6 +4189,11 @@ void *vrealloc_noprof(const void *p, size_t size, gfp_t flags)
 	return n;
 }
 
+void *vrealloc_noprof(const void *p, size_t size, gfp_t flags)
+{
+	return vrealloc_align_noprof(p, size, 1, flags);
+}
+
 #if defined(CONFIG_64BIT) && defined(CONFIG_ZONE_DMA32)
 #define GFP_VMALLOC32	(GFP_DMA32 | GFP_KERNEL)
 #elif defined(CONFIG_64BIT) && defined(CONFIG_ZONE_DMA)
-- 
2.43.0