From: "Barry Song (Xiaomi)"
To: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
	catalin.marinas@arm.com, will@kernel.org, akpm@linux-foundation.org,
	urezki@gmail.com
Cc: linux-kernel@vger.kernel.org, anshuman.khandual@arm.com,
	ryan.roberts@arm.com, ajd@linux.ibm.com, rppt@kernel.org,
	david@kernel.org, Xueyuan.chen21@gmail.com, "Barry Song (Xiaomi)"
Subject: [RFC PATCH 6/8] mm/vmalloc: align vm_area so vmap() can batch mappings
Date: Wed, 8 Apr 2026 10:51:13 +0800
Message-Id: <20260408025115.27368-7-baohua@kernel.org>
X-Mailer: git-send-email 2.39.3 (Apple Git-146)
In-Reply-To: <20260408025115.27368-1-baohua@kernel.org>
References: <20260408025115.27368-1-baohua@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Try to align the vmap virtual address to PMD_SIZE, or to a larger-than-page
PTE mapping size hinted by the architecture, so that contiguous pages can
be batch-mapped when setting PMD or PTE entries.

Signed-off-by: Barry Song (Xiaomi)
---
 mm/vmalloc.c | 31 ++++++++++++++++++++++++++++++-
 1 file changed, 30 insertions(+), 1 deletion(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index e8dbfada42bc..6643ec0288cd 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3576,6 +3576,35 @@ static int vmap_contig_pages_range(unsigned long addr, unsigned long end,
 	return err;
 }
 
+static struct vm_struct *get_aligned_vm_area(unsigned long size, unsigned long flags)
+{
+	unsigned int shift = (size >= PMD_SIZE) ? PMD_SHIFT :
+				arch_vmap_pte_supported_shift(size);
+	struct vm_struct *vm_area = NULL;
+
+	/*
+	 * Try to allocate an aligned vm_area so contiguous pages can be
+	 * mapped in batches.
+	 */
+	while (1) {
+		unsigned long align = 1UL << shift;
+
+		vm_area = __get_vm_area_node(size, align, PAGE_SHIFT, flags,
+					     VMALLOC_START, VMALLOC_END,
+					     NUMA_NO_NODE, GFP_KERNEL,
+					     __builtin_return_address(0));
+		if (vm_area || shift <= PAGE_SHIFT)
+			goto out;
+		if (shift == PMD_SHIFT)
+			shift = arch_vmap_pte_supported_shift(size);
+		else if (shift > PAGE_SHIFT)
+			shift = PAGE_SHIFT;
+	}
+
+out:
+	return vm_area;
+}
+
 /**
  * vmap - map an array of pages into virtually contiguous space
  * @pages: array of page pointers
@@ -3614,7 +3643,7 @@ void *vmap(struct page **pages, unsigned int count,
 		return NULL;
 
 	size = (unsigned long)count << PAGE_SHIFT;
-	area = get_vm_area_caller(size, flags, __builtin_return_address(0));
+	area = get_aligned_vm_area(size, flags);
 	if (!area)
 		return NULL;
 
-- 
2.39.3 (Apple Git-146)
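For readers following along outside the kernel tree, the alignment-selection
and fallback policy of get_aligned_vm_area() can be modeled in plain userspace
C. This is only a sketch: the constants below (4 KiB pages, 2 MiB PMDs, a
64 KiB contiguous-PTE hint) and the pte_supported_shift() helper are
hypothetical stand-ins for PAGE_SHIFT, PMD_SHIFT, and
arch_vmap_pte_supported_shift() on a typical arm64 4K configuration, not the
kernel's actual definitions.

```c
#include <assert.h>

/* Hypothetical stand-ins for the kernel's constants (arm64 4K-page style). */
#define PAGE_SHIFT	12UL
#define CONT_PTE_SHIFT	16UL	/* contiguous-PTE mapping hint */
#define PMD_SHIFT	21UL
#define PMD_SIZE	(1UL << PMD_SHIFT)

/* Stand-in for arch_vmap_pte_supported_shift(): the largest PTE-level
 * mapping shift the architecture could use for an area of this size. */
static unsigned long pte_supported_shift(unsigned long size)
{
	return size >= (1UL << CONT_PTE_SHIFT) ? CONT_PTE_SHIFT : PAGE_SHIFT;
}

/* First alignment the patch tries for a given vmap size: PMD alignment
 * when the area spans at least one PMD, otherwise the arch PTE hint. */
static unsigned long first_align(unsigned long size)
{
	unsigned long shift = size >= PMD_SIZE ? PMD_SHIFT
					       : pte_supported_shift(size);
	return 1UL << shift;
}

/* Fallback order when the aligned allocation fails at the current shift:
 * PMD_SHIFT -> arch PTE hint -> PAGE_SHIFT, after which the loop stops. */
static unsigned long next_shift(unsigned long shift, unsigned long size)
{
	if (shift == PMD_SHIFT)
		return pte_supported_shift(size);
	return PAGE_SHIFT;
}
```

Under these assumed constants, a 4 MiB vmap first asks for 2 MiB alignment,
a 64 KiB vmap for 64 KiB alignment, and an 8 KiB vmap for plain page
alignment; on failure the shift steps down one tier per retry, mirroring the
while(1) loop in the patch.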