From: "Barry Song (Xiaomi)" <baohua@kernel.org>
To: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
	catalin.marinas@arm.com, will@kernel.org,
	akpm@linux-foundation.org, urezki@gmail.com
Cc: linux-kernel@vger.kernel.org, anshuman.khandual@arm.com,
	ryan.roberts@arm.com, ajd@linux.ibm.com, rppt@kernel.org,
	david@kernel.org, Xueyuan.chen21@gmail.com,
	"Barry Song (Xiaomi)" <baohua@kernel.org>
Subject: [RFC PATCH 5/8] mm/vmalloc: map contiguous pages in batches for vmap() if possible
Date: Wed, 8 Apr 2026 10:51:12 +0800
Message-Id: <20260408025115.27368-6-baohua@kernel.org>
X-Mailer: git-send-email 2.39.3 (Apple Git-146)
In-Reply-To: <20260408025115.27368-1-baohua@kernel.org>
References: <20260408025115.27368-1-baohua@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

In many cases, the pages passed to vmap() may include high-order pages
allocated with the __GFP_COMP flag. For example, the DMA-BUF system heap
allocates pages in descending order: order 8, then order 4, then order 0.
Currently, vmap() iterates over every page individually; even pages that
belong to the same high-order block are mapped one by one.

This patch detects high-order pages and maps them as a single contiguous
block whenever possible. An alternative would be to introduce a new API
such as vmap_sg(), but that would be a much larger change in scope.
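
To make the target workload concrete, here is a minimal, hypothetical
caller sketch. It is not the actual system heap code (the function name,
GFP flags and error handling are made up for illustration); it only shows
the kind of page array, built from descending-order __GFP_COMP
allocations, that the batching below can exploit:

	/*
	 * Illustrative sketch only (not the actual system heap code):
	 * a simplified caller that fills a flat page array from
	 * __GFP_COMP allocations made in descending order, then hands
	 * it to vmap(). Unwinding of already-allocated compound pages
	 * on failure is omitted for brevity.
	 */
	static void *example_vmap_mixed_orders(unsigned int nr_pages)
	{
		static const unsigned int orders[] = { 8, 4, 0 };
		struct page **pages;
		unsigned int filled = 0;
		void *vaddr;

		pages = kvmalloc_array(nr_pages, sizeof(*pages), GFP_KERNEL);
		if (!pages)
			return NULL;

		while (filled < nr_pages) {
			struct page *page = NULL;
			unsigned int i, o;

			/* try order 8, then 4, then 0 */
			for (o = 0; o < ARRAY_SIZE(orders); o++) {
				if (nr_pages - filled < (1U << orders[o]))
					continue;
				page = alloc_pages(GFP_KERNEL | __GFP_COMP,
						   orders[o]);
				if (page)
					break;
			}
			if (!page) {
				kvfree(pages);
				return NULL;
			}

			/* expand the compound page into the flat array */
			for (i = 0; i < (1U << orders[o]); i++)
				pages[filled++] = page + i;
		}

		/* today vmap() maps each of these pages one at a time */
		vaddr = vmap(pages, nr_pages, VM_MAP, PAGE_KERNEL);
		kvfree(pages);
		return vaddr;
	}

With such an array, every compound page contributes a physically
contiguous run, so the loop in vmap_contig_pages_range() below can cover
it with a single vmap_range_noflush() call instead of one update per page.
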
Signed-off-by: Barry Song (Xiaomi) <baohua@kernel.org>
---
 mm/vmalloc.c | 51 +++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 49 insertions(+), 2 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index eba436386929..e8dbfada42bc 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3529,6 +3529,53 @@ void vunmap(const void *addr)
 }
 EXPORT_SYMBOL(vunmap);
 
+static inline int get_vmap_batch_order(struct page **pages,
+		unsigned int max_steps, unsigned int idx)
+{
+	unsigned int nr_pages;
+
+	if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMAP) ||
+	    ioremap_max_page_shift == PAGE_SHIFT)
+		return 0;
+
+	nr_pages = compound_nr(pages[idx]);
+	if (nr_pages == 1 || max_steps < nr_pages)
+		return 0;
+
+	if (num_pages_contiguous(&pages[idx], nr_pages) == nr_pages)
+		return compound_order(pages[idx]);
+	return 0;
+}
+
+static int vmap_contig_pages_range(unsigned long addr, unsigned long end,
+		pgprot_t prot, struct page **pages)
+{
+	unsigned int count = (end - addr) >> PAGE_SHIFT;
+	int err;
+
+	err = kmsan_vmap_pages_range_noflush(addr, end, prot, pages,
+					     PAGE_SHIFT, GFP_KERNEL);
+	if (err)
+		goto out;
+
+	for (unsigned int i = 0; i < count; ) {
+		unsigned int shift = PAGE_SHIFT +
+				get_vmap_batch_order(pages, count - i, i);
+
+		err = vmap_range_noflush(addr, addr + (1UL << shift),
+					 page_to_phys(pages[i]), prot, shift);
+		if (err)
+			goto out;
+
+		addr += 1UL << shift;
+		i += 1U << (shift - PAGE_SHIFT);
+	}
+
+out:
+	flush_cache_vmap(addr, end);
+	return err;
+}
+
 /**
  * vmap - map an array of pages into virtually contiguous space
  * @pages: array of page pointers
@@ -3572,8 +3619,8 @@ void *vmap(struct page **pages, unsigned int count,
 		return NULL;
 
 	addr = (unsigned long)area->addr;
-	if (vmap_pages_range(addr, addr + size, pgprot_nx(prot),
-				pages, PAGE_SHIFT) < 0) {
+	if (vmap_contig_pages_range(addr, addr + size, pgprot_nx(prot),
+				pages) < 0) {
 		vunmap(area->addr);
 		return NULL;
 	}
-- 
2.39.3 (Apple Git-146)