From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Mon, 13 Apr 2026 19:16:10 +0300
From: Mike Rapoport
To: "Barry Song (Xiaomi)"
Cc: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
	catalin.marinas@arm.com, will@kernel.org, akpm@linux-foundation.org,
	urezki@gmail.com, linux-kernel@vger.kernel.org,
	anshuman.khandual@arm.com, ryan.roberts@arm.com, ajd@linux.ibm.com,
	david@kernel.org, Xueyuan.chen21@gmail.com
Subject: Re: [RFC PATCH 4/8] mm/vmalloc: Eliminate page table zigzag for huge
 vmalloc mappings
Message-ID:
References: <20260408025115.27368-1-baohua@kernel.org>
 <20260408025115.27368-5-baohua@kernel.org>
In-Reply-To: <20260408025115.27368-5-baohua@kernel.org>

On Wed, Apr 08, 2026 at 10:51:11AM +0800, Barry Song (Xiaomi) wrote:
> For vmalloc() allocations with VM_ALLOW_HUGE_VMAP, we no longer
> need to iterate over pages one by one, which would otherwise lead to
> zigzag page table mappings.
> 
> The code is now unified with the PAGE_SHIFT case by simply
> calling vmap_small_pages_range_noflush().
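
To make the "zigzag" concrete for anyone skimming the thread: the old
loop below issues one vmap_range_noflush() call per huge page, and each
call restarts its page-table walk from the top level, while a single
call over the whole range can walk the tables linearly. A throwaway
user-space sketch of the difference in walk counts -- the helper, the
level count, and the descent bookkeeping are all made up for
illustration, this is not the kernel walk:

#include <stdio.h>

#define PMD_SHIFT 21		/* 2MiB huge mappings, x86_64-style */
#define LEVELS    4		/* pgd -> p4d -> pud -> pmd */

static unsigned long descents;	/* top-down table lookups performed */

/* Stand-in for one vmap_range_noflush() call: each restarts at the pgd. */
static void map_one_range(unsigned long addr, unsigned long end)
{
	descents += LEVELS;
	(void)addr;
	(void)end;
}

int main(void)
{
	unsigned long start = 0, size = 1UL << 30;	/* 1GiB in 2MiB pages */
	unsigned long step = 1UL << PMD_SHIFT;
	unsigned long a;

	/* Old shape: one call, hence one full descent, per huge page. */
	descents = 0;
	for (a = start; a < start + size; a += step)
		map_one_range(a, a + step);
	printf("per-page calls: %lu descents\n", descents);	/* 2048 */

	/* New shape: a single call covers the whole range. */
	descents = 0;
	map_one_range(start, start + size);
	printf("single walk:    %lu descents\n", descents);	/* 4 */
	return 0;
}
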
> 
> Signed-off-by: Barry Song (Xiaomi)
> ---
>  mm/vmalloc.c | 22 ++++------------------
>  1 file changed, 4 insertions(+), 18 deletions(-)
> 
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 5bf072297536..eba436386929 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -689,27 +689,13 @@ static int vmap_small_pages_range_noflush(unsigned long addr, unsigned long end,
>  int __vmap_pages_range_noflush(unsigned long addr, unsigned long end,
>  		pgprot_t prot, struct page **pages, unsigned int page_shift)
>  {
> -	unsigned int i, nr = (end - addr) >> PAGE_SHIFT;
> -
>  	WARN_ON(page_shift < PAGE_SHIFT);
>  
> -	if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMALLOC) ||
> -	    page_shift == PAGE_SHIFT)
> -		return vmap_small_pages_range_noflush(addr, end, prot, pages, PAGE_SHIFT);
> -
> -	for (i = 0; i < nr; i += 1U << (page_shift - PAGE_SHIFT)) {
> -		int err;
> -
> -		err = vmap_range_noflush(addr, addr + (1UL << page_shift),
> -					 page_to_phys(pages[i]), prot,
> -					 page_shift);
> -		if (err)
> -			return err;
> +	if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMALLOC))
> +		page_shift = PAGE_SHIFT;
>  
> -		addr += 1UL << page_shift;
> -	}
> -
> -	return 0;
> +	return vmap_small_pages_range_noflush(addr, end, prot, pages,
> +					      min(page_shift, PMD_SHIFT));

Wouldn't vmap_range_noflush() already "do the right thing" even without
changes to vmap_small_pages_range_noflush()?

>  }
>  
>  int vmap_pages_range_noflush(unsigned long addr, unsigned long end,
> -- 
> 2.39.3 (Apple Git-146)
> 

-- 
Sincerely yours,
Mike.
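
P.S. To double-check my reading of the clamp: min(page_shift, PMD_SHIFT)
means a PUD-sized allocation would still be entered into the tables in
PMD-sized steps. A throwaway sketch with x86_64-style shift values --
the shift definitions and min() here are local stand-ins, not the
kernel's:

#include <stdio.h>

#define PAGE_SHIFT 12
#define PMD_SHIFT  21
#define PUD_SHIFT  30

#define min(a, b) ((a) < (b) ? (a) : (b))

int main(void)
{
	unsigned int shifts[] = { PAGE_SHIFT, PMD_SHIFT, PUD_SHIFT };
	unsigned int i;

	for (i = 0; i < sizeof(shifts) / sizeof(shifts[0]); i++) {
		unsigned int s = min(shifts[i], PMD_SHIFT);

		/* Prints 4 KiB, 2048 KiB, and again 2048 KiB for the PUD case. */
		printf("page_shift=%2u -> mapped in %4lu KiB steps\n",
		       shifts[i], (1UL << s) >> 10);
	}
	return 0;
}
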