From: Barry Song <21cnbao@gmail.com>
To: urezki@gmail.com
Cc: 21cnbao@gmail.com, akpm@linux-foundation.org, david@kernel.org,
	dri-devel@lists.freedesktop.org, jstultz@google.com,
	linaro-mm-sig@lists.linaro.org, linux-kernel@vger.kernel.org,
	linux-media@vger.kernel.org, linux-mm@kvack.org, mripard@kernel.org,
	sumit.semwal@linaro.org, v-songbaohua@oppo.com, zhengtangquan@oppo.com
Subject: Re: [PATCH] mm/vmalloc: map contiguous pages in batches for vmap() whenever possible
Date: Fri, 19 Dec 2025 05:24:36 +0800
Message-Id: <20251218212436.17142-1-21cnbao@gmail.com>

On Thu, Dec 18, 2025 at 9:55 PM Uladzislau Rezki wrote:
>
> On Thu, Dec 18, 2025 at 02:01:56PM +0100, David Hildenbrand (Red Hat) wrote:
> > On 12/15/25 06:30, Barry Song wrote:
> > > From: Barry Song
> > >
> > > In many cases, the pages passed to vmap() may include high-order
> > > pages allocated with the __GFP_COMP flag. For example, the system
> > > heap often allocates pages in descending order: order 8, then 4,
> > > then 0. Currently, vmap() iterates over every page individually;
> > > even pages inside a high-order block are handled one by one.
> > >
> > > This patch detects high-order pages and maps them as a single
> > > contiguous block whenever possible.
> > >
> > > An alternative would be to implement a new API, vmap_sg(), but
> > > that change seems too large in scope.
> > >
> > > When vmapping a 128MB dma-buf using the system heap, this patch
> > > makes system_heap_do_vmap() roughly 17× faster.
> > >
> > > W/ patch:
> > > [   10.404769] system_heap_do_vmap took 2494000 ns
> > > [   12.525921] system_heap_do_vmap took 2467008 ns
> > > [   14.517348] system_heap_do_vmap took 2471008 ns
> > > [   16.593406] system_heap_do_vmap took 2444000 ns
> > > [   19.501341] system_heap_do_vmap took 2489008 ns
> > >
> > > W/o patch:
> > > [    7.413756] system_heap_do_vmap took 42626000 ns
> > > [    9.425610] system_heap_do_vmap took 42500992 ns
> > > [   11.810898] system_heap_do_vmap took 42215008 ns
> > > [   14.336790] system_heap_do_vmap took 42134992 ns
> > > [   16.373890] system_heap_do_vmap took 42750000 ns
> > >
> >
> > That's quite a speedup.
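
Thanks! For context on why the batch path is hit so often here: the
system heap fills each buffer from descending orders (8, then 4, then
0) with __GFP_COMP, roughly along the lines of the condensed sketch
below. This is only an illustration, not the verbatim
drivers/dma-buf/heaps/system_heap.c code; the GFP details are
simplified.

#include <linux/gfp.h>
#include <linux/kernel.h>
#include <linux/mm.h>

/*
 * Condensed illustration of descending-order allocation as done by a
 * heap like the system heap; error handling and GFP tuning omitted.
 */
static struct page *alloc_largest_available(unsigned long size,
					    unsigned int max_order)
{
	static const unsigned int orders[] = {8, 4, 0};
	unsigned int i;

	for (i = 0; i < ARRAY_SIZE(orders); i++) {
		struct page *page;

		if (size < (PAGE_SIZE << orders[i]) || max_order < orders[i])
			continue;

		/*
		 * __GFP_COMP turns each high-order block into a compound
		 * page, which the batching below can later recognize via
		 * PageCompound()/compound_nr().
		 */
		page = alloc_pages(GFP_KERNEL | __GFP_COMP, orders[i]);
		if (page)
			return page;
	}
	return NULL;
}

So a 128MB buffer mostly ends up as order-8 (1MB) compound blocks,
which is what makes batching pay off.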
> >
> > > Cc: David Hildenbrand
> > > Cc: Uladzislau Rezki
> > > Cc: Sumit Semwal
> > > Cc: John Stultz
> > > Cc: Maxime Ripard
> > > Tested-by: Tangquan Zheng
> > > Signed-off-by: Barry Song
> > > ---
> > >  * diff with rfc:
> > >  Many code refinements based on David's suggestions, thanks!
> > >  Refined the comment and changelog according to Uladzislau's feedback, thanks!
> > >  rfc link:
> > >  https://lore.kernel.org/linux-mm/20251122090343.81243-1-21cnbao@gmail.com/
> > >
> > >  mm/vmalloc.c | 45 +++++++++++++++++++++++++++++++++++++++------
> > >  1 file changed, 39 insertions(+), 6 deletions(-)
> > >
> > > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > > index 41dd01e8430c..8d577767a9e5 100644
> > > --- a/mm/vmalloc.c
> > > +++ b/mm/vmalloc.c
> > > @@ -642,6 +642,29 @@ static int vmap_small_pages_range_noflush(unsigned long addr, unsigned long end,
> > >  	return err;
> > >  }
> > > +static inline int get_vmap_batch_order(struct page **pages,
> > > +		unsigned int stride, unsigned int max_steps, unsigned int idx)
> > > +{
> > > +	int nr_pages = 1;
> >
> > unsigned int, maybe

Right.

> >
> > Why are you initializing nr_pages when you overwrite it below?

Right, the initialization of nr_pages can be dropped.

> >
> > > +
> > > +	/*
> > > +	 * Currently, batching is only supported in vmap_pages_range
> > > +	 * when page_shift == PAGE_SHIFT.
> >
> > I don't know the code, so realizing how we go from page_shift to
> > stride took me a second. Maybe only talk about stride here?
> >
> > OTOH, is "stride" really the right terminology?
> >
> > we calculate it as
> >
> >	stride = 1U << (page_shift - PAGE_SHIFT);
> >
> > page_shift - PAGE_SHIFT should give us an "order". So is this a
> > "granularity" in nr_pages?

This covers the case where vmalloc() realizes that it has high-order
pages and therefore calls vmap_pages_range_noflush() with a page_shift
larger than PAGE_SHIFT. For vmap(), we take a pages array, so
page_shift is always PAGE_SHIFT.

> >
> > Again, I don't know this code, so sorry for the question.
> >
> To me, "stride" also sounds unclear.

Thanks, David and Uladzislau. On second thought, the stride parameter
may be redundant, and it should be possible to drop it entirely. That
results in the code below:

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 41dd01e8430c..3962bdcb43e5 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -642,6 +642,20 @@ static int vmap_small_pages_range_noflush(unsigned long addr, unsigned long end,
 	return err;
 }
 
+static inline int get_vmap_batch_order(struct page **pages,
+		unsigned int max_steps, unsigned int idx)
+{
+	unsigned int nr_pages = compound_nr(pages[idx]);
+
+	if (nr_pages == 1 || max_steps < nr_pages)
+		return 0;
+
+	if (num_pages_contiguous(&pages[idx], nr_pages) == nr_pages)
+		return compound_order(pages[idx]);
+	return 0;
+}
+
 /*
  * vmap_pages_range_noflush is similar to vmap_pages_range, but does not
  * flush caches.
@@ -658,20 +672,35 @@ int __vmap_pages_range_noflush(unsigned long addr, unsigned long end,
 
 	WARN_ON(page_shift < PAGE_SHIFT);
 
+	/*
+	 * For vmap(), users may allocate pages from high orders down to
+	 * order 0, while always using PAGE_SHIFT as the page_shift.
+	 * We first check whether the initial page is a compound page. If so,
+	 * there may be an opportunity to batch multiple pages together.
+	 */
 	if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMALLOC) ||
-			page_shift == PAGE_SHIFT)
+			(page_shift == PAGE_SHIFT && !PageCompound(pages[0])))
 		return vmap_small_pages_range_noflush(addr, end, prot, pages);
 
-	for (i = 0; i < nr; i += 1U << (page_shift - PAGE_SHIFT)) {
+	for (i = 0; i < nr; ) {
+		unsigned int shift = page_shift;
 		int err;
 
-		err = vmap_range_noflush(addr, addr + (1UL << page_shift),
+		/*
+		 * For vmap() cases, page_shift is always PAGE_SHIFT; even
+		 * so, if the pages are physically contiguous, they may
+		 * still be mapped in a batch.
+		 */
+		if (page_shift == PAGE_SHIFT)
+			shift += get_vmap_batch_order(pages, nr - i, i);
+		err = vmap_range_noflush(addr, addr + (1UL << shift),
 					 page_to_phys(pages[i]), prot,
-					 page_shift);
+					 shift);
 		if (err)
 			return err;
 
-		addr += 1UL << page_shift;
+		addr += 1UL << shift;
+		i += 1U << (shift - PAGE_SHIFT);
 	}
 
 	return 0;

Does this look clearer?

Thanks
Barry
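
P.S. For completeness, here is a tiny, self-contained illustration of
the caller pattern this change optimizes. It is hypothetical test code,
not part of the patch, and it assumes CONFIG_HAVE_ARCH_HUGE_VMALLOC so
that the batch path above is taken:

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/vmalloc.h>

/*
 * Hypothetical illustration (not part of the patch): map one order-4
 * compound block plus one unrelated order-0 page with vmap(). With the
 * change above, the 16 contiguous subpages are mapped by a single
 * vmap_range_noflush() call instead of 16, and the last page is then
 * mapped individually.
 */
static void *vmap_example(void)
{
	struct page *pages[17];
	struct page *head;
	int i;

	head = alloc_pages(GFP_KERNEL | __GFP_COMP, 4);	/* 16 subpages */
	if (!head)
		return NULL;

	for (i = 0; i < 16; i++)
		pages[i] = head + i;	/* one physically contiguous run */

	pages[16] = alloc_page(GFP_KERNEL);	/* unrelated order-0 page */
	if (!pages[16]) {
		__free_pages(head, 4);
		return NULL;
	}

	/*
	 * vmap() always passes page_shift == PAGE_SHIFT down to
	 * __vmap_pages_range_noflush(); the batching opportunity is
	 * detected there via PageCompound()/get_vmap_batch_order().
	 */
	return vmap(pages, 17, VM_MAP, PAGE_KERNEL);
}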