From: Uladzislau Rezki
Date: Thu, 9 Apr 2026 12:20:12 +0200
To: Barry Song, Dev Jain
Cc: Dev Jain, linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, catalin.marinas@arm.com, will@kernel.org, akpm@linux-foundation.org, urezki@gmail.com, linux-kernel@vger.kernel.org, anshuman.khandual@arm.com, ryan.roberts@arm.com, ajd@linux.ibm.com, rppt@kernel.org, david@kernel.org, Xueyuan.chen21@gmail.com
Subject: Re: [RFC PATCH 5/8] mm/vmalloc: map contiguous pages in batches for vmap() if possible
References: <20260408025115.27368-1-baohua@kernel.org> <20260408025115.27368-6-baohua@kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Apr 09, 2026 at 05:54:55AM +0800, Barry Song wrote:
> On Wed, Apr 8, 2026 at 10:03 PM Dev Jain wrote:
> >
> > On 08/04/26 8:21 am, Barry Song (Xiaomi) wrote:
> > > In many cases, the pages passed to vmap() may include high-order
> > > pages allocated with __GFP_COMP flags. For example, the system heap
> > > often allocates pages in descending order: order 8, then 4, then 0.
> > > Currently, vmap() iterates over every page individually, even pages
> > > inside a high-order block are handled one by one.
> > >
> > > This patch detects high-order pages and maps them as a single
> > > contiguous block whenever possible.
> > >
> > > An alternative would be to implement a new API, vmap_sg(), but that
> > > change seems to be large in scope.
> > >
> > > Signed-off-by: Barry Song (Xiaomi)
> > > ---
> > >  mm/vmalloc.c | 51 +++++++++++++++++++++++++++++++++++++++++++++++++--
> > >  1 file changed, 49 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > > index eba436386929..e8dbfada42bc 100644
> > > --- a/mm/vmalloc.c
> > > +++ b/mm/vmalloc.c
> > > @@ -3529,6 +3529,53 @@ void vunmap(const void *addr)
> > >  }
> > >  EXPORT_SYMBOL(vunmap);
> > >
> > > +static inline int get_vmap_batch_order(struct page **pages,
> > > +		unsigned int max_steps, unsigned int idx)
> > > +{
> > > +	unsigned int nr_pages;
> > > +
> > > +	if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMAP) ||
> > > +	    ioremap_max_page_shift == PAGE_SHIFT)
> > > +		return 0;
> > > +
> > > +	nr_pages = compound_nr(pages[idx]);
> > > +	if (nr_pages == 1 || max_steps < nr_pages)
> > > +		return 0;
> >
> > This assumes that the page array passed to vmap() will have compound pages
> > if it is a higher order allocation.
> >
> > See rb_alloc_aux_page(). It gets higher-order allocations without passing
> > GFP_COMP.
> >
> > That is why my implementation does not assume anything about the property
> > of the pages.
>
> If you’re asking about support for non-compound pages, I think
> that’s fine. My current use case is dma-buf, where pages are
> compound. I recall discussing this previously with David and
> Uladzislau.
>
> If you’re working with non-compound pages, I’m happy to add
> support in the next version. I’m also happy to reuse some of your
> code and credit you as Co-developed-by if you’re willing. I actually
> prefer your __vmap_huge() name over my
> vmap_contig_pages_range().
>
> Does that make sense to you?
>
> >
> > Also it may be useful to do regression-testing for the common case of
> > vmap() with a single page (assuming it is common, I don't know), in
> > which case we may have to special case it.
>
> I agree, so I had Xueyuan test single pages and highlighted this
> in the cover letter. There is no regression: "vmap() is 5.6×
> faster when memory includes some order-8 pages, with no
> regression observed for order-0 pages."
>
> >
> > My implementation requires opting in with VM_ALLOW_HUGE_VMAP - I suspect
> > you may run into problems if you make vmap() do huge-mappings as best-effort
> > by default. I am guessing this because ...
> >
> > Drivers can operate on individual pages, so vmalloc() calls split_page()
> > and then does the block/cont mappings. This same issue should be present
> > with vmap() too? In which case if we are to do huge-mappings by default
> > then we can do split_page() after detecting contiguous chunks.
> >
> > But ... that may create problems for the caller of vmap() - vmap now
> > has changed the properties of the pages.
>
> I don’t see this as a problem at all. Splitting pages does not
> affect physical or virtual contiguity; it only changes the
> contents of struct page objects, not the PTE/PMD mappings.
> For ioremap, there isn’t even a struct page, yet the mappings
> can still be huge.
>
It would be good if you could combine the work together with Jain.

--
Uladzislau Rezki