From: Uladzislau Rezki
Date: Wed, 17 Dec 2025 20:22:49 +0100
To: Ryan Roberts
Cc: Uladzislau Rezki, linux-mm@kvack.org, Andrew Morton, Vishal Moola,
    Dev Jain, Baoquan He, LKML
Subject: Re: [PATCH 2/2] mm/vmalloc: Add attempt_larger_order_alloc parameter
References: <20251216211921.1401147-1-urezki@gmail.com>
 <20251216211921.1401147-2-urezki@gmail.com>
 <6ca6e796-cded-4221-b1f8-92176a80513e@arm.com>
 <0f69442d-b44e-4b30-b11e-793511db9f1e@arm.com>
 <4a66f13d-318b-4cdb-b168-0c993ff8a309@arm.com>
In-Reply-To: <4a66f13d-318b-4cdb-b168-0c993ff8a309@arm.com>

On Wed, Dec 17, 2025 at 05:01:19PM +0000, Ryan Roberts wrote:
> On 17/12/2025 15:20, Ryan Roberts wrote:
> > On 17/12/2025 12:02, Uladzislau Rezki wrote:
> >>> On 16/12/2025 21:19, Uladzislau Rezki (Sony) wrote:
> >>>> Introduce a module parameter to enable or disable the large-order
> >>>> allocation path in vmalloc. High-order allocations are currently
> >>>> disabled by default, but users may explicitly enable them at
> >>>> runtime if desired.
> >>>>
> >>>> High-order pages allocated for vmalloc are immediately split into
> >>>> order-0 pages and later freed as order-0, which means they do not
> >>>> feed the per-CPU page caches. As a result, high-order attempts tend
> >>>> to bypass the PCP fastpath and fall back to the buddy allocator,
> >>>> which can affect performance.
> >>>>
> >>>> However, when the PCP caches are empty, high-order allocations may
> >>>> show better performance characteristics, especially for larger
> >>>> allocation requests.
> >>>
> >>> I wonder if a better solution would be "allocate order-0 if
> >>> available in the pcp, else try large order, else fall back to
> >>> order-0". Could that provide the best of all worlds without needing
> >>> a configuration knob?
> >>>
> >> I am not sure; to me it looks a bit odd.
> >
> > Perhaps it would feel better if it was generalized to "first try
> > allocation from the PCP list, highest to lowest order, then try
> > allocation from the buddy, highest to lowest order"?
> >
> >> Ideally it would be good to just free it as a high-order page and
> >> not as order-0 pieces.
> >
> > Yeah, perhaps that's better. How about something like this (very
> > lightly tested and no performance results yet):
> >
> > (And I should admit I'm not 100% sure it is safe to call
> > free_frozen_pages() with a contiguous run of order-0 pages, but I'm
> > not seeing any warnings or memory leaks when running the mm
> > selftests...)
> >
> > ---8<---
> > commit caa3e5eb5bfade81a32fa62d1a8924df1eb0f619
> > Author: Ryan Roberts
> > Date:   Wed Dec 17 15:11:08 2025 +0000
> >
> >     WIP
> >
> >     Signed-off-by: Ryan Roberts
> >
> > diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> > index b155929af5b1..d25f5b867e6b 100644
> > --- a/include/linux/gfp.h
> > +++ b/include/linux/gfp.h
> > @@ -383,6 +383,8 @@ extern void __free_pages(struct page *page, unsigned int order);
> >  extern void free_pages_nolock(struct page *page, unsigned int order);
> >  extern void free_pages(unsigned long addr, unsigned int order);
> >
> > +void free_pages_bulk(struct page *page, int nr_pages);
> > +
> >  #define __free_page(page) __free_pages((page), 0)
> >  #define free_page(addr) free_pages((addr), 0)
> >
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 822e05f1a964..5f11224cf353 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -5304,6 +5304,48 @@ static void ___free_pages(struct page *page, unsigned int order,
> >  	}
> >  }
> >
> > +static void free_frozen_pages_bulk(struct page *page, int nr_pages)
> > +{
> > +	while (nr_pages) {
> > +		unsigned int fit_order, align_order, order;
> > +		unsigned long pfn;
> > +
> > +		pfn = page_to_pfn(page);
> > +		fit_order = ilog2(nr_pages);
> > +		align_order = pfn ? __ffs(pfn) : fit_order;
> > +		order = min3(fit_order, align_order, MAX_PAGE_ORDER);
> > +
> > +		free_frozen_pages(page, order);
> > +
> > +		page += 1U << order;
> > +		nr_pages -= 1U << order;
> > +	}
> > +}
> > +
> > +void free_pages_bulk(struct page *page, int nr_pages)
> > +{
> > +	struct page *start = NULL;
> > +	bool can_free;
> > +	int i;
> > +
> > +	for (i = 0; i < nr_pages; i++, page++) {
> > +		VM_BUG_ON_PAGE(PageHead(page), page);
> > +		VM_BUG_ON_PAGE(PageTail(page), page);
> > +
> > +		can_free = put_page_testzero(page);
> > +
> > +		if (!can_free && start) {
> > +			free_frozen_pages_bulk(start, page - start);
> > +			start = NULL;
> > +		} else if (can_free && !start) {
> > +			start = page;
> > +		}
> > +	}
> > +
> > +	if (start)
> > +		free_frozen_pages_bulk(start, page - start);
> > +}
> > +
> >  /**
> >   * __free_pages - Free pages allocated with alloc_pages().
> >   * @page: The page pointer returned from alloc_pages().
> > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > index ecbac900c35f..8f782bac1ece 100644
> > --- a/mm/vmalloc.c
> > +++ b/mm/vmalloc.c
> > @@ -3429,7 +3429,8 @@ void vfree_atomic(const void *addr)
> >  void vfree(const void *addr)
> >  {
> >  	struct vm_struct *vm;
> > -	int i;
> > +	struct page *start;
> > +	int i, nr;
> >
> >  	if (unlikely(in_interrupt())) {
> >  		vfree_atomic(addr);
> > @@ -3455,17 +3456,26 @@ void vfree(const void *addr)
> >  	/* All pages of vm should be charged to same memcg, so use first one. */
> >  	if (vm->nr_pages && !(vm->flags & VM_MAP_PUT_PAGES))
> >  		mod_memcg_page_state(vm->pages[0], MEMCG_VMALLOC, -vm->nr_pages);
> > -	for (i = 0; i < vm->nr_pages; i++) {
> > +
> > +	start = vm->pages[0];
> > +	BUG_ON(!start);
> > +	nr = 1;
> > +	for (i = 1; i < vm->nr_pages; i++) {
> >  		struct page *page = vm->pages[i];
> >
> >  		BUG_ON(!page);
> > -		/*
> > -		 * High-order allocs for huge vmallocs are split, so
> > -		 * can be freed as an array of order-0 allocations
> > -		 */
> > -		__free_page(page);
> > -		cond_resched();
> > +
> > +		if (start + nr != page) {
> > +			free_pages_bulk(start, nr);
> > +			start = page;
> > +			nr = 1;
> > +			cond_resched();
> > +		} else {
> > +			nr++;
> > +		}
> >  	}
> > +	free_pages_bulk(start, nr);
> > +
> >  	if (!(vm->flags & VM_MAP_PUT_PAGES))
> >  		atomic_long_sub(vm->nr_pages, &nr_vmalloc_pages);
> >  	kvfree(vm->pages);
> > ---8<---
>
> I tested this on a performance monitoring system and see a huge
> improvement for the test_vmalloc tests.
>
> Both columns are compared to v6.18. 6-19-0-rc1 has Vishal's change to
> allocate large orders, which I previously reported the regressions for.
> vfree-high-order adds the above patch to free contiguous order-0 pages
> in bulk.
>
> (R)/(I) means statistically significant regression/improvement. Results
> are normalized so that less than zero is regression and greater than
> zero is improvement.
>
> +-----------------+----------------------------------------------------------+--------------+------------------+
> | Benchmark       | Result Class                                             | 6-19-0-rc1   | vfree-high-order |
> +=================+==========================================================+==============+==================+
> | micromm/vmalloc | fix_align_alloc_test: p:1, h:0, l:500000 (usec)          | (R) -40.69%  | (I) 3.98%        |
> |                 | fix_size_alloc_test: p:1, h:0, l:500000 (usec)           | 0.10%        | -1.47%           |
> |                 | fix_size_alloc_test: p:4, h:0, l:500000 (usec)           | (R) -22.74%  | (I) 11.57%       |
> |                 | fix_size_alloc_test: p:16, h:0, l:500000 (usec)          | (R) -23.63%  | (I) 47.42%       |
> |                 | fix_size_alloc_test: p:16, h:1, l:500000 (usec)          | -1.58%       | (I) 106.01%      |
> |                 | fix_size_alloc_test: p:64, h:0, l:100000 (usec)          | (R) -24.39%  | (I) 99.12%       |
> |                 | fix_size_alloc_test: p:64, h:1, l:100000 (usec)          | (I) 2.34%    | (I) 196.87%      |
> |                 | fix_size_alloc_test: p:256, h:0, l:100000 (usec)         | (R) -23.29%  | (I) 125.42%      |
> |                 | fix_size_alloc_test: p:256, h:1, l:100000 (usec)         | (I) 3.74%    | (I) 238.59%      |
> |                 | fix_size_alloc_test: p:512, h:0, l:100000 (usec)         | (R) -23.80%  | (I) 132.38%      |
> |                 | fix_size_alloc_test: p:512, h:1, l:100000 (usec)         | (R) -2.84%   | (I) 514.75%      |
> |                 | full_fit_alloc_test: p:1, h:0, l:500000 (usec)           | 2.74%        | 0.33%            |
> |                 | kvfree_rcu_1_arg_vmalloc_test: p:1, h:0, l:500000 (usec) | 0.58%        | 1.36%            |
> |                 | kvfree_rcu_2_arg_vmalloc_test: p:1, h:0, l:500000 (usec) | -0.66%       | 1.48%            |
> |                 | long_busy_list_alloc_test: p:1, h:0, l:500000 (usec)     | (R) -25.24%  | (I) 77.95%       |
> |                 | pcpu_alloc_test: p:1, h:0, l:500000 (usec)               | -0.58%       | 0.60%            |
> |                 | random_size_align_alloc_test: p:1, h:0, l:500000 (usec)  | (R) -45.75%  | (I) 8.51%        |
> |                 | random_size_alloc_test: p:1, h:0, l:500000 (usec)        | (R) -28.16%  | (I) 65.34%       |
> |                 | vm_map_ram_test: p:1, h:0, l:500000 (usec)               | -0.54%       | -0.33%           |
> +-----------------+----------------------------------------------------------+--------------+------------------+
>
> What do you think?
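[For illustration only, not part of the patch: a minimal user-space model
of the block decomposition free_frozen_pages_bulk() above performs,
carving an arbitrary run of order-0 pages into the largest naturally
aligned power-of-two blocks. The kernel's ilog2()/__ffs() are stood in
by compiler builtins, MAX_PAGE_ORDER is an assumed value, and the start
pfn and run length below are hypothetical.]

#include <stdio.h>

#define MAX_PAGE_ORDER 10 /* assumption: typical kernel default */

/* User-space stand-ins for the kernel's ilog2() and __ffs(). */
static unsigned int ilog2_u(unsigned long x) { return 63 - __builtin_clzl(x); }
static unsigned int ffs0(unsigned long x)    { return __builtin_ctzl(x); }

int main(void)
{
	unsigned long pfn = 6; /* hypothetical start pfn (binary 110) */
	int nr_pages = 10;     /* hypothetical run length */

	while (nr_pages) {
		/* Largest order that fits in the remaining run ... */
		unsigned int fit_order = ilog2_u(nr_pages);
		/* ... limited by the natural alignment of the start pfn. */
		unsigned int align_order = pfn ? ffs0(pfn) : fit_order;
		unsigned int order = fit_order;

		if (align_order < order)
			order = align_order;
		if (order > MAX_PAGE_ORDER)
			order = MAX_PAGE_ORDER;

		/* Prints: order-1 block at pfn 6, then order-3 block at pfn 8. */
		printf("free order-%u block at pfn %lu\n", order, pfn);
		pfn += 1UL << order;
		nr_pages -= 1 << order;
	}
	return 0;
}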
>
You were first :) Some figures from me:

# Default (3 pages)
fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 541868 usec
fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 542515 usec
fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 541561 usec
fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 542951 usec

# Patch (3 pages)
fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 585266 usec
fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 594301 usec
fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 598912 usec
fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 589345 usec

Now the figures are almost settled and aligned with the default! We do
use the per-CPU page cache for 3-page allocations.

# Default (100 pages)
fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 5724919 usec
fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 5721430 usec
fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 5717224 usec

# Patch (100 pages)
fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 2629600 usec
fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 2622811 usec
fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 2629324 usec

~2x faster! This is because freeing now happens much more efficiently,
so we spend fewer cycles on the free path compared with the default
case. See below; perf also confirms that vfree() consumes ~2x fewer
cycles:

# Default
+ 96.99%  0.49%  [test_vmalloc]  [k] fix_size_alloc_test
+ 59.64%  2.38%  [kernel]        [k] vfree.part.0
+ 45.69% 15.80%  [kernel]        [k] __free_frozen_pages
+ 39.83%  0.00%  [kernel]        [k] ret_from_fork_asm
+ 39.83%  0.00%  [kernel]        [k] ret_from_fork
+ 39.83%  0.00%  [kernel]        [k] kthread
+ 38.67%  0.00%  [test_vmalloc]  [k] test_func
+ 36.64%  0.01%  [kernel]        [k] __vmalloc_node_noprof
+ 36.63%  0.20%  [kernel]        [k] __vmalloc_node_range_noprof
+ 17.55%  4.94%  [kernel]        [k] alloc_pages_bulk_noprof
+ 16.46% 12.21%  [kernel]        [k] free_frozen_page_commit.isra.0
+ 16.06%  8.09%  [kernel]        [k] vmap_small_pages_range_noflush
+ 12.56% 10.82%  [kernel]        [k] __rmqueue_pcplist
+  9.45%  9.43%  [kernel]        [k] __get_pfnblock_flags_mask.isra.0
+  7.95%  7.95%  [kernel]        [k] pfn_valid
+  5.77%  0.03%  [kernel]        [k] remove_vm_area
+  5.44%  5.44%  [kernel]        [k] ___free_pages
+  4.67%  4.59%  [kernel]        [k] __vunmap_range_noflush
+  4.30%  4.30%  [kernel]        [k] __list_add_valid_or_report

# Patch
+ 94.28%  1.00%  [test_vmalloc]  [k] fix_size_alloc_test
+ 55.63%  0.03%  [kernel]        [k] __vmalloc_node_noprof
+ 55.60%  3.78%  [kernel]        [k] __vmalloc_node_range_noprof
+ 37.26% 19.29%  [kernel]        [k] vmap_small_pages_range_noflush
+ 37.12%  5.63%  [kernel]        [k] vfree.part.0
+ 30.59%  0.00%  [kernel]        [k] ret_from_fork_asm
+ 30.59%  0.00%  [kernel]        [k] ret_from_fork
+ 30.59%  0.00%  [kernel]        [k] kthread
+ 28.79%  0.00%  [test_vmalloc]  [k] test_func
+ 17.90% 17.88%  [kernel]        [k] pfn_valid
+ 13.24%  0.02%  [kernel]        [k] remove_vm_area
+ 10.90% 10.68%  [kernel]        [k] __vunmap_range_noflush
+ 10.81% 10.80%  [kernel]        [k] free_pages_bulk
+  7.09%  0.51%  [kernel]        [k] alloc_pages_noprof
+  6.58%  0.41%  [kernel]        [k] alloc_pages_mpol
+  6.50%  0.30%  [kernel]        [k] free_frozen_pages_bulk
+  5.74%  0.97%  [kernel]        [k] __alloc_frozen_pages_noprof
+  5.70%  0.00%  [kernel]        [k] worker_thread
+  5.62%  0.02%  [kernel]        [k] process_one_work
+  5.57%  0.01%  [kernel]        [k] __purge_vmap_area_lazy
+  4.76%  2.55%  [kernel]        [k] get_page_from_freelist

So it is nice :)

--
Uladzislau Rezki
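[As a footnote on the allocation side: Ryan's generalization earlier in
the thread, "first try allocation from the PCP list, highest to lowest
order, then try allocation from the buddy, highest to lowest order",
can be modeled by the rough user-space toy below. The pcp[]/buddy[]
stock counters and every name here are hypothetical stand-ins invented
for illustration; none of this is existing kernel API.]

#include <stdio.h>

#define MAX_ORDER 3

/* Hypothetical per-order stock: how many free blocks each list holds. */
static int pcp[MAX_ORDER + 1]   = { 4, 1, 0, 0 };
static int buddy[MAX_ORDER + 1] = { 8, 4, 2, 1 };

/* Return the order actually allocated, or -1 if nothing is available. */
static int alloc_policy(int want_order)
{
	int order;

	/* Pass 1: the PCP cache only, highest order first. */
	for (order = want_order; order >= 0; order--) {
		if (pcp[order] > 0) {
			pcp[order]--;
			return order;
		}
	}

	/* Pass 2: fall back to the buddy lists, highest order first. */
	for (order = want_order; order >= 0; order--) {
		if (buddy[order] > 0) {
			buddy[order]--;
			return order;
		}
	}

	return -1;
}

int main(void)
{
	/* With the stock above: first call gets PCP order-1, second PCP order-0. */
	printf("got order %d\n", alloc_policy(2));
	printf("got order %d\n", alloc_policy(2));
	return 0;
}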