Date: Wed, 10 Dec 2025 14:28:37 -0800
From: "Vishal Moola (Oracle)"
To: Ryan Roberts
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Uladzislau Rezki, Andrew Morton
Subject: Re: [PATCH] mm/vmalloc: request large order pages from buddy allocator
References: <20251021194455.33351-2-vishal.moola@gmail.com>
	<66919a28-bc81-49c9-b68f-dd7c73395a0d@arm.com>
In-Reply-To: <66919a28-bc81-49c9-b68f-dd7c73395a0d@arm.com>

On Wed, Dec 10, 2025 at 01:21:22PM +0000, Ryan Roberts wrote:
> Hi Vishal,
>
> On 21/10/2025 20:44, Vishal Moola (Oracle) wrote:
> > Sometimes, vm_area_alloc_pages() will want many pages from the buddy
> > allocator. Rather than making requests to the buddy allocator for at
> > most 100 pages at a time, we can eagerly request large order pages a
> > smaller number of times.
> >
> > We still split the large order pages down to order-0 as the rest of the
> > vmalloc code (and some callers) depend on it. We still defer to the bulk
> > allocator and fallback path in case of order-0 pages or failure.
> >
> > Running 1000 iterations of allocations on a small 4GB system finds:
> >
> > 1000 2mb allocations:
> > [Baseline]        [This patch]
> > real 46.310s      real 0m34.582
> > user 0.001s       user 0.006s
> > sys  46.058s      sys  0m34.365s
> >
> > 10000 200kb allocations:
> > [Baseline]        [This patch]
> > real 56.104s      real 0m43.696
> > user 0.001s       user 0.003s
> > sys  55.375s      sys  0m42.995s
>
> I'm seeing some big vmalloc micro benchmark regressions on arm64, for
> which bisect is pointing to this patch. Ulad had similar
> findings/concerns[1].

Tldr: The numbers you are seeing are expected given how the test module
is currently written.

> The tests are all originally from the vmalloc_test module. Note that (R)
> indicates a statistically significant regression and (I) indicates a
> statistically significant improvement.
>
> p is the number of pages in the allocation, h is huge. So it looks like
> the regressions are all coming from the non-huge case, where we want to
> split to order-0.
>
> +---------------------------------+----------------------------------------------------------+------------+------------------------+
> | Benchmark                       | Result Class                                             | 6-18-0     | 6-18-0-gc2f2b01b74be   |
> +=================================+==========================================================+============+========================+
> | micromm/vmalloc                 | fix_align_alloc_test: p:1, h:0, l:500000 (usec)          | 514126.58  | (R) -42.20%            |
> |                                 | fix_size_alloc_test: p:1, h:0, l:500000 (usec)           | 320458.33  | -0.02%                 |
> |                                 | fix_size_alloc_test: p:4, h:0, l:500000 (usec)           | 399680.33  | (R) -23.43%            |
> |                                 | fix_size_alloc_test: p:16, h:0, l:500000 (usec)          | 788723.25  | (R) -23.66%            |
> |                                 | fix_size_alloc_test: p:16, h:1, l:500000 (usec)          | 979839.58  | -1.05%                 |
> |                                 | fix_size_alloc_test: p:64, h:0, l:100000 (usec)          | 481454.58  | (R) -23.99%            |
> |                                 | fix_size_alloc_test: p:64, h:1, l:100000 (usec)          | 615924.00  | (I) 2.56%              |
> |                                 | fix_size_alloc_test: p:256, h:0, l:100000 (usec)         | 1799224.08 | (R) -23.28%            |
> |                                 | fix_size_alloc_test: p:256, h:1, l:100000 (usec)         | 2313859.25 | (I) 3.43%              |
> |                                 | fix_size_alloc_test: p:512, h:0, l:100000 (usec)         | 3541904.75 | (R) -23.86%            |
> |                                 | fix_size_alloc_test: p:512, h:1, l:100000 (usec)         | 3597577.25 | (R) -2.97%             |
> |                                 | full_fit_alloc_test: p:1, h:0, l:500000 (usec)           | 487021.83  | (I) 4.95%              |
> |                                 | kvfree_rcu_1_arg_vmalloc_test: p:1, h:0, l:500000 (usec) | 344466.33  | -0.65%                 |
> |                                 | kvfree_rcu_2_arg_vmalloc_test: p:1, h:0, l:500000 (usec) | 342484.25  | -1.58%                 |
> |                                 | long_busy_list_alloc_test: p:1, h:0, l:500000 (usec)     | 4034901.17 | (R) -25.35%            |
> |                                 | pcpu_alloc_test: p:1, h:0, l:500000 (usec)               | 195973.42  | 0.57%                  |
> |                                 | random_size_align_alloc_test: p:1, h:0, l:500000 (usec)  | 643489.33  | (R) -47.63%            |
> |                                 | random_size_alloc_test: p:1, h:0, l:500000 (usec)        | 2029261.33 | (R) -27.88%            |
> |                                 | vm_map_ram_test: p:1, h:0, l:500000 (usec)               | 83557.08   | -0.22%                 |
> +---------------------------------+----------------------------------------------------------+------------+------------------------+
>
> I have a couple of thoughts from looking at the patch:
>
> - Perhaps split_page() is the bulk of the cost? Previously for this case
>   we were allocating order-0 so there was no split to do. For h=1, split
>   would have already been called so that would explain why no regression
>   for that case?

For h=1, this patch shouldn't change anything (as long as nr_pages <
arch_vmap_{pte,pmd}_supported_shift). This is why you don't see
regressions in those cases.
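To make the mechanics concrete, the order > 0 path boils down to roughly
the following (a sketch of the idea only, with made-up variable names,
not the exact vm_area_alloc_pages() code):

	while (nr_allocated < nr_pages) {
		unsigned int order = min_t(unsigned int,
					   ilog2(nr_pages - nr_allocated),
					   MAX_PAGE_ORDER);
		struct page *page;
		unsigned int i;

		/* Order-0 requests (and any failure below) defer to the
		 * bulk allocator and fallback paths, as before. */
		if (!order)
			break;

		/* No __GFP_COMP: split_page() needs a non-compound page. */
		page = alloc_pages(gfp & ~__GFP_COMP, order);
		if (!page)
			break;

		/* Split down to order-0, which the rest of vmalloc (and
		 * some callers) depend on. */
		split_page(page, order);
		for (i = 0; i < (1U << order); i++)
			pages[nr_allocated + i] = page + i;
		nr_allocated += 1U << order;
	}

For h=1 this flow (including the split) was effectively already happening
before the patch, which is why those rows don't move.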
> - I guess we are bypassing the pcpu cache? Could this be having an
>   effect? Dev (cc'ed) did some similar investigation a while back and
>   saw increased vmalloc latencies when bypassing the pcpu cache.

I'd say this is more a case of the test module targeting the pcpu cache.
The module allocates and then frees one allocation at a time, which
promotes reusing pcpu pages. [1] has some numbers after modifying the
test so that all the allocations are made before any are freed.

> - Philosophically, is allocating physically contiguous memory when it is
>   not strictly needed the right thing to do? Large physically contiguous
>   blocks are a scarce resource so we don't want to waste them. Although
>   I guess it could be argued that this actually preserves the contiguous
>   blocks because the lifetime of all the pages is tied together.

This was the primary incentive for this patch :)

> Anyway, I doubt this is the reason for the slowdown, since those
> benchmarks are not under memory pressure.
>
> Anyway, it would be good to resolve the performance regressions if we
> can.

Imo, the appropriate way to address these regressions is to modify the
test module as seen in [1].
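Roughly, that means going from the first pattern below to the second (a
sketch with made-up names, not the exact test code):

	/* Current pattern: each iteration vfree()s before the next
	 * vmalloc(), so the just-freed pages are immediately reusable
	 * via the pcpu caches. */
	for (i = 0; i < test_loop_count; i++) {
		void *p = vmalloc(nr_pages * PAGE_SIZE);

		if (!p)
			return -1;
		vfree(p);
	}

	/* Pattern from [1]: make all the allocations up front, so each
	 * one has to go to the page allocator, then free everything. */
	for (i = 0; i < test_loop_count; i++)
		ptrs[i] = vmalloc(nr_pages * PAGE_SIZE);
	for (i = 0; i < test_loop_count; i++)
		vfree(ptrs[i]);		/* vfree(NULL) is a no-op */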
[1] https://lore.kernel.org/linux-mm/aPJ6lLf24TfW_1n7@milan/