From: Leonardo Bras
To: Marcelo Tosatti
Cc: Leonardo Bras, Michal Hocko, linux-kernel@vger.kernel.org,
	cgroups@vger.kernel.org, linux-mm@kvack.org, Johannes Weiner,
	Roman Gushchin, Shakeel Butt, Muchun Song, Andrew Morton,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Vlastimil Babka, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
	Leonardo Bras, Thomas Gleixner, Waiman Long, Boqun Feng,
	Frederic Weisbecker
Subject: Re: [PATCH 0/4] Introduce QPW for per-cpu operations
Date: Fri, 20 Feb 2026 19:38:04 -0300
References: <20260206143430.021026873@redhat.com>

On Fri, Feb 20, 2026 at 01:55:57PM -0300, Marcelo Tosatti wrote:
> On Fri, Feb 20, 2026 at 01:51:13PM -0300, Marcelo Tosatti wrote:
> > On Mon, Feb 16, 2026 at 12:00:55PM +0100, Michal Hocko wrote:
> > > On Sat 14-02-26 19:02:19, Leonardo Bras wrote:
> > > > On Wed, Feb 11, 2026 at 05:38:47PM +0100, Michal Hocko wrote:
> > > > > On Wed 11-02-26 09:01:12, Marcelo Tosatti wrote:
> > > > > > On Tue, Feb 10, 2026 at 03:01:10PM +0100, Michal Hocko wrote:
> > > > > [...]
> > > > > > > What about !PREEMPT_RT? We have people running isolated
> > > > > > > workloads, and these sorts of pcp disruptions are really
> > > > > > > unwelcome there as well. They do not have requirements as
> > > > > > > strong as RT workloads, but the underlying fundamental problem
> > > > > > > is the same. Frederic (now CCed) is working on moving those
> > > > > > > pcp bookkeeping activities to be executed on return to
> > > > > > > userspace, which should take care of both RT and non-RT
> > > > > > > configurations AFAICS.
> > > > > >
> > > > > > Michal,
> > > > > >
> > > > > > For !PREEMPT_RT, _if_ you select CONFIG_QPW=y, then there is a
> > > > > > kernel boot option qpw=y/n, which controls whether the behaviour
> > > > > > will be similar (a spinlock is taken on local_lock, like on
> > > > > > PREEMPT_RT).
> > > > >
> > > > > My bad. I've misread the config space of this.
> > > > >
> > > > > > If CONFIG_QPW=n, or kernel boot option qpw=n, then only
> > > > > > local_lock (and remote work via work_queue) is used.
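
Side note, for others following the thread: the remote-access difference
between the two modes boils down to something like the sketch below. This
is just my illustration, not the patchset code; qpw_enabled(), drain_pcp(),
pcp_lock and pcp_drain_work are made-up names:

	/* A housekeeping CPU wants to flush another CPU's per-cpu data. */
	static void drain_remote_pcp(int cpu)
	{
		if (qpw_enabled()) {
			/*
			 * qpw=y: take the target CPU's per-cpu spinlock and
			 * do the work right here, without interrupting it.
			 */
			spin_lock(per_cpu_ptr(pcp_lock.sl, cpu));
			drain_pcp(cpu);
			spin_unlock(per_cpu_ptr(pcp_lock.sl, cpu));
		} else {
			/*
			 * qpw=n: queue the work to run on the target CPU
			 * itself, interrupting whatever it is running.
			 */
			queue_work_on(cpu, system_wq,
				      &per_cpu(pcp_drain_work, cpu));
		}
	}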
> > > > > >
> > > > > > What "pcp bookkeeping activities" do you refer to? I don't see
> > > > > > how moving certain activities that happen under SLUB or LRU
> > > > > > spinlocks to happen before return to userspace changes anything
> > > > > > related to avoiding CPU interruptions.
> > > > >
> > > > > Essentially, delayed operations like pcp state flushing happen on
> > > > > return to userspace on isolated CPUs. No locking changes are
> > > > > required, as the work is still per-cpu.
> > > > >
> > > > > In other words, the approach Frederic is working on is to not
> > > > > change the locking of pcp delayed work, but instead move that work
> > > > > into a well-defined place - i.e. return to userspace.
> > > > >
> > > > > Btw. have you measured the impact of preempt_disable -> spinlock
> > > > > on hot paths like SLUB sheaves?
> > > >
> > > > Hi Michal,
> > > >
> > > > I have done some study on this (which I presented at Plumbers 2023):
> > > > https://lpc.events/event/17/contributions/1484/
> > > >
> > > > Since they are per-cpu spinlocks, and the remote operations are not
> > > > that frequent by design of the current approach, we are not supposed
> > > > to see contention (I was not able to detect contention even after
> > > > stress testing for weeks), nor relevant cacheline bouncing.
> > > >
> > > > That being said, for RT, local_locks already turn into per-cpu
> > > > spinlocks, so the only difference is for !RT, which, as you mention,
> > > > does preempt_disable():
> > > >
> > > > The performance impact noticed was mostly about jumping around in
> > > > executable code: inlining the spinlocks (test #2 in the presentation)
> > > > took care of most of the added extra cycles, leaving about 4-14 extra
> > > > cycles per lock/unlock cycle (tested on memcg with a kmalloc test).
> > > >
> > > > Yeah, as expected there are some extra cycles, since we are doing
> > > > extra atomic operations (even if on a local cacheline) in the !RT
> > > > case, but this could be enabled only if the user thinks this is an
> > > > acceptable cost for reducing interruptions.
> > > >
> > > > What do you think?
> > >
> > > The fact that the behavior is opt-in for !RT is certainly a plus. I
> > > also do not expect the overhead to really be big. To me, a much more
> > > important question is which of the two approaches is easier to
> > > maintain long term. The pcp work needs to be done one way or the
> > > other; whether we want to tweak locking or do the work at a very
> > > well-defined time is the bigger question.
> >
> > Without patchset:
> > ================
> >
> > [ 1188.050725] kmalloc_bench: Avg cycles per kmalloc: 159
> >
> > With qpw patchset, CONFIG_QPW=n:
> > ================================
> >
> > [ 50.292190] kmalloc_bench: Avg cycles per kmalloc: 163

Weird... with CONFIG_QPW=n we should see no difference. Oh, maybe the
changes in the code, such as adding a new cpu parameter to some functions,
may have caused this. (Oh, and there is the migrate_disable() as well.)

> > With qpw patchset, CONFIG_QPW=y, qpw=0:
> > =======================================
> >
> > [ 29.872153] kmalloc_bench: Avg cycles per kmalloc: 170

Humm, what changed here is basically from

+#define qpw_lock(lock, cpu)					\
+	local_lock(lock)

to

+#define qpw_lock(lock, cpu)					\
+	do {							\
+		if (static_branch_maybe(CONFIG_QPW_DEFAULT, &qpw_sl))	\
+			spin_lock(per_cpu_ptr(lock.sl, cpu));	\
+		else						\
+			local_lock(lock.ll);			\
+	} while (0)

So the only change is the cost of a static branch... maybe I did something
wrong with the static_branch_maybe(), as any CPU branch predictor should
make this delta close to zero.
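
One way to check: time just the lock/unlock pair, without the allocator
noise on top. Something like this (untested sketch, reusing the
get_cycles() pattern from your kmalloc_bench module quoted below; it
assumes a matching qpw_unlock() and a dummy qpw lock called bench_lock,
both made-up names here):

	cycles_t start, end;
	int i, cpu;

	preempt_disable();
	cpu = smp_processor_id();
	start = get_cycles();
	for (i = 0; i < iterations; i++) {
		/* This pair is all we want to measure. */
		qpw_lock(bench_lock, cpu);
		qpw_unlock(bench_lock, cpu);
	}
	end = get_cycles();
	preempt_enable();

	pr_info("qpw_bench: Avg cycles per lock+unlock: %llu\n",
		(end - start) / iterations);

Running that across the same config matrix should show the local_lock vs
static-branch vs spinlock deltas directly.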
> > With qpw patchset, CONFIG_QPW=y, qpw=1:
> > =======================================
> >
> > [ 37.494687] kmalloc_bench: Avg cycles per kmalloc: 190

20 cycles as the price of a local_lock -> spinlock conversion seems too
much. Taking into account the previous message, maybe we should work on
making these inlined spinlocks, if they are not already.
(Yeah, I missed that verification :| )

> > With PREEMPT_RT enabled, qpw=0:
> > ===============================
> >
> > [ 65.163251] kmalloc_bench: Avg cycles per kmalloc: 181
> >
> > With PREEMPT_RT enabled, no patchset:
> > =====================================
> >
> > [ 52.701639] kmalloc_bench: Avg cycles per kmalloc: 185

Nice, having the QPW patch saved some cycles :)

> > With PREEMPT_RT enabled, qpw=1:
> > ===============================
> >
> > [ 35.103830] kmalloc_bench: Avg cycles per kmalloc: 196

This is odd, though. On PREEMPT_RT the spinlock is already there, so from
qpw=0 to qpw=1 there should be no performance change. Maybe local_lock
does some optimization in its spinlock?

> #include <linux/module.h>
> #include <linux/kernel.h>
> #include <linux/init.h>
> #include <linux/slab.h>
> #include <linux/vmalloc.h>
> #include <linux/timex.h>
> #include <linux/preempt.h>
>
> MODULE_LICENSE("GPL");
> MODULE_AUTHOR("Gemini AI");
> MODULE_DESCRIPTION("A simple kmalloc performance benchmark");
>
> static int size = 64; // Default allocation size in bytes
> module_param(size, int, 0644);
>
> static int iterations = 1000000; // Default number of iterations
> module_param(iterations, int, 0644);
>
> static int __init kmalloc_bench_init(void)
> {
> 	void **ptrs;
> 	cycles_t start, end;
> 	uint64_t total_cycles;
> 	int i;
>
> 	pr_info("kmalloc_bench: Starting test (size=%d, iterations=%d)\n",
> 		size, iterations);
>
> 	// Allocate an array to store pointers, to avoid immediate
> 	// kfree-reuse optimization
> 	ptrs = vmalloc(sizeof(void *) * iterations);
> 	if (!ptrs) {
> 		pr_err("kmalloc_bench: Failed to allocate pointer array\n");
> 		return -ENOMEM;
> 	}
>
> 	preempt_disable();
> 	start = get_cycles();
>
> 	for (i = 0; i < iterations; i++)
> 		ptrs[i] = kmalloc(size, GFP_ATOMIC);
>
> 	end = get_cycles();
> 	total_cycles = end - start;
> 	preempt_enable();
>
> 	pr_info("kmalloc_bench: Total cycles for %d allocs: %llu\n",
> 		iterations, total_cycles);
> 	pr_info("kmalloc_bench: Avg cycles per kmalloc: %llu\n",
> 		total_cycles / iterations);
>
> 	// Cleanup
> 	for (i = 0; i < iterations; i++)
> 		kfree(ptrs[i]);
> 	vfree(ptrs);
>
> 	return 0;
> }
>
> static void __exit kmalloc_bench_exit(void)
> {
> 	pr_info("kmalloc_bench: Module unloaded\n");
> }
>
> module_init(kmalloc_bench_init);
> module_exit(kmalloc_bench_exit);

Nice! Please collect min and max as well; maybe then we can get some
insight into what could have happened :)
(I put a rough sketch of what I mean below, after my sign-off.)

What was the system you used for testing?

Thanks!
Leo
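
P.S.: An untested sketch of the min/max collection I have in mind, for the
alloc loop above. Note that calling get_cycles() around every allocation
adds some overhead of its own, so the average will shift a bit:

	cycles_t t0, t1, delta;
	cycles_t min_cycles = ~(cycles_t)0, max_cycles = 0;
	uint64_t total_cycles = 0;

	preempt_disable();
	for (i = 0; i < iterations; i++) {
		t0 = get_cycles();
		ptrs[i] = kmalloc(size, GFP_ATOMIC);
		t1 = get_cycles();

		// Track per-allocation cost instead of only the total
		delta = t1 - t0;
		total_cycles += delta;
		if (delta < min_cycles)
			min_cycles = delta;
		if (delta > max_cycles)
			max_cycles = delta;
	}
	preempt_enable();

	pr_info("kmalloc_bench: min/avg/max cycles per kmalloc: %llu/%llu/%llu\n",
		min_cycles, total_cycles / iterations, max_cycles);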