From mboxrd@z Thu Jan 1 00:00:00 1970
From: Leonardo Bras
To: Marcelo Tosatti
Cc: Leonardo Bras, Michal Hocko, linux-kernel@vger.kernel.org,
	cgroups@vger.kernel.org, linux-mm@kvack.org, Johannes Weiner,
	Roman Gushchin, Shakeel Butt, Muchun Song, Andrew Morton,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Vlastimil Babka, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
	Thomas Gleixner, Waiman Long, Boqun Feng, Frederic Weisbecker
Subject: Re: [PATCH 0/4] Introduce QPW for per-cpu operations
Date: Sun, 8 Mar 2026 14:41:12 -0300
References: <20260206143430.021026873@redhat.com>

On Mon, Mar 02, 2026 at 09:19:44PM -0300, Marcelo Tosatti wrote:
> On Fri, Feb 27, 2026 at 10:23:27PM -0300, Leonardo Bras wrote:
> > On Mon, Feb 23, 2026 at 10:06:32AM +0100, Michal Hocko wrote:
> > > On Fri 20-02-26 18:58:14, Leonardo Bras wrote:
> > > > On Mon, Feb 16, 2026 at 12:00:55PM +0100, Michal Hocko wrote:
> > > > > On Sat 14-02-26 19:02:19, Leonardo Bras wrote:
> > > > > > On Wed, Feb 11, 2026 at 05:38:47PM +0100, Michal Hocko wrote:
> > > > > > > On Wed 11-02-26 09:01:12, Marcelo Tosatti wrote:
> > > > > > > > On Tue, Feb 10, 2026 at 03:01:10PM +0100, Michal Hocko wrote:
> > > > > > > [...]
> > > > > > > > > What about !PREEMPT_RT? We have people running isolated workloads and
> > > > > > > > > these sorts of pcp disruptions are really unwelcome as well. They do not
> > > > > > > > > have requirements as strong as RT workloads, but the underlying
> > > > > > > > > fundamental problem is the same. Frederic (now CCed) is working on
> > > > > > > > > moving those pcp book-keeping activities to be executed on the return
> > > > > > > > > to userspace, which should take care of both RT and non-RT
> > > > > > > > > configurations AFAICS.
> > > > > > > >
> > > > > > > > Michal,
> > > > > > > >
> > > > > > > > For !PREEMPT_RT, _if_ you select CONFIG_QPW=y, then there is a kernel
> > > > > > > > boot option qpw=y/n, which controls whether the behaviour will be
> > > > > > > > similar (the local_lock is backed by a spinlock, as on PREEMPT_RT).
> > > > > > >
> > > > > > > My bad. I've misread the config space of this.
> > > > > > >
> > > > > > > > If CONFIG_QPW=n, or kernel boot option qpw=n, then only local_lock
> > > > > > > > (and remote work via work_queue) is used.
> > > > > > > >
> > > > > > > > What "pcp book keeping activities" do you refer to? I don't see how
> > > > > > > > moving certain activities that happen under SLUB or LRU spinlocks
> > > > > > > > to happen before return to userspace changes things related
> > > > > > > > to avoidance of CPU interruption?
> > > > > > >
> > > > > > > Essentially, delayed operations like pcp state flushing happen on return
> > > > > > > to userspace on isolated CPUs. No locking changes are required, as
> > > > > > > the work is still per-cpu.
> > > > > > >
> > > > > > > In other words, the approach Frederic is working on is to not change the
> > > > > > > locking of pcp delayed work but instead move that work into a well defined
> > > > > > > place - i.e. return to userspace.
> > > > > > >
> > > > > > > Btw. have you measured the impact of preempt_disable -> spinlock on hot
> > > > > > > paths like SLUB sheaves?
> > > > > >
> > > > > > Hi Michal,
> > > > > >
> > > > > > I have done some study on this (which I presented at Plumbers 2023):
> > > > > > https://lpc.events/event/17/contributions/1484/
> > > > > >
> > > > > > Since they are per-cpu spinlocks, and the remote operations are not that
> > > > > > frequent by design of the current approach, we are not supposed to see
> > > > > > contention (I was not able to detect contention even after stress testing
> > > > > > for weeks), nor relevant cacheline bouncing.
> > > > > >
> > > > > > That being said, on RT local_locks already become per-cpu spinlocks, so the
> > > > > > only difference is for !RT, which, as you mention, does preempt_disable():
> > > > > >
> > > > > > The performance impact noticed was mostly about jumping around in
> > > > > > executable code, as inlining the spinlocks (test #2 in the presentation)
> > > > > > took care of most of the added extra cycles, leaving about 4-14 extra
> > > > > > cycles per lock/unlock cycle. (tested on memcg with a kmalloc test)
> > > > > >
> > > > > > Yeah, as expected there are some extra cycles, as we are doing extra atomic
> > > > > > operations (even if on a local cacheline) in the !RT case, but this could be
> > > > > > enabled only if the user thinks this is an ok cost for reducing
> > > > > > interruptions.
> > > > > >
> > > > > > What do you think?
> > > > >
> > > > > The fact that the behavior is opt-in for !RT is certainly a plus. I also
> > > > > do not expect the overhead to be really big.
> > > >
> > > > Awesome! Thanks for reviewing!
> > > >
> > > > > To me, a much
> > > > > more important question is which of the two approaches is easier to
> > > > > maintain long term. The pcp work needs to be done one way or the other.
> > > > > Whether we want to tweak locking or do it at a very well defined time is
> > > > > the bigger question.
> > > >
> > > > That crossed my mind as well, and I went with the idea of changing locking
> > > > because I was working on workloads in which deferring work to a kernel
> > > > re-entry would cause deadline misses as well. Or, more critically, the
> > > > drains could take forever, as some of those tasks would avoid returning to
> > > > the kernel as much as possible.
> > >
> > > Could you be more specific please?
> >
> > Hi Michal,
> > Sorry for the delay.
> >
> > I think Marcelo covered some of the main topics earlier in this
> > thread:
> >
> > https://lore.kernel.org/all/aZ3ejedS7nE5mnva@tpad/
> >
> > But in short:
> > - There are workloads that are designed to avoid returning to kernelspace
> > as much as possible, as they are either cpu-intensive or latency-sensitive
> > (RT workloads), such as low-latency automation.
> >
> > There are scenarios such as industrial automation in which
> > the applications are supposed to reply to a request less than 50us after it
> > was generated (IIRC), so getting scheduled out, dealing with interruptions,
> > or making syscalls are a no-go. In those cases, using cpu isolation is a
> > must, and since the task can run in userspace for a really long time, it
> > may take a very long time until any syscall actually performs the
> > scheduled flush.
> >
> > - Other workloads, such as HPC, may need to use syscalls or rely on
> > interrupts, but it's also not interesting for those to take long, as the
> > time spent there is time not used for processing the required data.
> >
> > Let's say that, for the sake of cpu isolation, a lot of different
> > requests made to a given isolated cpu are batched to be run on syscall
> > entry/exit. It means the next syscall may take much longer than
> > usual.
> > - This may break other RT workloads such as sensor/sound/image sampling,
> > which could be generally ok with some of the faster syscalls for their
> > application, and may now perceive an error because one of those syscalls
> > took too long.
> >
> > While the qpw approach may cost a few extra cycles, it operates remotely
> > and makes the system a bit more predictable.
> >
> > Also, when I was planning the mechanism, I remember it was meant to add
> > zero overhead in case of CONFIG_QPW=n, very little overhead in case of
> > CONFIG_QPW=y + qpw=0 (a couple of static branches, possibly with the
> > cost removed by the cpu branch predictor), and only a few cycles in
> > case of qpw=1 + !RT. Which means we may be missing just a few adjustments
> > to get there.
>
> Leo,
>
> v2 of the patchset adds only 2 cycles to CONFIG_QPW=y + qpw=0.
> The larger overhead was due to migrate_disable, which is now (on v2)
> hidden inside the static branch.
> My bad.

Hi Marcelo,

Great, hiding migrate_disable under the static branch is the best scenario.

I wonder why we spend 2 cycles on the static branches, though; it should be
close to nothing unless the branch predictor is already too busy.
Well, we can always try to optimize in a different way.
(For reference, I sketched the locking pattern I have in mind at the end of
this mail.)

Thanks for the effort on this!
Leo

> > BTW, if the numbers are not that great for your workloads, we could take a
> > look at adding an extra QPW mode in which local_locks are taken in
> > the fastpath and the flush wq is allowed to be postponed to that point in
> > syscall return that you mentioned. What I mean is that we don't need to be
> > limited to choosing between solutions, but can instead allow the user (or
> > distro) to choose the desired behavior.
> >
> > Thanks!
> > Leo
>
> I think 2 cycles is acceptable.
>
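
P.S.: For anyone following the thread without the patches at hand, below is
roughly the locking pattern being discussed. It is only an illustrative
sketch, not the actual v2 interface: the names (qpw_enabled, struct
qpw_lock, qpw_lock()/qpw_unlock()) are made up for the example.

#include <linux/jump_label.h>
#include <linux/local_lock.h>
#include <linux/spinlock.h>

/* Illustrative sketch only -- not the real patchset API. */
static DEFINE_STATIC_KEY_FALSE(qpw_enabled);	/* set by the qpw=y boot param */

struct qpw_lock {
	local_lock_t	llock;	/* used when qpw is off */
	spinlock_t	slock;	/* used when qpw is on */
};

static inline void qpw_lock(struct qpw_lock __percpu *lock, int cpu)
{
	if (static_branch_unlikely(&qpw_enabled)) {
		/*
		 * qpw=1: a real per-cpu spinlock, so a housekeeping CPU
		 * can take a remote CPU's lock and do that CPU's work
		 * in place.
		 */
		spin_lock(per_cpu_ptr(&lock->slock, cpu));
	} else {
		/*
		 * qpw=0: plain local_lock -- preempt_disable() on !RT,
		 * a per-cpu spinlock on PREEMPT_RT. Here cpu must be
		 * the local CPU.
		 */
		local_lock(&lock->llock);
	}
}

static inline void qpw_unlock(struct qpw_lock __percpu *lock, int cpu)
{
	if (static_branch_unlikely(&qpw_enabled))
		spin_unlock(per_cpu_ptr(&lock->slock, cpu));
	else
		local_unlock(&lock->llock);
}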
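
And the consumer side, e.g. a remote pcp drain, again with made-up names
(pcp_lock, drain_work, __drain_cpu) just to show why qpw=1 never has to
interrupt the isolated CPU:

static DEFINE_PER_CPU(struct qpw_lock, pcp_lock);
static DEFINE_PER_CPU(struct work_struct, drain_work);	/* runs __drain_cpu() */

static void __drain_cpu(int cpu);	/* the actual per-cpu drain, elided */

static void drain_remote(int cpu)
{
	if (static_branch_unlikely(&qpw_enabled)) {
		/*
		 * qpw=1: do the drain right here, on the housekeeping
		 * CPU, under the remote CPU's spinlock. The isolated
		 * CPU never leaves userspace.
		 */
		qpw_lock(&pcp_lock, cpu);
		__drain_cpu(cpu);
		qpw_unlock(&pcp_lock, cpu);
	} else {
		/*
		 * qpw=0: queue the work on the remote CPU itself,
		 * interrupting whatever is running there.
		 */
		schedule_work_on(cpu, per_cpu_ptr(&drain_work, cpu));
	}
}

Please correct me if I misremember the v2 details here.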