From: Leonardo Bras
To: Michal Hocko
Cc: Leonardo Bras, Marcelo Tosatti, linux-kernel@vger.kernel.org,
	cgroups@vger.kernel.org, linux-mm@kvack.org, Johannes Weiner,
	Roman Gushchin, Shakeel Butt, Muchun Song, Andrew Morton,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Vlastimil Babka, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
	Thomas Gleixner, Waiman Long, Boqun Feng, Frederic Weisbecker
Subject: Re: [PATCH 0/4] Introduce QPW for per-cpu operations
Date: Fri, 27 Feb 2026 22:23:27 -0300
References: <20260206143430.021026873@redhat.com>

On Mon, Feb 23, 2026 at 10:06:32AM +0100, Michal Hocko wrote:
> On Fri 20-02-26 18:58:14, Leonardo Bras wrote:
> > On Mon, Feb 16, 2026 at 12:00:55PM +0100, Michal Hocko wrote:
> > > On Sat 14-02-26 19:02:19, Leonardo Bras wrote:
> > > > On Wed, Feb 11, 2026 at 05:38:47PM +0100, Michal Hocko wrote:
> > > > > On Wed 11-02-26 09:01:12, Marcelo Tosatti wrote:
> > > > > > On Tue, Feb 10, 2026 at 03:01:10PM +0100, Michal Hocko wrote:
> > > > > [...]
> > > > > > > What about !PREEMPT_RT?
> > > > > > > We have people running isolated workloads and
> > > > > > > these sorts of pcp disruptions are really unwelcome as well. They do not
> > > > > > > have requirements as strong as RT workloads but the underlying
> > > > > > > fundamental problem is the same. Frederic (now CCed) is working on
> > > > > > > moving those pcp book keeping activities to be executed to the return to
> > > > > > > the userspace which should be taking care of both RT and non-RT
> > > > > > > configurations AFAICS.
> > > > > >
> > > > > > Michal,
> > > > > >
> > > > > > For !PREEMPT_RT, _if_ you select CONFIG_QPW=y, then there is a kernel
> > > > > > boot option qpw=y/n, which controls whether the behaviour will be
> > > > > > similar (the spinlock is taken on local_lock, similar to PREEMPT_RT).
> > > > >
> > > > > My bad. I've misread the config space of this.
> > > > >
> > > > > > If CONFIG_QPW=n, or kernel boot option qpw=n, then only local_lock
> > > > > > (and remote work via work_queue) is used.
> > > > > >
> > > > > > What "pcp book keeping activities" you refer to ? I don't see how
> > > > > > moving certain activities that happen under SLUB or LRU spinlocks
> > > > > > to happen before return to userspace changes things related
> > > > > > to avoidance of CPU interruption ?
> > > > >
> > > > > Essentially delayed operations like pcp state flushing happens on return
> > > > > to the userspace on isolated CPUs. No locking changes are required as
> > > > > the work is still per-cpu.
> > > > >
> > > > > In other words the approach Frederic is working on is to not change the
> > > > > locking of pcp delayed work but instead move that work into well defined
> > > > > place - i.e. return to the userspace.
> > > > >
> > > > > Btw. have you measure the impact of preempt_disbale -> spinlock on hot
> > > > > paths like SLUB sheeves?
> > > >
> > > > Hi Michal,
> > > >
> > > > I have done some study on this (which I presented on Plumbers 2023):
> > > > https://lpc.events/event/17/contributions/1484/
> > > >
> > > > Since they are per-cpu spinlocks, and the remote operations are not that
> > > > frequent, as per design of the current approach, we are not supposed to see
> > > > contention (I was not able to detect contention even after stress testing
> > > > for weeks), nor relevant cacheline bouncing.
> > > >
> > > > That being said, for RT local_locks already get per-cpu spinlocks, so there
> > > > is only difference for !RT, which as you mention, does preemtp_disable():
> > > >
> > > > The performance impact noticed was mostly about jumping around in
> > > > executable code, as inlining spinlocks (test #2 on presentation) took care
> > > > of most of the added extra cycles, adding about 4-14 extra cycles per
> > > > lock/unlock cycle. (tested on memcg with kmalloc test)
> > > >
> > > > Yeah, as expected there is some extra cycles, as we are doing extra atomic
> > > > operations (even if in a local cacheline) in !RT case, but this could be
> > > > enabled only if the user thinks this is an ok cost for reducing
> > > > interruptions.
> > > >
> > > > What do you think?
> > >
> > > The fact that the behavior is opt-in for !RT is certainly a plus. I also
> > > do not expect the overhead to be really be really big.
> >
> > Awesome! Thanks for reviewing!
> >
> > > To me, a much
> > > more important question is which of the two approaches is easier to
> > > maintain long term. The pcp work needs to be done one way or the other.
> > > Whether we want to tweak locking or do it at a very well defined time is
> > > the bigger question.
> >
> > That crossed my mind as well, and I went with the idea of changing locking
> > because I was working on workloads in which deferring work to a kernel
> > re-entry would cause deadline misses as well.
> > Or more critically, the
> > drains could take forever, as some of those tasks would avoid returning to
> > kernel as much as possible.
>
> Could you be more specific please?

Hi Michal,

Sorry for the delay.

I think Marcelo covered some of the main topics earlier in this thread:
https://lore.kernel.org/all/aZ3ejedS7nE5mnva@tpad/

But in short:

- There are workloads that are designed to avoid returning to kernelspace
  as much as possible, as they are either cpu intensive or latency sensitive
  (RT workloads), such as low-latency automation.
  There are scenarios such as industrial automation in which the applications
  are supposed to reply to a request less than 50us after it was generated
  (IIRC), so getting scheduled out, dealing with interruptions, or issuing
  syscalls are a no-go.
  In those cases, using cpu isolation is a must, and since the task can stay
  running in userspace for a really long time, it may take a very long time
  before any syscall happens that could actually perform the scheduled flush.

- Other workloads, such as HPC, may need to use syscalls or rely on
  interrupts, but it's also undesirable for those to take long, as the time
  spent there is time not used for processing the required data.
  Let's say that, for the sake of cpu isolation, a lot of different requests
  made to a given isolated cpu are batched to be run on syscall entry/exit.
  It means the next syscall may take much longer than usual.

- This may break other RT workloads such as sensor/sound/image sampling,
  which could generally be ok with some of the faster syscalls for their
  application, and now may perceive an error because one of those syscalls
  took too long.

While the qpw approach may cost a few extra cycles, it operates remotely and
makes the system a bit more predictable.
Also, when I was planning the mechanism, I remember it was meant to add zero
overhead in case of CONFIG_QPW=n, very little overhead in case of
CONFIG_QPW=y + qpw=0 (a couple of static branches, possibly with the cost
removed by the cpu branch predictor), and only a few extra cycles in case of
qpw=1 + !RT. Which means we may be missing just a few adjustments to get
there.

BTW, if the numbers are not that great for your workloads, we could take a
look at adding an extra QPW mode in which local_locks are taken in the
fastpath, allowing the flush workqueue to be postponed to that point in
syscall return that you mentioned.

What I mean is that we don't need to be limited to choosing between
solutions, but can instead allow the user (or distro) to choose the desired
behavior.

Thanks!
Leo