Date: Tue, 3 Mar 2026 12:15:53 +0100
From: Frederic Weisbecker
To: Marcelo Tosatti
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Johannes Weiner,
	Michal Hocko, Roman Gushchin, Shakeel Butt, Muchun Song,
	Andrew Morton, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Vlastimil Babka, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
	Leonardo Bras, Thomas Gleixner, Waiman Long, Boqun Feng
Subject: Re: [PATCH v2 0/5] Introduce QPW for per-cpu operations (v2)
References: <20260302154945.143996316@redhat.com>
In-Reply-To: <20260302154945.143996316@redhat.com>

On Mon, Mar 02, 2026 at 12:49:45PM -0300, Marcelo Tosatti wrote:
> The problem:
> Some places in the kernel implement a parallel programming strategy
> consisting of local_locks() for most of the work, while the few rare
> remote operations are scheduled on the target cpu. This keeps cache
> bouncing low, since the cacheline tends to stay local, and avoids the
> cost of locks on non-RT kernels, even though the rare remote operations
> are expensive due to scheduling overhead.
>
> On the other hand, for RT workloads this can be a problem: having an
> important workload scheduled out to deal with remote requests is sure
> to introduce unexpected deadline misses.
>
> The idea:
> Currently, with PREEMPT_RT=y, local_locks() become per-cpu spinlocks.
> In this case, instead of scheduling work on a remote cpu, it should be
> safe to grab that remote cpu's per-cpu spinlock and run the required
> work locally. The major cost, un/locking in every local function,
> already happens on PREEMPT_RT.
>
> Also, there is no need to worry about extra cache bouncing:
> the cacheline invalidation already happens due to schedule_work_on().
>
> This avoids schedule_work_on(), and thus avoids scheduling out an
> RT workload.
>
> Proposed solution:
> A new interface called Queue PerCPU Work (QPW), which should replace
> workqueues in the above-mentioned use case.
>
> If CONFIG_QPW=n, this interface just wraps the current
> local_locks + workqueue behavior, so no change in runtime is expected.
>
> If CONFIG_QPW=y, and the qpw kernel boot option is set to 1,
> queue_percpu_work_on(cpu, ...) will lock that cpu's per-cpu structure
> and perform the work on it locally. This is possible because, in
> functions that can be used to perform work on remote per-cpu
> structures, the local_lock (which is already a this-cpu spinlock)
> is replaced by a qpw_spinlock(), which is able to take the per-cpu
> spinlock of the cpu passed as a parameter.

OK, I'm slowly coming to consider this a more comfortable solution than
the flush before returning to userspace. Even though it is perhaps a bit
more complicated, remote handling of housekeeping work is more
surprise-free against all the possible nohz_full use cases that we are
having a hard time envisioning.

Reviewing this in more detail now.

Thanks.

-- 
Frederic Weisbecker
SUSE Labs
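[Aside for readers outside the thread: the CONFIG_QPW=y path described in
the cover letter can be sketched in userspace C, with a pthread mutex
standing in for each per-cpu spinlock. All names here (qpw_data,
qpw_demo, drain) are illustrative stand-ins, not the proposed kernel API;
the point is only the shape of the idea: take the *target* cpu's lock and
run the work on the caller's cpu, instead of scheduling work remotely.]

```c
/* Userspace sketch of the QPW idea: "remote" per-cpu state is updated
 * by taking that cpu's lock directly rather than scheduling work there.
 * pthread mutexes stand in for the PREEMPT_RT per-cpu spinlocks. */
#include <pthread.h>

#define NR_CPUS 4

struct qpw_data {
	pthread_mutex_t lock;	/* stands in for the per-cpu spinlock  */
	int pending;		/* stands in for per-cpu work to drain */
};

static struct qpw_data pcpu[NR_CPUS];

/* CONFIG_QPW=y path: lock the target cpu's structure and run the work
 * locally, instead of schedule_work_on(cpu, ...). */
static void queue_percpu_work_on(int cpu, void (*fn)(struct qpw_data *))
{
	pthread_mutex_lock(&pcpu[cpu].lock);	/* "qpw_spinlock(cpu)" */
	fn(&pcpu[cpu]);		/* work runs on the caller's cpu */
	pthread_mutex_unlock(&pcpu[cpu].lock);
}

static void drain(struct qpw_data *d)
{
	d->pending = 0;
}

/* Returns the amount of pending work left after draining every cpu;
 * 0 means all "remote" work was handled without scheduling anything. */
int qpw_demo(void)
{
	int cpu, left = 0;

	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		pthread_mutex_init(&pcpu[cpu].lock, NULL);
		pcpu[cpu].pending = cpu + 1;
	}
	for (cpu = 0; cpu < NR_CPUS; cpu++)
		queue_percpu_work_on(cpu, drain);
	for (cpu = 0; cpu < NR_CPUS; cpu++)
		left += pcpu[cpu].pending;
	return left;
}
```

In the real proposal the lock taken here is the same one every local fast
path already takes on PREEMPT_RT, which is why the remote access adds no
new locking cost to the common case.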