From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 10 Mar 2026 22:34:22 +0100
From: Frederic Weisbecker
To: Marcelo Tosatti
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Johannes Weiner,
 Michal Hocko, Roman Gushchin, Shakeel Butt, Muchun Song, Andrew Morton,
 Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
 Vlastimil Babka, Hyeonggon Yoo <42.hyeyoo@gmail.com>, Leonardo Bras,
 Thomas Gleixner, Waiman Long, Boqun Feng
Subject: Re: [PATCH v2 0/5] Introduce QPW for per-cpu operations (v2)
References: <20260302154945.143996316@redhat.com>

On Thu, Mar 05, 2026 at 10:47:00PM -0300, Marcelo Tosatti wrote:
> On Thu, Mar 05, 2026 at 05:55:12PM +0100, Frederic Weisbecker wrote:
> > So let me summarize the possible design solutions, on top of our
> > discussions, so we can compare:
>
> I find this summary difficult to comprehend. The way I see it is:
>
> A certain class of data structures can be manipulated only by each
> individual CPU (the per-CPU caches), since they lack proper locks for
> such data to be manipulated by remote CPUs.
>
> There are certain operations which require such data to be manipulated,
> therefore work is queued to execute on the owner CPUs.

Right.

> > 1) Never queue remotely but always queue locally and execute on userspace
>
> When you say "queue locally", do you mean to queue the data structure
> manipulation to happen on return to userspace of the owner CPU?

Yes.

> What if it does not return to userspace? (Or takes a long time to
> return to userspace?)
Indeed, it's a bet that syscalls eventually return "soon enough" for
correctness to be maintained and that the CPU is not stuck on some
kthread. But on isolation workloads, those assumptions are usually true.

> > return via task work.
> >
> > Pros:
> > - Simple and easy to maintain.
> >
> > Cons:
> > - Needs case-by-case handling.
> >
> > - Might be suitable for full userspace applications but not for
> >   some HPC usecases. In the best world MPI is fully implemented in
> >   userspace, but that doesn't appear to be the case.
> >
> > 2) Queue the workqueue work locally right away, or do it remotely (if
> >    really necessary) when the isolated CPU is in userspace; otherwise
> >    queue it for execution on return to kernel. The work will be handled
> >    by preemption to a worker or by a workqueue flush on return to
> >    userspace.
> >
> > Pros:
> > - The local queue handling is simple.
> >
> > Cons:
> > - The remote queue must synchronize with return to userspace and
> >   eventually postpone to return to kernel if the target is in
> >   userspace. Also, it may need to differentiate IRQs and syscalls.
> >
> > - Therefore it still involves some case-by-case handling eventually.
> >
> > - Flushing the global workqueues to avoid deadlocks is unadvised, as
> >   shown in the comment above flush_scheduled_work(). It even triggers
> >   a warning. Significant efforts have been put into converting all the
> >   existing users. It's not impossible to sell in our case because we
> >   shouldn't hold a lock upon return to userspace. But that would
> >   restore a dangerous API.
> >
> > - Queueing/flushing the workqueue involves a context switch, which
> >   induces more noise (eg: tick restart).
> >
> > - As above, probably not suitable for HPC.
> >
> > 3) QPW: Handle the work remotely
> >
> > Pros:
> > - Works in all cases, without any surprise.
> >
> > Cons:
> > - Introduces a new locking scheme to maintain and debug.
> >
> > - Needs case-by-case handling.
> >
> > Thoughts?
> >
> > --
> > Frederic Weisbecker
> > SUSE Labs
>
> It's hard for me to parse your concise summary (perhaps it could be
> more verbose).
>
> Anyway, one thought is to use some sort of SRCU-type protection on the
> per-CPU caches.
> But that adds cost as well (compared to non-SRCU), which then seems to
> have cost similar to adding per-CPU spinlocks.

Well, there is SRCU-fast now. Though do we care enough about optimizing
housekeeping performance on isolated workloads to complicate things with
a weaker and trickier synchronization mechanism? Probably not. If we
choose to pick up your solution, I'm fine with spinlocks.

Thanks.

--
Frederic Weisbecker
SUSE Labs