From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 23 Feb 2026 10:18:40 +0100
From: Michal Hocko
To: Marcelo Tosatti
Cc: Leonardo Bras, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
	linux-mm@kvack.org, Johannes Weiner, Roman Gushchin, Shakeel Butt,
	Muchun Song, Andrew Morton, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Vlastimil Babka,
	Hyeonggon Yoo <42.hyeyoo@gmail.com>, Thomas Gleixner, Waiman Long,
	Boqun Feng, Frederic Weisbecker
Subject: Re: [PATCH 0/4] Introduce QPW for per-cpu operations
References: <20260206143430.021026873@redhat.com>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On Fri 20-02-26 11:30:16, Marcelo Tosatti wrote:
> On Thu, Feb 19, 2026 at 08:30:31PM +0100, Michal Hocko wrote:
> > On Thu 19-02-26 12:27:23, Marcelo Tosatti wrote:
[...]
> > and delayed pcp work that might disturb such a workload
> > after it has returned to userspace. Right?
> > That is usually housekeeping work that, for performance reasons,
> > doesn't happen in hot paths while the workload is executing in
> > kernel space.
> >
> > There are more ways to deal with that. You can either change the hot
> > path to not require a deferred operation (tricky without introducing
> > regressions for most workloads) or you can define a more suitable
> > place to perform the housekeeping while still running in the kernel.
> >
> > Your QPW work relies on the local_lock -> spin_lock transition and
> > on performing the pcp work remotely, so that you do not need to
> > disturb the remote cpu. Correct?
> >
> > An alternative approach is to define a moment when the housekeeping
> > operation is performed on the local cpu while still running in
> > kernel space - e.g. when returning to userspace. Delayed work is
> > then not necessary and userspace is not disrupted after the return.
> >
> > Do I make more sense or does the above sound like complete gibberish?
>
> OK, sure, but I can't see how you can do that with per-CPU caches for
> kmalloc, for example.

As we have discussed in the other subthread: by flushing those pcp
caches on the return to userspace. Those flushes are not needed
immediately; they just need to happen at some point to allow the
operations listed by Vlastimil to finish. Or we avoid the problem by not
using those caches at all, but that is a separate discussion.

I believe we can establish that any pcp delayed operation implemented
through WQs can be flushed on the way to userspace, right? The
performance might be suboptimal, but correctness will be preserved. So
doing this on isolated CPUs could be an alternative to making changes to
the pcp WQ handling.

I haven't checked the WQ code deeply, but I believe it should be
feasible to flush all pcp WQs with pending work on the isolated cpu when
the isolated workload returns to userspace. This way we wouldn't need to
special case each and every one of them.

-- 
Michal Hocko
SUSE Labs