Date: Thu, 7 May 2026 11:27:52 -1000
From: Tejun Heo
To: Marco Crivellari
Cc: Breno Leitao, linux-kernel@vger.kernel.org, Lai Jiangshan,
	Frederic Weisbecker, Sebastian Andrzej Siewior, Michal Hocko
Subject: Re: [RFC PATCH 0/2] Add queue_*() functions and prefer per-cpu workqueue and flag
References: <20260505161658.401998-1-marco.crivellari@suse.com>

Hello,

On Thu, May 07, 2026 at 12:25:30PM +0200, Marco Crivellari wrote:
> So, either we're going to have an "unbound" version or we use
> queue_work() directly; that sounds good to me. I guess retiring
> schedule_work[_on]() in the future would be cleaner, so that users
> must also specify the workqueue they really need to use.

Yeah, retiring would be my preference if we need to update them anyway.
I don't think the thin wrappers add anything useful.

> What do you both think about:
>
> - queue_percpu_work()
> - queue_dfl_work()

But if we were to keep the wrappers, yeah, these are better names.

> Let me share where this was discussed a year ago:
>
> https://lore.kernel.org/all/Z79E_gbWm9j9bkfR@slm.duckdns.org/
>
> Perhaps - likely - I haven't understood the WQ_PREFER_PERCPU proposal
> here; I thought it was a workqueue flag, to be used like WQ_PERCPU or
> WQ_UNBOUND. Reading Tejun's reply, it is also clearer now.

Yeah, that was what was discussed then.

> Anyhow, this idea is based on customer reports I've seen previously.
> We noticed that with certain workloads, specific per-cpu work creates
> noise on isolated CPUs. With a flag like that we can identify which
> workqueues prefer to be per-cpu and *not* for correctness.
> This allows using a boot parameter / sysctl, for example, to keep
> those workqueues affined only to housekeeping CPUs.
>
> Of course, if we can achieve the same with a system workqueue (like
> system_prefer_percpu_wq), that would also be fine. I think it would be
> way easier; it should be similar to what we're doing with
> system_power_efficient_wq [1].

WQ_AFFN_CPU is more flexible as the tasks aren't pinned to the CPU but
there may be downsides:

- Concurrency management isn't available.
- Would create more kworkers.

Maybe the original plan can be adapted to:

- Add WQ_PREFER_PERCPU as discussed before.
- At boot time, allow selecting whether to back them with percpu wqs or
  WQ_AFFN_X unbound ones.

Maybe we can even experiment with defaulting to WQ_AFFN_CPU.

Thanks.

-- 
tejun