From: Marco Crivellari
To: linux-kernel@vger.kernel.org
Cc: Tejun Heo, Lai Jiangshan, Frederic Weisbecker, Sebastian Andrzej Siewior, Marco Crivellari, Michal Hocko
Subject: [RFC PATCH 1/2] workqueue: Add queue_*() functions, future schedule_*() replacement
Date: Tue, 5 May 2026 18:16:57 +0200
Message-ID: <20260505161658.401998-2-marco.crivellari@suse.com>
X-Mailer: git-send-email 2.53.0
In-Reply-To: <20260505161658.401998-1-marco.crivellari@suse.com>
References: <20260505161658.401998-1-marco.crivellari@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This is part of the workqueue refactoring. More details can be found
at the Link below.

The current schedule_*() interface used to schedule work items on a
workqueue does not distinguish between bound and unbound workqueues:
only system_percpu_wq is used. So introduce bound and unbound versions
of these functions. To better reflect what the functions do, rename
them into a cleaner and unified interface, dropping the "schedule_*()"
prefix in favor of "queue_*()".
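
For reviewers, a sketch of the intended conversion at a call site. This
is only an illustration of the new interface introduced below; the
work item name is hypothetical and not part of this patch:

	/* Before: implicit queueing on system_percpu_wq */
	schedule_work(&my_work);

	/* After: the placement intent is explicit. Use the bound
	 * version only when per-CPU locality is actually required;
	 * otherwise prefer the unbound version, which lets the
	 * scheduler place the worker freely (system_dfl_wq).
	 */
	queue_bound_work(&my_work);	/* per-CPU, system_percpu_wq */
	queue_unbound_work(&my_work);	/* unbound, system_dfl_wq */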
This change introduces:
- queue_{bound|unbound}_work(), with the bound version being the future
  replacement of schedule_work()
- queue_bound_work_on() as the future replacement of schedule_work_on()
- queue_delayed_bound_work() as the future replacement of
  schedule_delayed_work()
- queue_delayed_unbound_work() to offer the unbound delayed version
- queue_delayed_bound_work_on() as the future replacement of
  schedule_delayed_work_on()

A further step will be the conversion of all users to the newly
introduced interfaces and, where locality is not strictly required,
the migration to the unbound versions. In a future release cycle, once
users are migrated, the schedule_*() interface will be removed.

Link: https://lore.kernel.org/all/20250221112003.1dSuoGyc@linutronix.de/
Suggested-by: Tejun Heo
Signed-off-by: Marco Crivellari
---
 include/linux/workqueue.h | 101 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 101 insertions(+)

diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
index ab6cb70ca1a5..f46379d937c9 100644
--- a/include/linux/workqueue.h
+++ b/include/linux/workqueue.h
@@ -732,12 +732,26 @@ static inline bool mod_delayed_work(struct workqueue_struct *wq,
  * @work: job to be done
  *
  * This puts a job on a specific cpu
+ *
+ * Note: this function will be replaced by queue_bound_work_on()
  */
 static inline bool schedule_work_on(int cpu, struct work_struct *work)
 {
 	return queue_work_on(cpu, system_percpu_wq, work);
 }
 
+/**
+ * queue_bound_work_on - put work task on a specific cpu
+ * @cpu: cpu to put the work task on
+ * @work: job to be done
+ *
+ * This puts a job on a specific cpu
+ */
+static inline bool queue_bound_work_on(int cpu, struct work_struct *work)
+{
+	return queue_work_on(cpu, system_percpu_wq, work);
+}
+
 /**
  * schedule_work - put work task in per-CPU workqueue
  * @work: job to be done
@@ -751,12 +765,53 @@ static inline bool schedule_work_on(int cpu, struct work_struct *work)
  *
  * Shares the same memory-ordering properties of queue_work(), cf. the
  * DocBook header of queue_work().
+ *
+ * Note: this function will be removed in the future, use
+ * queue_{bound|unbound}_work() instead.
  */
 static inline bool schedule_work(struct work_struct *work)
 {
 	return queue_work(system_percpu_wq, work);
 }
 
+/**
+ * queue_bound_work - put work task in per-CPU workqueue
+ * @work: job to be done
+ *
+ * Returns %false if @work was already on the system per-CPU workqueue and
+ * %true otherwise.
+ *
+ * This puts a job in the system per-CPU workqueue if it was not already
+ * queued and leaves it in the same position on the system per-CPU
+ * workqueue otherwise.
+ *
+ * Shares the same memory-ordering properties of queue_work(), cf. the
+ * DocBook header of queue_work().
+ */
+static inline bool queue_bound_work(struct work_struct *work)
+{
+	return queue_work(system_percpu_wq, work);
+}
+
+/**
+ * queue_unbound_work - put work task in unbound workqueue
+ * @work: job to be done
+ *
+ * Returns %false if @work was already on the system unbound workqueue and
+ * %true otherwise.
+ *
+ * This puts a job in the system unbound workqueue if it was not already
+ * queued and leaves it in the same position on the system unbound
+ * workqueue otherwise.
+ *
+ * Shares the same memory-ordering properties of queue_work(), cf. the
+ * DocBook header of queue_work().
+ */
+static inline bool queue_unbound_work(struct work_struct *work)
+{
+	return queue_work(system_dfl_wq, work);
+}
+
 /**
  * enable_and_queue_work - Enable and queue a work item on a specific workqueue
  * @wq: The target workqueue
@@ -832,6 +887,9 @@ extern void __warn_flushing_systemwide_wq(void)
  *
  * After waiting for a given time this puts a job in the system per-CPU
  * workqueue on the specified CPU.
+ *
+ * Note: this function will be removed. Please use
+ * queue_delayed_bound_work_on() instead.
  */
 static inline bool schedule_delayed_work_on(int cpu, struct delayed_work *dwork,
 					    unsigned long delay)
@@ -839,6 +897,21 @@ static inline bool schedule_delayed_work_on(int cpu, struct delayed_work *dwork,
 	return queue_delayed_work_on(cpu, system_percpu_wq, dwork, delay);
 }
 
+/**
+ * queue_delayed_bound_work_on - queue work in per-CPU workqueue on CPU after delay
+ * @cpu: cpu to use
+ * @dwork: job to be done
+ * @delay: number of jiffies to wait
+ *
+ * After waiting for a given time this puts a job in the system per-CPU
+ * workqueue on the specified CPU.
+ */
+static inline bool queue_delayed_bound_work_on(int cpu, struct delayed_work *dwork,
+					       unsigned long delay)
+{
+	return queue_delayed_work_on(cpu, system_percpu_wq, dwork, delay);
+}
+
 /**
  * schedule_delayed_work - put work task in per-CPU workqueue after delay
  * @dwork: job to be done
@@ -853,6 +926,34 @@ static inline bool schedule_delayed_work(struct delayed_work *dwork,
 	return queue_delayed_work(system_percpu_wq, dwork, delay);
 }
 
+/**
+ * queue_delayed_bound_work - put work task in per-CPU workqueue after delay
+ * @dwork: job to be done
+ * @delay: number of jiffies to wait or 0 for immediate execution
+ *
+ * After waiting for a given time this puts a job in the system per-CPU
+ * workqueue.
+ */
+static inline bool queue_delayed_bound_work(struct delayed_work *dwork,
+					    unsigned long delay)
+{
+	return queue_delayed_work(system_percpu_wq, dwork, delay);
+}
+
+/**
+ * queue_delayed_unbound_work - put work task in unbound workqueue after delay
+ * @dwork: job to be done
+ * @delay: number of jiffies to wait or 0 for immediate execution
+ *
+ * After waiting for a given time this puts a job in the system unbound
+ * workqueue.
+ */
+static inline bool queue_delayed_unbound_work(struct delayed_work *dwork,
+					      unsigned long delay)
+{
+	return queue_delayed_work(system_dfl_wq, dwork, delay);
+}
+
 #ifndef CONFIG_SMP
 static inline long work_on_cpu(int cpu, long (*fn)(void *), void *arg)
 {
-- 
2.53.0