* [PATCH v1 00/10] Workqueue: replace system wq and change alloc_workqueue callers
@ 2025-06-25 10:49 Marco Crivellari
2025-06-25 10:49 ` [PATCH v1 01/10] Workqueue: net: replace use of system_wq with system_percpu_wq Marco Crivellari
` (10 more replies)
0 siblings, 11 replies; 12+ messages in thread
From: Marco Crivellari @ 2025-06-25 10:49 UTC (permalink / raw)
To: linux-kernel
Cc: Tejun Heo, Lai Jiangshan, Thomas Gleixner, Frederic Weisbecker,
Sebastian Andrzej Siewior, Marco Crivellari, Michal Hocko,
Alexander Viro, Andrew Morton, Christian Brauner, David S. Miller,
Eric Dumazet, Jakub Kicinski, Paolo Abeni
Hi!
Below is a summary of a discussion about the Workqueue API and cpu isolation
considerations. Details and more information are available here:
"workqueue: Always use wq_select_unbound_cpu() for WORK_CPU_UNBOUND."
https://lore.kernel.org/all/20250221112003.1dSuoGyc@linutronix.de/
=== Current situation: problems ===
Let's consider a nohz_full system with isolated CPUs: wq_unbound_cpumask is
set to the housekeeping CPUs, while for !WQ_UNBOUND the local CPU is selected.
This leads to different scenarios if a work item is scheduled on an isolated
CPU, depending on whether the "delay" value is 0 or greater than 0:
schedule_delayed_work(, 0);
This will be handled by __queue_work(), which queues the work item on the
current local (isolated) CPU, while:
schedule_delayed_work(, 1);
will move the timer to a housekeeping CPU and schedule the work there.
Currently, if a user enqueues a work item using schedule_delayed_work(), the
wq used is "system_wq" (a per-CPU wq), while queue_delayed_work() uses
WORK_CPU_UNBOUND (used when a CPU is not specified). The same applies to
schedule_work(), which uses system_wq, and queue_work(), which again makes
use of WORK_CPU_UNBOUND.
This lack of consistency cannot be addressed without refactoring the API.
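For reference, a minimal sketch of the wrappers behind this behavior,
paraphrased from include/linux/workqueue.h (exact details may differ
slightly between kernel versions):

	/* Sketch only, paraphrased from include/linux/workqueue.h. */
	static inline bool schedule_work(struct work_struct *work)
	{
		return queue_work(system_wq, work);	/* per-CPU system wq */
	}

	static inline bool schedule_delayed_work(struct delayed_work *dwork,
						 unsigned long delay)
	{
		return queue_delayed_work(system_wq, dwork, delay);
	}

	static inline bool queue_work(struct workqueue_struct *wq,
				      struct work_struct *work)
	{
		/* no CPU specified by the caller */
		return queue_work_on(WORK_CPU_UNBOUND, wq, work);
	}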
=== Plan and future plans ===
This patchset is the first step of a refactoring needed in order to
address the points mentioned above; in the long term it will also have a
positive impact on CPU isolation, moving away from per-CPU workqueues in
favor of an unbound model.
These are the main steps:
1) API refactoring (introduced by this patchset)
- Make the system wq names clearer and more uniform, both per-CPU and
unbound, to avoid any possible confusion about which one should be
used.
- Introduction of WQ_PERCPU: this flag is the complement of WQ_UNBOUND.
It is introduced in this patchset and used by all the callers that do
not currently use WQ_UNBOUND (see the usage sketch after this list).
WQ_UNBOUND will be removed in a future release cycle.
Most users don't need to be per-CPU, because they don't have
locality requirements; because of that, a future step will be to
make "unbound" the default behavior.
2) Check who really needs to be per-CPU
- Remove the WQ_PERCPU flag when it is not strictly required.
3) Add a new API (prefer local CPU)
- There are users that don't require local execution, as mentioned
above; despite that, local execution yields a performance gain.
This new API will prefer local execution, without requiring it.
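As an illustration of step 1, a minimal usage sketch with the new names
("my_work" and "my_dwork" are hypothetical work items, not part of this
series):

	/* Locality really matters here: the per-CPU nature is now explicit. */
	queue_work(system_percpu_wq, &my_work);

	/* No locality requirement: use the unbound default wq. */
	queue_delayed_work(system_dfl_wq, &my_dwork, HZ);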
=== Introduced Changes by this patchset ===
1) [P 1-2-3] replace use of system_wq with system_percpu_wq per subsys:
system_wq is a per-CPU workqueue, but its name does not make that clear.
system_unbound_wq is to be used when locality is not required.
Because of that, system_wq has been replaced by system_percpu_wq in the
following subsystems: mm, net, fs (details in the next section).
2) [P 4-5] replace remaining system_wq and system_unbound_wq
system_unbound_wq is to be used when locality is not required.
Because of that, system_unbound_wq has been renamed to system_dfl_wq.
The old wqs are still around, but if they are used in queue_work(),
queue_delayed_work() or mod_delayed_work(), a pr_warn_once() is printed and
the wq is automatically redirected to the new default (system_dfl_wq or
system_percpu_wq).
3) [P 6-7-8] add WQ_PERCPU to alloc_workqueue() users (per subsystem)
Every alloc_workqueue() caller should use either WQ_PERCPU or
WQ_UNBOUND. This is enforced with a warning if both or neither of them
is present at the same time (a before/after sketch follows this list).
These patches introduce WQ_PERCPU in the following subsystems:
fs, net, mm (details in the next section).
WQ_UNBOUND will be removed in a future release cycle.
4) [P 9] add WQ_PERCPU to remaining alloc_workqueue() users
Every alloc_workqueue() caller should use either WQ_PERCPU or
WQ_UNBOUND. This is enforced with a warning if both or neither of them
is present at the same time.
WQ_UNBOUND will be removed in a future release cycle.
5) [P 10] upgraded WQ_UNBOUND documentation
Added a note about the WQ_UNBOUND removal in a future release cycle.
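To illustrate points 3) and 4), a hedged before/after sketch of a typical
alloc_workqueue() caller ("my_wq" is a hypothetical name):

	/* before: neither WQ_PERCPU nor WQ_UNBOUND, implicitly per-CPU */
	wq = alloc_workqueue("my_wq", WQ_MEM_RECLAIM, 0);

	/* after: per-CPU behavior made explicit; passing both flags,
	 * or neither of them, now triggers a warning
	 */
	wq = alloc_workqueue("my_wq", WQ_MEM_RECLAIM | WQ_PERCPU, 0);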
=== For fs, mm, net Maintainers ===
If you agree with these changes, one option is to pull the preparation changes
from Tejun's wq branch [1].
As an alternative, the patches can be routed through a wq branch.
The preparation changes are described in this cover letter, under the
"main steps" section. In summary, the changes are (a rough sketch follows
below):
- add system_percpu_wq and system_dfl_wq, for now without replacing the older
wq(s) (system_unbound_wq and system_wq).
- add the WQ_PERCPU flag, currently without removing WQ_UNBOUND; it will be
removed in a future release cycle.
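Roughly, the new wq handles are declared alongside the existing ones, along
these lines (sketch only; see the referenced series for the actual code):

	/* Sketch: new system wq handles next to the existing system_wq /
	 * system_unbound_wq declarations in include/linux/workqueue.h.
	 */
	extern struct workqueue_struct *system_percpu_wq; /* explicit per-CPU */
	extern struct workqueue_struct *system_dfl_wq;    /* unbound default */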
You can find the aforementioned changes here:
("Workqueue: add WQ_PERCPU, system_dfl_wq and system_percpu_wq")
https://lore.kernel.org/all/20250614133531.76742-1-marco.crivellari@suse.com/
- [1] git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git WQ_PERCPU
Marco Crivellari (10):
Workqueue: net: replace use of system_wq with system_percpu_wq
Workqueue: mm: replace use of system_wq with system_percpu_wq
Workqueue: fs: replace use of system_wq with system_percpu_wq
Workqueue: replace use of system_wq with system_percpu_wq
Workqueue: replace use of system_unbound_wq with system_dfl_wq
Workqueue: net: WQ_PERCPU added to alloc_workqueue users
Workqueue: mm: WQ_PERCPU added to alloc_workqueue users
Workqueue: fs: WQ_PERCPU added to alloc_workqueue users
Workqueue: WQ_PERCPU added to all the remaining users
[Doc] Workqueue: WQ_UNBOUND doc upgraded
Documentation/core-api/workqueue.rst | 4 ++
arch/s390/kernel/diag/diag324.c | 4 +-
arch/s390/kernel/hiperdispatch.c | 2 +-
block/bio-integrity-auto.c | 5 +-
block/bio.c | 3 +-
block/blk-core.c | 3 +-
block/blk-throttle.c | 3 +-
block/blk-zoned.c | 3 +-
crypto/cryptd.c | 3 +-
drivers/accel/ivpu/ivpu_hw_btrs.c | 2 +-
drivers/accel/ivpu/ivpu_ipc.c | 2 +-
drivers/accel/ivpu/ivpu_job.c | 2 +-
drivers/accel/ivpu/ivpu_mmu.c | 2 +-
drivers/accel/ivpu/ivpu_pm.c | 4 +-
drivers/acpi/ec.c | 3 +-
drivers/acpi/osl.c | 6 +-
drivers/acpi/scan.c | 2 +-
drivers/acpi/thermal.c | 3 +-
drivers/ata/libata-sff.c | 3 +-
drivers/base/core.c | 2 +-
drivers/base/dd.c | 2 +-
drivers/base/devcoredump.c | 2 +-
drivers/block/aoe/aoemain.c | 2 +-
drivers/block/nbd.c | 2 +-
drivers/block/rbd.c | 2 +-
drivers/block/rnbd/rnbd-clt.c | 2 +-
drivers/block/sunvdc.c | 4 +-
drivers/block/virtio_blk.c | 2 +-
drivers/block/zram/zram_drv.c | 2 +-
drivers/bus/mhi/ep/main.c | 2 +-
drivers/char/random.c | 8 +--
drivers/char/tpm/tpm-dev-common.c | 3 +-
drivers/char/xillybus/xillybus_core.c | 2 +-
drivers/char/xillybus/xillyusb.c | 4 +-
drivers/cpufreq/tegra194-cpufreq.c | 3 +-
drivers/crypto/atmel-i2c.c | 2 +-
drivers/crypto/cavium/nitrox/nitrox_mbx.c | 2 +-
drivers/crypto/intel/qat/qat_common/adf_aer.c | 4 +-
drivers/crypto/intel/qat/qat_common/adf_isr.c | 3 +-
.../crypto/intel/qat/qat_common/adf_sriov.c | 3 +-
.../crypto/intel/qat/qat_common/adf_vf_isr.c | 3 +-
drivers/cxl/pci.c | 2 +-
drivers/extcon/extcon-intel-int3496.c | 4 +-
drivers/firewire/core-transaction.c | 3 +-
drivers/firewire/ohci.c | 3 +-
drivers/gpio/gpiolib-cdev.c | 4 +-
drivers/gpu/drm/amd/amdgpu/aldebaran.c | 2 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 6 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_reset.c | 2 +-
drivers/gpu/drm/amd/amdkfd/kfd_process.c | 3 +-
drivers/gpu/drm/bridge/analogix/anx7625.c | 3 +-
drivers/gpu/drm/bridge/ite-it6505.c | 2 +-
drivers/gpu/drm/bridge/ti-tfp410.c | 2 +-
drivers/gpu/drm/drm_atomic_helper.c | 6 +-
drivers/gpu/drm/drm_probe_helper.c | 2 +-
drivers/gpu/drm/drm_self_refresh_helper.c | 2 +-
drivers/gpu/drm/exynos/exynos_hdmi.c | 2 +-
.../drm/i915/display/intel_display_driver.c | 3 +-
.../drm/i915/display/intel_display_power.c | 2 +-
drivers/gpu/drm/i915/display/intel_tc.c | 4 +-
drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c | 2 +-
drivers/gpu/drm/i915/gt/uc/intel_guc.c | 4 +-
drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c | 4 +-
.../gpu/drm/i915/gt/uc/intel_guc_submission.c | 6 +-
drivers/gpu/drm/i915/i915_active.c | 2 +-
drivers/gpu/drm/i915/i915_driver.c | 5 +-
drivers/gpu/drm/i915/i915_drv.h | 2 +-
drivers/gpu/drm/i915/i915_sw_fence_work.c | 2 +-
drivers/gpu/drm/i915/i915_vma_resource.c | 2 +-
drivers/gpu/drm/i915/pxp/intel_pxp.c | 2 +-
drivers/gpu/drm/i915/pxp/intel_pxp_irq.c | 2 +-
.../gpu/drm/i915/selftests/i915_sw_fence.c | 2 +-
.../gpu/drm/i915/selftests/mock_gem_device.c | 2 +-
drivers/gpu/drm/nouveau/dispnv50/disp.c | 2 +-
drivers/gpu/drm/nouveau/nouveau_drm.c | 2 +-
drivers/gpu/drm/nouveau/nouveau_sched.c | 3 +-
drivers/gpu/drm/radeon/radeon_display.c | 3 +-
.../gpu/drm/rockchip/dw_hdmi_qp-rockchip.c | 4 +-
drivers/gpu/drm/rockchip/rockchip_drm_vop.c | 2 +-
drivers/gpu/drm/scheduler/sched_main.c | 2 +-
drivers/gpu/drm/tilcdc/tilcdc_crtc.c | 2 +-
drivers/gpu/drm/vc4/vc4_hdmi.c | 4 +-
drivers/gpu/drm/xe/xe_devcoredump.c | 2 +-
drivers/gpu/drm/xe/xe_device.c | 4 +-
drivers/gpu/drm/xe/xe_execlist.c | 2 +-
drivers/gpu/drm/xe/xe_ggtt.c | 2 +-
drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c | 6 +-
drivers/gpu/drm/xe/xe_guc_ct.c | 4 +-
drivers/gpu/drm/xe/xe_hw_engine_group.c | 3 +-
drivers/gpu/drm/xe/xe_oa.c | 2 +-
drivers/gpu/drm/xe/xe_pt.c | 2 +-
drivers/gpu/drm/xe/xe_sriov.c | 2 +-
drivers/gpu/drm/xe/xe_vm.c | 4 +-
drivers/greybus/operation.c | 2 +-
drivers/hid/hid-nintendo.c | 3 +-
drivers/hte/hte.c | 2 +-
drivers/hv/mshv_eventfd.c | 2 +-
drivers/i3c/master.c | 2 +-
drivers/iio/adc/pac1934.c | 2 +-
drivers/infiniband/core/cm.c | 2 +-
drivers/infiniband/core/device.c | 4 +-
drivers/infiniband/core/ucma.c | 2 +-
drivers/infiniband/hw/hfi1/init.c | 3 +-
drivers/infiniband/hw/hfi1/opfn.c | 3 +-
drivers/infiniband/hw/mlx4/cm.c | 2 +-
drivers/infiniband/hw/mlx5/odp.c | 4 +-
drivers/infiniband/sw/rdmavt/cq.c | 3 +-
drivers/infiniband/ulp/iser/iscsi_iser.c | 2 +-
drivers/infiniband/ulp/isert/ib_isert.c | 2 +-
drivers/infiniband/ulp/rtrs/rtrs-clt.c | 2 +-
drivers/infiniband/ulp/rtrs/rtrs-srv.c | 2 +-
drivers/input/keyboard/gpio_keys.c | 2 +-
drivers/input/misc/palmas-pwrbutton.c | 2 +-
drivers/input/mouse/psmouse-smbus.c | 2 +-
drivers/input/mouse/synaptics_i2c.c | 8 +--
drivers/isdn/capi/kcapi.c | 2 +-
drivers/leds/trigger/ledtrig-input-events.c | 2 +-
drivers/md/bcache/btree.c | 3 +-
drivers/md/bcache/super.c | 30 +++++-----
drivers/md/bcache/writeback.c | 2 +-
drivers/md/dm-bufio.c | 3 +-
drivers/md/dm-cache-target.c | 3 +-
drivers/md/dm-clone-target.c | 3 +-
drivers/md/dm-crypt.c | 6 +-
drivers/md/dm-delay.c | 4 +-
drivers/md/dm-integrity.c | 15 +++--
drivers/md/dm-kcopyd.c | 3 +-
drivers/md/dm-log-userspace-base.c | 3 +-
drivers/md/dm-mpath.c | 5 +-
drivers/md/dm-raid1.c | 5 +-
drivers/md/dm-snap-persistent.c | 3 +-
drivers/md/dm-stripe.c | 2 +-
drivers/md/dm-verity-target.c | 4 +-
drivers/md/dm-writecache.c | 3 +-
drivers/md/dm.c | 3 +-
drivers/md/md.c | 4 +-
drivers/media/pci/ddbridge/ddbridge-core.c | 2 +-
.../platform/mediatek/mdp3/mtk-mdp3-core.c | 6 +-
.../platform/synopsys/hdmirx/snps_hdmirx.c | 8 +--
drivers/message/fusion/mptbase.c | 7 ++-
drivers/mmc/core/block.c | 3 +-
drivers/mmc/host/mtk-sd.c | 4 +-
drivers/mmc/host/omap.c | 2 +-
drivers/net/can/spi/hi311x.c | 3 +-
drivers/net/can/spi/mcp251x.c | 3 +-
.../net/ethernet/cavium/liquidio/lio_core.c | 2 +-
.../net/ethernet/cavium/liquidio/lio_main.c | 8 ++-
.../ethernet/cavium/liquidio/lio_vf_main.c | 3 +-
.../cavium/liquidio/request_manager.c | 2 +-
.../cavium/liquidio/response_manager.c | 3 +-
.../net/ethernet/freescale/dpaa2/dpaa2-eth.c | 2 +-
.../hisilicon/hns3/hns3pf/hclge_main.c | 3 +-
drivers/net/ethernet/intel/fm10k/fm10k_main.c | 2 +-
drivers/net/ethernet/intel/i40e/i40e_main.c | 2 +-
.../net/ethernet/marvell/octeontx2/af/cgx.c | 2 +-
.../marvell/octeontx2/af/mcs_rvu_if.c | 2 +-
.../ethernet/marvell/octeontx2/af/rvu_cgx.c | 2 +-
.../ethernet/marvell/octeontx2/af/rvu_rep.c | 2 +-
.../marvell/octeontx2/nic/cn10k_ipsec.c | 3 +-
.../ethernet/marvell/prestera/prestera_main.c | 2 +-
.../ethernet/marvell/prestera/prestera_pci.c | 2 +-
drivers/net/ethernet/mellanox/mlxsw/core.c | 4 +-
drivers/net/ethernet/netronome/nfp/nfp_main.c | 2 +-
drivers/net/ethernet/qlogic/qed/qed_main.c | 3 +-
drivers/net/ethernet/sfc/efx_channels.c | 2 +-
drivers/net/ethernet/sfc/siena/efx_channels.c | 2 +-
drivers/net/ethernet/wiznet/w5100.c | 2 +-
drivers/net/fjes/fjes_main.c | 5 +-
drivers/net/macvlan.c | 2 +-
drivers/net/netdevsim/dev.c | 6 +-
drivers/net/phy/sfp.c | 12 ++--
drivers/net/wireguard/device.c | 6 +-
drivers/net/wireless/ath/ath6kl/usb.c | 2 +-
drivers/net/wireless/intel/ipw2x00/ipw2100.c | 6 +-
drivers/net/wireless/intel/ipw2x00/ipw2200.c | 2 +-
drivers/net/wireless/intel/iwlwifi/fw/dbg.c | 4 +-
.../net/wireless/intel/iwlwifi/iwl-trans.h | 2 +-
drivers/net/wireless/intel/iwlwifi/mvm/tdls.c | 6 +-
.../net/wireless/marvell/libertas/if_sdio.c | 3 +-
.../net/wireless/marvell/libertas/if_spi.c | 3 +-
.../net/wireless/marvell/libertas_tf/main.c | 2 +-
.../net/wireless/mediatek/mt76/mt7921/init.c | 2 +-
.../net/wireless/mediatek/mt76/mt7925/init.c | 2 +-
drivers/net/wireless/quantenna/qtnfmac/core.c | 3 +-
drivers/net/wireless/realtek/rtlwifi/base.c | 2 +-
drivers/net/wireless/realtek/rtw88/usb.c | 3 +-
drivers/net/wireless/silabs/wfx/main.c | 2 +-
drivers/net/wireless/st/cw1200/bh.c | 4 +-
drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c | 3 +-
drivers/net/wwan/wwan_hwsim.c | 2 +-
drivers/nvdimm/security.c | 4 +-
drivers/nvme/host/tcp.c | 2 +
drivers/nvme/target/admin-cmd.c | 2 +-
drivers/nvme/target/core.c | 5 +-
drivers/nvme/target/fabrics-cmd-auth.c | 2 +-
drivers/nvme/target/fc.c | 6 +-
drivers/nvme/target/tcp.c | 2 +-
drivers/pci/endpoint/functions/pci-epf-mhi.c | 2 +-
drivers/pci/endpoint/functions/pci-epf-ntb.c | 5 +-
drivers/pci/endpoint/functions/pci-epf-test.c | 3 +-
drivers/pci/endpoint/functions/pci-epf-vntb.c | 5 +-
drivers/pci/endpoint/pci-ep-cfs.c | 2 +-
drivers/pci/hotplug/pnv_php.c | 3 +-
drivers/pci/hotplug/shpchp_core.c | 3 +-
drivers/phy/allwinner/phy-sun4i-usb.c | 14 ++---
.../platform/cznic/turris-omnia-mcu-gpio.c | 2 +-
.../surface/aggregator/ssh_packet_layer.c | 2 +-
.../surface/aggregator/ssh_request_layer.c | 2 +-
.../platform/surface/surface_acpi_notify.c | 2 +-
drivers/platform/x86/gpd-pocket-fan.c | 4 +-
.../x86/x86-android-tablets/vexia_atla10_ec.c | 2 +-
drivers/power/supply/ab8500_btemp.c | 3 +-
drivers/power/supply/bq2415x_charger.c | 2 +-
drivers/power/supply/bq24190_charger.c | 2 +-
drivers/power/supply/bq27xxx_battery.c | 6 +-
drivers/power/supply/ipaq_micro_battery.c | 3 +-
drivers/power/supply/rk817_charger.c | 6 +-
drivers/power/supply/ucs1002_power.c | 2 +-
drivers/power/supply/ug3105_battery.c | 6 +-
drivers/rapidio/rio.c | 2 +-
drivers/ras/cec.c | 2 +-
drivers/regulator/irq_helpers.c | 2 +-
drivers/regulator/qcom-labibb-regulator.c | 4 +-
drivers/s390/char/tape_3590.c | 2 +-
drivers/scsi/be2iscsi/be_main.c | 3 +-
drivers/scsi/bnx2fc/bnx2fc_fcoe.c | 2 +-
drivers/scsi/device_handler/scsi_dh_alua.c | 2 +-
drivers/scsi/fcoe/fcoe.c | 2 +-
drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c | 3 +-
drivers/scsi/lpfc/lpfc_init.c | 2 +-
drivers/scsi/pm8001/pm8001_init.c | 2 +-
drivers/scsi/qedf/qedf_main.c | 15 +++--
drivers/scsi/qedi/qedi_main.c | 2 +-
drivers/scsi/qla2xxx/qla_os.c | 4 +-
drivers/scsi/qla2xxx/qla_target.c | 2 +-
drivers/scsi/qla2xxx/tcm_qla2xxx.c | 2 +-
drivers/scsi/qla4xxx/ql4_os.c | 3 +-
drivers/scsi/scsi_transport_fc.c | 7 ++-
drivers/scsi/scsi_transport_iscsi.c | 2 +-
drivers/soc/fsl/qbman/qman.c | 2 +-
drivers/soc/xilinx/zynqmp_power.c | 6 +-
drivers/staging/greybus/sdio.c | 2 +-
drivers/target/sbp/sbp_target.c | 8 +--
drivers/target/target_core_transport.c | 4 +-
drivers/target/target_core_xcopy.c | 2 +-
drivers/target/tcm_fc/tfc_conf.c | 2 +-
drivers/thunderbolt/tb.c | 2 +-
drivers/tty/serial/8250/8250_dw.c | 4 +-
drivers/tty/tty_buffer.c | 8 +--
drivers/usb/core/hub.c | 2 +-
drivers/usb/dwc3/gadget.c | 2 +-
drivers/usb/gadget/function/f_hid.c | 3 +-
drivers/usb/host/xhci-dbgcap.c | 8 +--
drivers/usb/host/xhci-ring.c | 2 +-
drivers/usb/storage/uas.c | 2 +-
drivers/usb/typec/anx7411.c | 3 +-
drivers/vdpa/vdpa_user/vduse_dev.c | 3 +-
drivers/virt/acrn/irqfd.c | 3 +-
drivers/virtio/virtio_balloon.c | 3 +-
drivers/xen/events/events_base.c | 6 +-
drivers/xen/privcmd.c | 3 +-
fs/afs/callback.c | 4 +-
fs/afs/main.c | 4 +-
fs/afs/write.c | 2 +-
fs/aio.c | 2 +-
fs/bcachefs/btree_write_buffer.c | 2 +-
fs/bcachefs/io_read.c | 12 ++--
fs/bcachefs/journal_io.c | 2 +-
fs/bcachefs/super.c | 10 ++--
fs/btrfs/async-thread.c | 3 +-
fs/btrfs/block-group.c | 2 +-
fs/btrfs/disk-io.c | 2 +-
fs/btrfs/extent_map.c | 2 +-
fs/btrfs/space-info.c | 4 +-
fs/btrfs/zoned.c | 2 +-
fs/ceph/super.c | 2 +-
fs/dlm/lowcomms.c | 2 +-
fs/dlm/main.c | 2 +-
fs/ext4/mballoc.c | 2 +-
fs/fs-writeback.c | 4 +-
fs/fuse/dev.c | 2 +-
fs/fuse/inode.c | 2 +-
fs/gfs2/main.c | 5 +-
fs/gfs2/ops_fstype.c | 6 +-
fs/netfs/objects.c | 2 +-
fs/netfs/read_collect.c | 2 +-
fs/netfs/write_collect.c | 2 +-
fs/nfs/namespace.c | 2 +-
fs/nfs/nfs4renewd.c | 2 +-
fs/nfsd/filecache.c | 2 +-
fs/notify/mark.c | 4 +-
fs/ocfs2/dlm/dlmdomain.c | 3 +-
fs/ocfs2/dlmfs/dlmfs.c | 3 +-
fs/quota/dquot.c | 2 +-
fs/smb/client/cifsfs.c | 16 +++--
fs/smb/server/ksmbd_work.c | 2 +-
fs/smb/server/transport_rdma.c | 3 +-
fs/super.c | 3 +-
fs/verity/verify.c | 2 +-
fs/xfs/xfs_log.c | 3 +-
fs/xfs/xfs_mru_cache.c | 3 +-
fs/xfs/xfs_super.c | 15 ++---
include/drm/gpu_scheduler.h | 2 +-
include/linux/closure.h | 2 +-
include/linux/workqueue.h | 60 ++++++++++++++-----
io_uring/io_uring.c | 4 +-
kernel/bpf/cgroup.c | 5 +-
kernel/bpf/cpumap.c | 2 +-
kernel/bpf/helpers.c | 4 +-
kernel/bpf/memalloc.c | 2 +-
kernel/bpf/syscall.c | 2 +-
kernel/cgroup/cgroup-v1.c | 2 +-
kernel/cgroup/cgroup.c | 4 +-
kernel/module/dups.c | 4 +-
kernel/padata.c | 9 +--
kernel/power/main.c | 2 +-
kernel/rcu/tasks.h | 4 +-
kernel/rcu/tree.c | 4 +-
kernel/sched/core.c | 4 +-
kernel/sched/ext.c | 4 +-
kernel/smp.c | 2 +-
kernel/trace/trace_events_user.c | 2 +-
kernel/umh.c | 2 +-
kernel/workqueue.c | 45 ++++++++++----
mm/backing-dev.c | 6 +-
mm/kfence/core.c | 6 +-
mm/memcontrol.c | 4 +-
mm/slub.c | 3 +-
mm/vmstat.c | 3 +-
net/bridge/br_cfm.c | 6 +-
net/bridge/br_mrp.c | 8 +--
net/ceph/messenger.c | 3 +-
net/ceph/mon_client.c | 2 +-
net/core/link_watch.c | 4 +-
net/core/skmsg.c | 2 +-
net/core/sock_diag.c | 2 +-
net/devlink/core.c | 2 +-
net/ipv4/inet_fragment.c | 2 +-
net/netfilter/nf_conntrack_ecache.c | 2 +-
net/openvswitch/dp_notify.c | 2 +-
net/rds/ib_rdma.c | 3 +-
net/rfkill/input.c | 2 +-
net/rxrpc/rxperf.c | 2 +-
net/smc/af_smc.c | 6 +-
net/smc/smc_core.c | 4 +-
net/tls/tls_device.c | 2 +-
net/unix/garbage.c | 2 +-
net/vmw_vsock/af_vsock.c | 2 +-
net/vmw_vsock/virtio_transport.c | 2 +-
net/vmw_vsock/vsock_loopback.c | 2 +-
net/wireless/core.c | 4 +-
net/wireless/sysfs.c | 2 +-
rust/kernel/workqueue.rs | 12 ++--
sound/soc/codecs/aw88081.c | 2 +-
sound/soc/codecs/aw88166.c | 2 +-
sound/soc/codecs/aw88261.c | 2 +-
sound/soc/codecs/aw88395/aw88395.c | 2 +-
sound/soc/codecs/aw88399.c | 2 +-
sound/soc/codecs/cs42l43-jack.c | 6 +-
sound/soc/codecs/cs42l43.c | 4 +-
sound/soc/codecs/es8326.c | 12 ++--
sound/soc/codecs/rt5663.c | 6 +-
sound/soc/codecs/wm_adsp.c | 2 +-
sound/soc/intel/boards/sof_es8336.c | 2 +-
sound/soc/sof/intel/cnl.c | 2 +-
sound/soc/sof/intel/hda-ipc.c | 2 +-
virt/kvm/eventfd.c | 2 +-
367 files changed, 750 insertions(+), 587 deletions(-)
--
2.49.0
* [PATCH v1 01/10] Workqueue: net: replace use of system_wq with system_percpu_wq
2025-06-25 10:49 [PATCH v1 00/10] Workqueue: replace system wq and change alloc_workqueue callers Marco Crivellari
@ 2025-06-25 10:49 ` Marco Crivellari
2025-06-25 10:49 ` [PATCH v1 02/10] Workqueue: mm: " Marco Crivellari
` (9 subsequent siblings)
10 siblings, 0 replies; 12+ messages in thread
From: Marco Crivellari @ 2025-06-25 10:49 UTC (permalink / raw)
To: linux-kernel
Cc: Tejun Heo, Lai Jiangshan, Thomas Gleixner, Frederic Weisbecker,
Sebastian Andrzej Siewior, Marco Crivellari, Michal Hocko,
David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni
Currently, if a user enqueues a work item using schedule_delayed_work(), the
wq used is "system_wq" (a per-CPU wq), while queue_delayed_work() uses
WORK_CPU_UNBOUND (used when a CPU is not specified). The same applies to
schedule_work(), which uses system_wq, and queue_work(), which again makes
use of WORK_CPU_UNBOUND.
This lack of consistency cannot be addressed without refactoring the API.
system_wq is a per-CPU workqueue, yet nothing in its name hints at that
CPU affinity constraint, which is very often not required by users.
Make it clear by using system_percpu_wq across the network subsystem.
The old wq will be kept for a few release cycles.
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
CC: "David S. Miller" <davem@davemloft.net>
CC: Eric Dumazet <edumazet@google.com>
CC: Jakub Kicinski <kuba@kernel.org>
CC: Paolo Abeni <pabeni@redhat.com>
---
net/bridge/br_cfm.c | 6 +++---
net/bridge/br_mrp.c | 8 ++++----
net/ceph/mon_client.c | 2 +-
net/core/skmsg.c | 2 +-
net/devlink/core.c | 2 +-
net/ipv4/inet_fragment.c | 2 +-
net/netfilter/nf_conntrack_ecache.c | 2 +-
net/openvswitch/dp_notify.c | 2 +-
net/rfkill/input.c | 2 +-
net/smc/smc_core.c | 2 +-
net/vmw_vsock/af_vsock.c | 2 +-
11 files changed, 16 insertions(+), 16 deletions(-)
diff --git a/net/bridge/br_cfm.c b/net/bridge/br_cfm.c
index a3c755d0a09d..c2c1c7d44c61 100644
--- a/net/bridge/br_cfm.c
+++ b/net/bridge/br_cfm.c
@@ -134,7 +134,7 @@ static void ccm_rx_timer_start(struct br_cfm_peer_mep *peer_mep)
* of the configured CC 'expected_interval'
* in order to detect CCM defect after 3.25 interval.
*/
- queue_delayed_work(system_wq, &peer_mep->ccm_rx_dwork,
+ queue_delayed_work(system_percpu_wq, &peer_mep->ccm_rx_dwork,
usecs_to_jiffies(interval_us / 4));
}
@@ -285,7 +285,7 @@ static void ccm_tx_work_expired(struct work_struct *work)
ccm_frame_tx(skb);
interval_us = interval_to_us(mep->cc_config.exp_interval);
- queue_delayed_work(system_wq, &mep->ccm_tx_dwork,
+ queue_delayed_work(system_percpu_wq, &mep->ccm_tx_dwork,
usecs_to_jiffies(interval_us));
}
@@ -809,7 +809,7 @@ int br_cfm_cc_ccm_tx(struct net_bridge *br, const u32 instance,
* to send first frame immediately
*/
mep->ccm_tx_end = jiffies + usecs_to_jiffies(tx_info->period * 1000000);
- queue_delayed_work(system_wq, &mep->ccm_tx_dwork, 0);
+ queue_delayed_work(system_percpu_wq, &mep->ccm_tx_dwork, 0);
save:
mep->cc_ccm_tx_info = *tx_info;
diff --git a/net/bridge/br_mrp.c b/net/bridge/br_mrp.c
index fd2de35ffb3c..3c36fa24bc05 100644
--- a/net/bridge/br_mrp.c
+++ b/net/bridge/br_mrp.c
@@ -341,7 +341,7 @@ static void br_mrp_test_work_expired(struct work_struct *work)
out:
rcu_read_unlock();
- queue_delayed_work(system_wq, &mrp->test_work,
+ queue_delayed_work(system_percpu_wq, &mrp->test_work,
usecs_to_jiffies(mrp->test_interval));
}
@@ -418,7 +418,7 @@ static void br_mrp_in_test_work_expired(struct work_struct *work)
out:
rcu_read_unlock();
- queue_delayed_work(system_wq, &mrp->in_test_work,
+ queue_delayed_work(system_percpu_wq, &mrp->in_test_work,
usecs_to_jiffies(mrp->in_test_interval));
}
@@ -725,7 +725,7 @@ int br_mrp_start_test(struct net_bridge *br,
mrp->test_max_miss = test->max_miss;
mrp->test_monitor = test->monitor;
mrp->test_count_miss = 0;
- queue_delayed_work(system_wq, &mrp->test_work,
+ queue_delayed_work(system_percpu_wq, &mrp->test_work,
usecs_to_jiffies(test->interval));
return 0;
@@ -865,7 +865,7 @@ int br_mrp_start_in_test(struct net_bridge *br,
mrp->in_test_end = jiffies + usecs_to_jiffies(in_test->period);
mrp->in_test_max_miss = in_test->max_miss;
mrp->in_test_count_miss = 0;
- queue_delayed_work(system_wq, &mrp->in_test_work,
+ queue_delayed_work(system_percpu_wq, &mrp->in_test_work,
usecs_to_jiffies(in_test->interval));
return 0;
diff --git a/net/ceph/mon_client.c b/net/ceph/mon_client.c
index ab66b599ac47..c227ececa925 100644
--- a/net/ceph/mon_client.c
+++ b/net/ceph/mon_client.c
@@ -314,7 +314,7 @@ static void __schedule_delayed(struct ceph_mon_client *monc)
delay = CEPH_MONC_PING_INTERVAL;
dout("__schedule_delayed after %lu\n", delay);
- mod_delayed_work(system_wq, &monc->delayed_work,
+ mod_delayed_work(system_percpu_wq, &monc->delayed_work,
round_jiffies_relative(delay));
}
diff --git a/net/core/skmsg.c b/net/core/skmsg.c
index 0ddc4c718833..83fc433f5461 100644
--- a/net/core/skmsg.c
+++ b/net/core/skmsg.c
@@ -855,7 +855,7 @@ void sk_psock_drop(struct sock *sk, struct sk_psock *psock)
sk_psock_stop(psock);
INIT_RCU_WORK(&psock->rwork, sk_psock_destroy);
- queue_rcu_work(system_wq, &psock->rwork);
+ queue_rcu_work(system_percpu_wq, &psock->rwork);
}
EXPORT_SYMBOL_GPL(sk_psock_drop);
diff --git a/net/devlink/core.c b/net/devlink/core.c
index 7203c39532fc..58093f49c090 100644
--- a/net/devlink/core.c
+++ b/net/devlink/core.c
@@ -320,7 +320,7 @@ static void devlink_release(struct work_struct *work)
void devlink_put(struct devlink *devlink)
{
if (refcount_dec_and_test(&devlink->refcount))
- queue_rcu_work(system_wq, &devlink->rwork);
+ queue_rcu_work(system_percpu_wq, &devlink->rwork);
}
struct devlink *devlinks_xa_find_get(struct net *net, unsigned long *indexp)
diff --git a/net/ipv4/inet_fragment.c b/net/ipv4/inet_fragment.c
index 470ab17ceb51..025895eb6ec5 100644
--- a/net/ipv4/inet_fragment.c
+++ b/net/ipv4/inet_fragment.c
@@ -183,7 +183,7 @@ static void fqdir_work_fn(struct work_struct *work)
rhashtable_free_and_destroy(&fqdir->rhashtable, inet_frags_free_cb, NULL);
if (llist_add(&fqdir->free_list, &fqdir_free_list))
- queue_delayed_work(system_wq, &fqdir_free_work, HZ);
+ queue_delayed_work(system_percpu_wq, &fqdir_free_work, HZ);
}
int fqdir_init(struct fqdir **fqdirp, struct inet_frags *f, struct net *net)
diff --git a/net/netfilter/nf_conntrack_ecache.c b/net/netfilter/nf_conntrack_ecache.c
index af68c64acaab..81baf2082604 100644
--- a/net/netfilter/nf_conntrack_ecache.c
+++ b/net/netfilter/nf_conntrack_ecache.c
@@ -301,7 +301,7 @@ void nf_conntrack_ecache_work(struct net *net, enum nf_ct_ecache_state state)
net->ct.ecache_dwork_pending = true;
} else if (state == NFCT_ECACHE_DESTROY_SENT) {
if (!hlist_nulls_empty(&cnet->ecache.dying_list))
- mod_delayed_work(system_wq, &cnet->ecache.dwork, 0);
+ mod_delayed_work(system_percpu_wq, &cnet->ecache.dwork, 0);
else
net->ct.ecache_dwork_pending = false;
}
diff --git a/net/openvswitch/dp_notify.c b/net/openvswitch/dp_notify.c
index 7af0cde8b293..a2af90ee99af 100644
--- a/net/openvswitch/dp_notify.c
+++ b/net/openvswitch/dp_notify.c
@@ -75,7 +75,7 @@ static int dp_device_event(struct notifier_block *unused, unsigned long event,
/* schedule vport destroy, dev_put and genl notification */
ovs_net = net_generic(dev_net(dev), ovs_net_id);
- queue_work(system_wq, &ovs_net->dp_notify_work);
+ queue_work(system_percpu_wq, &ovs_net->dp_notify_work);
}
return NOTIFY_DONE;
diff --git a/net/rfkill/input.c b/net/rfkill/input.c
index 598d0a61bda7..53d286b10843 100644
--- a/net/rfkill/input.c
+++ b/net/rfkill/input.c
@@ -159,7 +159,7 @@ static void rfkill_schedule_global_op(enum rfkill_sched_op op)
rfkill_op_pending = true;
if (op == RFKILL_GLOBAL_OP_EPO && !rfkill_is_epo_lock_active()) {
/* bypass the limiter for EPO */
- mod_delayed_work(system_wq, &rfkill_op_work, 0);
+ mod_delayed_work(system_percpu_wq, &rfkill_op_work, 0);
rfkill_last_scheduled = jiffies;
} else
rfkill_schedule_ratelimited();
diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
index ac07b963aede..ab870109f916 100644
--- a/net/smc/smc_core.c
+++ b/net/smc/smc_core.c
@@ -85,7 +85,7 @@ static void smc_lgr_schedule_free_work(struct smc_link_group *lgr)
* otherwise there is a risk of out-of-sync link groups.
*/
if (!lgr->freeing) {
- mod_delayed_work(system_wq, &lgr->free_work,
+ mod_delayed_work(system_percpu_wq, &lgr->free_work,
(!lgr->is_smcd && lgr->role == SMC_CLNT) ?
SMC_LGR_FREE_DELAY_CLNT :
SMC_LGR_FREE_DELAY_SERV);
diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
index fc6afbc8d680..f8798d7b5de7 100644
--- a/net/vmw_vsock/af_vsock.c
+++ b/net/vmw_vsock/af_vsock.c
@@ -1569,7 +1569,7 @@ static int vsock_connect(struct socket *sock, struct sockaddr *addr,
* reschedule it, then ungrab the socket refcount to
* keep it balanced.
*/
- if (mod_delayed_work(system_wq, &vsk->connect_work,
+ if (mod_delayed_work(system_percpu_wq, &vsk->connect_work,
timeout))
sock_put(sk);
--
2.49.0
* [PATCH v1 02/10] Workqueue: mm: replace use of system_wq with system_percpu_wq
2025-06-25 10:49 [PATCH v1 00/10] Workqueue: replace system wq and change alloc_workqueue callers Marco Crivellari
2025-06-25 10:49 ` [PATCH v1 01/10] Workqueue: net: replace use of system_wq with system_percpu_wq Marco Crivellari
@ 2025-06-25 10:49 ` Marco Crivellari
2025-06-25 10:49 ` [PATCH v1 03/10] Workqueue: fs: " Marco Crivellari
` (8 subsequent siblings)
10 siblings, 0 replies; 12+ messages in thread
From: Marco Crivellari @ 2025-06-25 10:49 UTC (permalink / raw)
To: linux-kernel
Cc: Tejun Heo, Lai Jiangshan, Thomas Gleixner, Frederic Weisbecker,
Sebastian Andrzej Siewior, Marco Crivellari, Michal Hocko,
Andrew Morton
Currently, if a user enqueues a work item using schedule_delayed_work(), the
wq used is "system_wq" (a per-CPU wq), while queue_delayed_work() uses
WORK_CPU_UNBOUND (used when a CPU is not specified). The same applies to
schedule_work(), which uses system_wq, and queue_work(), which again makes
use of WORK_CPU_UNBOUND.
This lack of consistency cannot be addressed without refactoring the API.
system_wq is a per-CPU workqueue, yet nothing in its name hints at that
CPU affinity constraint, which is very often not required by users.
Make it clear by using system_percpu_wq across the mm subsystem.
The old wq will be kept for a few release cycles.
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
CC: Andrew Morton <akpm@linux-foundation.org>
---
mm/backing-dev.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 783904d8c5ef..784605103202 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -966,7 +966,7 @@ static int __init cgwb_init(void)
{
/*
* There can be many concurrent release work items overwhelming
- * system_wq. Put them in a separate wq and limit concurrency.
+ * system_percpu_wq. Put them in a separate wq and limit concurrency.
* There's no point in executing many of these in parallel.
*/
cgwb_release_wq = alloc_workqueue("cgwb_release", 0, 1);
--
2.49.0
* [PATCH v1 03/10] Workqueue: fs: replace use of system_wq with system_percpu_wq
2025-06-25 10:49 [PATCH v1 00/10] Workqueue: replace system wq and change alloc_workqueue callers Marco Crivellari
2025-06-25 10:49 ` [PATCH v1 01/10] Workqueue: net: replace use of system_wq with system_percpu_wq Marco Crivellari
2025-06-25 10:49 ` [PATCH v1 02/10] Workqueue: mm: " Marco Crivellari
@ 2025-06-25 10:49 ` Marco Crivellari
2025-06-25 10:49 ` [PATCH v1 04/10] Workqueue: " Marco Crivellari
` (7 subsequent siblings)
10 siblings, 0 replies; 12+ messages in thread
From: Marco Crivellari @ 2025-06-25 10:49 UTC (permalink / raw)
To: linux-kernel
Cc: Tejun Heo, Lai Jiangshan, Thomas Gleixner, Frederic Weisbecker,
Sebastian Andrzej Siewior, Marco Crivellari, Michal Hocko,
Alexander Viro, Christian Brauner
Currently, if a user enqueues a work item using schedule_delayed_work(), the
wq used is "system_wq" (a per-CPU wq), while queue_delayed_work() uses
WORK_CPU_UNBOUND (used when a CPU is not specified). The same applies to
schedule_work(), which uses system_wq, and queue_work(), which again makes
use of WORK_CPU_UNBOUND.
This lack of consistency cannot be addressed without refactoring the API.
system_wq is a per-CPU workqueue, yet nothing in its name hints at that
CPU affinity constraint, which is very often not required by users.
Make it clear by using system_percpu_wq across the fs subsystem.
The old wq will be kept for a few release cycles.
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
CC: Alexander Viro <viro@zeniv.linux.org.uk>
CC: Christian Brauner <brauner@kernel.org>
---
fs/aio.c | 2 +-
fs/fs-writeback.c | 2 +-
fs/fuse/dev.c | 2 +-
fs/fuse/inode.c | 2 +-
fs/nfs/namespace.c | 2 +-
fs/nfs/nfs4renewd.c | 2 +-
6 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/fs/aio.c b/fs/aio.c
index 7b976b564cfc..747e9b5bba23 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -636,7 +636,7 @@ static void free_ioctx_reqs(struct percpu_ref *ref)
/* Synchronize against RCU protected table->table[] dereferences */
INIT_RCU_WORK(&ctx->free_rwork, free_ioctx);
- queue_rcu_work(system_wq, &ctx->free_rwork);
+ queue_rcu_work(system_percpu_wq, &ctx->free_rwork);
}
/*
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index cc57367fb641..cf51a265bf27 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -2442,7 +2442,7 @@ static int dirtytime_interval_handler(const struct ctl_table *table, int write,
ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
if (ret == 0 && write)
- mod_delayed_work(system_wq, &dirtytime_work, 0);
+ mod_delayed_work(system_percpu_wq, &dirtytime_work, 0);
return ret;
}
diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
index 6dcbaa218b7a..64b623471a09 100644
--- a/fs/fuse/dev.c
+++ b/fs/fuse/dev.c
@@ -118,7 +118,7 @@ void fuse_check_timeout(struct work_struct *work)
goto abort_conn;
out:
- queue_delayed_work(system_wq, &fc->timeout.work,
+ queue_delayed_work(system_percpu_wq, &fc->timeout.work,
fuse_timeout_timer_freq);
return;
diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c
index fd48e8d37f2e..6a608ea77d09 100644
--- a/fs/fuse/inode.c
+++ b/fs/fuse/inode.c
@@ -1268,7 +1268,7 @@ static void set_request_timeout(struct fuse_conn *fc, unsigned int timeout)
{
fc->timeout.req_timeout = secs_to_jiffies(timeout);
INIT_DELAYED_WORK(&fc->timeout.work, fuse_check_timeout);
- queue_delayed_work(system_wq, &fc->timeout.work,
+ queue_delayed_work(system_percpu_wq, &fc->timeout.work,
fuse_timeout_timer_freq);
}
diff --git a/fs/nfs/namespace.c b/fs/nfs/namespace.c
index 973aed9cc5fe..0689369c8a63 100644
--- a/fs/nfs/namespace.c
+++ b/fs/nfs/namespace.c
@@ -336,7 +336,7 @@ static int param_set_nfs_timeout(const char *val, const struct kernel_param *kp)
num *= HZ;
*((int *)kp->arg) = num;
if (!list_empty(&nfs_automount_list))
- mod_delayed_work(system_wq, &nfs_automount_task, num);
+ mod_delayed_work(system_percpu_wq, &nfs_automount_task, num);
} else {
*((int *)kp->arg) = -1*HZ;
cancel_delayed_work(&nfs_automount_task);
diff --git a/fs/nfs/nfs4renewd.c b/fs/nfs/nfs4renewd.c
index db3811af0796..18ae614e5a6c 100644
--- a/fs/nfs/nfs4renewd.c
+++ b/fs/nfs/nfs4renewd.c
@@ -122,7 +122,7 @@ nfs4_schedule_state_renewal(struct nfs_client *clp)
timeout = 5 * HZ;
dprintk("%s: requeueing work. Lease period = %ld\n",
__func__, (timeout + HZ - 1) / HZ);
- mod_delayed_work(system_wq, &clp->cl_renewd, timeout);
+ mod_delayed_work(system_percpu_wq, &clp->cl_renewd, timeout);
set_bit(NFS_CS_RENEWD, &clp->cl_res_state);
spin_unlock(&clp->cl_lock);
}
--
2.49.0
* [PATCH v1 04/10] Workqueue: replace use of system_wq with system_percpu_wq
2025-06-25 10:49 [PATCH v1 00/10] Workqueue: replace system wq and change alloc_workqueue callers Marco Crivellari
` (2 preceding siblings ...)
2025-06-25 10:49 ` [PATCH v1 03/10] Workqueue: fs: " Marco Crivellari
@ 2025-06-25 10:49 ` Marco Crivellari
2025-06-25 10:49 ` [PATCH v1 05/10] Workqueue: replace use of system_unbound_wq with system_dfl_wq Marco Crivellari
` (6 subsequent siblings)
10 siblings, 0 replies; 12+ messages in thread
From: Marco Crivellari @ 2025-06-25 10:49 UTC (permalink / raw)
To: linux-kernel
Cc: Tejun Heo, Lai Jiangshan, Thomas Gleixner, Frederic Weisbecker,
Sebastian Andrzej Siewior, Marco Crivellari, Michal Hocko
Currently, if a user enqueues a work item using schedule_delayed_work(), the
wq used is "system_wq" (a per-CPU wq), while queue_delayed_work() uses
WORK_CPU_UNBOUND (used when a CPU is not specified). The same applies to
schedule_work(), which uses system_wq, and queue_work(), which again makes
use of WORK_CPU_UNBOUND.
This lack of consistency cannot be addressed without refactoring the API.
system_wq is a per-CPU workqueue, yet nothing in its name hints at that
CPU affinity constraint, which is very often not required by users. Make
it clear by adding a system_percpu_wq.
queue_work(), queue_delayed_work() and mod_delayed_work() will now use the
new per-CPU wq: if the user still sticks to the old name, a warning will be
printed and the wq will be redirected to the new one.
This patch adds the new system_percpu_wq everywhere except the mm, fs and
net subsystems, which are handled in separate patches.
The old wq will be kept for a few release cycles.
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
---
arch/s390/kernel/diag/diag324.c | 4 +-
arch/s390/kernel/hiperdispatch.c | 2 +-
drivers/accel/ivpu/ivpu_hw_btrs.c | 2 +-
drivers/accel/ivpu/ivpu_ipc.c | 2 +-
drivers/accel/ivpu/ivpu_job.c | 2 +-
drivers/accel/ivpu/ivpu_mmu.c | 2 +-
drivers/accel/ivpu/ivpu_pm.c | 2 +-
drivers/acpi/osl.c | 2 +-
drivers/base/devcoredump.c | 2 +-
drivers/block/nbd.c | 2 +-
drivers/block/sunvdc.c | 2 +-
drivers/cxl/pci.c | 2 +-
drivers/extcon/extcon-intel-int3496.c | 4 +-
drivers/gpio/gpiolib-cdev.c | 4 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 4 +-
drivers/gpu/drm/bridge/ite-it6505.c | 2 +-
drivers/gpu/drm/bridge/ti-tfp410.c | 2 +-
drivers/gpu/drm/drm_probe_helper.c | 2 +-
drivers/gpu/drm/drm_self_refresh_helper.c | 2 +-
drivers/gpu/drm/exynos/exynos_hdmi.c | 2 +-
drivers/gpu/drm/i915/i915_driver.c | 2 +-
drivers/gpu/drm/i915/i915_drv.h | 2 +-
.../gpu/drm/rockchip/dw_hdmi_qp-rockchip.c | 4 +-
drivers/gpu/drm/scheduler/sched_main.c | 2 +-
drivers/gpu/drm/tilcdc/tilcdc_crtc.c | 2 +-
drivers/gpu/drm/vc4/vc4_hdmi.c | 4 +-
drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c | 6 +--
drivers/gpu/drm/xe/xe_pt.c | 2 +-
drivers/iio/adc/pac1934.c | 2 +-
drivers/input/keyboard/gpio_keys.c | 2 +-
drivers/input/misc/palmas-pwrbutton.c | 2 +-
drivers/input/mouse/synaptics_i2c.c | 8 ++--
drivers/leds/trigger/ledtrig-input-events.c | 2 +-
drivers/md/bcache/super.c | 20 +++++-----
drivers/mmc/host/mtk-sd.c | 4 +-
drivers/net/ethernet/sfc/efx_channels.c | 2 +-
drivers/net/ethernet/sfc/siena/efx_channels.c | 2 +-
drivers/net/phy/sfp.c | 12 +++---
drivers/net/wireless/intel/ipw2x00/ipw2100.c | 6 +--
drivers/net/wireless/intel/ipw2x00/ipw2200.c | 2 +-
drivers/net/wireless/intel/iwlwifi/mvm/tdls.c | 6 +--
.../net/wireless/mediatek/mt76/mt7921/init.c | 2 +-
.../net/wireless/mediatek/mt76/mt7925/init.c | 2 +-
drivers/nvdimm/security.c | 4 +-
drivers/nvme/target/admin-cmd.c | 2 +-
drivers/nvme/target/fabrics-cmd-auth.c | 2 +-
drivers/pci/endpoint/pci-ep-cfs.c | 2 +-
drivers/phy/allwinner/phy-sun4i-usb.c | 14 +++----
.../platform/cznic/turris-omnia-mcu-gpio.c | 2 +-
.../surface/aggregator/ssh_packet_layer.c | 2 +-
.../surface/aggregator/ssh_request_layer.c | 2 +-
drivers/platform/x86/gpd-pocket-fan.c | 4 +-
.../x86/x86-android-tablets/vexia_atla10_ec.c | 2 +-
drivers/power/supply/bq2415x_charger.c | 2 +-
drivers/power/supply/bq24190_charger.c | 2 +-
drivers/power/supply/bq27xxx_battery.c | 6 +--
drivers/power/supply/rk817_charger.c | 6 +--
drivers/power/supply/ucs1002_power.c | 2 +-
drivers/power/supply/ug3105_battery.c | 6 +--
drivers/ras/cec.c | 2 +-
drivers/regulator/irq_helpers.c | 2 +-
drivers/regulator/qcom-labibb-regulator.c | 4 +-
drivers/thunderbolt/tb.c | 2 +-
drivers/usb/dwc3/gadget.c | 2 +-
drivers/usb/host/xhci-dbgcap.c | 8 ++--
drivers/usb/host/xhci-ring.c | 2 +-
drivers/xen/events/events_base.c | 6 +--
include/drm/gpu_scheduler.h | 2 +-
include/linux/closure.h | 2 +-
include/linux/workqueue.h | 37 +++++++++++++------
io_uring/io_uring.c | 2 +-
kernel/bpf/cgroup.c | 2 +-
kernel/bpf/cpumap.c | 2 +-
kernel/cgroup/cgroup.c | 2 +-
kernel/module/dups.c | 4 +-
kernel/rcu/tasks.h | 4 +-
kernel/smp.c | 2 +-
kernel/trace/trace_events_user.c | 2 +-
kernel/workqueue.c | 2 +-
rust/kernel/workqueue.rs | 6 +--
sound/soc/codecs/aw88081.c | 2 +-
sound/soc/codecs/aw88166.c | 2 +-
sound/soc/codecs/aw88261.c | 2 +-
sound/soc/codecs/aw88395/aw88395.c | 2 +-
sound/soc/codecs/aw88399.c | 2 +-
sound/soc/codecs/cs42l43-jack.c | 6 +--
sound/soc/codecs/cs42l43.c | 4 +-
sound/soc/codecs/es8326.c | 12 +++---
sound/soc/codecs/rt5663.c | 6 +--
sound/soc/intel/boards/sof_es8336.c | 2 +-
sound/soc/sof/intel/cnl.c | 2 +-
sound/soc/sof/intel/hda-ipc.c | 2 +-
92 files changed, 181 insertions(+), 166 deletions(-)
diff --git a/arch/s390/kernel/diag/diag324.c b/arch/s390/kernel/diag/diag324.c
index 7fa4c0b7eb6c..f0a8b4841fb9 100644
--- a/arch/s390/kernel/diag/diag324.c
+++ b/arch/s390/kernel/diag/diag324.c
@@ -116,7 +116,7 @@ static void pibwork_handler(struct work_struct *work)
mutex_lock(&pibmutex);
timedout = ktime_add_ns(data->expire, PIBWORK_DELAY);
if (ktime_before(ktime_get(), timedout)) {
- mod_delayed_work(system_wq, &pibwork, nsecs_to_jiffies(PIBWORK_DELAY));
+ mod_delayed_work(system_percpu_wq, &pibwork, nsecs_to_jiffies(PIBWORK_DELAY));
goto out;
}
vfree(data->pib);
@@ -174,7 +174,7 @@ long diag324_pibbuf(unsigned long arg)
pib_update(data);
data->sequence++;
data->expire = ktime_add_ns(ktime_get(), tod_to_ns(data->pib->intv));
- mod_delayed_work(system_wq, &pibwork, nsecs_to_jiffies(PIBWORK_DELAY));
+ mod_delayed_work(system_percpu_wq, &pibwork, nsecs_to_jiffies(PIBWORK_DELAY));
first = false;
}
rc = data->rc;
diff --git a/arch/s390/kernel/hiperdispatch.c b/arch/s390/kernel/hiperdispatch.c
index e7b66d046e8d..85b5508ab62c 100644
--- a/arch/s390/kernel/hiperdispatch.c
+++ b/arch/s390/kernel/hiperdispatch.c
@@ -191,7 +191,7 @@ int hd_enable_hiperdispatch(void)
return 0;
if (hd_online_cores <= hd_entitled_cores)
return 0;
- mod_delayed_work(system_wq, &hd_capacity_work, HD_DELAY_INTERVAL * hd_delay_factor);
+ mod_delayed_work(system_percpu_wq, &hd_capacity_work, HD_DELAY_INTERVAL * hd_delay_factor);
hd_update_capacities();
return 1;
}
diff --git a/drivers/accel/ivpu/ivpu_hw_btrs.c b/drivers/accel/ivpu/ivpu_hw_btrs.c
index 56c56012b980..62f9dd7dceed 100644
--- a/drivers/accel/ivpu/ivpu_hw_btrs.c
+++ b/drivers/accel/ivpu/ivpu_hw_btrs.c
@@ -630,7 +630,7 @@ bool ivpu_hw_btrs_irq_handler_lnl(struct ivpu_device *vdev, int irq)
if (REG_TEST_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, SURV_ERR, status)) {
ivpu_dbg(vdev, IRQ, "Survivability IRQ\n");
- queue_work(system_wq, &vdev->irq_dct_work);
+ queue_work(system_percpu_wq, &vdev->irq_dct_work);
}
if (REG_TEST_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, FREQ_CHANGE, status))
diff --git a/drivers/accel/ivpu/ivpu_ipc.c b/drivers/accel/ivpu/ivpu_ipc.c
index 0e096fd9b95d..247dbb64b4d5 100644
--- a/drivers/accel/ivpu/ivpu_ipc.c
+++ b/drivers/accel/ivpu/ivpu_ipc.c
@@ -459,7 +459,7 @@ void ivpu_ipc_irq_handler(struct ivpu_device *vdev)
}
}
- queue_work(system_wq, &vdev->irq_ipc_work);
+ queue_work(system_percpu_wq, &vdev->irq_ipc_work);
}
void ivpu_ipc_irq_work_fn(struct work_struct *work)
diff --git a/drivers/accel/ivpu/ivpu_job.c b/drivers/accel/ivpu/ivpu_job.c
index 004059e4f1e8..f63eba4c9d9f 100644
--- a/drivers/accel/ivpu/ivpu_job.c
+++ b/drivers/accel/ivpu/ivpu_job.c
@@ -549,7 +549,7 @@ static int ivpu_job_signal_and_destroy(struct ivpu_device *vdev, u32 job_id, u32
* status and ensure both are handled in the same way
*/
job->file_priv->has_mmu_faults = true;
- queue_work(system_wq, &vdev->context_abort_work);
+ queue_work(system_percpu_wq, &vdev->context_abort_work);
return 0;
}
diff --git a/drivers/accel/ivpu/ivpu_mmu.c b/drivers/accel/ivpu/ivpu_mmu.c
index 5ea010568faa..e1baf6b64935 100644
--- a/drivers/accel/ivpu/ivpu_mmu.c
+++ b/drivers/accel/ivpu/ivpu_mmu.c
@@ -970,7 +970,7 @@ void ivpu_mmu_irq_evtq_handler(struct ivpu_device *vdev)
}
}
- queue_work(system_wq, &vdev->context_abort_work);
+ queue_work(system_percpu_wq, &vdev->context_abort_work);
}
void ivpu_mmu_evtq_dump(struct ivpu_device *vdev)
diff --git a/drivers/accel/ivpu/ivpu_pm.c b/drivers/accel/ivpu/ivpu_pm.c
index b5891e91f7ab..0c1f639931ad 100644
--- a/drivers/accel/ivpu/ivpu_pm.c
+++ b/drivers/accel/ivpu/ivpu_pm.c
@@ -198,7 +198,7 @@ void ivpu_start_job_timeout_detection(struct ivpu_device *vdev)
unsigned long timeout_ms = ivpu_tdr_timeout_ms ? ivpu_tdr_timeout_ms : vdev->timeout.tdr;
/* No-op if already queued */
- queue_delayed_work(system_wq, &vdev->pm->job_timeout_work, msecs_to_jiffies(timeout_ms));
+ queue_delayed_work(system_percpu_wq, &vdev->pm->job_timeout_work, msecs_to_jiffies(timeout_ms));
}
void ivpu_stop_job_timeout_detection(struct ivpu_device *vdev)
diff --git a/drivers/acpi/osl.c b/drivers/acpi/osl.c
index 5ff343096ece..a79a5d47bdb8 100644
--- a/drivers/acpi/osl.c
+++ b/drivers/acpi/osl.c
@@ -398,7 +398,7 @@ static void acpi_os_drop_map_ref(struct acpi_ioremap *map)
list_del_rcu(&map->list);
INIT_RCU_WORK(&map->track.rwork, acpi_os_map_remove);
- queue_rcu_work(system_wq, &map->track.rwork);
+ queue_rcu_work(system_percpu_wq, &map->track.rwork);
}
/**
diff --git a/drivers/base/devcoredump.c b/drivers/base/devcoredump.c
index 03a39c417dc4..8c4844ad7c6b 100644
--- a/drivers/base/devcoredump.c
+++ b/drivers/base/devcoredump.c
@@ -125,7 +125,7 @@ static ssize_t devcd_data_write(struct file *filp, struct kobject *kobj,
mutex_lock(&devcd->mutex);
if (!devcd->delete_work) {
devcd->delete_work = true;
- mod_delayed_work(system_wq, &devcd->del_wk, 0);
+ mod_delayed_work(system_percpu_wq, &devcd->del_wk, 0);
}
mutex_unlock(&devcd->mutex);
diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index 7bdc7eb808ea..7738fce177fa 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -311,7 +311,7 @@ static void nbd_mark_nsock_dead(struct nbd_device *nbd, struct nbd_sock *nsock,
if (args) {
INIT_WORK(&args->work, nbd_dead_link_work);
args->index = nbd->index;
- queue_work(system_wq, &args->work);
+ queue_work(system_percpu_wq, &args->work);
}
}
if (!nsock->dead) {
diff --git a/drivers/block/sunvdc.c b/drivers/block/sunvdc.c
index b5727dea15bd..442546b05df8 100644
--- a/drivers/block/sunvdc.c
+++ b/drivers/block/sunvdc.c
@@ -1187,7 +1187,7 @@ static void vdc_ldc_reset(struct vdc_port *port)
}
if (port->ldc_timeout)
- mod_delayed_work(system_wq, &port->ldc_reset_timer_work,
+ mod_delayed_work(system_percpu_wq, &port->ldc_reset_timer_work,
round_jiffies(jiffies + HZ * port->ldc_timeout));
mod_timer(&port->vio.timer, round_jiffies(jiffies + HZ));
return;
diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c
index 7b14a154463c..c610551b41bf 100644
--- a/drivers/cxl/pci.c
+++ b/drivers/cxl/pci.c
@@ -136,7 +136,7 @@ static irqreturn_t cxl_pci_mbox_irq(int irq, void *id)
if (opcode == CXL_MBOX_OP_SANITIZE) {
mutex_lock(&cxl_mbox->mbox_mutex);
if (mds->security.sanitize_node)
- mod_delayed_work(system_wq, &mds->security.poll_dwork, 0);
+ mod_delayed_work(system_percpu_wq, &mds->security.poll_dwork, 0);
mutex_unlock(&cxl_mbox->mbox_mutex);
} else {
/* short-circuit the wait in __cxl_pci_mbox_send_cmd() */
diff --git a/drivers/extcon/extcon-intel-int3496.c b/drivers/extcon/extcon-intel-int3496.c
index ded1a85a5549..7d16d5b7d58f 100644
--- a/drivers/extcon/extcon-intel-int3496.c
+++ b/drivers/extcon/extcon-intel-int3496.c
@@ -106,7 +106,7 @@ static irqreturn_t int3496_thread_isr(int irq, void *priv)
struct int3496_data *data = priv;
/* Let the pin settle before processing it */
- mod_delayed_work(system_wq, &data->work, DEBOUNCE_TIME);
+ mod_delayed_work(system_percpu_wq, &data->work, DEBOUNCE_TIME);
return IRQ_HANDLED;
}
@@ -181,7 +181,7 @@ static int int3496_probe(struct platform_device *pdev)
}
/* process id-pin so that we start with the right status */
- queue_delayed_work(system_wq, &data->work, 0);
+ queue_delayed_work(system_percpu_wq, &data->work, 0);
flush_delayed_work(&data->work);
platform_set_drvdata(pdev, data);
diff --git a/drivers/gpio/gpiolib-cdev.c b/drivers/gpio/gpiolib-cdev.c
index 107d75558b5a..3e9c037ff4cd 100644
--- a/drivers/gpio/gpiolib-cdev.c
+++ b/drivers/gpio/gpiolib-cdev.c
@@ -700,7 +700,7 @@ static enum hte_return process_hw_ts(struct hte_ts_data *ts, void *p)
if (READ_ONCE(line->sw_debounced)) {
line->total_discard_seq++;
line->last_seqno = ts->seq;
- mod_delayed_work(system_wq, &line->work,
+ mod_delayed_work(system_percpu_wq, &line->work,
usecs_to_jiffies(READ_ONCE(line->desc->debounce_period_us)));
} else {
if (unlikely(ts->seq < line->line_seqno))
@@ -841,7 +841,7 @@ static irqreturn_t debounce_irq_handler(int irq, void *p)
{
struct line *line = p;
- mod_delayed_work(system_wq, &line->work,
+ mod_delayed_work(system_percpu_wq, &line->work,
usecs_to_jiffies(READ_ONCE(line->desc->debounce_period_us)));
return IRQ_HANDLED;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index a30111d2c3ea..96c659389480 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -4610,7 +4610,7 @@ int amdgpu_device_init(struct amdgpu_device *adev,
}
/* must succeed. */
amdgpu_ras_resume(adev);
- queue_delayed_work(system_wq, &adev->delayed_init_work,
+ queue_delayed_work(system_percpu_wq, &adev->delayed_init_work,
msecs_to_jiffies(AMDGPU_RESUME_MS));
}
@@ -5085,7 +5085,7 @@ int amdgpu_device_resume(struct drm_device *dev, bool notify_clients)
if (r)
goto exit;
- queue_delayed_work(system_wq, &adev->delayed_init_work,
+ queue_delayed_work(system_percpu_wq, &adev->delayed_init_work,
msecs_to_jiffies(AMDGPU_RESUME_MS));
exit:
if (amdgpu_sriov_vf(adev)) {
diff --git a/drivers/gpu/drm/bridge/ite-it6505.c b/drivers/gpu/drm/bridge/ite-it6505.c
index 8a607558ac89..433e6620dad8 100644
--- a/drivers/gpu/drm/bridge/ite-it6505.c
+++ b/drivers/gpu/drm/bridge/ite-it6505.c
@@ -2082,7 +2082,7 @@ static void it6505_start_hdcp(struct it6505 *it6505)
DRM_DEV_DEBUG_DRIVER(dev, "start");
it6505_reset_hdcp(it6505);
- queue_delayed_work(system_wq, &it6505->hdcp_work,
+ queue_delayed_work(system_percpu_wq, &it6505->hdcp_work,
msecs_to_jiffies(2400));
}
diff --git a/drivers/gpu/drm/bridge/ti-tfp410.c b/drivers/gpu/drm/bridge/ti-tfp410.c
index 79ab5da827e1..d798c951ddcc 100644
--- a/drivers/gpu/drm/bridge/ti-tfp410.c
+++ b/drivers/gpu/drm/bridge/ti-tfp410.c
@@ -115,7 +115,7 @@ static void tfp410_hpd_callback(void *arg, enum drm_connector_status status)
{
struct tfp410 *dvi = arg;
- mod_delayed_work(system_wq, &dvi->hpd_work,
+ mod_delayed_work(system_percpu_wq, &dvi->hpd_work,
msecs_to_jiffies(HOTPLUG_DEBOUNCE_MS));
}
diff --git a/drivers/gpu/drm/drm_probe_helper.c b/drivers/gpu/drm/drm_probe_helper.c
index 7ba16323e7c2..30e8d3467c83 100644
--- a/drivers/gpu/drm/drm_probe_helper.c
+++ b/drivers/gpu/drm/drm_probe_helper.c
@@ -625,7 +625,7 @@ int drm_helper_probe_single_connector_modes(struct drm_connector *connector,
*/
dev->mode_config.delayed_event = true;
if (dev->mode_config.poll_enabled)
- mod_delayed_work(system_wq,
+ mod_delayed_work(system_percpu_wq,
&dev->mode_config.output_poll_work,
0);
}
diff --git a/drivers/gpu/drm/drm_self_refresh_helper.c b/drivers/gpu/drm/drm_self_refresh_helper.c
index dd33fec5aabd..12f5af633da3 100644
--- a/drivers/gpu/drm/drm_self_refresh_helper.c
+++ b/drivers/gpu/drm/drm_self_refresh_helper.c
@@ -217,7 +217,7 @@ void drm_self_refresh_helper_alter_state(struct drm_atomic_state *state)
ewma_psr_time_read(&sr_data->exit_avg_ms)) * 2;
mutex_unlock(&sr_data->avg_mutex);
- mod_delayed_work(system_wq, &sr_data->entry_work,
+ mod_delayed_work(system_percpu_wq, &sr_data->entry_work,
msecs_to_jiffies(delay));
}
}
diff --git a/drivers/gpu/drm/exynos/exynos_hdmi.c b/drivers/gpu/drm/exynos/exynos_hdmi.c
index 01813e11e6c6..8e76ac8ee4e2 100644
--- a/drivers/gpu/drm/exynos/exynos_hdmi.c
+++ b/drivers/gpu/drm/exynos/exynos_hdmi.c
@@ -1692,7 +1692,7 @@ static irqreturn_t hdmi_irq_thread(int irq, void *arg)
{
struct hdmi_context *hdata = arg;
- mod_delayed_work(system_wq, &hdata->hotplug_work,
+ mod_delayed_work(system_percpu_wq, &hdata->hotplug_work,
msecs_to_jiffies(HOTPLUG_DEBOUNCE_MS));
return IRQ_HANDLED;
diff --git a/drivers/gpu/drm/i915/i915_driver.c b/drivers/gpu/drm/i915/i915_driver.c
index ce3cc93ea211..79b98ba4104e 100644
--- a/drivers/gpu/drm/i915/i915_driver.c
+++ b/drivers/gpu/drm/i915/i915_driver.c
@@ -141,7 +141,7 @@ static int i915_workqueues_init(struct drm_i915_private *dev_priv)
/*
* The unordered i915 workqueue should be used for all work
* scheduling that do not require running in order, which used
- * to be scheduled on the system_wq before moving to a driver
+ * to be scheduled on the system_percpu_wq before moving to a driver
* instance due deprecation of flush_scheduled_work().
*/
dev_priv->unordered_wq = alloc_workqueue("i915-unordered", 0, 0);
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index ffc346379cc2..b2c194b17eae 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -264,7 +264,7 @@ struct drm_i915_private {
*
* This workqueue should be used for all unordered work
* scheduling within i915, which used to be scheduled on the
- * system_wq before moving to a driver instance due
+ * system_percpu_wq before moving to a driver instance due
* deprecation of flush_scheduled_work().
*/
struct workqueue_struct *unordered_wq;
diff --git a/drivers/gpu/drm/rockchip/dw_hdmi_qp-rockchip.c b/drivers/gpu/drm/rockchip/dw_hdmi_qp-rockchip.c
index 3d1dddb34603..b115fe655a4b 100644
--- a/drivers/gpu/drm/rockchip/dw_hdmi_qp-rockchip.c
+++ b/drivers/gpu/drm/rockchip/dw_hdmi_qp-rockchip.c
@@ -274,7 +274,7 @@ static irqreturn_t dw_hdmi_qp_rk3576_irq(int irq, void *dev_id)
val = HIWORD_UPDATE(RK3576_HDMI_HPD_INT_CLR, RK3576_HDMI_HPD_INT_CLR);
regmap_write(hdmi->regmap, RK3576_IOC_MISC_CON0, val);
- mod_delayed_work(system_wq, &hdmi->hpd_work,
+ mod_delayed_work(system_percpu_wq, &hdmi->hpd_work,
msecs_to_jiffies(HOTPLUG_DEBOUNCE_MS));
val = HIWORD_UPDATE(0, RK3576_HDMI_HPD_INT_MSK);
@@ -321,7 +321,7 @@ static irqreturn_t dw_hdmi_qp_rk3588_irq(int irq, void *dev_id)
RK3588_HDMI0_HPD_INT_CLR);
regmap_write(hdmi->regmap, RK3588_GRF_SOC_CON2, val);
- mod_delayed_work(system_wq, &hdmi->hpd_work,
+ mod_delayed_work(system_percpu_wq, &hdmi->hpd_work,
msecs_to_jiffies(HOTPLUG_DEBOUNCE_MS));
if (hdmi->port_id)
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index bfea608a7106..d3c0a1ca0b2c 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -1260,7 +1260,7 @@ int drm_sched_init(struct drm_gpu_scheduler *sched, const struct drm_sched_init_
sched->name = args->name;
sched->timeout = args->timeout;
sched->hang_limit = args->hang_limit;
- sched->timeout_wq = args->timeout_wq ? args->timeout_wq : system_wq;
+ sched->timeout_wq = args->timeout_wq ? args->timeout_wq : system_percpu_wq;
sched->score = args->score ? args->score : &sched->_score;
sched->dev = args->dev;
diff --git a/drivers/gpu/drm/tilcdc/tilcdc_crtc.c b/drivers/gpu/drm/tilcdc/tilcdc_crtc.c
index b5f60b2b2d0e..57518a4ab4e1 100644
--- a/drivers/gpu/drm/tilcdc/tilcdc_crtc.c
+++ b/drivers/gpu/drm/tilcdc/tilcdc_crtc.c
@@ -985,7 +985,7 @@ irqreturn_t tilcdc_crtc_irq(struct drm_crtc *crtc)
dev_err(dev->dev,
"%s(0x%08x): Sync lost flood detected, recovering",
__func__, stat);
- queue_work(system_wq,
+ queue_work(system_percpu_wq,
&tilcdc_crtc->recover_work);
tilcdc_write(dev, LCDC_INT_ENABLE_CLR_REG,
LCDC_SYNC_LOST);
diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
index 37238a12baa5..4ee5f4d6371e 100644
--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
+++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
@@ -744,7 +744,7 @@ static void vc4_hdmi_enable_scrambling(struct drm_encoder *encoder)
vc4_hdmi->scdc_enabled = true;
- queue_delayed_work(system_wq, &vc4_hdmi->scrambling_work,
+ queue_delayed_work(system_percpu_wq, &vc4_hdmi->scrambling_work,
msecs_to_jiffies(SCRAMBLING_POLLING_DELAY_MS));
}
@@ -793,7 +793,7 @@ static void vc4_hdmi_scrambling_wq(struct work_struct *work)
drm_scdc_set_high_tmds_clock_ratio(connector, true);
drm_scdc_set_scrambling(connector, true);
- queue_delayed_work(system_wq, &vc4_hdmi->scrambling_work,
+ queue_delayed_work(system_percpu_wq, &vc4_hdmi->scrambling_work,
msecs_to_jiffies(SCRAMBLING_POLLING_DELAY_MS));
}
diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
index 03072e094991..2b27621a36e5 100644
--- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
+++ b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
@@ -99,7 +99,7 @@ static void xe_gt_tlb_fence_timeout(struct work_struct *work)
invalidation_fence_signal(xe, fence);
}
if (!list_empty(>->tlb_invalidation.pending_fences))
- queue_delayed_work(system_wq,
+ queue_delayed_work(system_percpu_wq,
>->tlb_invalidation.fence_tdr,
tlb_timeout_jiffies(gt));
spin_unlock_irq(>->tlb_invalidation.pending_lock);
@@ -218,7 +218,7 @@ static int send_tlb_invalidation(struct xe_guc *guc,
>->tlb_invalidation.pending_fences);
if (list_is_singular(>->tlb_invalidation.pending_fences))
- queue_delayed_work(system_wq,
+ queue_delayed_work(system_percpu_wq,
>->tlb_invalidation.fence_tdr,
tlb_timeout_jiffies(gt));
}
@@ -512,7 +512,7 @@ int xe_guc_tlb_invalidation_done_handler(struct xe_guc *guc, u32 *msg, u32 len)
}
if (!list_empty(>->tlb_invalidation.pending_fences))
- mod_delayed_work(system_wq,
+ mod_delayed_work(system_percpu_wq,
>->tlb_invalidation.fence_tdr,
tlb_timeout_jiffies(gt));
else
diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
index ffaf0d02dc7d..228e25e98be1 100644
--- a/drivers/gpu/drm/xe/xe_pt.c
+++ b/drivers/gpu/drm/xe/xe_pt.c
@@ -1474,7 +1474,7 @@ static void invalidation_fence_cb(struct dma_fence *fence,
trace_xe_gt_tlb_invalidation_fence_cb(xe, &ifence->base);
if (!ifence->fence->error) {
- queue_work(system_wq, &ifence->work);
+ queue_work(system_percpu_wq, &ifence->work);
} else {
ifence->base.base.error = ifence->fence->error;
xe_gt_tlb_invalidation_fence_signal(&ifence->base);
diff --git a/drivers/iio/adc/pac1934.c b/drivers/iio/adc/pac1934.c
index 20802b7f49ea..77f4679aadbd 100644
--- a/drivers/iio/adc/pac1934.c
+++ b/drivers/iio/adc/pac1934.c
@@ -767,7 +767,7 @@ static int pac1934_retrieve_data(struct pac1934_chip_info *info,
* Re-schedule the work for the read registers on timeout
* (to prevent chip registers saturation)
*/
- mod_delayed_work(system_wq, &info->work_chip_rfsh,
+ mod_delayed_work(system_percpu_wq, &info->work_chip_rfsh,
msecs_to_jiffies(PAC1934_MAX_RFSH_LIMIT_MS));
}
diff --git a/drivers/input/keyboard/gpio_keys.c b/drivers/input/keyboard/gpio_keys.c
index 5c39a217b94c..815f58e70671 100644
--- a/drivers/input/keyboard/gpio_keys.c
+++ b/drivers/input/keyboard/gpio_keys.c
@@ -434,7 +434,7 @@ static irqreturn_t gpio_keys_gpio_isr(int irq, void *dev_id)
ms_to_ktime(bdata->software_debounce),
HRTIMER_MODE_REL);
} else {
- mod_delayed_work(system_wq,
+ mod_delayed_work(system_percpu_wq,
&bdata->work,
msecs_to_jiffies(bdata->software_debounce));
}
diff --git a/drivers/input/misc/palmas-pwrbutton.c b/drivers/input/misc/palmas-pwrbutton.c
index 39fc451c56e9..2d471165334a 100644
--- a/drivers/input/misc/palmas-pwrbutton.c
+++ b/drivers/input/misc/palmas-pwrbutton.c
@@ -91,7 +91,7 @@ static irqreturn_t pwron_irq(int irq, void *palmas_pwron)
pm_wakeup_event(input_dev->dev.parent, 0);
input_sync(input_dev);
- mod_delayed_work(system_wq, &pwron->input_work,
+ mod_delayed_work(system_percpu_wq, &pwron->input_work,
msecs_to_jiffies(PALMAS_PWR_KEY_Q_TIME_MS));
return IRQ_HANDLED;
diff --git a/drivers/input/mouse/synaptics_i2c.c b/drivers/input/mouse/synaptics_i2c.c
index a0d707e47d93..d42c562c05e3 100644
--- a/drivers/input/mouse/synaptics_i2c.c
+++ b/drivers/input/mouse/synaptics_i2c.c
@@ -372,7 +372,7 @@ static irqreturn_t synaptics_i2c_irq(int irq, void *dev_id)
{
struct synaptics_i2c *touch = dev_id;
- mod_delayed_work(system_wq, &touch->dwork, 0);
+ mod_delayed_work(system_percpu_wq, &touch->dwork, 0);
return IRQ_HANDLED;
}
@@ -448,7 +448,7 @@ static void synaptics_i2c_work_handler(struct work_struct *work)
* We poll the device once in THREAD_IRQ_SLEEP_SECS and
* if error is detected, we try to reset and reconfigure the touchpad.
*/
- mod_delayed_work(system_wq, &touch->dwork, delay);
+ mod_delayed_work(system_percpu_wq, &touch->dwork, delay);
}
static int synaptics_i2c_open(struct input_dev *input)
@@ -461,7 +461,7 @@ static int synaptics_i2c_open(struct input_dev *input)
return ret;
if (polling_req)
- mod_delayed_work(system_wq, &touch->dwork,
+ mod_delayed_work(system_percpu_wq, &touch->dwork,
msecs_to_jiffies(NO_DATA_SLEEP_MSECS));
return 0;
@@ -620,7 +620,7 @@ static int synaptics_i2c_resume(struct device *dev)
if (ret)
return ret;
- mod_delayed_work(system_wq, &touch->dwork,
+ mod_delayed_work(system_percpu_wq, &touch->dwork,
msecs_to_jiffies(NO_DATA_SLEEP_MSECS));
return 0;
diff --git a/drivers/leds/trigger/ledtrig-input-events.c b/drivers/leds/trigger/ledtrig-input-events.c
index 1c79731562c2..3c6414259c27 100644
--- a/drivers/leds/trigger/ledtrig-input-events.c
+++ b/drivers/leds/trigger/ledtrig-input-events.c
@@ -66,7 +66,7 @@ static void input_events_event(struct input_handle *handle, unsigned int type,
spin_unlock_irqrestore(&data->lock, flags);
- mod_delayed_work(system_wq, &data->work, led_off_delay);
+ mod_delayed_work(system_percpu_wq, &data->work, led_off_delay);
}
static int input_events_connect(struct input_handler *handler, struct input_dev *dev,
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index e42f1400cea9..de0a8e5f5c49 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -1388,7 +1388,7 @@ static CLOSURE_CALLBACK(cached_dev_flush)
bch_cache_accounting_destroy(&dc->accounting);
kobject_del(&d->kobj);
- continue_at(cl, cached_dev_free, system_wq);
+ continue_at(cl, cached_dev_free, system_percpu_wq);
}
static int cached_dev_init(struct cached_dev *dc, unsigned int block_size)
@@ -1400,7 +1400,7 @@ static int cached_dev_init(struct cached_dev *dc, unsigned int block_size)
__module_get(THIS_MODULE);
INIT_LIST_HEAD(&dc->list);
closure_init(&dc->disk.cl, NULL);
- set_closure_fn(&dc->disk.cl, cached_dev_flush, system_wq);
+ set_closure_fn(&dc->disk.cl, cached_dev_flush, system_percpu_wq);
kobject_init(&dc->disk.kobj, &bch_cached_dev_ktype);
INIT_WORK(&dc->detach, cached_dev_detach_finish);
sema_init(&dc->sb_write_mutex, 1);
@@ -1513,7 +1513,7 @@ static CLOSURE_CALLBACK(flash_dev_flush)
bcache_device_unlink(d);
mutex_unlock(&bch_register_lock);
kobject_del(&d->kobj);
- continue_at(cl, flash_dev_free, system_wq);
+ continue_at(cl, flash_dev_free, system_percpu_wq);
}
static int flash_dev_run(struct cache_set *c, struct uuid_entry *u)
@@ -1525,7 +1525,7 @@ static int flash_dev_run(struct cache_set *c, struct uuid_entry *u)
goto err_ret;
closure_init(&d->cl, NULL);
- set_closure_fn(&d->cl, flash_dev_flush, system_wq);
+ set_closure_fn(&d->cl, flash_dev_flush, system_percpu_wq);
kobject_init(&d->kobj, &bch_flash_dev_ktype);
@@ -1828,7 +1828,7 @@ static CLOSURE_CALLBACK(__cache_set_unregister)
mutex_unlock(&bch_register_lock);
- continue_at(cl, cache_set_flush, system_wq);
+ continue_at(cl, cache_set_flush, system_percpu_wq);
}
void bch_cache_set_stop(struct cache_set *c)
@@ -1858,10 +1858,10 @@ struct cache_set *bch_cache_set_alloc(struct cache_sb *sb)
__module_get(THIS_MODULE);
closure_init(&c->cl, NULL);
- set_closure_fn(&c->cl, cache_set_free, system_wq);
+ set_closure_fn(&c->cl, cache_set_free, system_percpu_wq);
closure_init(&c->caching, &c->cl);
- set_closure_fn(&c->caching, __cache_set_unregister, system_wq);
+ set_closure_fn(&c->caching, __cache_set_unregister, system_percpu_wq);
/* Maybe create continue_at_noreturn() and use it here? */
closure_set_stopped(&c->cl);
@@ -2493,7 +2493,7 @@ static void register_device_async(struct async_reg_args *args)
INIT_DELAYED_WORK(&args->reg_work, register_cache_worker);
/* 10 jiffies is enough for a delay */
- queue_delayed_work(system_wq, &args->reg_work, 10);
+ queue_delayed_work(system_percpu_wq, &args->reg_work, 10);
}
static void *alloc_holder_object(struct cache_sb *sb)
@@ -2874,11 +2874,11 @@ static int __init bcache_init(void)
/*
* Let's not make this `WQ_MEM_RECLAIM` for the following reasons:
*
- * 1. It used `system_wq` before which also does no memory reclaim.
+ * 1. It used `system_percpu_wq` before which also does no memory reclaim.
* 2. With `WQ_MEM_RECLAIM` desktop stalls, increased boot times, and
* reduced throughput can be observed.
*
- * We still want to user our own queue to not congest the `system_wq`.
+ * We still want to user our own queue to not congest the `system_percpu_wq`.
*/
bch_flush_wq = alloc_workqueue("bch_flush", 0, 0);
if (!bch_flush_wq)
diff --git a/drivers/mmc/host/mtk-sd.c b/drivers/mmc/host/mtk-sd.c
index 345ea91629e0..f99fdef0253d 100644
--- a/drivers/mmc/host/mtk-sd.c
+++ b/drivers/mmc/host/mtk-sd.c
@@ -1190,7 +1190,7 @@ static void msdc_start_data(struct msdc_host *host, struct mmc_command *cmd,
host->data = data;
read = data->flags & MMC_DATA_READ;
- mod_delayed_work(system_wq, &host->req_timeout, DAT_TIMEOUT);
+ mod_delayed_work(system_percpu_wq, &host->req_timeout, DAT_TIMEOUT);
msdc_dma_setup(host, &host->dma, data);
sdr_set_bits(host->base + MSDC_INTEN, data_ints_mask);
sdr_set_field(host->base + MSDC_DMA_CTRL, MSDC_DMA_CTRL_START, 1);
@@ -1420,7 +1420,7 @@ static void msdc_start_command(struct msdc_host *host,
WARN_ON(host->cmd);
host->cmd = cmd;
- mod_delayed_work(system_wq, &host->req_timeout, DAT_TIMEOUT);
+ mod_delayed_work(system_percpu_wq, &host->req_timeout, DAT_TIMEOUT);
if (!msdc_cmd_is_ready(host, mrq, cmd))
return;
diff --git a/drivers/net/ethernet/sfc/efx_channels.c b/drivers/net/ethernet/sfc/efx_channels.c
index 06b4f52713ef..4fba49d4f36c 100644
--- a/drivers/net/ethernet/sfc/efx_channels.c
+++ b/drivers/net/ethernet/sfc/efx_channels.c
@@ -1281,7 +1281,7 @@ static int efx_poll(struct napi_struct *napi, int budget)
time = jiffies - channel->rfs_last_expiry;
/* Would our quota be >= 20? */
if (channel->rfs_filter_count * time >= 600 * HZ)
- mod_delayed_work(system_wq, &channel->filter_work, 0);
+ mod_delayed_work(system_percpu_wq, &channel->filter_work, 0);
#endif
/* There is no race here; although napi_disable() will
diff --git a/drivers/net/ethernet/sfc/siena/efx_channels.c b/drivers/net/ethernet/sfc/siena/efx_channels.c
index d120b3c83ac0..2039083205bb 100644
--- a/drivers/net/ethernet/sfc/siena/efx_channels.c
+++ b/drivers/net/ethernet/sfc/siena/efx_channels.c
@@ -1300,7 +1300,7 @@ static int efx_poll(struct napi_struct *napi, int budget)
time = jiffies - channel->rfs_last_expiry;
/* Would our quota be >= 20? */
if (channel->rfs_filter_count * time >= 600 * HZ)
- mod_delayed_work(system_wq, &channel->filter_work, 0);
+ mod_delayed_work(system_percpu_wq, &channel->filter_work, 0);
#endif
/* There is no race here; although napi_disable() will
diff --git a/drivers/net/phy/sfp.c b/drivers/net/phy/sfp.c
index 347c1e0e94d9..19fcff02db51 100644
--- a/drivers/net/phy/sfp.c
+++ b/drivers/net/phy/sfp.c
@@ -890,7 +890,7 @@ static void sfp_soft_start_poll(struct sfp *sfp)
if (sfp->state_soft_mask & (SFP_F_LOS | SFP_F_TX_FAULT) &&
!sfp->need_poll)
- mod_delayed_work(system_wq, &sfp->poll, poll_jiffies);
+ mod_delayed_work(system_percpu_wq, &sfp->poll, poll_jiffies);
mutex_unlock(&sfp->st_mutex);
}
@@ -1661,7 +1661,7 @@ static void sfp_hwmon_probe(struct work_struct *work)
err = sfp_read(sfp, true, 0, &sfp->diag, sizeof(sfp->diag));
if (err < 0) {
if (sfp->hwmon_tries--) {
- mod_delayed_work(system_wq, &sfp->hwmon_probe,
+ mod_delayed_work(system_percpu_wq, &sfp->hwmon_probe,
T_PROBE_RETRY_SLOW);
} else {
dev_warn(sfp->dev, "hwmon probe failed: %pe\n",
@@ -1688,7 +1688,7 @@ static void sfp_hwmon_probe(struct work_struct *work)
static int sfp_hwmon_insert(struct sfp *sfp)
{
if (sfp->have_a2 && sfp->id.ext.diagmon & SFP_DIAGMON_DDM) {
- mod_delayed_work(system_wq, &sfp->hwmon_probe, 1);
+ mod_delayed_work(system_percpu_wq, &sfp->hwmon_probe, 1);
sfp->hwmon_tries = R_PROBE_RETRY_SLOW;
}
@@ -2542,7 +2542,7 @@ static void sfp_sm_module(struct sfp *sfp, unsigned int event)
/* Force a poll to re-read the hardware signal state after
* sfp_sm_mod_probe() changed state_hw_mask.
*/
- mod_delayed_work(system_wq, &sfp->poll, 1);
+ mod_delayed_work(system_percpu_wq, &sfp->poll, 1);
err = sfp_hwmon_insert(sfp);
if (err)
@@ -2987,7 +2987,7 @@ static void sfp_poll(struct work_struct *work)
// it's unimportant if we race while reading this.
if (sfp->state_soft_mask & (SFP_F_LOS | SFP_F_TX_FAULT) ||
sfp->need_poll)
- mod_delayed_work(system_wq, &sfp->poll, poll_jiffies);
+ mod_delayed_work(system_percpu_wq, &sfp->poll, poll_jiffies);
}
static struct sfp *sfp_alloc(struct device *dev)
@@ -3157,7 +3157,7 @@ static int sfp_probe(struct platform_device *pdev)
}
if (sfp->need_poll)
- mod_delayed_work(system_wq, &sfp->poll, poll_jiffies);
+ mod_delayed_work(system_percpu_wq, &sfp->poll, poll_jiffies);
/* We could have an issue in cases no Tx disable pin is available or
* wired as modules using a laser as their light source will continue to
diff --git a/drivers/net/wireless/intel/ipw2x00/ipw2100.c b/drivers/net/wireless/intel/ipw2x00/ipw2100.c
index 215814861cbd..c7c5bc0f1650 100644
--- a/drivers/net/wireless/intel/ipw2x00/ipw2100.c
+++ b/drivers/net/wireless/intel/ipw2x00/ipw2100.c
@@ -2143,7 +2143,7 @@ static void isr_indicate_rf_kill(struct ipw2100_priv *priv, u32 status)
/* Make sure the RF Kill check timer is running */
priv->stop_rf_kill = 0;
- mod_delayed_work(system_wq, &priv->rf_kill, round_jiffies_relative(HZ));
+ mod_delayed_work(system_percpu_wq, &priv->rf_kill, round_jiffies_relative(HZ));
}
static void ipw2100_scan_event(struct work_struct *work)
@@ -2170,7 +2170,7 @@ static void isr_scan_complete(struct ipw2100_priv *priv, u32 status)
round_jiffies_relative(msecs_to_jiffies(4000)));
} else {
priv->user_requested_scan = 0;
- mod_delayed_work(system_wq, &priv->scan_event, 0);
+ mod_delayed_work(system_percpu_wq, &priv->scan_event, 0);
}
}
@@ -4252,7 +4252,7 @@ static int ipw_radio_kill_sw(struct ipw2100_priv *priv, int disable_radio)
"disabled by HW switch\n");
/* Make sure the RF_KILL check timer is running */
priv->stop_rf_kill = 0;
- mod_delayed_work(system_wq, &priv->rf_kill,
+ mod_delayed_work(system_percpu_wq, &priv->rf_kill,
round_jiffies_relative(HZ));
} else
schedule_reset(priv);
diff --git a/drivers/net/wireless/intel/ipw2x00/ipw2200.c b/drivers/net/wireless/intel/ipw2x00/ipw2200.c
index 24a5624ef207..09035a77e775 100644
--- a/drivers/net/wireless/intel/ipw2x00/ipw2200.c
+++ b/drivers/net/wireless/intel/ipw2x00/ipw2200.c
@@ -4415,7 +4415,7 @@ static void handle_scan_event(struct ipw_priv *priv)
round_jiffies_relative(msecs_to_jiffies(4000)));
} else {
priv->user_requested_scan = 0;
- mod_delayed_work(system_wq, &priv->scan_event, 0);
+ mod_delayed_work(system_percpu_wq, &priv->scan_event, 0);
}
}
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/tdls.c b/drivers/net/wireless/intel/iwlwifi/mvm/tdls.c
index 36379b738de1..0df31639fa5e 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/tdls.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/tdls.c
@@ -234,7 +234,7 @@ void iwl_mvm_rx_tdls_notif(struct iwl_mvm *mvm, struct iwl_rx_cmd_buffer *rxb)
* Also convert TU to msec.
*/
delay = TU_TO_MS(vif->bss_conf.dtim_period * vif->bss_conf.beacon_int);
- mod_delayed_work(system_wq, &mvm->tdls_cs.dwork,
+ mod_delayed_work(system_percpu_wq, &mvm->tdls_cs.dwork,
msecs_to_jiffies(delay));
iwl_mvm_tdls_update_cs_state(mvm, IWL_MVM_TDLS_SW_ACTIVE);
@@ -548,7 +548,7 @@ iwl_mvm_tdls_channel_switch(struct ieee80211_hw *hw,
*/
delay = 2 * TU_TO_MS(vif->bss_conf.dtim_period *
vif->bss_conf.beacon_int);
- mod_delayed_work(system_wq, &mvm->tdls_cs.dwork,
+ mod_delayed_work(system_percpu_wq, &mvm->tdls_cs.dwork,
msecs_to_jiffies(delay));
return 0;
}
@@ -659,6 +659,6 @@ iwl_mvm_tdls_recv_channel_switch(struct ieee80211_hw *hw,
/* register a timeout in case we don't succeed in switching */
delay = vif->bss_conf.dtim_period * vif->bss_conf.beacon_int *
1024 / 1000;
- mod_delayed_work(system_wq, &mvm->tdls_cs.dwork,
+ mod_delayed_work(system_percpu_wq, &mvm->tdls_cs.dwork,
msecs_to_jiffies(delay));
}
diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/init.c b/drivers/net/wireless/mediatek/mt76/mt7921/init.c
index 14e17dc90256..cb97f69a9149 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7921/init.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7921/init.c
@@ -341,7 +341,7 @@ int mt7921_register_device(struct mt792x_dev *dev)
dev->mphy.hw->wiphy->available_antennas_rx = dev->mphy.chainmask;
dev->mphy.hw->wiphy->available_antennas_tx = dev->mphy.chainmask;
- queue_work(system_wq, &dev->init_work);
+ queue_work(system_percpu_wq, &dev->init_work);
return 0;
}
diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/init.c b/drivers/net/wireless/mediatek/mt76/mt7925/init.c
index 63cb08f4d87c..090ecd1f2a0a 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7925/init.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7925/init.c
@@ -410,7 +410,7 @@ int mt7925_register_device(struct mt792x_dev *dev)
dev->mphy.hw->wiphy->available_antennas_rx = dev->mphy.chainmask;
dev->mphy.hw->wiphy->available_antennas_tx = dev->mphy.chainmask;
- queue_work(system_wq, &dev->init_work);
+ queue_work(system_percpu_wq, &dev->init_work);
return 0;
}
diff --git a/drivers/nvdimm/security.c b/drivers/nvdimm/security.c
index a03e3c45f297..c8095cd1cf1c 100644
--- a/drivers/nvdimm/security.c
+++ b/drivers/nvdimm/security.c
@@ -427,7 +427,7 @@ static int security_overwrite(struct nvdimm *nvdimm, unsigned int keyid)
* query.
*/
get_device(dev);
- queue_delayed_work(system_wq, &nvdimm->dwork, 0);
+ queue_delayed_work(system_percpu_wq, &nvdimm->dwork, 0);
}
return rc;
@@ -460,7 +460,7 @@ static void __nvdimm_security_overwrite_query(struct nvdimm *nvdimm)
/* setup delayed work again */
tmo += 10;
- queue_delayed_work(system_wq, &nvdimm->dwork, tmo * HZ);
+ queue_delayed_work(system_percpu_wq, &nvdimm->dwork, tmo * HZ);
nvdimm->sec.overwrite_tmo = min(15U * 60U, tmo);
return;
}
diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
index acc138bbf8f2..af3ec44a6490 100644
--- a/drivers/nvme/target/admin-cmd.c
+++ b/drivers/nvme/target/admin-cmd.c
@@ -1613,7 +1613,7 @@ void nvmet_execute_keep_alive(struct nvmet_req *req)
pr_debug("ctrl %d update keep-alive timer for %d secs\n",
ctrl->cntlid, ctrl->kato);
- mod_delayed_work(system_wq, &ctrl->ka_work, ctrl->kato * HZ);
+ mod_delayed_work(system_percpu_wq, &ctrl->ka_work, ctrl->kato * HZ);
out:
nvmet_req_complete(req, status);
}
diff --git a/drivers/nvme/target/fabrics-cmd-auth.c b/drivers/nvme/target/fabrics-cmd-auth.c
index bf01ec414c55..8f504bf891de 100644
--- a/drivers/nvme/target/fabrics-cmd-auth.c
+++ b/drivers/nvme/target/fabrics-cmd-auth.c
@@ -390,7 +390,7 @@ void nvmet_execute_auth_send(struct nvmet_req *req)
req->sq->dhchap_step != NVME_AUTH_DHCHAP_MESSAGE_FAILURE2) {
unsigned long auth_expire_secs = ctrl->kato ? ctrl->kato : 120;
- mod_delayed_work(system_wq, &req->sq->auth_expired_work,
+ mod_delayed_work(system_percpu_wq, &req->sq->auth_expired_work,
auth_expire_secs * HZ);
goto complete;
}
diff --git a/drivers/pci/endpoint/pci-ep-cfs.c b/drivers/pci/endpoint/pci-ep-cfs.c
index d712c7a866d2..45462af6100d 100644
--- a/drivers/pci/endpoint/pci-ep-cfs.c
+++ b/drivers/pci/endpoint/pci-ep-cfs.c
@@ -638,7 +638,7 @@ static struct config_group *pci_epf_make(struct config_group *group,
kfree(epf_name);
INIT_DELAYED_WORK(&epf_group->cfs_work, pci_epf_cfs_work);
- queue_delayed_work(system_wq, &epf_group->cfs_work,
+ queue_delayed_work(system_percpu_wq, &epf_group->cfs_work,
msecs_to_jiffies(1));
return &epf_group->group;
diff --git a/drivers/phy/allwinner/phy-sun4i-usb.c b/drivers/phy/allwinner/phy-sun4i-usb.c
index 29b8fd4b9351..0f9887fda584 100644
--- a/drivers/phy/allwinner/phy-sun4i-usb.c
+++ b/drivers/phy/allwinner/phy-sun4i-usb.c
@@ -359,7 +359,7 @@ static int sun4i_usb_phy_init(struct phy *_phy)
/* Force ISCR and cable state updates */
data->id_det = -1;
data->vbus_det = -1;
- queue_delayed_work(system_wq, &data->detect, 0);
+ queue_delayed_work(system_percpu_wq, &data->detect, 0);
}
return 0;
@@ -482,7 +482,7 @@ static int sun4i_usb_phy_power_on(struct phy *_phy)
/* We must report Vbus high within OTG_TIME_A_WAIT_VRISE msec. */
if (phy->index == 0 && sun4i_usb_phy0_poll(data))
- mod_delayed_work(system_wq, &data->detect, DEBOUNCE_TIME);
+ mod_delayed_work(system_percpu_wq, &data->detect, DEBOUNCE_TIME);
return 0;
}
@@ -503,7 +503,7 @@ static int sun4i_usb_phy_power_off(struct phy *_phy)
* Vbus gpio to not trigger an edge irq on Vbus off, so force a rescan.
*/
if (phy->index == 0 && !sun4i_usb_phy0_poll(data))
- mod_delayed_work(system_wq, &data->detect, POLL_TIME);
+ mod_delayed_work(system_percpu_wq, &data->detect, POLL_TIME);
return 0;
}
@@ -542,7 +542,7 @@ static int sun4i_usb_phy_set_mode(struct phy *_phy,
data->id_det = -1; /* Force reprocessing of id */
data->force_session_end = true;
- queue_delayed_work(system_wq, &data->detect, 0);
+ queue_delayed_work(system_percpu_wq, &data->detect, 0);
return 0;
}
@@ -654,7 +654,7 @@ static void sun4i_usb_phy0_id_vbus_det_scan(struct work_struct *work)
extcon_set_state_sync(data->extcon, EXTCON_USB, vbus_det);
if (sun4i_usb_phy0_poll(data))
- queue_delayed_work(system_wq, &data->detect, POLL_TIME);
+ queue_delayed_work(system_percpu_wq, &data->detect, POLL_TIME);
}
static irqreturn_t sun4i_usb_phy0_id_vbus_det_irq(int irq, void *dev_id)
@@ -662,7 +662,7 @@ static irqreturn_t sun4i_usb_phy0_id_vbus_det_irq(int irq, void *dev_id)
struct sun4i_usb_phy_data *data = dev_id;
/* vbus or id changed, let the pins settle and then scan them */
- mod_delayed_work(system_wq, &data->detect, DEBOUNCE_TIME);
+ mod_delayed_work(system_percpu_wq, &data->detect, DEBOUNCE_TIME);
return IRQ_HANDLED;
}
@@ -676,7 +676,7 @@ static int sun4i_usb_phy0_vbus_notify(struct notifier_block *nb,
/* Properties on the vbus_power_supply changed, scan vbus_det */
if (val == PSY_EVENT_PROP_CHANGED && psy == data->vbus_power_supply)
- mod_delayed_work(system_wq, &data->detect, DEBOUNCE_TIME);
+ mod_delayed_work(system_percpu_wq, &data->detect, DEBOUNCE_TIME);
return NOTIFY_OK;
}
diff --git a/drivers/platform/cznic/turris-omnia-mcu-gpio.c b/drivers/platform/cznic/turris-omnia-mcu-gpio.c
index 5f35f7c5d5d7..18f7e1c41a86 100644
--- a/drivers/platform/cznic/turris-omnia-mcu-gpio.c
+++ b/drivers/platform/cznic/turris-omnia-mcu-gpio.c
@@ -883,7 +883,7 @@ static bool omnia_irq_read_pending_old(struct omnia_mcu *mcu,
if (status & OMNIA_STS_BUTTON_PRESSED) {
mcu->button_pressed_emul = true;
- mod_delayed_work(system_wq, &mcu->button_release_emul_work,
+ mod_delayed_work(system_percpu_wq, &mcu->button_release_emul_work,
msecs_to_jiffies(FRONT_BUTTON_RELEASE_DELAY_MS));
} else if (mcu->button_pressed_emul) {
status |= OMNIA_STS_BUTTON_PRESSED;
diff --git a/drivers/platform/surface/aggregator/ssh_packet_layer.c b/drivers/platform/surface/aggregator/ssh_packet_layer.c
index 6081b0146d5f..3dd22856570f 100644
--- a/drivers/platform/surface/aggregator/ssh_packet_layer.c
+++ b/drivers/platform/surface/aggregator/ssh_packet_layer.c
@@ -671,7 +671,7 @@ static void ssh_ptl_timeout_reaper_mod(struct ssh_ptl *ptl, ktime_t now,
/* Re-adjust / schedule reaper only if it is above resolution delta. */
if (ktime_before(aexp, ptl->rtx_timeout.expires)) {
ptl->rtx_timeout.expires = expires;
- mod_delayed_work(system_wq, &ptl->rtx_timeout.reaper, delta);
+ mod_delayed_work(system_percpu_wq, &ptl->rtx_timeout.reaper, delta);
}
spin_unlock(&ptl->rtx_timeout.lock);
diff --git a/drivers/platform/surface/aggregator/ssh_request_layer.c b/drivers/platform/surface/aggregator/ssh_request_layer.c
index 879ca9ee7ff6..a356e4956562 100644
--- a/drivers/platform/surface/aggregator/ssh_request_layer.c
+++ b/drivers/platform/surface/aggregator/ssh_request_layer.c
@@ -434,7 +434,7 @@ static void ssh_rtl_timeout_reaper_mod(struct ssh_rtl *rtl, ktime_t now,
/* Re-adjust / schedule reaper only if it is above resolution delta. */
if (ktime_before(aexp, rtl->rtx_timeout.expires)) {
rtl->rtx_timeout.expires = expires;
- mod_delayed_work(system_wq, &rtl->rtx_timeout.reaper, delta);
+ mod_delayed_work(system_percpu_wq, &rtl->rtx_timeout.reaper, delta);
}
spin_unlock(&rtl->rtx_timeout.lock);
diff --git a/drivers/platform/x86/gpd-pocket-fan.c b/drivers/platform/x86/gpd-pocket-fan.c
index 7a20f68ae206..c9236738f896 100644
--- a/drivers/platform/x86/gpd-pocket-fan.c
+++ b/drivers/platform/x86/gpd-pocket-fan.c
@@ -112,14 +112,14 @@ static void gpd_pocket_fan_worker(struct work_struct *work)
gpd_pocket_fan_set_speed(fan, speed);
/* When mostly idle (low temp/speed), slow down the poll interval. */
- queue_delayed_work(system_wq, &fan->work,
+ queue_delayed_work(system_percpu_wq, &fan->work,
msecs_to_jiffies(4000 / (speed + 1)));
}
static void gpd_pocket_fan_force_update(struct gpd_pocket_fan_data *fan)
{
fan->last_speed = -1;
- mod_delayed_work(system_wq, &fan->work, 0);
+ mod_delayed_work(system_percpu_wq, &fan->work, 0);
}
static int gpd_pocket_fan_probe(struct platform_device *pdev)
diff --git a/drivers/platform/x86/x86-android-tablets/vexia_atla10_ec.c b/drivers/platform/x86/x86-android-tablets/vexia_atla10_ec.c
index 5d02af1c5aaa..94465a62f7e7 100644
--- a/drivers/platform/x86/x86-android-tablets/vexia_atla10_ec.c
+++ b/drivers/platform/x86/x86-android-tablets/vexia_atla10_ec.c
@@ -183,7 +183,7 @@ static void atla10_ec_external_power_changed(struct power_supply *psy)
struct atla10_ec_data *data = power_supply_get_drvdata(psy);
/* After charger plug in/out wait 0.5s for things to stabilize */
- mod_delayed_work(system_wq, &data->work, HZ / 2);
+ mod_delayed_work(system_percpu_wq, &data->work, HZ / 2);
}
static const enum power_supply_property atla10_ec_psy_props[] = {
diff --git a/drivers/power/supply/bq2415x_charger.c b/drivers/power/supply/bq2415x_charger.c
index 9e3b9181ee76..03837c831643 100644
--- a/drivers/power/supply/bq2415x_charger.c
+++ b/drivers/power/supply/bq2415x_charger.c
@@ -842,7 +842,7 @@ static int bq2415x_notifier_call(struct notifier_block *nb,
if (bq->automode < 1)
return NOTIFY_OK;
- mod_delayed_work(system_wq, &bq->work, 0);
+ mod_delayed_work(system_percpu_wq, &bq->work, 0);
return NOTIFY_OK;
}
diff --git a/drivers/power/supply/bq24190_charger.c b/drivers/power/supply/bq24190_charger.c
index f0d97ab45bd8..a19fca6d0a29 100644
--- a/drivers/power/supply/bq24190_charger.c
+++ b/drivers/power/supply/bq24190_charger.c
@@ -1474,7 +1474,7 @@ static void bq24190_charger_external_power_changed(struct power_supply *psy)
* too low default 500mA iinlim. Delay setting the input-current-limit
* for 300ms to avoid this.
*/
- queue_delayed_work(system_wq, &bdi->input_current_limit_work,
+ queue_delayed_work(system_percpu_wq, &bdi->input_current_limit_work,
msecs_to_jiffies(300));
}
diff --git a/drivers/power/supply/bq27xxx_battery.c b/drivers/power/supply/bq27xxx_battery.c
index 2f31d750a4c1..d670ccf9661b 100644
--- a/drivers/power/supply/bq27xxx_battery.c
+++ b/drivers/power/supply/bq27xxx_battery.c
@@ -1127,7 +1127,7 @@ static int poll_interval_param_set(const char *val, const struct kernel_param *k
mutex_lock(&bq27xxx_list_lock);
list_for_each_entry(di, &bq27xxx_battery_devices, list)
- mod_delayed_work(system_wq, &di->work, 0);
+ mod_delayed_work(system_percpu_wq, &di->work, 0);
mutex_unlock(&bq27xxx_list_lock);
return ret;
@@ -1945,7 +1945,7 @@ static void bq27xxx_battery_update_unlocked(struct bq27xxx_device_info *di)
di->last_update = jiffies;
if (!di->removed && poll_interval > 0)
- mod_delayed_work(system_wq, &di->work, poll_interval * HZ);
+ mod_delayed_work(system_percpu_wq, &di->work, poll_interval * HZ);
}
void bq27xxx_battery_update(struct bq27xxx_device_info *di)
@@ -2221,7 +2221,7 @@ static void bq27xxx_external_power_changed(struct power_supply *psy)
struct bq27xxx_device_info *di = power_supply_get_drvdata(psy);
/* After charger plug in/out wait 0.5s for things to stabilize */
- mod_delayed_work(system_wq, &di->work, HZ / 2);
+ mod_delayed_work(system_percpu_wq, &di->work, HZ / 2);
}
static void bq27xxx_battery_mutex_destroy(void *data)
diff --git a/drivers/power/supply/rk817_charger.c b/drivers/power/supply/rk817_charger.c
index 945c7720c4ae..032b191ddbf5 100644
--- a/drivers/power/supply/rk817_charger.c
+++ b/drivers/power/supply/rk817_charger.c
@@ -1046,7 +1046,7 @@ static void rk817_charging_monitor(struct work_struct *work)
rk817_read_props(charger);
/* Run every 8 seconds like the BSP driver did. */
- queue_delayed_work(system_wq, &charger->work, msecs_to_jiffies(8000));
+ queue_delayed_work(system_percpu_wq, &charger->work, msecs_to_jiffies(8000));
}
static void rk817_cleanup_node(void *data)
@@ -1206,7 +1206,7 @@ static int rk817_charger_probe(struct platform_device *pdev)
return ret;
/* Force the first update immediately. */
- mod_delayed_work(system_wq, &charger->work, 0);
+ mod_delayed_work(system_percpu_wq, &charger->work, 0);
return 0;
}
@@ -1226,7 +1226,7 @@ static int __maybe_unused rk817_resume(struct device *dev)
struct rk817_charger *charger = dev_get_drvdata(dev);
/* force an immediate update */
- mod_delayed_work(system_wq, &charger->work, 0);
+ mod_delayed_work(system_percpu_wq, &charger->work, 0);
return 0;
}
diff --git a/drivers/power/supply/ucs1002_power.c b/drivers/power/supply/ucs1002_power.c
index d32a7633f9e7..fe94435340de 100644
--- a/drivers/power/supply/ucs1002_power.c
+++ b/drivers/power/supply/ucs1002_power.c
@@ -493,7 +493,7 @@ static irqreturn_t ucs1002_alert_irq(int irq, void *data)
{
struct ucs1002_info *info = data;
- mod_delayed_work(system_wq, &info->health_poll, 0);
+ mod_delayed_work(system_percpu_wq, &info->health_poll, 0);
return IRQ_HANDLED;
}
diff --git a/drivers/power/supply/ug3105_battery.c b/drivers/power/supply/ug3105_battery.c
index 38e23bdd4603..15b62952f953 100644
--- a/drivers/power/supply/ug3105_battery.c
+++ b/drivers/power/supply/ug3105_battery.c
@@ -276,7 +276,7 @@ static void ug3105_work(struct work_struct *work)
out:
mutex_unlock(&chip->lock);
- queue_delayed_work(system_wq, &chip->work,
+ queue_delayed_work(system_percpu_wq, &chip->work,
(chip->poll_count <= UG3105_INIT_POLL_COUNT) ?
UG3105_INIT_POLL_TIME : UG3105_POLL_TIME);
@@ -352,7 +352,7 @@ static void ug3105_external_power_changed(struct power_supply *psy)
struct ug3105_chip *chip = power_supply_get_drvdata(psy);
dev_dbg(&chip->client->dev, "external power changed\n");
- mod_delayed_work(system_wq, &chip->work, UG3105_SETTLE_TIME);
+ mod_delayed_work(system_percpu_wq, &chip->work, UG3105_SETTLE_TIME);
}
static const struct power_supply_desc ug3105_psy_desc = {
@@ -373,7 +373,7 @@ static void ug3105_init(struct ug3105_chip *chip)
UG3105_MODE_RUN);
i2c_smbus_write_byte_data(chip->client, UG3105_REG_CTRL1,
UG3105_CTRL1_RESET_COULOMB_CNT);
- queue_delayed_work(system_wq, &chip->work, 0);
+ queue_delayed_work(system_percpu_wq, &chip->work, 0);
flush_delayed_work(&chip->work);
}
diff --git a/drivers/ras/cec.c b/drivers/ras/cec.c
index e440b15fbabc..15f7f043c8ef 100644
--- a/drivers/ras/cec.c
+++ b/drivers/ras/cec.c
@@ -166,7 +166,7 @@ static void cec_mod_work(unsigned long interval)
unsigned long iv;
iv = interval * HZ;
- mod_delayed_work(system_wq, &cec_work, round_jiffies(iv));
+ mod_delayed_work(system_percpu_wq, &cec_work, round_jiffies(iv));
}
static void cec_work_fn(struct work_struct *work)
diff --git a/drivers/regulator/irq_helpers.c b/drivers/regulator/irq_helpers.c
index 5742faee8071..54dd19e1e94c 100644
--- a/drivers/regulator/irq_helpers.c
+++ b/drivers/regulator/irq_helpers.c
@@ -146,7 +146,7 @@ static void regulator_notifier_isr_work(struct work_struct *work)
reschedule:
if (!d->high_prio)
- mod_delayed_work(system_wq, &h->isr_work,
+ mod_delayed_work(system_percpu_wq, &h->isr_work,
msecs_to_jiffies(tmo));
else
mod_delayed_work(system_highpri_wq, &h->isr_work,
diff --git a/drivers/regulator/qcom-labibb-regulator.c b/drivers/regulator/qcom-labibb-regulator.c
index ba3f9391565f..ad65d264cfe0 100644
--- a/drivers/regulator/qcom-labibb-regulator.c
+++ b/drivers/regulator/qcom-labibb-regulator.c
@@ -230,7 +230,7 @@ static void qcom_labibb_ocp_recovery_worker(struct work_struct *work)
return;
reschedule:
- mod_delayed_work(system_wq, &vreg->ocp_recovery_work,
+ mod_delayed_work(system_percpu_wq, &vreg->ocp_recovery_work,
msecs_to_jiffies(OCP_RECOVERY_INTERVAL_MS));
}
@@ -510,7 +510,7 @@ static void qcom_labibb_sc_recovery_worker(struct work_struct *work)
* taking action is not truly urgent anymore.
*/
vreg->sc_count++;
- mod_delayed_work(system_wq, &vreg->sc_recovery_work,
+ mod_delayed_work(system_percpu_wq, &vreg->sc_recovery_work,
msecs_to_jiffies(SC_RECOVERY_INTERVAL_MS));
}
diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index 8c527af98927..e842dda55f71 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -2617,7 +2617,7 @@ static int tb_alloc_dp_bandwidth(struct tb_tunnel *tunnel, int *requested_up,
* the 10s already expired and we should
* give the reserved back to others).
*/
- mod_delayed_work(system_wq, &group->release_work,
+ mod_delayed_work(system_percpu_wq, &group->release_work,
msecs_to_jiffies(TB_RELEASE_BW_TIMEOUT));
}
}
diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
index 47e73c4ed62d..17c6fb417231 100644
--- a/drivers/usb/dwc3/gadget.c
+++ b/drivers/usb/dwc3/gadget.c
@@ -3888,7 +3888,7 @@ static void dwc3_gadget_endpoint_stream_event(struct dwc3_ep *dep,
case DEPEVT_STREAM_NOSTREAM:
dep->flags &= ~DWC3_EP_STREAM_PRIMED;
if (dep->flags & DWC3_EP_FORCE_RESTART_STREAM)
- queue_delayed_work(system_wq, &dep->nostream_work,
+ queue_delayed_work(system_percpu_wq, &dep->nostream_work,
msecs_to_jiffies(100));
break;
}
diff --git a/drivers/usb/host/xhci-dbgcap.c b/drivers/usb/host/xhci-dbgcap.c
index fd7895b24367..8b3052954530 100644
--- a/drivers/usb/host/xhci-dbgcap.c
+++ b/drivers/usb/host/xhci-dbgcap.c
@@ -365,7 +365,7 @@ int dbc_ep_queue(struct dbc_request *req)
ret = dbc_ep_do_queue(req);
spin_unlock_irqrestore(&dbc->lock, flags);
- mod_delayed_work(system_wq, &dbc->event_work, 0);
+ mod_delayed_work(system_percpu_wq, &dbc->event_work, 0);
trace_xhci_dbc_queue_request(req);
@@ -637,7 +637,7 @@ static int xhci_dbc_start(struct xhci_dbc *dbc)
return ret;
}
- return mod_delayed_work(system_wq, &dbc->event_work,
+ return mod_delayed_work(system_percpu_wq, &dbc->event_work,
msecs_to_jiffies(dbc->poll_interval));
}
@@ -964,7 +964,7 @@ static void xhci_dbc_handle_events(struct work_struct *work)
return;
}
- mod_delayed_work(system_wq, &dbc->event_work,
+ mod_delayed_work(system_percpu_wq, &dbc->event_work,
msecs_to_jiffies(poll_interval));
}
@@ -1215,7 +1215,7 @@ static ssize_t dbc_poll_interval_ms_store(struct device *dev,
dbc->poll_interval = value;
- mod_delayed_work(system_wq, &dbc->event_work, 0);
+ mod_delayed_work(system_percpu_wq, &dbc->event_work, 0);
return size;
}
diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
index 5d64c297721c..79704fbbba50 100644
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -434,7 +434,7 @@ void xhci_ring_cmd_db(struct xhci_hcd *xhci)
static bool xhci_mod_cmd_timer(struct xhci_hcd *xhci)
{
- return mod_delayed_work(system_wq, &xhci->cmd_timer,
+ return mod_delayed_work(system_percpu_wq, &xhci->cmd_timer,
msecs_to_jiffies(xhci->current_cmd->timeout_ms));
}
diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index 41309d38f78c..114c2af0857a 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -581,7 +581,7 @@ static void lateeoi_list_add(struct irq_info *info)
eoi_list);
if (!elem || info->eoi_time < elem->eoi_time) {
list_add(&info->eoi_list, &eoi->eoi_list);
- mod_delayed_work_on(info->eoi_cpu, system_wq,
+ mod_delayed_work_on(info->eoi_cpu, system_percpu_wq,
&eoi->delayed, delay);
} else {
list_for_each_entry_reverse(elem, &eoi->eoi_list, eoi_list) {
@@ -666,7 +666,7 @@ static void xen_irq_lateeoi_worker(struct work_struct *work)
break;
if (now < info->eoi_time) {
- mod_delayed_work_on(info->eoi_cpu, system_wq,
+ mod_delayed_work_on(info->eoi_cpu, system_percpu_wq,
&eoi->delayed,
info->eoi_time - now);
break;
@@ -782,7 +782,7 @@ static void xen_free_irq(struct irq_info *info)
WARN_ON(info->refcnt > 0);
- queue_rcu_work(system_wq, &info->rwork);
+ queue_rcu_work(system_percpu_wq, &info->rwork);
}
/* Not called for lateeoi events. */
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 50928a7ae98e..a89d228146ea 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -542,7 +542,7 @@ struct drm_gpu_scheduler {
* @hang_limit: number of times to allow a job to hang before dropping it.
* This mechanism is DEPRECATED. Set it to 0.
* @timeout: timeout value in jiffies for submitted jobs.
- * @timeout_wq: workqueue to use for timeout work. If NULL, the system_wq is used.
+ * @timeout_wq: workqueue to use for timeout work. If NULL, the system_percpu_wq is used.
* @score: score atomic shared with other schedulers. May be NULL.
* @name: name (typically the driver's name). Used for debugging
* @dev: associated device. Used for debugging
diff --git a/include/linux/closure.h b/include/linux/closure.h
index 880fe85e35e9..959b3c584254 100644
--- a/include/linux/closure.h
+++ b/include/linux/closure.h
@@ -58,7 +58,7 @@
* bio2->bi_endio = foo_endio;
* bio_submit(bio2);
*
- * continue_at(cl, complete_some_read, system_wq);
+ * continue_at(cl, complete_some_read, system_percpu_wq);
*
* If closure's refcount started at 0, complete_some_read() could run before the
* second bio was submitted - which is almost always not what you want! More
diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
index f19072605faa..69cc81e670f6 100644
--- a/include/linux/workqueue.h
+++ b/include/linux/workqueue.h
@@ -433,10 +433,10 @@ enum wq_consts {
* short queue flush time. Don't queue works which can run for too
* long.
*
- * system_highpri_wq is similar to system_wq but for work items which
+ * system_highpri_wq is similar to system_percpu_wq but for work items which
* require WQ_HIGHPRI.
*
- * system_long_wq is similar to system_wq but may host long running
+ * system_long_wq is similar to system_percpu_wq but may host long running
* works. Queue flushing might take relatively long.
*
* system_dfl_wq is unbound workqueue. Workers are not bound to
@@ -444,13 +444,13 @@ enum wq_consts {
* executed immediately as long as max_active limit is not reached and
* resources are available.
*
- * system_freezable_wq is equivalent to system_wq except that it's
+ * system_freezable_wq is equivalent to system_percpu_wq except that it's
* freezable.
*
* *_power_efficient_wq are inclined towards saving power and converted
* into WQ_UNBOUND variants if 'wq_power_efficient' is enabled; otherwise,
* they are same as their non-power-efficient counterparts - e.g.
- * system_power_efficient_wq is identical to system_wq if
+ * system_power_efficient_wq is identical to system_percpu_wq if
* 'wq_power_efficient' is disabled. See WQ_POWER_EFFICIENT for more info.
*
* system_bh[_highpri]_wq are convenience interface to softirq. BH work items
@@ -662,6 +662,11 @@ extern void wq_worker_comm(char *buf, size_t size, struct task_struct *task);
static inline bool queue_work(struct workqueue_struct *wq,
struct work_struct *work)
{
+ if (wq == system_wq) {
+ pr_warn_once("system_wq will be removed in the near future. Please use the new system_percpu_wq. wq set to system_percpu_wq\n");
+ wq = system_percpu_wq;
+ }
+
return queue_work_on(WORK_CPU_UNBOUND, wq, work);
}
@@ -677,6 +682,11 @@ static inline bool queue_delayed_work(struct workqueue_struct *wq,
struct delayed_work *dwork,
unsigned long delay)
{
+ if (wq == system_wq) {
+ pr_warn_once("system_wq will be removed in the near future. Please use the new system_percpu_wq. wq set to system_percpu_wq\n");
+ wq = system_percpu_wq;
+ }
+
return queue_delayed_work_on(WORK_CPU_UNBOUND, wq, dwork, delay);
}
@@ -692,6 +702,11 @@ static inline bool mod_delayed_work(struct workqueue_struct *wq,
struct delayed_work *dwork,
unsigned long delay)
{
+ if (wq == system_wq) {
+ pr_warn_once("system_wq will be removed in the near future. Please use the new system_percpu_wq. wq set to system_percpu_wq\n");
+ wq = system_percpu_wq;
+ }
+
return mod_delayed_work_on(WORK_CPU_UNBOUND, wq, dwork, delay);
}
@@ -704,7 +719,7 @@ static inline bool mod_delayed_work(struct workqueue_struct *wq,
*/
static inline bool schedule_work_on(int cpu, struct work_struct *work)
{
- return queue_work_on(cpu, system_wq, work);
+ return queue_work_on(cpu, system_percpu_wq, work);
}
/**
@@ -723,7 +738,7 @@ static inline bool schedule_work_on(int cpu, struct work_struct *work)
*/
static inline bool schedule_work(struct work_struct *work)
{
- return queue_work(system_wq, work);
+ return queue_work(system_percpu_wq, work);
}
/**
@@ -766,15 +781,15 @@ extern void __warn_flushing_systemwide_wq(void)
#define flush_scheduled_work() \
({ \
__warn_flushing_systemwide_wq(); \
- __flush_workqueue(system_wq); \
+ __flush_workqueue(system_percpu_wq); \
})
#define flush_workqueue(wq) \
({ \
struct workqueue_struct *_wq = (wq); \
\
- if ((__builtin_constant_p(_wq == system_wq) && \
- _wq == system_wq) || \
+ if ((__builtin_constant_p(_wq == system_percpu_wq) && \
+ _wq == system_percpu_wq) || \
(__builtin_constant_p(_wq == system_highpri_wq) && \
_wq == system_highpri_wq) || \
(__builtin_constant_p(_wq == system_long_wq) && \
@@ -803,7 +818,7 @@ extern void __warn_flushing_systemwide_wq(void)
static inline bool schedule_delayed_work_on(int cpu, struct delayed_work *dwork,
unsigned long delay)
{
- return queue_delayed_work_on(cpu, system_wq, dwork, delay);
+ return queue_delayed_work_on(cpu, system_percpu_wq, dwork, delay);
}
/**
@@ -817,7 +832,7 @@ static inline bool schedule_delayed_work_on(int cpu, struct delayed_work *dwork,
static inline bool schedule_delayed_work(struct delayed_work *dwork,
unsigned long delay)
{
- return queue_delayed_work(system_wq, dwork, delay);
+ return queue_delayed_work(system_percpu_wq, dwork, delay);
}
#ifndef CONFIG_SMP
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index c6209fe44cb1..2a6ead3c7d36 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -2986,7 +2986,7 @@ static __cold void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx)
* Use system_unbound_wq to avoid spawning tons of event kworkers
* if we're exiting a ton of rings at the same time. It just adds
* noise and overhead, there's no discernable change in runtime
- * over using system_wq.
+ * over using system_percpu_wq.
*/
queue_work(iou_wq, &ctx->exit_work);
}
diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
index 84f58f3d028a..b8699ec4d766 100644
--- a/kernel/bpf/cgroup.c
+++ b/kernel/bpf/cgroup.c
@@ -27,7 +27,7 @@ EXPORT_SYMBOL(cgroup_bpf_enabled_key);
/*
* cgroup bpf destruction makes heavy use of work items and there can be a lot
* of concurrent destructions. Use a separate workqueue so that cgroup bpf
- * destruction work items don't end up filling up max_active of system_wq
+ * destruction work items don't end up filling up max_active of system_percpu_wq
* which may lead to deadlock.
*/
static struct workqueue_struct *cgroup_bpf_destroy_wq;
diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
index 67e8a2fc1a99..1ab8e6876618 100644
--- a/kernel/bpf/cpumap.c
+++ b/kernel/bpf/cpumap.c
@@ -551,7 +551,7 @@ static void __cpu_map_entry_replace(struct bpf_cpu_map *cmap,
old_rcpu = unrcu_pointer(xchg(&cmap->cpu_map[key_cpu], RCU_INITIALIZER(rcpu)));
if (old_rcpu) {
INIT_RCU_WORK(&old_rcpu->free_work, __cpu_map_entry_free);
- queue_rcu_work(system_wq, &old_rcpu->free_work);
+ queue_rcu_work(system_percpu_wq, &old_rcpu->free_work);
}
}
diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index 3caf2cd86e65..1e39355194fd 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -121,7 +121,7 @@ DEFINE_PERCPU_RWSEM(cgroup_threadgroup_rwsem);
/*
* cgroup destruction makes heavy use of work items and there can be a lot
* of concurrent destructions. Use a separate workqueue so that cgroup
- * destruction work items don't end up filling up max_active of system_wq
+ * destruction work items don't end up filling up max_active of system_percpu_wq
* which may lead to deadlock.
*/
static struct workqueue_struct *cgroup_destroy_wq;
diff --git a/kernel/module/dups.c b/kernel/module/dups.c
index bd2149fbe117..e72fa393a2ec 100644
--- a/kernel/module/dups.c
+++ b/kernel/module/dups.c
@@ -113,7 +113,7 @@ static void kmod_dup_request_complete(struct work_struct *work)
* let this linger forever as this is just a boot optimization for
* possible abuses of vmalloc() incurred by finit_module() thrashing.
*/
- queue_delayed_work(system_wq, &kmod_req->delete_work, 60 * HZ);
+ queue_delayed_work(system_percpu_wq, &kmod_req->delete_work, 60 * HZ);
}
bool kmod_dup_request_exists_wait(char *module_name, bool wait, int *dup_ret)
@@ -240,7 +240,7 @@ void kmod_dup_request_announce(char *module_name, int ret)
* There is no rush. But we also don't want to hold the
* caller up forever or introduce any boot delays.
*/
- queue_work(system_wq, &kmod_req->complete_work);
+ queue_work(system_percpu_wq, &kmod_req->complete_work);
out:
mutex_unlock(&kmod_dup_mutex);
diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
index c0cc7ae41106..5fddd7168391 100644
--- a/kernel/rcu/tasks.h
+++ b/kernel/rcu/tasks.h
@@ -552,13 +552,13 @@ static void rcu_tasks_invoke_cbs(struct rcu_tasks *rtp, struct rcu_tasks_percpu
rtpcp_next = rtp->rtpcp_array[index];
if (rtpcp_next->cpu < smp_load_acquire(&rtp->percpu_dequeue_lim)) {
cpuwq = rcu_cpu_beenfullyonline(rtpcp_next->cpu) ? rtpcp_next->cpu : WORK_CPU_UNBOUND;
- queue_work_on(cpuwq, system_wq, &rtpcp_next->rtp_work);
+ queue_work_on(cpuwq, system_percpu_wq, &rtpcp_next->rtp_work);
index++;
if (index < num_possible_cpus()) {
rtpcp_next = rtp->rtpcp_array[index];
if (rtpcp_next->cpu < smp_load_acquire(&rtp->percpu_dequeue_lim)) {
cpuwq = rcu_cpu_beenfullyonline(rtpcp_next->cpu) ? rtpcp_next->cpu : WORK_CPU_UNBOUND;
- queue_work_on(cpuwq, system_wq, &rtpcp_next->rtp_work);
+ queue_work_on(cpuwq, system_percpu_wq, &rtpcp_next->rtp_work);
}
}
}
diff --git a/kernel/smp.c b/kernel/smp.c
index 974f3a3962e8..c3b93476d645 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -1146,7 +1146,7 @@ int smp_call_on_cpu(unsigned int cpu, int (*func)(void *), void *par, bool phys)
if (cpu >= nr_cpu_ids || !cpu_online(cpu))
return -ENXIO;
- queue_work_on(cpu, system_wq, &sscs.work);
+ queue_work_on(cpu, system_percpu_wq, &sscs.work);
wait_for_completion(&sscs.done);
destroy_work_on_stack(&sscs.work);
diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
index af42aaa3d172..3169182229ad 100644
--- a/kernel/trace/trace_events_user.c
+++ b/kernel/trace/trace_events_user.c
@@ -835,7 +835,7 @@ void user_event_mm_remove(struct task_struct *t)
* so we use a work queue after call_rcu() to run within.
*/
INIT_RCU_WORK(&mm->put_rwork, delayed_user_event_mm_put);
- queue_rcu_work(system_wq, &mm->put_rwork);
+ queue_rcu_work(system_percpu_wq, &mm->put_rwork);
}
void user_event_mm_dup(struct task_struct *t, struct user_event_mm *old_mm)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 62f020050de1..94f87c3fa909 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -7660,7 +7660,7 @@ static int wq_watchdog_param_set_thresh(const char *val,
if (ret)
return ret;
- if (system_wq)
+ if (system_percpu_wq)
wq_watchdog_set_thresh(thresh);
else
wq_watchdog_thresh = thresh;
diff --git a/rust/kernel/workqueue.rs b/rust/kernel/workqueue.rs
index f98bd02b838f..7c7e99a8c033 100644
--- a/rust/kernel/workqueue.rs
+++ b/rust/kernel/workqueue.rs
@@ -633,15 +633,15 @@ unsafe fn __enqueue<F>(self, queue_work_on: F) -> Self::EnqueueOutput
}
}
-/// Returns the system work queue (`system_wq`).
+/// Returns the system work queue (`system_percpu_wq`).
///
/// It is the one used by `schedule[_delayed]_work[_on]()`. Multi-CPU multi-threaded. There are
/// users which expect relatively short queue flush time.
///
/// Callers shouldn't queue work items which can run for too long.
pub fn system() -> &'static Queue {
- // SAFETY: `system_wq` is a C global, always available.
- unsafe { Queue::from_raw(bindings::system_wq) }
+ // SAFETY: `system_percpu_wq` is a C global, always available.
+ unsafe { Queue::from_raw(bindings::system_percpu_wq) }
}
/// Returns the system high-priority work queue (`system_highpri_wq`).
diff --git a/sound/soc/codecs/aw88081.c b/sound/soc/codecs/aw88081.c
index ad16ab6812cd..e61c58dcd606 100644
--- a/sound/soc/codecs/aw88081.c
+++ b/sound/soc/codecs/aw88081.c
@@ -779,7 +779,7 @@ static void aw88081_start(struct aw88081 *aw88081, bool sync_start)
if (sync_start == AW88081_SYNC_START)
aw88081_start_pa(aw88081);
else
- queue_delayed_work(system_wq,
+ queue_delayed_work(system_percpu_wq,
&aw88081->start_work,
AW88081_START_WORK_DELAY_MS);
}
diff --git a/sound/soc/codecs/aw88166.c b/sound/soc/codecs/aw88166.c
index 6c50c4a18b6a..c9c3ebb9a739 100644
--- a/sound/soc/codecs/aw88166.c
+++ b/sound/soc/codecs/aw88166.c
@@ -1313,7 +1313,7 @@ static void aw88166_start(struct aw88166 *aw88166, bool sync_start)
if (sync_start == AW88166_SYNC_START)
aw88166_start_pa(aw88166);
else
- queue_delayed_work(system_wq,
+ queue_delayed_work(system_percpu_wq,
&aw88166->start_work,
AW88166_START_WORK_DELAY_MS);
}
diff --git a/sound/soc/codecs/aw88261.c b/sound/soc/codecs/aw88261.c
index fb99871578c5..c8e62af8949e 100644
--- a/sound/soc/codecs/aw88261.c
+++ b/sound/soc/codecs/aw88261.c
@@ -705,7 +705,7 @@ static void aw88261_start(struct aw88261 *aw88261, bool sync_start)
if (sync_start == AW88261_SYNC_START)
aw88261_start_pa(aw88261);
else
- queue_delayed_work(system_wq,
+ queue_delayed_work(system_percpu_wq,
&aw88261->start_work,
AW88261_START_WORK_DELAY_MS);
}
diff --git a/sound/soc/codecs/aw88395/aw88395.c b/sound/soc/codecs/aw88395/aw88395.c
index aea44a199b98..c6fe69cc5e73 100644
--- a/sound/soc/codecs/aw88395/aw88395.c
+++ b/sound/soc/codecs/aw88395/aw88395.c
@@ -75,7 +75,7 @@ static void aw88395_start(struct aw88395 *aw88395, bool sync_start)
if (sync_start == AW88395_SYNC_START)
aw88395_start_pa(aw88395);
else
- queue_delayed_work(system_wq,
+ queue_delayed_work(system_percpu_wq,
&aw88395->start_work,
AW88395_START_WORK_DELAY_MS);
}
diff --git a/sound/soc/codecs/aw88399.c b/sound/soc/codecs/aw88399.c
index ee3cc2a95f85..dfa8ce355e3c 100644
--- a/sound/soc/codecs/aw88399.c
+++ b/sound/soc/codecs/aw88399.c
@@ -1281,7 +1281,7 @@ static void aw88399_start(struct aw88399 *aw88399, bool sync_start)
if (sync_start == AW88399_SYNC_START)
aw88399_start_pa(aw88399);
else
- queue_delayed_work(system_wq,
+ queue_delayed_work(system_percpu_wq,
&aw88399->start_work,
AW88399_START_WORK_DELAY_MS);
}
diff --git a/sound/soc/codecs/cs42l43-jack.c b/sound/soc/codecs/cs42l43-jack.c
index ac19a572fe70..38c73c8dcc45 100644
--- a/sound/soc/codecs/cs42l43-jack.c
+++ b/sound/soc/codecs/cs42l43-jack.c
@@ -301,7 +301,7 @@ irqreturn_t cs42l43_bias_detect_clamp(int irq, void *data)
{
struct cs42l43_codec *priv = data;
- queue_delayed_work(system_wq, &priv->bias_sense_timeout,
+ queue_delayed_work(system_percpu_wq, &priv->bias_sense_timeout,
msecs_to_jiffies(1000));
return IRQ_HANDLED;
@@ -432,7 +432,7 @@ irqreturn_t cs42l43_button_press(int irq, void *data)
struct cs42l43_codec *priv = data;
// Wait for 2 full cycles of comb filter to ensure good reading
- queue_delayed_work(system_wq, &priv->button_press_work,
+ queue_delayed_work(system_percpu_wq, &priv->button_press_work,
msecs_to_jiffies(20));
return IRQ_HANDLED;
@@ -470,7 +470,7 @@ irqreturn_t cs42l43_button_release(int irq, void *data)
{
struct cs42l43_codec *priv = data;
- queue_work(system_wq, &priv->button_release_work);
+ queue_work(system_percpu_wq, &priv->button_release_work);
return IRQ_HANDLED;
}
diff --git a/sound/soc/codecs/cs42l43.c b/sound/soc/codecs/cs42l43.c
index ea84ac64c775..105ad53bae0c 100644
--- a/sound/soc/codecs/cs42l43.c
+++ b/sound/soc/codecs/cs42l43.c
@@ -161,7 +161,7 @@ static void cs42l43_hp_ilimit_clear_work(struct work_struct *work)
priv->hp_ilimit_count--;
if (priv->hp_ilimit_count)
- queue_delayed_work(system_wq, &priv->hp_ilimit_clear_work,
+ queue_delayed_work(system_percpu_wq, &priv->hp_ilimit_clear_work,
msecs_to_jiffies(CS42L43_HP_ILIMIT_DECAY_MS));
snd_soc_dapm_mutex_unlock(dapm);
@@ -178,7 +178,7 @@ static void cs42l43_hp_ilimit_work(struct work_struct *work)
if (priv->hp_ilimit_count < CS42L43_HP_ILIMIT_MAX_COUNT) {
if (!priv->hp_ilimit_count)
- queue_delayed_work(system_wq, &priv->hp_ilimit_clear_work,
+ queue_delayed_work(system_percpu_wq, &priv->hp_ilimit_clear_work,
msecs_to_jiffies(CS42L43_HP_ILIMIT_DECAY_MS));
priv->hp_ilimit_count++;
diff --git a/sound/soc/codecs/es8326.c b/sound/soc/codecs/es8326.c
index 066d92b54312..4ba4de184d2c 100644
--- a/sound/soc/codecs/es8326.c
+++ b/sound/soc/codecs/es8326.c
@@ -812,12 +812,12 @@ static void es8326_jack_button_handler(struct work_struct *work)
press_count = 0;
}
button_to_report = cur_button;
- queue_delayed_work(system_wq, &es8326->button_press_work,
+ queue_delayed_work(system_percpu_wq, &es8326->button_press_work,
msecs_to_jiffies(35));
} else if (prev_button != cur_button) {
/* mismatch, detect again */
prev_button = cur_button;
- queue_delayed_work(system_wq, &es8326->button_press_work,
+ queue_delayed_work(system_percpu_wq, &es8326->button_press_work,
msecs_to_jiffies(35));
} else {
/* released or no pressed */
@@ -912,7 +912,7 @@ static void es8326_jack_detect_handler(struct work_struct *work)
(ES8326_INT_SRC_PIN9 | ES8326_INT_SRC_BUTTON));
regmap_write(es8326->regmap, ES8326_SYS_BIAS, 0x1f);
regmap_update_bits(es8326->regmap, ES8326_HP_DRIVER_REF, 0x0f, 0x0d);
- queue_delayed_work(system_wq, &es8326->jack_detect_work,
+ queue_delayed_work(system_percpu_wq, &es8326->jack_detect_work,
msecs_to_jiffies(400));
es8326->hp = 1;
goto exit;
@@ -923,7 +923,7 @@ static void es8326_jack_detect_handler(struct work_struct *work)
regmap_write(es8326->regmap, ES8326_INT_SOURCE,
(ES8326_INT_SRC_PIN9 | ES8326_INT_SRC_BUTTON));
es8326_enable_micbias(es8326->component);
- queue_delayed_work(system_wq, &es8326->button_press_work, 10);
+ queue_delayed_work(system_percpu_wq, &es8326->button_press_work, 10);
goto exit;
}
if ((iface & ES8326_HPBUTTON_FLAG) == 0x01) {
@@ -958,10 +958,10 @@ static irqreturn_t es8326_irq(int irq, void *dev_id)
goto out;
if (es8326->jack->status & SND_JACK_HEADSET)
- queue_delayed_work(system_wq, &es8326->jack_detect_work,
+ queue_delayed_work(system_percpu_wq, &es8326->jack_detect_work,
msecs_to_jiffies(10));
else
- queue_delayed_work(system_wq, &es8326->jack_detect_work,
+ queue_delayed_work(system_percpu_wq, &es8326->jack_detect_work,
msecs_to_jiffies(300));
out:
diff --git a/sound/soc/codecs/rt5663.c b/sound/soc/codecs/rt5663.c
index 45057562c0c8..44cfec76ad96 100644
--- a/sound/soc/codecs/rt5663.c
+++ b/sound/soc/codecs/rt5663.c
@@ -1859,7 +1859,7 @@ static irqreturn_t rt5663_irq(int irq, void *data)
dev_dbg(regmap_get_device(rt5663->regmap), "%s IRQ queue work\n",
__func__);
- queue_delayed_work(system_wq, &rt5663->jack_detect_work,
+ queue_delayed_work(system_percpu_wq, &rt5663->jack_detect_work,
msecs_to_jiffies(250));
return IRQ_HANDLED;
@@ -1974,7 +1974,7 @@ static void rt5663_jack_detect_work(struct work_struct *work)
cancel_delayed_work_sync(
&rt5663->jd_unplug_work);
} else {
- queue_delayed_work(system_wq,
+ queue_delayed_work(system_percpu_wq,
&rt5663->jd_unplug_work,
msecs_to_jiffies(500));
}
@@ -2024,7 +2024,7 @@ static void rt5663_jd_unplug_work(struct work_struct *work)
SND_JACK_BTN_0 | SND_JACK_BTN_1 |
SND_JACK_BTN_2 | SND_JACK_BTN_3);
} else {
- queue_delayed_work(system_wq, &rt5663->jd_unplug_work,
+ queue_delayed_work(system_percpu_wq, &rt5663->jd_unplug_work,
msecs_to_jiffies(500));
}
}
diff --git a/sound/soc/intel/boards/sof_es8336.c b/sound/soc/intel/boards/sof_es8336.c
index a0b3679b17b4..e60dd85f5552 100644
--- a/sound/soc/intel/boards/sof_es8336.c
+++ b/sound/soc/intel/boards/sof_es8336.c
@@ -163,7 +163,7 @@ static int sof_es8316_speaker_power_event(struct snd_soc_dapm_widget *w,
priv->speaker_en = !SND_SOC_DAPM_EVENT_ON(event);
- queue_delayed_work(system_wq, &priv->pcm_pop_work, msecs_to_jiffies(70));
+ queue_delayed_work(system_percpu_wq, &priv->pcm_pop_work, msecs_to_jiffies(70));
return 0;
}
diff --git a/sound/soc/sof/intel/cnl.c b/sound/soc/sof/intel/cnl.c
index 385e5339f0a4..207eb18560dd 100644
--- a/sound/soc/sof/intel/cnl.c
+++ b/sound/soc/sof/intel/cnl.c
@@ -329,7 +329,7 @@ int cnl_ipc_send_msg(struct snd_sof_dev *sdev, struct snd_sof_ipc_msg *msg)
* CTX_SAVE IPC, which is sent before the DSP enters D3.
*/
if (hdr->cmd != (SOF_IPC_GLB_PM_MSG | SOF_IPC_PM_CTX_SAVE))
- mod_delayed_work(system_wq, &hdev->d0i3_work,
+ mod_delayed_work(system_percpu_wq, &hdev->d0i3_work,
msecs_to_jiffies(SOF_HDA_D0I3_WORK_DELAY_MS));
return 0;
diff --git a/sound/soc/sof/intel/hda-ipc.c b/sound/soc/sof/intel/hda-ipc.c
index f3fbf43a70c2..d8fde18145b4 100644
--- a/sound/soc/sof/intel/hda-ipc.c
+++ b/sound/soc/sof/intel/hda-ipc.c
@@ -96,7 +96,7 @@ void hda_dsp_ipc4_schedule_d0i3_work(struct sof_intel_hda_dev *hdev,
if (hda_dsp_ipc4_pm_msg(msg_data->primary))
return;
- mod_delayed_work(system_wq, &hdev->d0i3_work,
+ mod_delayed_work(system_percpu_wq, &hdev->d0i3_work,
msecs_to_jiffies(SOF_HDA_D0I3_WORK_DELAY_MS));
}
EXPORT_SYMBOL_NS(hda_dsp_ipc4_schedule_d0i3_work, "SND_SOC_SOF_INTEL_HDA_COMMON");
--
2.49.0
* [PATCH v1 05/10] Workqueue: replace use of system_unbound_wq with system_dfl_wq
2025-06-25 10:49 [PATCH v1 00/10] Workqueue: replace system wq and change alloc_workqueue callers Marco Crivellari
` (3 preceding siblings ...)
2025-06-25 10:49 ` [PATCH v1 04/10] Workqueue: " Marco Crivellari
@ 2025-06-25 10:49 ` Marco Crivellari
2025-06-25 10:49 ` [PATCH v1 06/10] Workqueue: net: WQ_PERCPU added to alloc_workqueue users Marco Crivellari
` (5 subsequent siblings)
10 siblings, 0 replies; 12+ messages in thread
From: Marco Crivellari @ 2025-06-25 10:49 UTC (permalink / raw)
To: linux-kernel
Cc: Tejun Heo, Lai Jiangshan, Thomas Gleixner, Frederic Weisbecker,
Sebastian Andrzej Siewior, Marco Crivellari, Michal Hocko
Currently, if a user enqueues a work item using schedule_delayed_work(), the
wq used is "system_wq" (a per-cpu wq), while queue_delayed_work() uses
WORK_CPU_UNBOUND (used when a CPU is not specified). The same applies to
schedule_work(), which uses system_wq, and to queue_work(), which again
makes use of WORK_CPU_UNBOUND.
This lack of consistency cannot be addressed without refactoring the API.
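For reference, the existing wrappers look roughly like the sketch below
(simplified from the current include/linux/workqueue.h helpers, shown only
to illustrate the naming mismatch, not part of this patch):

	static inline bool schedule_work(struct work_struct *work)
	{
		/* "system_wq" is the per-CPU system workqueue */
		return queue_work(system_wq, work);
	}

	static inline bool queue_work(struct workqueue_struct *wq,
				      struct work_struct *work)
	{
		/* WORK_CPU_UNBOUND means "no CPU specified", not an unbound wq */
		return queue_work_on(WORK_CPU_UNBOUND, wq, work);
	}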
system_unbound_wq should be the default workqueue, so that locality
constraints are not imposed on work that does not require them.
system_dfl_wq is added to encourage its use wherever unbound execution is
appropriate.
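For callers the conversion is mechanical; an illustrative before/after
(my_work is a placeholder, not taken from any file touched by this patch):

	/* before: explicit use of the old unbound system workqueue */
	queue_work(system_unbound_wq, &my_work);

	/* after: the new default (unbound) system workqueue */
	queue_work(system_dfl_wq, &my_work);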
queue_work() / queue_delayed_work() / mod_delayed_work() now use the new
unbound wq: if a caller still passes the old wq, a warning is printed once
and the work is redirected to the new one.
The old system_unbound_wq will be kept for a few release cycles.
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
---
drivers/accel/ivpu/ivpu_pm.c | 2 +-
drivers/acpi/scan.c | 2 +-
drivers/base/dd.c | 2 +-
drivers/block/zram/zram_drv.c | 2 +-
drivers/char/random.c | 8 ++++----
drivers/gpu/drm/amd/amdgpu/aldebaran.c | 2 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 2 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_reset.c | 2 +-
drivers/gpu/drm/drm_atomic_helper.c | 6 +++---
.../drm/i915/display/intel_display_power.c | 2 +-
drivers/gpu/drm/i915/display/intel_tc.c | 4 ++--
drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c | 2 +-
drivers/gpu/drm/i915/gt/uc/intel_guc.c | 4 ++--
drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c | 4 ++--
.../gpu/drm/i915/gt/uc/intel_guc_submission.c | 6 +++---
drivers/gpu/drm/i915/i915_active.c | 2 +-
drivers/gpu/drm/i915/i915_sw_fence_work.c | 2 +-
drivers/gpu/drm/i915/i915_vma_resource.c | 2 +-
drivers/gpu/drm/i915/pxp/intel_pxp.c | 2 +-
drivers/gpu/drm/i915/pxp/intel_pxp_irq.c | 2 +-
drivers/gpu/drm/nouveau/dispnv50/disp.c | 2 +-
drivers/gpu/drm/rockchip/rockchip_drm_vop.c | 2 +-
drivers/gpu/drm/xe/xe_devcoredump.c | 2 +-
drivers/gpu/drm/xe/xe_execlist.c | 2 +-
drivers/gpu/drm/xe/xe_guc_ct.c | 4 ++--
drivers/gpu/drm/xe/xe_oa.c | 2 +-
drivers/gpu/drm/xe/xe_vm.c | 4 ++--
drivers/hte/hte.c | 2 +-
drivers/infiniband/core/ucma.c | 2 +-
drivers/infiniband/hw/mlx5/odp.c | 4 ++--
.../platform/synopsys/hdmirx/snps_hdmirx.c | 8 ++++----
drivers/net/macvlan.c | 2 +-
drivers/net/netdevsim/dev.c | 6 +++---
drivers/net/wireless/intel/iwlwifi/fw/dbg.c | 4 ++--
.../net/wireless/intel/iwlwifi/iwl-trans.h | 2 +-
drivers/scsi/qla2xxx/qla_os.c | 2 +-
drivers/scsi/scsi_transport_iscsi.c | 2 +-
drivers/soc/xilinx/zynqmp_power.c | 6 +++---
drivers/target/sbp/sbp_target.c | 8 ++++----
drivers/tty/serial/8250/8250_dw.c | 4 ++--
drivers/tty/tty_buffer.c | 8 ++++----
fs/afs/callback.c | 4 ++--
fs/afs/write.c | 2 +-
fs/bcachefs/btree_write_buffer.c | 2 +-
fs/bcachefs/io_read.c | 12 ++++++------
fs/bcachefs/journal_io.c | 2 +-
fs/btrfs/block-group.c | 2 +-
fs/btrfs/extent_map.c | 2 +-
fs/btrfs/space-info.c | 4 ++--
fs/btrfs/zoned.c | 2 +-
fs/ext4/mballoc.c | 2 +-
fs/netfs/objects.c | 2 +-
fs/netfs/read_collect.c | 2 +-
fs/netfs/write_collect.c | 2 +-
fs/nfsd/filecache.c | 2 +-
fs/notify/mark.c | 4 ++--
fs/quota/dquot.c | 2 +-
include/linux/workqueue.h | 19 +++++++++++++++++--
io_uring/io_uring.c | 2 +-
kernel/bpf/helpers.c | 4 ++--
kernel/bpf/memalloc.c | 2 +-
kernel/bpf/syscall.c | 2 +-
kernel/padata.c | 4 ++--
kernel/sched/core.c | 4 ++--
kernel/sched/ext.c | 4 ++--
kernel/umh.c | 2 +-
kernel/workqueue.c | 2 +-
mm/backing-dev.c | 2 +-
mm/kfence/core.c | 6 +++---
mm/memcontrol.c | 4 ++--
net/core/link_watch.c | 4 ++--
net/unix/garbage.c | 2 +-
net/wireless/core.c | 4 ++--
net/wireless/sysfs.c | 2 +-
rust/kernel/workqueue.rs | 6 +++---
sound/soc/codecs/wm_adsp.c | 2 +-
76 files changed, 139 insertions(+), 124 deletions(-)
diff --git a/drivers/accel/ivpu/ivpu_pm.c b/drivers/accel/ivpu/ivpu_pm.c
index 0c1f639931ad..f6a5c494621e 100644
--- a/drivers/accel/ivpu/ivpu_pm.c
+++ b/drivers/accel/ivpu/ivpu_pm.c
@@ -181,7 +181,7 @@ void ivpu_pm_trigger_recovery(struct ivpu_device *vdev, const char *reason)
if (atomic_cmpxchg(&vdev->pm->reset_pending, 0, 1) == 0) {
ivpu_hw_diagnose_failure(vdev);
ivpu_hw_irq_disable(vdev); /* Disable IRQ early to protect from IRQ storm */
- queue_work(system_unbound_wq, &vdev->pm->recovery_work);
+ queue_work(system_dfl_wq, &vdev->pm->recovery_work);
}
}
diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c
index fb1fe9f3b1a3..14fbac0b65c8 100644
--- a/drivers/acpi/scan.c
+++ b/drivers/acpi/scan.c
@@ -2389,7 +2389,7 @@ static bool acpi_scan_clear_dep_queue(struct acpi_device *adev)
* initial enumeration of devices is complete, put it into the unbound
* workqueue.
*/
- queue_work(system_unbound_wq, &cdw->work);
+ queue_work(system_dfl_wq, &cdw->work);
return true;
}
diff --git a/drivers/base/dd.c b/drivers/base/dd.c
index f0e4b4aba885..fc778ed5552d 100644
--- a/drivers/base/dd.c
+++ b/drivers/base/dd.c
@@ -192,7 +192,7 @@ void driver_deferred_probe_trigger(void)
* Kick the re-probe thread. It may already be scheduled, but it is
* safe to kick it again.
*/
- queue_work(system_unbound_wq, &deferred_probe_work);
+ queue_work(system_dfl_wq, &deferred_probe_work);
}
/**
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index fda7d8624889..c7e0fa29a572 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -975,7 +975,7 @@ static int read_from_bdev_sync(struct zram *zram, struct page *page,
work.entry = entry;
INIT_WORK_ONSTACK(&work.work, zram_sync_read);
- queue_work(system_unbound_wq, &work.work);
+ queue_work(system_dfl_wq, &work.work);
flush_work(&work.work);
destroy_work_on_stack(&work.work);
diff --git a/drivers/char/random.c b/drivers/char/random.c
index 38f2fab29c56..97435cd6b819 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -259,8 +259,8 @@ static void crng_reseed(struct work_struct *work)
u8 key[CHACHA_KEY_SIZE];
/* Immediately schedule the next reseeding, so that it fires sooner rather than later. */
- if (likely(system_unbound_wq))
- queue_delayed_work(system_unbound_wq, &next_reseed, crng_reseed_interval());
+ if (likely(system_dfl_wq))
+ queue_delayed_work(system_dfl_wq, &next_reseed, crng_reseed_interval());
extract_entropy(key, sizeof(key));
@@ -739,8 +739,8 @@ static void __cold _credit_init_bits(size_t bits)
if (orig < POOL_READY_BITS && new >= POOL_READY_BITS) {
crng_reseed(NULL); /* Sets crng_init to CRNG_READY under base_crng.lock. */
- if (static_key_initialized && system_unbound_wq)
- queue_work(system_unbound_wq, &set_ready);
+ if (static_key_initialized && system_dfl_wq)
+ queue_work(system_dfl_wq, &set_ready);
atomic_notifier_call_chain(&random_ready_notifier, 0, NULL);
#ifdef CONFIG_VDSO_GETRANDOM
WRITE_ONCE(vdso_k_rng_data->is_ready, true);
diff --git a/drivers/gpu/drm/amd/amdgpu/aldebaran.c b/drivers/gpu/drm/amd/amdgpu/aldebaran.c
index e13fbd974141..d6acacfb6f91 100644
--- a/drivers/gpu/drm/amd/amdgpu/aldebaran.c
+++ b/drivers/gpu/drm/amd/amdgpu/aldebaran.c
@@ -164,7 +164,7 @@ aldebaran_mode2_perform_reset(struct amdgpu_reset_control *reset_ctl,
list_for_each_entry(tmp_adev, reset_device_list, reset_list) {
/* For XGMI run all resets in parallel to speed up the process */
if (tmp_adev->gmc.xgmi.num_physical_nodes > 1) {
- if (!queue_work(system_unbound_wq,
+ if (!queue_work(system_dfl_wq,
&tmp_adev->reset_cntl->reset_work))
r = -EALREADY;
} else
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 96c659389480..14ebfcd1636a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -5762,7 +5762,7 @@ int amdgpu_do_asic_reset(struct list_head *device_list_handle,
list_for_each_entry(tmp_adev, device_list_handle, reset_list) {
/* For XGMI run all resets in parallel to speed up the process */
if (tmp_adev->gmc.xgmi.num_physical_nodes > 1) {
- if (!queue_work(system_unbound_wq,
+ if (!queue_work(system_dfl_wq,
&tmp_adev->xgmi_reset_work))
r = -EALREADY;
} else
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_reset.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_reset.c
index dabfbdf6f1ce..1596b94b110d 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_reset.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_reset.c
@@ -116,7 +116,7 @@ static int amdgpu_reset_xgmi_reset_on_init_perform_reset(
/* Mode1 reset needs to be triggered on all devices together */
list_for_each_entry(tmp_adev, reset_device_list, reset_list) {
/* For XGMI run all resets in parallel to speed up the process */
- if (!queue_work(system_unbound_wq, &tmp_adev->xgmi_reset_work))
+ if (!queue_work(system_dfl_wq, &tmp_adev->xgmi_reset_work))
r = -EALREADY;
if (r) {
dev_err(tmp_adev->dev,
diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
index 5302ab324898..aa539f316bf8 100644
--- a/drivers/gpu/drm/drm_atomic_helper.c
+++ b/drivers/gpu/drm/drm_atomic_helper.c
@@ -2100,13 +2100,13 @@ int drm_atomic_helper_commit(struct drm_device *dev,
* current layout.
*
* NOTE: Commit work has multiple phases, first hardware commit, then
- * cleanup. We want them to overlap, hence need system_unbound_wq to
+ * cleanup. We want them to overlap, hence need system_dfl_wq to
* make sure work items don't artificially stall on each another.
*/
drm_atomic_state_get(state);
if (nonblock)
- queue_work(system_unbound_wq, &state->commit_work);
+ queue_work(system_dfl_wq, &state->commit_work);
else
commit_tail(state);
@@ -2139,7 +2139,7 @@ EXPORT_SYMBOL(drm_atomic_helper_commit);
*
* Asynchronous workers need to have sufficient parallelism to be able to run
* different atomic commits on different CRTCs in parallel. The simplest way to
- * achieve this is by running them on the &system_unbound_wq work queue. Note
+ * achieve this is by running them on the &system_dfl_wq work queue. Note
* that drivers are not required to split up atomic commits and run an
* individual commit in parallel - userspace is supposed to do that if it cares.
* But it might be beneficial to do that for modesets, since those necessarily
diff --git a/drivers/gpu/drm/i915/display/intel_display_power.c b/drivers/gpu/drm/i915/display/intel_display_power.c
index f7171e6932dc..ff5166037ab5 100644
--- a/drivers/gpu/drm/i915/display/intel_display_power.c
+++ b/drivers/gpu/drm/i915/display/intel_display_power.c
@@ -611,7 +611,7 @@ queue_async_put_domains_work(struct i915_power_domains *power_domains,
power.domains);
drm_WARN_ON(display->drm, power_domains->async_put_wakeref);
power_domains->async_put_wakeref = wakeref;
- drm_WARN_ON(display->drm, !queue_delayed_work(system_unbound_wq,
+ drm_WARN_ON(display->drm, !queue_delayed_work(system_dfl_wq,
&power_domains->async_put_work,
msecs_to_jiffies(delay_ms)));
}
diff --git a/drivers/gpu/drm/i915/display/intel_tc.c b/drivers/gpu/drm/i915/display/intel_tc.c
index b8d14ed8a56e..7de1006f844d 100644
--- a/drivers/gpu/drm/i915/display/intel_tc.c
+++ b/drivers/gpu/drm/i915/display/intel_tc.c
@@ -1760,7 +1760,7 @@ bool intel_tc_port_link_reset(struct intel_digital_port *dig_port)
if (!intel_tc_port_link_needs_reset(dig_port))
return false;
- queue_delayed_work(system_unbound_wq,
+ queue_delayed_work(system_dfl_wq,
&to_tc_port(dig_port)->link_reset_work,
msecs_to_jiffies(2000));
@@ -1842,7 +1842,7 @@ void intel_tc_port_unlock(struct intel_digital_port *dig_port)
struct intel_tc_port *tc = to_tc_port(dig_port);
if (!tc->link_refcount && tc->mode != TC_PORT_DISCONNECTED)
- queue_delayed_work(system_unbound_wq, &tc->disconnect_phy_work,
+ queue_delayed_work(system_dfl_wq, &tc->disconnect_phy_work,
msecs_to_jiffies(1000));
mutex_unlock(&tc->lock);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
index 2f6b33edb9c9..008d5909a010 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
@@ -408,7 +408,7 @@ static void __memcpy_cb(struct dma_fence *fence, struct dma_fence_cb *cb)
if (unlikely(fence->error || I915_SELFTEST_ONLY(fail_gpu_migration))) {
INIT_WORK(&copy_work->work, __memcpy_work);
- queue_work(system_unbound_wq, &copy_work->work);
+ queue_work(system_dfl_wq, &copy_work->work);
} else {
init_irq_work(&copy_work->irq_work, __memcpy_irq_work);
irq_work_queue(&copy_work->irq_work);
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
index 9df80c325fc1..8dbf6c82e241 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
@@ -617,7 +617,7 @@ int intel_guc_crash_process_msg(struct intel_guc *guc, u32 action)
else
guc_err(guc, "Unknown crash notification: 0x%04X\n", action);
- queue_work(system_unbound_wq, &guc->dead_guc_worker);
+ queue_work(system_dfl_wq, &guc->dead_guc_worker);
return 0;
}
@@ -639,7 +639,7 @@ int intel_guc_to_host_process_recv_msg(struct intel_guc *guc,
guc_err(guc, "Received early exception notification!\n");
if (msg & (INTEL_GUC_RECV_MSG_CRASH_DUMP_POSTED | INTEL_GUC_RECV_MSG_EXCEPTION))
- queue_work(system_unbound_wq, &guc->dead_guc_worker);
+ queue_work(system_dfl_wq, &guc->dead_guc_worker);
return 0;
}
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
index 0d5197c0824a..2575f380d17d 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
@@ -30,7 +30,7 @@ static void ct_dead_ct_worker_func(struct work_struct *w);
do { \
if (!(ct)->dead_ct_reported) { \
(ct)->dead_ct_reason |= 1 << CT_DEAD_##reason; \
- queue_work(system_unbound_wq, &(ct)->dead_ct_worker); \
+ queue_work(system_dfl_wq, &(ct)->dead_ct_worker); \
} \
} while (0)
#else
@@ -1240,7 +1240,7 @@ static int ct_handle_event(struct intel_guc_ct *ct, struct ct_incoming_msg *requ
list_add_tail(&request->link, &ct->requests.incoming);
spin_unlock_irqrestore(&ct->requests.lock, flags);
- queue_work(system_unbound_wq, &ct->requests.worker);
+ queue_work(system_dfl_wq, &ct->requests.worker);
return 0;
}
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index f8cb7c630d5b..54d17548d4aa 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -3385,7 +3385,7 @@ static void guc_context_sched_disable(struct intel_context *ce)
} else if (!intel_context_is_closed(ce) && !guc_id_pressure(guc, ce) &&
delay) {
spin_unlock_irqrestore(&ce->guc_state.lock, flags);
- mod_delayed_work(system_unbound_wq,
+ mod_delayed_work(system_dfl_wq,
&ce->guc_state.sched_disable_delay_work,
msecs_to_jiffies(delay));
} else {
@@ -3600,7 +3600,7 @@ static void guc_context_destroy(struct kref *kref)
* take the GT PM for the first time which isn't allowed from an atomic
* context.
*/
- queue_work(system_unbound_wq, &guc->submission_state.destroyed_worker);
+ queue_work(system_dfl_wq, &guc->submission_state.destroyed_worker);
}
static int guc_context_alloc(struct intel_context *ce)
@@ -5371,7 +5371,7 @@ int intel_guc_engine_failure_process_msg(struct intel_guc *guc,
* A GT reset flushes this worker queue (G2H handler) so we must use
* another worker to trigger a GT reset.
*/
- queue_work(system_unbound_wq, &guc->submission_state.reset_fail_worker);
+ queue_work(system_dfl_wq, &guc->submission_state.reset_fail_worker);
return 0;
}
diff --git a/drivers/gpu/drm/i915/i915_active.c b/drivers/gpu/drm/i915/i915_active.c
index 0dbc4e289300..4b7238db08c4 100644
--- a/drivers/gpu/drm/i915/i915_active.c
+++ b/drivers/gpu/drm/i915/i915_active.c
@@ -193,7 +193,7 @@ active_retire(struct i915_active *ref)
return;
if (ref->flags & I915_ACTIVE_RETIRE_SLEEPS) {
- queue_work(system_unbound_wq, &ref->work);
+ queue_work(system_dfl_wq, &ref->work);
return;
}
diff --git a/drivers/gpu/drm/i915/i915_sw_fence_work.c b/drivers/gpu/drm/i915/i915_sw_fence_work.c
index d2e56b387993..366418108f78 100644
--- a/drivers/gpu/drm/i915/i915_sw_fence_work.c
+++ b/drivers/gpu/drm/i915/i915_sw_fence_work.c
@@ -38,7 +38,7 @@ fence_notify(struct i915_sw_fence *fence, enum i915_sw_fence_notify state)
if (test_bit(DMA_FENCE_WORK_IMM, &f->dma.flags))
fence_work(&f->work);
else
- queue_work(system_unbound_wq, &f->work);
+ queue_work(system_dfl_wq, &f->work);
} else {
fence_complete(f);
}
diff --git a/drivers/gpu/drm/i915/i915_vma_resource.c b/drivers/gpu/drm/i915/i915_vma_resource.c
index 53d619ef0c3d..a8f2112ce81f 100644
--- a/drivers/gpu/drm/i915/i915_vma_resource.c
+++ b/drivers/gpu/drm/i915/i915_vma_resource.c
@@ -202,7 +202,7 @@ i915_vma_resource_fence_notify(struct i915_sw_fence *fence,
i915_vma_resource_unbind_work(&vma_res->work);
} else {
INIT_WORK(&vma_res->work, i915_vma_resource_unbind_work);
- queue_work(system_unbound_wq, &vma_res->work);
+ queue_work(system_dfl_wq, &vma_res->work);
}
break;
case FENCE_FREE:
diff --git a/drivers/gpu/drm/i915/pxp/intel_pxp.c b/drivers/gpu/drm/i915/pxp/intel_pxp.c
index f8da693ad3ce..df854c961c6e 100644
--- a/drivers/gpu/drm/i915/pxp/intel_pxp.c
+++ b/drivers/gpu/drm/i915/pxp/intel_pxp.c
@@ -276,7 +276,7 @@ static void pxp_queue_termination(struct intel_pxp *pxp)
spin_lock_irq(gt->irq_lock);
intel_pxp_mark_termination_in_progress(pxp);
pxp->session_events |= PXP_TERMINATION_REQUEST;
- queue_work(system_unbound_wq, &pxp->session_work);
+ queue_work(system_dfl_wq, &pxp->session_work);
spin_unlock_irq(gt->irq_lock);
}
diff --git a/drivers/gpu/drm/i915/pxp/intel_pxp_irq.c b/drivers/gpu/drm/i915/pxp/intel_pxp_irq.c
index d81750b9bdda..735325e828bc 100644
--- a/drivers/gpu/drm/i915/pxp/intel_pxp_irq.c
+++ b/drivers/gpu/drm/i915/pxp/intel_pxp_irq.c
@@ -48,7 +48,7 @@ void intel_pxp_irq_handler(struct intel_pxp *pxp, u16 iir)
pxp->session_events |= PXP_TERMINATION_COMPLETE | PXP_EVENT_TYPE_IRQ;
if (pxp->session_events)
- queue_work(system_unbound_wq, &pxp->session_work);
+ queue_work(system_dfl_wq, &pxp->session_work);
}
static inline void __pxp_set_interrupts(struct intel_gt *gt, u32 interrupts)
diff --git a/drivers/gpu/drm/nouveau/dispnv50/disp.c b/drivers/gpu/drm/nouveau/dispnv50/disp.c
index 504cb3f2054b..d179c81d8306 100644
--- a/drivers/gpu/drm/nouveau/dispnv50/disp.c
+++ b/drivers/gpu/drm/nouveau/dispnv50/disp.c
@@ -2466,7 +2466,7 @@ nv50_disp_atomic_commit(struct drm_device *dev,
pm_runtime_get_noresume(dev->dev);
if (nonblock)
- queue_work(system_unbound_wq, &state->commit_work);
+ queue_work(system_dfl_wq, &state->commit_work);
else
nv50_disp_atomic_commit_tail(state);
diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
index e3596e2b557d..a13098ec5df0 100644
--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
@@ -1771,7 +1771,7 @@ static void vop_handle_vblank(struct vop *vop)
spin_unlock(&drm->event_lock);
if (test_and_clear_bit(VOP_PENDING_FB_UNREF, &vop->pending))
- drm_flip_work_commit(&vop->fb_unref_work, system_unbound_wq);
+ drm_flip_work_commit(&vop->fb_unref_work, system_dfl_wq);
}
static irqreturn_t vop_isr(int irq, void *data)
diff --git a/drivers/gpu/drm/xe/xe_devcoredump.c b/drivers/gpu/drm/xe/xe_devcoredump.c
index 81b9d9bb3f57..02ca9abd9e76 100644
--- a/drivers/gpu/drm/xe/xe_devcoredump.c
+++ b/drivers/gpu/drm/xe/xe_devcoredump.c
@@ -316,7 +316,7 @@ static void devcoredump_snapshot(struct xe_devcoredump *coredump,
xe_engine_snapshot_capture_for_queue(q);
- queue_work(system_unbound_wq, &ss->work);
+ queue_work(system_dfl_wq, &ss->work);
xe_force_wake_put(gt_to_fw(q->gt), fw_ref);
dma_fence_end_signalling(cookie);
diff --git a/drivers/gpu/drm/xe/xe_execlist.c b/drivers/gpu/drm/xe/xe_execlist.c
index 788f56b066b6..171a5796e0fb 100644
--- a/drivers/gpu/drm/xe/xe_execlist.c
+++ b/drivers/gpu/drm/xe/xe_execlist.c
@@ -416,7 +416,7 @@ static void execlist_exec_queue_kill(struct xe_exec_queue *q)
static void execlist_exec_queue_fini(struct xe_exec_queue *q)
{
INIT_WORK(&q->execlist->fini_async, execlist_exec_queue_fini_async);
- queue_work(system_unbound_wq, &q->execlist->fini_async);
+ queue_work(system_dfl_wq, &q->execlist->fini_async);
}
static int execlist_exec_queue_set_priority(struct xe_exec_queue *q,
diff --git a/drivers/gpu/drm/xe/xe_guc_ct.c b/drivers/gpu/drm/xe/xe_guc_ct.c
index 72ad576fc18e..4e239d8195cd 100644
--- a/drivers/gpu/drm/xe/xe_guc_ct.c
+++ b/drivers/gpu/drm/xe/xe_guc_ct.c
@@ -472,7 +472,7 @@ int xe_guc_ct_enable(struct xe_guc_ct *ct)
spin_lock_irq(&ct->dead.lock);
if (ct->dead.reason) {
ct->dead.reason |= (1 << CT_DEAD_STATE_REARM);
- queue_work(system_unbound_wq, &ct->dead.worker);
+ queue_work(system_dfl_wq, &ct->dead.worker);
}
spin_unlock_irq(&ct->dead.lock);
#endif
@@ -1811,7 +1811,7 @@ static void ct_dead_capture(struct xe_guc_ct *ct, struct guc_ctb *ctb, u32 reaso
spin_unlock_irqrestore(&ct->dead.lock, flags);
- queue_work(system_unbound_wq, &(ct)->dead.worker);
+ queue_work(system_dfl_wq, &(ct)->dead.worker);
}
static void ct_dead_print(struct xe_dead_ct *dead)
diff --git a/drivers/gpu/drm/xe/xe_oa.c b/drivers/gpu/drm/xe/xe_oa.c
index 7ffc98f67e69..1878e50eb687 100644
--- a/drivers/gpu/drm/xe/xe_oa.c
+++ b/drivers/gpu/drm/xe/xe_oa.c
@@ -956,7 +956,7 @@ static void xe_oa_config_cb(struct dma_fence *fence, struct dma_fence_cb *cb)
struct xe_oa_fence *ofence = container_of(cb, typeof(*ofence), cb);
INIT_DELAYED_WORK(&ofence->work, xe_oa_fence_work_fn);
- queue_delayed_work(system_unbound_wq, &ofence->work,
+ queue_delayed_work(system_dfl_wq, &ofence->work,
usecs_to_jiffies(NOA_PROGRAM_ADDITIONAL_DELAY_US));
dma_fence_put(fence);
}
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 60303998bd61..3e25b71749d4 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -1289,7 +1289,7 @@ static void vma_destroy_cb(struct dma_fence *fence,
struct xe_vma *vma = container_of(cb, struct xe_vma, destroy_cb);
INIT_WORK(&vma->destroy_work, vma_destroy_work_func);
- queue_work(system_unbound_wq, &vma->destroy_work);
+ queue_work(system_dfl_wq, &vma->destroy_work);
}
static void xe_vma_destroy(struct xe_vma *vma, struct dma_fence *fence)
@@ -1973,7 +1973,7 @@ static void xe_vm_free(struct drm_gpuvm *gpuvm)
struct xe_vm *vm = container_of(gpuvm, struct xe_vm, gpuvm);
/* To destroy the VM we need to be able to sleep */
- queue_work(system_unbound_wq, &vm->destroy_work);
+ queue_work(system_dfl_wq, &vm->destroy_work);
}
struct xe_vm *xe_vm_lookup(struct xe_file *xef, u32 id)
diff --git a/drivers/hte/hte.c b/drivers/hte/hte.c
index 23a6eeb8c506..e2804636f2bd 100644
--- a/drivers/hte/hte.c
+++ b/drivers/hte/hte.c
@@ -826,7 +826,7 @@ int hte_push_ts_ns(const struct hte_chip *chip, u32 xlated_id,
ret = ei->cb(data, ei->cl_data);
if (ret == HTE_RUN_SECOND_CB && ei->tcb) {
- queue_work(system_unbound_wq, &ei->cb_work);
+ queue_work(system_dfl_wq, &ei->cb_work);
set_bit(HTE_TS_QUEUE_WK, &ei->flags);
}
diff --git a/drivers/infiniband/core/ucma.c b/drivers/infiniband/core/ucma.c
index 6e700b974033..ccfcf8e4b712 100644
--- a/drivers/infiniband/core/ucma.c
+++ b/drivers/infiniband/core/ucma.c
@@ -361,7 +361,7 @@ static int ucma_event_handler(struct rdma_cm_id *cm_id,
if (event->event == RDMA_CM_EVENT_DEVICE_REMOVAL) {
xa_lock(&ctx_table);
if (xa_load(&ctx_table, ctx->id) == ctx)
- queue_work(system_unbound_wq, &ctx->close_work);
+ queue_work(system_dfl_wq, &ctx->close_work);
xa_unlock(&ctx_table);
}
return 0;
diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c
index 86d8fa63bf69..24efd9a2d82b 100644
--- a/drivers/infiniband/hw/mlx5/odp.c
+++ b/drivers/infiniband/hw/mlx5/odp.c
@@ -253,7 +253,7 @@ static void destroy_unused_implicit_child_mr(struct mlx5_ib_mr *mr)
/* Freeing a MR is a sleeping operation, so bounce to a work queue */
INIT_WORK(&mr->odp_destroy.work, free_implicit_child_mr_work);
- queue_work(system_unbound_wq, &mr->odp_destroy.work);
+ queue_work(system_dfl_wq, &mr->odp_destroy.work);
}
static bool mlx5_ib_invalidate_range(struct mmu_interval_notifier *mni,
@@ -2062,6 +2062,6 @@ int mlx5_ib_advise_mr_prefetch(struct ib_pd *pd,
destroy_prefetch_work(work);
return rc;
}
- queue_work(system_unbound_wq, &work->work);
+ queue_work(system_dfl_wq, &work->work);
return 0;
}
diff --git a/drivers/media/platform/synopsys/hdmirx/snps_hdmirx.c b/drivers/media/platform/synopsys/hdmirx/snps_hdmirx.c
index 3d2913de9a86..8c5142fc80ef 100644
--- a/drivers/media/platform/synopsys/hdmirx/snps_hdmirx.c
+++ b/drivers/media/platform/synopsys/hdmirx/snps_hdmirx.c
@@ -1735,7 +1735,7 @@ static void process_signal_change(struct snps_hdmirx_dev *hdmirx_dev)
FIFO_UNDERFLOW_INT_EN |
HDMIRX_AXI_ERROR_INT_EN, 0);
hdmirx_reset_dma(hdmirx_dev);
- queue_delayed_work(system_unbound_wq,
+ queue_delayed_work(system_dfl_wq,
&hdmirx_dev->delayed_work_res_change,
msecs_to_jiffies(50));
}
@@ -2190,7 +2190,7 @@ static void hdmirx_delayed_work_res_change(struct work_struct *work)
if (hdmirx_wait_signal_lock(hdmirx_dev)) {
hdmirx_plugout(hdmirx_dev);
- queue_delayed_work(system_unbound_wq,
+ queue_delayed_work(system_dfl_wq,
&hdmirx_dev->delayed_work_hotplug,
msecs_to_jiffies(200));
} else {
@@ -2209,7 +2209,7 @@ static irqreturn_t hdmirx_5v_det_irq_handler(int irq, void *dev_id)
val = gpiod_get_value(hdmirx_dev->detect_5v_gpio);
v4l2_dbg(3, debug, &hdmirx_dev->v4l2_dev, "%s: 5v:%d\n", __func__, val);
- queue_delayed_work(system_unbound_wq,
+ queue_delayed_work(system_dfl_wq,
&hdmirx_dev->delayed_work_hotplug,
msecs_to_jiffies(10));
@@ -2441,7 +2441,7 @@ static void hdmirx_enable_irq(struct device *dev)
enable_irq(hdmirx_dev->dma_irq);
enable_irq(hdmirx_dev->det_irq);
- queue_delayed_work(system_unbound_wq,
+ queue_delayed_work(system_dfl_wq,
&hdmirx_dev->delayed_work_hotplug,
msecs_to_jiffies(110));
}
diff --git a/drivers/net/macvlan.c b/drivers/net/macvlan.c
index d0dfa6bca6cc..8572748b79f6 100644
--- a/drivers/net/macvlan.c
+++ b/drivers/net/macvlan.c
@@ -369,7 +369,7 @@ static void macvlan_broadcast_enqueue(struct macvlan_port *port,
}
spin_unlock(&port->bc_queue.lock);
- queue_work(system_unbound_wq, &port->bc_work);
+ queue_work(system_dfl_wq, &port->bc_work);
if (err)
goto free_nskb;
diff --git a/drivers/net/netdevsim/dev.c b/drivers/net/netdevsim/dev.c
index 3e0b61202f0c..0b7c945a0e96 100644
--- a/drivers/net/netdevsim/dev.c
+++ b/drivers/net/netdevsim/dev.c
@@ -836,7 +836,7 @@ static void nsim_dev_trap_report_work(struct work_struct *work)
nsim_dev = nsim_trap_data->nsim_dev;
if (!devl_trylock(priv_to_devlink(nsim_dev))) {
- queue_delayed_work(system_unbound_wq,
+ queue_delayed_work(system_dfl_wq,
&nsim_dev->trap_data->trap_report_dw, 1);
return;
}
@@ -852,7 +852,7 @@ static void nsim_dev_trap_report_work(struct work_struct *work)
cond_resched();
}
devl_unlock(priv_to_devlink(nsim_dev));
- queue_delayed_work(system_unbound_wq,
+ queue_delayed_work(system_dfl_wq,
&nsim_dev->trap_data->trap_report_dw,
msecs_to_jiffies(NSIM_TRAP_REPORT_INTERVAL_MS));
}
@@ -909,7 +909,7 @@ static int nsim_dev_traps_init(struct devlink *devlink)
INIT_DELAYED_WORK(&nsim_dev->trap_data->trap_report_dw,
nsim_dev_trap_report_work);
- queue_delayed_work(system_unbound_wq,
+ queue_delayed_work(system_dfl_wq,
&nsim_dev->trap_data->trap_report_dw,
msecs_to_jiffies(NSIM_TRAP_REPORT_INTERVAL_MS));
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
index 03f639fbf9b6..2467b5d56014 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
+++ b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
@@ -2950,7 +2950,7 @@ int iwl_fw_dbg_collect_desc(struct iwl_fw_runtime *fwrt,
IWL_WARN(fwrt, "Collecting data: trigger %d fired.\n",
le32_to_cpu(desc->trig_desc.type));
- queue_delayed_work(system_unbound_wq, &wk_data->wk,
+ queue_delayed_work(system_dfl_wq, &wk_data->wk,
usecs_to_jiffies(delay));
return 0;
@@ -3254,7 +3254,7 @@ int iwl_fw_dbg_ini_collect(struct iwl_fw_runtime *fwrt,
if (sync)
iwl_fw_dbg_collect_sync(fwrt, idx);
else
- queue_delayed_work(system_unbound_wq,
+ queue_delayed_work(system_dfl_wq,
&fwrt->dump.wks[idx].wk,
usecs_to_jiffies(delay));
diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-trans.h b/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
index 25fb4c50e38b..29ff021b5779 100644
--- a/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
@@ -1163,7 +1163,7 @@ static inline void iwl_trans_schedule_reset(struct iwl_trans *trans,
*/
trans->restart.during_reset = test_bit(STATUS_IN_SW_RESET,
&trans->status);
- queue_work(system_unbound_wq, &trans->restart.wk);
+ queue_work(system_dfl_wq, &trans->restart.wk);
}
static inline void iwl_trans_fw_error(struct iwl_trans *trans,
diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
index b44d134e7105..87eeb8607b60 100644
--- a/drivers/scsi/qla2xxx/qla_os.c
+++ b/drivers/scsi/qla2xxx/qla_os.c
@@ -5292,7 +5292,7 @@ void qla24xx_sched_upd_fcport(fc_port_t *fcport)
qla2x00_set_fcport_disc_state(fcport, DSC_UPD_FCPORT);
spin_unlock_irqrestore(&fcport->vha->work_lock, flags);
- queue_work(system_unbound_wq, &fcport->reg_work);
+ queue_work(system_dfl_wq, &fcport->reg_work);
}
static
diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
index 9c347c64c315..e2754c1cb0a5 100644
--- a/drivers/scsi/scsi_transport_iscsi.c
+++ b/drivers/scsi/scsi_transport_iscsi.c
@@ -3957,7 +3957,7 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
list_del_init(&session->sess_list);
spin_unlock_irqrestore(&sesslock, flags);
- queue_work(system_unbound_wq, &session->destroy_work);
+ queue_work(system_dfl_wq, &session->destroy_work);
}
break;
case ISCSI_UEVENT_UNBIND_SESSION:
diff --git a/drivers/soc/xilinx/zynqmp_power.c b/drivers/soc/xilinx/zynqmp_power.c
index ae59bf16659a..6145c4fe192e 100644
--- a/drivers/soc/xilinx/zynqmp_power.c
+++ b/drivers/soc/xilinx/zynqmp_power.c
@@ -82,7 +82,7 @@ static void subsystem_restart_event_callback(const u32 *payload, void *data)
memcpy(zynqmp_pm_init_restart_work->args, &payload[0],
sizeof(zynqmp_pm_init_restart_work->args));
- queue_work(system_unbound_wq, &zynqmp_pm_init_restart_work->callback_work);
+ queue_work(system_dfl_wq, &zynqmp_pm_init_restart_work->callback_work);
}
static void suspend_event_callback(const u32 *payload, void *data)
@@ -95,7 +95,7 @@ static void suspend_event_callback(const u32 *payload, void *data)
memcpy(zynqmp_pm_init_suspend_work->args, &payload[1],
sizeof(zynqmp_pm_init_suspend_work->args));
- queue_work(system_unbound_wq, &zynqmp_pm_init_suspend_work->callback_work);
+ queue_work(system_dfl_wq, &zynqmp_pm_init_suspend_work->callback_work);
}
static irqreturn_t zynqmp_pm_isr(int irq, void *data)
@@ -140,7 +140,7 @@ static void ipi_receive_callback(struct mbox_client *cl, void *data)
memcpy(zynqmp_pm_init_suspend_work->args, &payload[1],
sizeof(zynqmp_pm_init_suspend_work->args));
- queue_work(system_unbound_wq,
+ queue_work(system_dfl_wq,
&zynqmp_pm_init_suspend_work->callback_work);
/* Send NULL message to mbox controller to ack the message */
diff --git a/drivers/target/sbp/sbp_target.c b/drivers/target/sbp/sbp_target.c
index 3b89b5a70331..b8457477cee9 100644
--- a/drivers/target/sbp/sbp_target.c
+++ b/drivers/target/sbp/sbp_target.c
@@ -730,7 +730,7 @@ static int tgt_agent_rw_orb_pointer(struct fw_card *card, int tcode, void *data,
pr_debug("tgt_agent ORB_POINTER write: 0x%llx\n",
agent->orb_pointer);
- queue_work(system_unbound_wq, &agent->work);
+ queue_work(system_dfl_wq, &agent->work);
return RCODE_COMPLETE;
@@ -764,7 +764,7 @@ static int tgt_agent_rw_doorbell(struct fw_card *card, int tcode, void *data,
pr_debug("tgt_agent DOORBELL\n");
- queue_work(system_unbound_wq, &agent->work);
+ queue_work(system_dfl_wq, &agent->work);
return RCODE_COMPLETE;
@@ -990,7 +990,7 @@ static void tgt_agent_fetch_work(struct work_struct *work)
if (tgt_agent_check_active(agent) && !doorbell) {
INIT_WORK(&req->work, tgt_agent_process_work);
- queue_work(system_unbound_wq, &req->work);
+ queue_work(system_dfl_wq, &req->work);
} else {
/* don't process this request, just check next_ORB */
sbp_free_request(req);
@@ -1618,7 +1618,7 @@ static void sbp_mgt_agent_rw(struct fw_card *card,
agent->orb_offset = sbp2_pointer_to_addr(ptr);
agent->request = req;
- queue_work(system_unbound_wq, &agent->work);
+ queue_work(system_dfl_wq, &agent->work);
rcode = RCODE_COMPLETE;
} else if (tcode == TCODE_READ_BLOCK_REQUEST) {
addr_to_sbp2_pointer(agent->orb_offset, ptr);
diff --git a/drivers/tty/serial/8250/8250_dw.c b/drivers/tty/serial/8250/8250_dw.c
index 1902f29444a1..50a5ee546373 100644
--- a/drivers/tty/serial/8250/8250_dw.c
+++ b/drivers/tty/serial/8250/8250_dw.c
@@ -361,7 +361,7 @@ static int dw8250_clk_notifier_cb(struct notifier_block *nb,
* deferred event handling complication.
*/
if (event == POST_RATE_CHANGE) {
- queue_work(system_unbound_wq, &d->clk_work);
+ queue_work(system_dfl_wq, &d->clk_work);
return NOTIFY_OK;
}
@@ -678,7 +678,7 @@ static int dw8250_probe(struct platform_device *pdev)
err = clk_notifier_register(data->clk, &data->clk_notifier);
if (err)
return dev_err_probe(dev, err, "Failed to set the clock notifier\n");
- queue_work(system_unbound_wq, &data->clk_work);
+ queue_work(system_dfl_wq, &data->clk_work);
}
platform_set_drvdata(pdev, data);
diff --git a/drivers/tty/tty_buffer.c b/drivers/tty/tty_buffer.c
index 79f0ff94ce00..60066ece9d96 100644
--- a/drivers/tty/tty_buffer.c
+++ b/drivers/tty/tty_buffer.c
@@ -76,7 +76,7 @@ void tty_buffer_unlock_exclusive(struct tty_port *port)
mutex_unlock(&buf->lock);
if (restart)
- queue_work(system_unbound_wq, &buf->work);
+ queue_work(system_dfl_wq, &buf->work);
}
EXPORT_SYMBOL_GPL(tty_buffer_unlock_exclusive);
@@ -531,7 +531,7 @@ void tty_flip_buffer_push(struct tty_port *port)
struct tty_bufhead *buf = &port->buf;
tty_flip_buffer_commit(buf->tail);
- queue_work(system_unbound_wq, &buf->work);
+ queue_work(system_dfl_wq, &buf->work);
}
EXPORT_SYMBOL(tty_flip_buffer_push);
@@ -561,7 +561,7 @@ int tty_insert_flip_string_and_push_buffer(struct tty_port *port,
tty_flip_buffer_commit(buf->tail);
spin_unlock_irqrestore(&port->lock, flags);
- queue_work(system_unbound_wq, &buf->work);
+ queue_work(system_dfl_wq, &buf->work);
return size;
}
@@ -614,7 +614,7 @@ void tty_buffer_set_lock_subclass(struct tty_port *port)
bool tty_buffer_restart_work(struct tty_port *port)
{
- return queue_work(system_unbound_wq, &port->buf.work);
+ return queue_work(system_dfl_wq, &port->buf.work);
}
bool tty_buffer_cancel_work(struct tty_port *port)
diff --git a/fs/afs/callback.c b/fs/afs/callback.c
index 69e1dd55b160..894d2bad6b6c 100644
--- a/fs/afs/callback.c
+++ b/fs/afs/callback.c
@@ -42,7 +42,7 @@ static void afs_volume_init_callback(struct afs_volume *volume)
list_for_each_entry(vnode, &volume->open_mmaps, cb_mmap_link) {
if (vnode->cb_v_check != atomic_read(&volume->cb_v_break)) {
afs_clear_cb_promise(vnode, afs_cb_promise_clear_vol_init_cb);
- queue_work(system_unbound_wq, &vnode->cb_work);
+ queue_work(system_dfl_wq, &vnode->cb_work);
}
}
@@ -90,7 +90,7 @@ void __afs_break_callback(struct afs_vnode *vnode, enum afs_cb_break_reason reas
if (reason != afs_cb_break_for_deleted &&
vnode->status.type == AFS_FTYPE_FILE &&
atomic_read(&vnode->cb_nr_mmap))
- queue_work(system_unbound_wq, &vnode->cb_work);
+ queue_work(system_dfl_wq, &vnode->cb_work);
trace_afs_cb_break(&vnode->fid, vnode->cb_break, reason, true);
} else {
diff --git a/fs/afs/write.c b/fs/afs/write.c
index 18b0a9f1615e..fe3421435e05 100644
--- a/fs/afs/write.c
+++ b/fs/afs/write.c
@@ -172,7 +172,7 @@ static void afs_issue_write_worker(struct work_struct *work)
void afs_issue_write(struct netfs_io_subrequest *subreq)
{
subreq->work.func = afs_issue_write_worker;
- if (!queue_work(system_unbound_wq, &subreq->work))
+ if (!queue_work(system_dfl_wq, &subreq->work))
WARN_ON_ONCE(1);
}
diff --git a/fs/bcachefs/btree_write_buffer.c b/fs/bcachefs/btree_write_buffer.c
index adbe576ec77e..8b9cd4cfd488 100644
--- a/fs/bcachefs/btree_write_buffer.c
+++ b/fs/bcachefs/btree_write_buffer.c
@@ -822,7 +822,7 @@ int bch2_journal_keys_to_write_buffer_end(struct bch_fs *c, struct journal_keys_
if (bch2_btree_write_buffer_should_flush(c) &&
__bch2_write_ref_tryget(c, BCH_WRITE_REF_btree_write_buffer) &&
- !queue_work(system_unbound_wq, &c->btree_write_buffer.flush_work))
+ !queue_work(system_dfl_wq, &c->btree_write_buffer.flush_work))
bch2_write_ref_put(c, BCH_WRITE_REF_btree_write_buffer);
if (dst->wb == &wb->flushing)
diff --git a/fs/bcachefs/io_read.c b/fs/bcachefs/io_read.c
index 417bb0c7bbfa..1b05ad45220c 100644
--- a/fs/bcachefs/io_read.c
+++ b/fs/bcachefs/io_read.c
@@ -553,7 +553,7 @@ static void bch2_rbio_error(struct bch_read_bio *rbio,
if (bch2_err_matches(ret, BCH_ERR_data_read_retry)) {
bch2_rbio_punt(rbio, bch2_rbio_retry,
- RBIO_CONTEXT_UNBOUND, system_unbound_wq);
+ RBIO_CONTEXT_UNBOUND, system_dfl_wq);
} else {
rbio = bch2_rbio_free(rbio);
@@ -833,13 +833,13 @@ static void __bch2_read_endio(struct work_struct *work)
memalloc_nofs_restore(nofs_flags);
return;
csum_err:
- bch2_rbio_punt(rbio, bch2_read_csum_err, RBIO_CONTEXT_UNBOUND, system_unbound_wq);
+ bch2_rbio_punt(rbio, bch2_read_csum_err, RBIO_CONTEXT_UNBOUND, system_dfl_wq);
goto out;
decompression_err:
- bch2_rbio_punt(rbio, bch2_read_decompress_err, RBIO_CONTEXT_UNBOUND, system_unbound_wq);
+ bch2_rbio_punt(rbio, bch2_read_decompress_err, RBIO_CONTEXT_UNBOUND, system_dfl_wq);
goto out;
decrypt_err:
- bch2_rbio_punt(rbio, bch2_read_decrypt_err, RBIO_CONTEXT_UNBOUND, system_unbound_wq);
+ bch2_rbio_punt(rbio, bch2_read_decrypt_err, RBIO_CONTEXT_UNBOUND, system_dfl_wq);
goto out;
}
@@ -859,7 +859,7 @@ static void bch2_read_endio(struct bio *bio)
rbio->bio.bi_end_io = rbio->end_io;
if (unlikely(bio->bi_status)) {
- bch2_rbio_punt(rbio, bch2_read_io_err, RBIO_CONTEXT_UNBOUND, system_unbound_wq);
+ bch2_rbio_punt(rbio, bch2_read_io_err, RBIO_CONTEXT_UNBOUND, system_dfl_wq);
return;
}
@@ -878,7 +878,7 @@ static void bch2_read_endio(struct bio *bio)
rbio->promote ||
crc_is_compressed(rbio->pick.crc) ||
bch2_csum_type_is_encryption(rbio->pick.crc.csum_type))
- context = RBIO_CONTEXT_UNBOUND, wq = system_unbound_wq;
+ context = RBIO_CONTEXT_UNBOUND, wq = system_dfl_wq;
else if (rbio->pick.crc.csum_type)
context = RBIO_CONTEXT_HIGHPRI, wq = system_highpri_wq;
diff --git a/fs/bcachefs/journal_io.c b/fs/bcachefs/journal_io.c
index 1b7961f4f609..298be7748e99 100644
--- a/fs/bcachefs/journal_io.c
+++ b/fs/bcachefs/journal_io.c
@@ -1256,7 +1256,7 @@ int bch2_journal_read(struct bch_fs *c,
percpu_ref_tryget(&ca->io_ref[READ]))
closure_call(&ca->journal.read,
bch2_journal_read_device,
- system_unbound_wq,
+ system_dfl_wq,
&jlist.cl);
else
degraded = true;
diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
index a8129f1ce78c..eb25a4acd54d 100644
--- a/fs/btrfs/block-group.c
+++ b/fs/btrfs/block-group.c
@@ -2026,7 +2026,7 @@ void btrfs_reclaim_bgs(struct btrfs_fs_info *fs_info)
btrfs_reclaim_sweep(fs_info);
spin_lock(&fs_info->unused_bgs_lock);
if (!list_empty(&fs_info->reclaim_bgs))
- queue_work(system_unbound_wq, &fs_info->reclaim_bgs_work);
+ queue_work(system_dfl_wq, &fs_info->reclaim_bgs_work);
spin_unlock(&fs_info->unused_bgs_lock);
}
diff --git a/fs/btrfs/extent_map.c b/fs/btrfs/extent_map.c
index 7f46abbd6311..812823b93b66 100644
--- a/fs/btrfs/extent_map.c
+++ b/fs/btrfs/extent_map.c
@@ -1373,7 +1373,7 @@ void btrfs_free_extent_maps(struct btrfs_fs_info *fs_info, long nr_to_scan)
if (atomic64_cmpxchg(&fs_info->em_shrinker_nr_to_scan, 0, nr_to_scan) != 0)
return;
- queue_work(system_unbound_wq, &fs_info->em_shrinker_work);
+ queue_work(system_dfl_wq, &fs_info->em_shrinker_work);
}
void btrfs_init_extent_map_shrinker_work(struct btrfs_fs_info *fs_info)
diff --git a/fs/btrfs/space-info.c b/fs/btrfs/space-info.c
index ff089e3e4103..719d8d13d63e 100644
--- a/fs/btrfs/space-info.c
+++ b/fs/btrfs/space-info.c
@@ -1764,7 +1764,7 @@ static int __reserve_bytes(struct btrfs_fs_info *fs_info,
space_info->flags,
orig_bytes, flush,
"enospc");
- queue_work(system_unbound_wq, async_work);
+ queue_work(system_dfl_wq, async_work);
}
} else {
list_add_tail(&ticket.list,
@@ -1781,7 +1781,7 @@ static int __reserve_bytes(struct btrfs_fs_info *fs_info,
need_preemptive_reclaim(fs_info, space_info)) {
trace_btrfs_trigger_flush(fs_info, space_info->flags,
orig_bytes, flush, "preempt");
- queue_work(system_unbound_wq,
+ queue_work(system_dfl_wq,
&fs_info->preempt_reclaim_work);
}
}
diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
index fb8b8b29c169..7ab51a1e857e 100644
--- a/fs/btrfs/zoned.c
+++ b/fs/btrfs/zoned.c
@@ -2429,7 +2429,7 @@ void btrfs_schedule_zone_finish_bg(struct btrfs_block_group *bg,
atomic_inc(&eb->refs);
bg->last_eb = eb;
INIT_WORK(&bg->zone_finish_work, btrfs_zone_finish_endio_workfn);
- queue_work(system_unbound_wq, &bg->zone_finish_work);
+ queue_work(system_dfl_wq, &bg->zone_finish_work);
}
void btrfs_clear_data_reloc_bg(struct btrfs_block_group *bg)
diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
index 0d523e9fb3d5..689950520e28 100644
--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -3927,7 +3927,7 @@ void ext4_process_freed_data(struct super_block *sb, tid_t commit_tid)
list_splice_tail(&freed_data_list, &sbi->s_discard_list);
spin_unlock(&sbi->s_md_lock);
if (wake)
- queue_work(system_unbound_wq, &sbi->s_discard_work);
+ queue_work(system_dfl_wq, &sbi->s_discard_work);
} else {
list_for_each_entry_safe(entry, tmp, &freed_data_list, efd_list)
kmem_cache_free(ext4_free_data_cachep, entry);
diff --git a/fs/netfs/objects.c b/fs/netfs/objects.c
index dc6b41ef18b0..da9cf4747728 100644
--- a/fs/netfs/objects.c
+++ b/fs/netfs/objects.c
@@ -159,7 +159,7 @@ void netfs_put_request(struct netfs_io_request *rreq, bool was_async,
if (dead) {
if (was_async) {
rreq->work.func = netfs_free_request;
- if (!queue_work(system_unbound_wq, &rreq->work))
+ if (!queue_work(system_dfl_wq, &rreq->work))
WARN_ON(1);
} else {
netfs_free_request(&rreq->work);
diff --git a/fs/netfs/read_collect.c b/fs/netfs/read_collect.c
index 23c75755ad4e..3f64a9f6c688 100644
--- a/fs/netfs/read_collect.c
+++ b/fs/netfs/read_collect.c
@@ -474,7 +474,7 @@ void netfs_wake_read_collector(struct netfs_io_request *rreq)
!test_bit(NETFS_RREQ_RETRYING, &rreq->flags)) {
if (!work_pending(&rreq->work)) {
netfs_get_request(rreq, netfs_rreq_trace_get_work);
- if (!queue_work(system_unbound_wq, &rreq->work))
+ if (!queue_work(system_dfl_wq, &rreq->work))
netfs_put_request(rreq, true, netfs_rreq_trace_put_work_nq);
}
} else {
diff --git a/fs/netfs/write_collect.c b/fs/netfs/write_collect.c
index 3fca59e6475d..7ef3859e36d0 100644
--- a/fs/netfs/write_collect.c
+++ b/fs/netfs/write_collect.c
@@ -451,7 +451,7 @@ void netfs_wake_write_collector(struct netfs_io_request *wreq, bool was_async)
{
if (!work_pending(&wreq->work)) {
netfs_get_request(wreq, netfs_rreq_trace_get_work);
- if (!queue_work(system_unbound_wq, &wreq->work))
+ if (!queue_work(system_dfl_wq, &wreq->work))
netfs_put_request(wreq, was_async, netfs_rreq_trace_put_work_nq);
}
}
diff --git a/fs/nfsd/filecache.c b/fs/nfsd/filecache.c
index ab85e6a2454f..910fde3240a9 100644
--- a/fs/nfsd/filecache.c
+++ b/fs/nfsd/filecache.c
@@ -113,7 +113,7 @@ static void
nfsd_file_schedule_laundrette(void)
{
if (test_bit(NFSD_FILE_CACHE_UP, &nfsd_file_flags))
- queue_delayed_work(system_unbound_wq, &nfsd_filecache_laundrette,
+ queue_delayed_work(system_dfl_wq, &nfsd_filecache_laundrette,
NFSD_LAUNDRETTE_DELAY);
}
diff --git a/fs/notify/mark.c b/fs/notify/mark.c
index 798340db69d7..55a03bb05aa1 100644
--- a/fs/notify/mark.c
+++ b/fs/notify/mark.c
@@ -428,7 +428,7 @@ void fsnotify_put_mark(struct fsnotify_mark *mark)
conn->destroy_next = connector_destroy_list;
connector_destroy_list = conn;
spin_unlock(&destroy_lock);
- queue_work(system_unbound_wq, &connector_reaper_work);
+ queue_work(system_dfl_wq, &connector_reaper_work);
}
/*
* Note that we didn't update flags telling whether inode cares about
@@ -439,7 +439,7 @@ void fsnotify_put_mark(struct fsnotify_mark *mark)
spin_lock(&destroy_lock);
list_add(&mark->g_list, &destroy_list);
spin_unlock(&destroy_lock);
- queue_delayed_work(system_unbound_wq, &reaper_work,
+ queue_delayed_work(system_dfl_wq, &reaper_work,
FSNOTIFY_REAPER_DELAY);
}
EXPORT_SYMBOL_GPL(fsnotify_put_mark);
diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
index 825c5c2e0962..39d9756a9cef 100644
--- a/fs/quota/dquot.c
+++ b/fs/quota/dquot.c
@@ -881,7 +881,7 @@ void dqput(struct dquot *dquot)
put_releasing_dquots(dquot);
atomic_dec(&dquot->dq_count);
spin_unlock(&dq_list_lock);
- queue_delayed_work(system_unbound_wq, &quota_release_work, 1);
+ queue_delayed_work(system_dfl_wq, &quota_release_work, 1);
}
EXPORT_SYMBOL(dqput);
diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
index 69cc81e670f6..90258f228ea5 100644
--- a/include/linux/workqueue.h
+++ b/include/linux/workqueue.h
@@ -667,6 +667,11 @@ static inline bool queue_work(struct workqueue_struct *wq,
wq = system_percpu_wq;
}
+ if (wq == system_unbound_wq) {
+ pr_warn_once("system_unbound_wq will be removed in the near future. Please use the new system_dfl_wq. wq set to system_dfl_wq\n");
+ wq = system_dfl_wq;
+ }
+
return queue_work_on(WORK_CPU_UNBOUND, wq, work);
}
@@ -687,6 +692,11 @@ static inline bool queue_delayed_work(struct workqueue_struct *wq,
wq = system_percpu_wq;
}
+ if (wq == system_unbound_wq) {
+ pr_warn_once("system_unbound_wq will be removed in the near future. Please use the new system_dfl_wq. wq set to system_dfl_wq\n");
+ wq = system_dfl_wq;
+ }
+
return queue_delayed_work_on(WORK_CPU_UNBOUND, wq, dwork, delay);
}
@@ -707,6 +717,11 @@ static inline bool mod_delayed_work(struct workqueue_struct *wq,
wq = system_percpu_wq;
}
+ if (wq == system_unbound_wq) {
+ pr_warn_once("system_unbound_wq will be removed in the near future. Please use the new system_dfl_wq. wq set to system_dfl_wq\n");
+ wq = system_dfl_wq;
+ }
+
return mod_delayed_work_on(WORK_CPU_UNBOUND, wq, dwork, delay);
}
@@ -794,8 +809,8 @@ extern void __warn_flushing_systemwide_wq(void)
_wq == system_highpri_wq) || \
(__builtin_constant_p(_wq == system_long_wq) && \
_wq == system_long_wq) || \
- (__builtin_constant_p(_wq == system_unbound_wq) && \
- _wq == system_unbound_wq) || \
+ (__builtin_constant_p(_wq == system_dfl_wq) && \
+ _wq == system_dfl_wq) || \
(__builtin_constant_p(_wq == system_freezable_wq) && \
_wq == system_freezable_wq) || \
(__builtin_constant_p(_wq == system_power_efficient_wq) && \
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 2a6ead3c7d36..74972ecf2045 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -2983,7 +2983,7 @@ static __cold void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx)
INIT_WORK(&ctx->exit_work, io_ring_exit_work);
/*
- * Use system_unbound_wq to avoid spawning tons of event kworkers
+ * Use system_dfl_wq to avoid spawning tons of event kworkers
* if we're exiting a ton of rings at the same time. It just adds
* noise and overhead, there's no discernable change in runtime
* over using system_percpu_wq.
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index e3a2662f4e33..b969ca4d7af0 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -1593,7 +1593,7 @@ void bpf_timer_cancel_and_free(void *val)
* timer callback.
*/
if (this_cpu_read(hrtimer_running)) {
- queue_work(system_unbound_wq, &t->cb.delete_work);
+ queue_work(system_dfl_wq, &t->cb.delete_work);
return;
}
@@ -1606,7 +1606,7 @@ void bpf_timer_cancel_and_free(void *val)
if (hrtimer_try_to_cancel(&t->timer) >= 0)
kfree_rcu(t, cb.rcu);
else
- queue_work(system_unbound_wq, &t->cb.delete_work);
+ queue_work(system_dfl_wq, &t->cb.delete_work);
} else {
bpf_timer_delete_work(&t->cb.delete_work);
}
diff --git a/kernel/bpf/memalloc.c b/kernel/bpf/memalloc.c
index 889374722d0a..bd45dda9dc35 100644
--- a/kernel/bpf/memalloc.c
+++ b/kernel/bpf/memalloc.c
@@ -736,7 +736,7 @@ static void destroy_mem_alloc(struct bpf_mem_alloc *ma, int rcu_in_progress)
/* Defer barriers into worker to let the rest of map memory to be freed */
memset(ma, 0, sizeof(*ma));
INIT_WORK(&copy->work, free_mem_alloc_deferred);
- queue_work(system_unbound_wq, &copy->work);
+ queue_work(system_dfl_wq, &copy->work);
}
void bpf_mem_alloc_destroy(struct bpf_mem_alloc *ma)
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 9794446bc8c6..bb6f85fda240 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -901,7 +901,7 @@ static void bpf_map_free_in_work(struct bpf_map *map)
/* Avoid spawning kworkers, since they all might contend
* for the same mutex like slab_mutex.
*/
- queue_work(system_unbound_wq, &map->work);
+ queue_work(system_dfl_wq, &map->work);
}
static void bpf_map_free_rcu_gp(struct rcu_head *rcu)
diff --git a/kernel/padata.c b/kernel/padata.c
index b3d4eacc4f5d..76b39fc8b326 100644
--- a/kernel/padata.c
+++ b/kernel/padata.c
@@ -551,9 +551,9 @@ void __init padata_do_multithreaded(struct padata_mt_job *job)
do {
nid = next_node_in(old_node, node_states[N_CPU]);
} while (!atomic_try_cmpxchg(&last_used_nid, &old_node, nid));
- queue_work_node(nid, system_unbound_wq, &pw->pw_work);
+ queue_work_node(nid, system_dfl_wq, &pw->pw_work);
} else {
- queue_work(system_unbound_wq, &pw->pw_work);
+ queue_work(system_dfl_wq, &pw->pw_work);
}
/* Use the current thread, which saves starting a workqueue worker. */
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index c81cf642dba0..baa096e543f1 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5772,7 +5772,7 @@ static void sched_tick_remote(struct work_struct *work)
os = atomic_fetch_add_unless(&twork->state, -1, TICK_SCHED_REMOTE_RUNNING);
WARN_ON_ONCE(os == TICK_SCHED_REMOTE_OFFLINE);
if (os == TICK_SCHED_REMOTE_RUNNING)
- queue_delayed_work(system_unbound_wq, dwork, HZ);
+ queue_delayed_work(system_dfl_wq, dwork, HZ);
}
static void sched_tick_start(int cpu)
@@ -5791,7 +5791,7 @@ static void sched_tick_start(int cpu)
if (os == TICK_SCHED_REMOTE_OFFLINE) {
twork->cpu = cpu;
INIT_DELAYED_WORK(&twork->work, sched_tick_remote);
- queue_delayed_work(system_unbound_wq, &twork->work, HZ);
+ queue_delayed_work(system_dfl_wq, &twork->work, HZ);
}
}
diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index 66bcd40a28ca..b1b79143e391 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -3514,7 +3514,7 @@ static void scx_watchdog_workfn(struct work_struct *work)
cond_resched();
}
- queue_delayed_work(system_unbound_wq, to_delayed_work(work),
+ queue_delayed_work(system_dfl_wq, to_delayed_work(work),
scx_watchdog_timeout / 2);
}
@@ -5403,7 +5403,7 @@ static int scx_ops_enable(struct sched_ext_ops *ops, struct bpf_link *link)
WRITE_ONCE(scx_watchdog_timeout, timeout);
WRITE_ONCE(scx_watchdog_timestamp, jiffies);
- queue_delayed_work(system_unbound_wq, &scx_watchdog_work,
+ queue_delayed_work(system_dfl_wq, &scx_watchdog_work,
scx_watchdog_timeout / 2);
/*
diff --git a/kernel/umh.c b/kernel/umh.c
index b4da45a3a7cf..cda899327952 100644
--- a/kernel/umh.c
+++ b/kernel/umh.c
@@ -430,7 +430,7 @@ int call_usermodehelper_exec(struct subprocess_info *sub_info, int wait)
sub_info->complete = (wait == UMH_NO_WAIT) ? NULL : &done;
sub_info->wait = wait;
- queue_work(system_unbound_wq, &sub_info->work);
+ queue_work(system_dfl_wq, &sub_info->work);
if (wait == UMH_NO_WAIT) /* task has freed sub_info */
goto unlock;
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 94f87c3fa909..89839eebb359 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -2936,7 +2936,7 @@ static void idle_worker_timeout(struct timer_list *t)
raw_spin_unlock_irq(&pool->lock);
if (do_cull)
- queue_work(system_unbound_wq, &pool->idle_cull_work);
+ queue_work(system_dfl_wq, &pool->idle_cull_work);
}
/**
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 784605103202..7e672424f928 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -934,7 +934,7 @@ void wb_memcg_offline(struct mem_cgroup *memcg)
memcg_cgwb_list->next = NULL; /* prevent new wb's */
spin_unlock_irq(&cgwb_lock);
- queue_work(system_unbound_wq, &cleanup_offline_cgwbs_work);
+ queue_work(system_dfl_wq, &cleanup_offline_cgwbs_work);
}
/**
diff --git a/mm/kfence/core.c b/mm/kfence/core.c
index 102048821c22..f26d87d59296 100644
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -854,7 +854,7 @@ static void toggle_allocation_gate(struct work_struct *work)
/* Disable static key and reset timer. */
static_branch_disable(&kfence_allocation_key);
#endif
- queue_delayed_work(system_unbound_wq, &kfence_timer,
+ queue_delayed_work(system_dfl_wq, &kfence_timer,
msecs_to_jiffies(kfence_sample_interval));
}
@@ -900,7 +900,7 @@ static void kfence_init_enable(void)
atomic_notifier_chain_register(&panic_notifier_list, &kfence_check_canary_notifier);
WRITE_ONCE(kfence_enabled, true);
- queue_delayed_work(system_unbound_wq, &kfence_timer, 0);
+ queue_delayed_work(system_dfl_wq, &kfence_timer, 0);
pr_info("initialized - using %lu bytes for %d objects at 0x%p-0x%p\n", KFENCE_POOL_SIZE,
CONFIG_KFENCE_NUM_OBJECTS, (void *)__kfence_pool,
@@ -996,7 +996,7 @@ static int kfence_enable_late(void)
return kfence_init_late();
WRITE_ONCE(kfence_enabled, true);
- queue_delayed_work(system_unbound_wq, &kfence_timer, 0);
+ queue_delayed_work(system_dfl_wq, &kfence_timer, 0);
pr_info("re-enabled\n");
return 0;
}
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 421740f1bcdc..c2944bc83378 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -651,7 +651,7 @@ static void flush_memcg_stats_dwork(struct work_struct *w)
* in latency-sensitive paths is as cheap as possible.
*/
__mem_cgroup_flush_stats(root_mem_cgroup, true);
- queue_delayed_work(system_unbound_wq, &stats_flush_dwork, FLUSH_TIME);
+ queue_delayed_work(system_dfl_wq, &stats_flush_dwork, FLUSH_TIME);
}
unsigned long memcg_page_state(struct mem_cgroup *memcg, int idx)
@@ -3732,7 +3732,7 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
goto offline_kmem;
if (unlikely(mem_cgroup_is_root(memcg)) && !mem_cgroup_disabled())
- queue_delayed_work(system_unbound_wq, &stats_flush_dwork,
+ queue_delayed_work(system_dfl_wq, &stats_flush_dwork,
FLUSH_TIME);
lru_gen_online_memcg(memcg);
diff --git a/net/core/link_watch.c b/net/core/link_watch.c
index cb04ef2b9807..7c17cba6437d 100644
--- a/net/core/link_watch.c
+++ b/net/core/link_watch.c
@@ -157,9 +157,9 @@ static void linkwatch_schedule_work(int urgent)
* override the existing timer.
*/
if (test_bit(LW_URGENT, &linkwatch_flags))
- mod_delayed_work(system_unbound_wq, &linkwatch_work, 0);
+ mod_delayed_work(system_dfl_wq, &linkwatch_work, 0);
else
- queue_delayed_work(system_unbound_wq, &linkwatch_work, delay);
+ queue_delayed_work(system_dfl_wq, &linkwatch_work, delay);
}
diff --git a/net/unix/garbage.c b/net/unix/garbage.c
index 01e2b9452c75..684ab03137b6 100644
--- a/net/unix/garbage.c
+++ b/net/unix/garbage.c
@@ -592,7 +592,7 @@ static DECLARE_WORK(unix_gc_work, __unix_gc);
void unix_gc(void)
{
WRITE_ONCE(gc_in_progress, true);
- queue_work(system_unbound_wq, &unix_gc_work);
+ queue_work(system_dfl_wq, &unix_gc_work);
}
#define UNIX_INFLIGHT_TRIGGER_GC 16000
diff --git a/net/wireless/core.c b/net/wireless/core.c
index dcce326fdb8c..ffe0f439fda8 100644
--- a/net/wireless/core.c
+++ b/net/wireless/core.c
@@ -428,7 +428,7 @@ static void cfg80211_wiphy_work(struct work_struct *work)
if (wk) {
list_del_init(&wk->entry);
if (!list_empty(&rdev->wiphy_work_list))
- queue_work(system_unbound_wq, work);
+ queue_work(system_dfl_wq, work);
spin_unlock_irq(&rdev->wiphy_work_lock);
trace_wiphy_work_run(&rdev->wiphy, wk);
@@ -1670,7 +1670,7 @@ void wiphy_work_queue(struct wiphy *wiphy, struct wiphy_work *work)
list_add_tail(&work->entry, &rdev->wiphy_work_list);
spin_unlock_irqrestore(&rdev->wiphy_work_lock, flags);
- queue_work(system_unbound_wq, &rdev->wiphy_work);
+ queue_work(system_dfl_wq, &rdev->wiphy_work);
}
EXPORT_SYMBOL_GPL(wiphy_work_queue);
diff --git a/net/wireless/sysfs.c b/net/wireless/sysfs.c
index 62f26618f674..8d142856e385 100644
--- a/net/wireless/sysfs.c
+++ b/net/wireless/sysfs.c
@@ -137,7 +137,7 @@ static int wiphy_resume(struct device *dev)
if (rdev->wiphy.registered && rdev->ops->resume)
ret = rdev_resume(rdev);
rdev->suspended = false;
- queue_work(system_unbound_wq, &rdev->wiphy_work);
+ queue_work(system_dfl_wq, &rdev->wiphy_work);
wiphy_unlock(&rdev->wiphy);
if (ret)
diff --git a/rust/kernel/workqueue.rs b/rust/kernel/workqueue.rs
index 7c7e99a8c033..6f508c3e37e4 100644
--- a/rust/kernel/workqueue.rs
+++ b/rust/kernel/workqueue.rs
@@ -662,14 +662,14 @@ pub fn system_long() -> &'static Queue {
unsafe { Queue::from_raw(bindings::system_long_wq) }
}
-/// Returns the system unbound work queue (`system_unbound_wq`).
+/// Returns the system unbound work queue (`system_dfl_wq`).
///
/// Workers are not bound to any specific CPU, not concurrency managed, and all queued work items
/// are executed immediately as long as `max_active` limit is not reached and resources are
/// available.
pub fn system_unbound() -> &'static Queue {
- // SAFETY: `system_unbound_wq` is a C global, always available.
- unsafe { Queue::from_raw(bindings::system_unbound_wq) }
+ // SAFETY: `system_dfl_wq` is a C global, always available.
+ unsafe { Queue::from_raw(bindings::system_dfl_wq) }
}
/// Returns the system freezable work queue (`system_freezable_wq`).
diff --git a/sound/soc/codecs/wm_adsp.c b/sound/soc/codecs/wm_adsp.c
index 91c8697c29c3..c8fff8496ede 100644
--- a/sound/soc/codecs/wm_adsp.c
+++ b/sound/soc/codecs/wm_adsp.c
@@ -1044,7 +1044,7 @@ int wm_adsp_early_event(struct snd_soc_dapm_widget *w,
switch (event) {
case SND_SOC_DAPM_PRE_PMU:
- queue_work(system_unbound_wq, &dsp->boot_work);
+ queue_work(system_dfl_wq, &dsp->boot_work);
break;
case SND_SOC_DAPM_PRE_PMD:
wm_adsp_power_down(dsp);
--
2.49.0
* [PATCH v1 06/10] Workqueue: net: WQ_PERCPU added to alloc_workqueue users
2025-06-25 10:49 [PATCH v1 00/10] Workqueue: replace system wq and change alloc_workqueue callers Marco Crivellari
` (4 preceding siblings ...)
2025-06-25 10:49 ` [PATCH v1 05/10] Workqueue: replace use of system_unbound_wq with system_dfl_wq Marco Crivellari
@ 2025-06-25 10:49 ` Marco Crivellari
2025-06-25 10:49 ` [PATCH v1 07/10] Workqueue: mm: " Marco Crivellari
` (4 subsequent siblings)
10 siblings, 0 replies; 12+ messages in thread
From: Marco Crivellari @ 2025-06-25 10:49 UTC (permalink / raw)
To: linux-kernel
Cc: Tejun Heo, Lai Jiangshan, Thomas Gleixner, Frederic Weisbecker,
Sebastian Andrzej Siewior, Marco Crivellari, Michal Hocko,
David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni
Currently, if a user enqueues a work item using schedule_delayed_work(), the
used wq is "system_wq" (per-cpu wq), while queue_delayed_work() uses
WORK_CPU_UNBOUND (used when a CPU is not specified). The same applies to
schedule_work(), which uses system_wq, and to queue_work(), which again makes
use of WORK_CPU_UNBOUND.
This lack of consistency cannot be addressed without refactoring the API.
alloc_workqueue() treats all queues as per-CPU by default, while unbound
workqueues must opt in via WQ_UNBOUND.
This default is suboptimal: most workloads benefit from unbound queues,
allowing the scheduler to place worker threads where they’re needed and
reducing noise when CPUs are isolated.
This patch adds a new WQ_PERCPU flag to the network subsystem, to explicitly
request the use of the per-CPU behavior. Both flags coexist for one release
cycle to allow callers to transition their calls.
Once migration is complete, WQ_UNBOUND can be removed and unbound will
become the implicit default.
With the introduction of the WQ_PERCPU flag (equivalent to !WQ_UNBOUND),
any alloc_workqueue() caller that doesn’t explicitly specify WQ_UNBOUND
must now use WQ_PERCPU.
All existing users have been updated accordingly.
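A purely illustrative sketch of the conversion pattern follows (the workqueue
name and flag combination are made up for this example and are not taken from
any hunk in this patch):

    /* Hypothetical caller, for illustration only. */
    #include <linux/workqueue.h>

    static struct workqueue_struct *example_wq;

    static int __init example_init(void)
    {
            /* Before: per-CPU placement was implied by not passing WQ_UNBOUND. */
            /* example_wq = alloc_workqueue("example_wq", WQ_MEM_RECLAIM, 0); */

            /* After: per-CPU placement is requested explicitly. */
            example_wq = alloc_workqueue("example_wq",
                                         WQ_MEM_RECLAIM | WQ_PERCPU, 0);
            return example_wq ? 0 : -ENOMEM;
    }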
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
CC: "David S. Miller" <davem@davemloft.net>
CC: Eric Dumazet <edumazet@google.com>
CC: Jakub Kicinski <kuba@kernel.org>
CC: Paolo Abeni <pabeni@redhat.com>
---
net/ceph/messenger.c | 3 ++-
net/core/sock_diag.c | 2 +-
net/rds/ib_rdma.c | 3 ++-
net/rxrpc/rxperf.c | 2 +-
net/smc/af_smc.c | 6 +++---
net/smc/smc_core.c | 2 +-
net/tls/tls_device.c | 2 +-
net/vmw_vsock/virtio_transport.c | 2 +-
net/vmw_vsock/vsock_loopback.c | 2 +-
9 files changed, 13 insertions(+), 11 deletions(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index d1b5705dc0c6..183c1e0b405a 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -252,7 +252,8 @@ int __init ceph_msgr_init(void)
* The number of active work items is limited by the number of
* connections, so leave @max_active at default.
*/
- ceph_msgr_wq = alloc_workqueue("ceph-msgr", WQ_MEM_RECLAIM, 0);
+ ceph_msgr_wq = alloc_workqueue("ceph-msgr",
+ WQ_MEM_RECLAIM | WQ_PERCPU, 0);
if (ceph_msgr_wq)
return 0;
diff --git a/net/core/sock_diag.c b/net/core/sock_diag.c
index a08eed9b9142..dcd7e8c02169 100644
--- a/net/core/sock_diag.c
+++ b/net/core/sock_diag.c
@@ -350,7 +350,7 @@ static struct pernet_operations diag_net_ops = {
static int __init sock_diag_init(void)
{
- broadcast_wq = alloc_workqueue("sock_diag_events", 0, 0);
+ broadcast_wq = alloc_workqueue("sock_diag_events", WQ_PERCPU, 0);
BUG_ON(!broadcast_wq);
return register_pernet_subsys(&diag_net_ops);
}
diff --git a/net/rds/ib_rdma.c b/net/rds/ib_rdma.c
index d1cfceeff133..6585164c7059 100644
--- a/net/rds/ib_rdma.c
+++ b/net/rds/ib_rdma.c
@@ -672,7 +672,8 @@ struct rds_ib_mr_pool *rds_ib_create_mr_pool(struct rds_ib_device *rds_ibdev,
int rds_ib_mr_init(void)
{
- rds_ib_mr_wq = alloc_workqueue("rds_mr_flushd", WQ_MEM_RECLAIM, 0);
+ rds_ib_mr_wq = alloc_workqueue("rds_mr_flushd",
+ WQ_MEM_RECLAIM | WQ_PERCPU, 0);
if (!rds_ib_mr_wq)
return -ENOMEM;
return 0;
diff --git a/net/rxrpc/rxperf.c b/net/rxrpc/rxperf.c
index e848a4777b8c..a92a2b05c19a 100644
--- a/net/rxrpc/rxperf.c
+++ b/net/rxrpc/rxperf.c
@@ -584,7 +584,7 @@ static int __init rxperf_init(void)
pr_info("Server registering\n");
- rxperf_workqueue = alloc_workqueue("rxperf", 0, 0);
+ rxperf_workqueue = alloc_workqueue("rxperf", WQ_PERCPU, 0);
if (!rxperf_workqueue)
goto error_workqueue;
diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
index 3e6cb35baf25..f69d5657438b 100644
--- a/net/smc/af_smc.c
+++ b/net/smc/af_smc.c
@@ -3518,15 +3518,15 @@ static int __init smc_init(void)
rc = -ENOMEM;
- smc_tcp_ls_wq = alloc_workqueue("smc_tcp_ls_wq", 0, 0);
+ smc_tcp_ls_wq = alloc_workqueue("smc_tcp_ls_wq", WQ_PERCPU, 0);
if (!smc_tcp_ls_wq)
goto out_pnet;
- smc_hs_wq = alloc_workqueue("smc_hs_wq", 0, 0);
+ smc_hs_wq = alloc_workqueue("smc_hs_wq", WQ_PERCPU, 0);
if (!smc_hs_wq)
goto out_alloc_tcp_ls_wq;
- smc_close_wq = alloc_workqueue("smc_close_wq", 0, 0);
+ smc_close_wq = alloc_workqueue("smc_close_wq", WQ_PERCPU, 0);
if (!smc_close_wq)
goto out_alloc_hs_wq;
diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
index ab870109f916..9d9a703e884e 100644
--- a/net/smc/smc_core.c
+++ b/net/smc/smc_core.c
@@ -896,7 +896,7 @@ static int smc_lgr_create(struct smc_sock *smc, struct smc_init_info *ini)
rc = SMC_CLC_DECL_MEM;
goto ism_put_vlan;
}
- lgr->tx_wq = alloc_workqueue("smc_tx_wq-%*phN", 0, 0,
+ lgr->tx_wq = alloc_workqueue("smc_tx_wq-%*phN", WQ_PERCPU, 0,
SMC_LGR_ID_SIZE, &lgr->id);
if (!lgr->tx_wq) {
rc = -ENOMEM;
diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
index f672a62a9a52..939466316761 100644
--- a/net/tls/tls_device.c
+++ b/net/tls/tls_device.c
@@ -1410,7 +1410,7 @@ int __init tls_device_init(void)
if (!dummy_page)
return -ENOMEM;
- destruct_wq = alloc_workqueue("ktls_device_destruct", 0, 0);
+ destruct_wq = alloc_workqueue("ktls_device_destruct", WQ_PERCPU, 0);
if (!destruct_wq) {
err = -ENOMEM;
goto err_free_dummy;
diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
index f0e48e6911fc..b3e960108e6b 100644
--- a/net/vmw_vsock/virtio_transport.c
+++ b/net/vmw_vsock/virtio_transport.c
@@ -916,7 +916,7 @@ static int __init virtio_vsock_init(void)
{
int ret;
- virtio_vsock_workqueue = alloc_workqueue("virtio_vsock", 0, 0);
+ virtio_vsock_workqueue = alloc_workqueue("virtio_vsock", WQ_PERCPU, 0);
if (!virtio_vsock_workqueue)
return -ENOMEM;
diff --git a/net/vmw_vsock/vsock_loopback.c b/net/vmw_vsock/vsock_loopback.c
index 6e78927a598e..bc2ff918b315 100644
--- a/net/vmw_vsock/vsock_loopback.c
+++ b/net/vmw_vsock/vsock_loopback.c
@@ -139,7 +139,7 @@ static int __init vsock_loopback_init(void)
struct vsock_loopback *vsock = &the_vsock_loopback;
int ret;
- vsock->workqueue = alloc_workqueue("vsock-loopback", 0, 0);
+ vsock->workqueue = alloc_workqueue("vsock-loopback", WQ_PERCPU, 0);
if (!vsock->workqueue)
return -ENOMEM;
--
2.49.0
* [PATCH v1 07/10] Workqueue: mm: WQ_PERCPU added to alloc_workqueue users
2025-06-25 10:49 [PATCH v1 00/10] Workqueue: replace system wq and change alloc_workqueue callers Marco Crivellari
` (5 preceding siblings ...)
2025-06-25 10:49 ` [PATCH v1 06/10] Workqueue: net: WQ_PERCPU added to alloc_workqueue users Marco Crivellari
@ 2025-06-25 10:49 ` Marco Crivellari
2025-06-25 10:49 ` [PATCH v1 08/10] Workqueue: fs: " Marco Crivellari
` (3 subsequent siblings)
10 siblings, 0 replies; 12+ messages in thread
From: Marco Crivellari @ 2025-06-25 10:49 UTC (permalink / raw)
To: linux-kernel
Cc: Tejun Heo, Lai Jiangshan, Thomas Gleixner, Frederic Weisbecker,
Sebastian Andrzej Siewior, Marco Crivellari, Michal Hocko,
Andrew Morton
Currently, if a user enqueues a work item using schedule_delayed_work(), the
used wq is "system_wq" (per-cpu wq), while queue_delayed_work() uses
WORK_CPU_UNBOUND (used when a CPU is not specified). The same applies to
schedule_work(), which uses system_wq, and to queue_work(), which again makes
use of WORK_CPU_UNBOUND.
This lack of consistency cannot be addressed without refactoring the API.
alloc_workqueue() treats all queues as per-CPU by default, while unbound
workqueues must opt in via WQ_UNBOUND.
This default is suboptimal: most workloads benefit from unbound queues,
allowing the scheduler to place worker threads where they’re needed and
reducing noise when CPUs are isolated.
This patch adds a new WQ_PERCPU flag to all the mm subsystem users to
explicitly request the use of the per-CPU behavior. Both flags coexist
for one release cycle to allow callers to transition their calls.
Once migration is complete, WQ_UNBOUND can be removed and unbound will
become the implicit default.
With the introduction of the WQ_PERCPU flag (equivalent to !WQ_UNBOUND),
any alloc_workqueue() caller that doesn’t explicitly specify WQ_UNBOUND
must now use WQ_PERCPU.
All existing users have been updated accordingly.
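To make the inconsistency described at the top of this changelog concrete,
here is a simplified sketch of what the two enqueue paths boil down to today
(the real helpers live in include/linux/workqueue.h and may differ in detail):

    #include <linux/workqueue.h>

    /* schedule_work() implicitly targets the per-CPU system_wq ... */
    static inline bool sketch_schedule_work(struct work_struct *work)
    {
            return queue_work(system_wq, work);
    }

    /* ... while queue_work() only says "no particular CPU"; on a per-CPU
     * workqueue the local CPU is still picked at queueing time.
     */
    static inline bool sketch_queue_work(struct workqueue_struct *wq,
                                         struct work_struct *work)
    {
            return queue_work_on(WORK_CPU_UNBOUND, wq, work);
    }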
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
CC: Andrew Morton <akpm@linux-foundation.org>
---
mm/backing-dev.c | 2 +-
mm/slub.c | 3 ++-
mm/vmstat.c | 3 ++-
3 files changed, 5 insertions(+), 3 deletions(-)
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 7e672424f928..3b392de6367e 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -969,7 +969,7 @@ static int __init cgwb_init(void)
* system_percpu_wq. Put them in a separate wq and limit concurrency.
* There's no point in executing many of these in parallel.
*/
- cgwb_release_wq = alloc_workqueue("cgwb_release", 0, 1);
+ cgwb_release_wq = alloc_workqueue("cgwb_release", WQ_PERCPU, 1);
if (!cgwb_release_wq)
return -ENOMEM;
diff --git a/mm/slub.c b/mm/slub.c
index b46f87662e71..cac9d5d7c924 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -6364,7 +6364,8 @@ void __init kmem_cache_init(void)
void __init kmem_cache_init_late(void)
{
#ifndef CONFIG_SLUB_TINY
- flushwq = alloc_workqueue("slub_flushwq", WQ_MEM_RECLAIM, 0);
+ flushwq = alloc_workqueue("slub_flushwq", WQ_MEM_RECLAIM | WQ_PERCPU,
+ 0);
WARN_ON(!flushwq);
#endif
}
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 4c268ce39ff2..57bf76b1d9d4 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -2244,7 +2244,8 @@ void __init init_mm_internals(void)
{
int ret __maybe_unused;
- mm_percpu_wq = alloc_workqueue("mm_percpu_wq", WQ_MEM_RECLAIM, 0);
+ mm_percpu_wq = alloc_workqueue("mm_percpu_wq",
+ WQ_MEM_RECLAIM | WQ_PERCPU, 0);
#ifdef CONFIG_SMP
ret = cpuhp_setup_state_nocalls(CPUHP_MM_VMSTAT_DEAD, "mm/vmstat:dead",
--
2.49.0
* [PATCH v1 08/10] Workqueue: fs: WQ_PERCPU added to alloc_workqueue users
2025-06-25 10:49 [PATCH v1 00/10] Workqueue: replace system wq and change alloc_workqueue callers Marco Crivellari
` (6 preceding siblings ...)
2025-06-25 10:49 ` [PATCH v1 07/10] Workqueue: mm: " Marco Crivellari
@ 2025-06-25 10:49 ` Marco Crivellari
2025-06-25 10:49 ` [PATCH v1 09/10] Workqueue: WQ_PERCPU added to all the remaining users Marco Crivellari
` (2 subsequent siblings)
10 siblings, 0 replies; 12+ messages in thread
From: Marco Crivellari @ 2025-06-25 10:49 UTC (permalink / raw)
To: linux-kernel
Cc: Tejun Heo, Lai Jiangshan, Thomas Gleixner, Frederic Weisbecker,
Sebastian Andrzej Siewior, Marco Crivellari, Michal Hocko,
Alexander Viro, Christian Brauner
Currently, if a user enqueues a work item using schedule_delayed_work(), the
used wq is "system_wq" (per-cpu wq), while queue_delayed_work() uses
WORK_CPU_UNBOUND (used when a CPU is not specified). The same applies to
schedule_work(), which uses system_wq, and to queue_work(), which again makes
use of WORK_CPU_UNBOUND.
This lack of consistency cannot be addressed without refactoring the API.
alloc_workqueue() treats all queues as per-CPU by default, while unbound
workqueues must opt in via WQ_UNBOUND.
This default is suboptimal: most workloads benefit from unbound queues,
allowing the scheduler to place worker threads where they’re needed and
reducing noise when CPUs are isolated.
This patch adds a new WQ_PERCPU flag to all the fs subsystem users to
explicitly request the use of the per-CPU behavior. Both flags coexist
for one release cycle to allow callers to transition their calls.
Once migration is complete, WQ_UNBOUND can be removed and unbound will
become the implicit default.
With the introduction of the WQ_PERCPU flag (equivalent to !WQ_UNBOUND),
any alloc_workqueue() caller that doesn’t explicitly specify WQ_UNBOUND
must now use WQ_PERCPU.
All existing users have been updated accordingly.
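Note that callers already passing WQ_UNBOUND (for example the kafsd workqueue
in the afs hunk below) are intentionally left untouched; only the implicit
per-CPU allocations gain the new flag. A hedged sketch of the resulting
convention, with made-up workqueue names:

    /* Illustrative only: two hypothetical allocations under the new convention. */
    #include <linux/workqueue.h>

    static struct workqueue_struct *io_wq;    /* wants CPU locality */
    static struct workqueue_struct *misc_wq;  /* does not */

    static int __init sketch_init(void)
    {
            io_wq = alloc_workqueue("sketch_io", WQ_MEM_RECLAIM | WQ_PERCPU, 0);
            if (!io_wq)
                    return -ENOMEM;

            misc_wq = alloc_workqueue("sketch_misc", WQ_UNBOUND, 0);
            if (!misc_wq) {
                    destroy_workqueue(io_wq);
                    return -ENOMEM;
            }
            return 0;
    }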
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
CC: Alexander Viro <viro@zeniv.linux.org.uk>
CC: Christian Brauner <brauner@kernel.org>
---
fs/afs/main.c | 4 ++--
fs/bcachefs/super.c | 10 +++++-----
fs/btrfs/async-thread.c | 3 +--
fs/btrfs/disk-io.c | 2 +-
fs/ceph/super.c | 2 +-
fs/dlm/lowcomms.c | 2 +-
fs/dlm/main.c | 2 +-
fs/fs-writeback.c | 2 +-
fs/gfs2/main.c | 5 +++--
fs/gfs2/ops_fstype.c | 6 ++++--
fs/ocfs2/dlm/dlmdomain.c | 3 ++-
fs/ocfs2/dlmfs/dlmfs.c | 3 ++-
fs/smb/client/cifsfs.c | 16 +++++++++++-----
fs/smb/server/ksmbd_work.c | 2 +-
fs/smb/server/transport_rdma.c | 3 ++-
fs/super.c | 3 ++-
fs/verity/verify.c | 2 +-
fs/xfs/xfs_log.c | 3 +--
fs/xfs/xfs_mru_cache.c | 3 ++-
fs/xfs/xfs_super.c | 15 ++++++++-------
20 files changed, 52 insertions(+), 39 deletions(-)
diff --git a/fs/afs/main.c b/fs/afs/main.c
index c845c5daaeba..6b7aab6abd78 100644
--- a/fs/afs/main.c
+++ b/fs/afs/main.c
@@ -168,13 +168,13 @@ static int __init afs_init(void)
printk(KERN_INFO "kAFS: Red Hat AFS client v0.1 registering.\n");
- afs_wq = alloc_workqueue("afs", 0, 0);
+ afs_wq = alloc_workqueue("afs", WQ_PERCPU, 0);
if (!afs_wq)
goto error_afs_wq;
afs_async_calls = alloc_workqueue("kafsd", WQ_MEM_RECLAIM | WQ_UNBOUND, 0);
if (!afs_async_calls)
goto error_async;
- afs_lock_manager = alloc_workqueue("kafs_lockd", WQ_MEM_RECLAIM, 0);
+ afs_lock_manager = alloc_workqueue("kafs_lockd", WQ_MEM_RECLAIM | WQ_PERCPU, 0);
if (!afs_lock_manager)
goto error_lockmgr;
diff --git a/fs/bcachefs/super.c b/fs/bcachefs/super.c
index a58edde43bee..8bba5347a36e 100644
--- a/fs/bcachefs/super.c
+++ b/fs/bcachefs/super.c
@@ -909,15 +909,15 @@ static struct bch_fs *bch2_fs_alloc(struct bch_sb *sb, struct bch_opts opts)
if (!(c->btree_update_wq = alloc_workqueue("bcachefs",
WQ_HIGHPRI|WQ_FREEZABLE|WQ_MEM_RECLAIM|WQ_UNBOUND, 512)) ||
!(c->btree_io_complete_wq = alloc_workqueue("bcachefs_btree_io",
- WQ_HIGHPRI|WQ_FREEZABLE|WQ_MEM_RECLAIM, 1)) ||
+ WQ_HIGHPRI | WQ_FREEZABLE | WQ_MEM_RECLAIM | WQ_PERCPU, 1)) ||
!(c->copygc_wq = alloc_workqueue("bcachefs_copygc",
- WQ_HIGHPRI|WQ_FREEZABLE|WQ_MEM_RECLAIM|WQ_CPU_INTENSIVE, 1)) ||
+ WQ_HIGHPRI | WQ_FREEZABLE | WQ_MEM_RECLAIM | WQ_CPU_INTENSIVE | WQ_PERCPU, 1)) ||
!(c->btree_read_complete_wq = alloc_workqueue("bcachefs_btree_read_complete",
- WQ_HIGHPRI|WQ_FREEZABLE|WQ_MEM_RECLAIM, 512)) ||
+ WQ_HIGHPRI | WQ_FREEZABLE | WQ_MEM_RECLAIM | WQ_PERCPU, 512)) ||
!(c->btree_write_submit_wq = alloc_workqueue("bcachefs_btree_write_sumit",
- WQ_HIGHPRI|WQ_FREEZABLE|WQ_MEM_RECLAIM, 1)) ||
+ WQ_HIGHPRI | WQ_FREEZABLE | WQ_MEM_RECLAIM | WQ_PERCPU, 1)) ||
!(c->write_ref_wq = alloc_workqueue("bcachefs_write_ref",
- WQ_FREEZABLE, 0)) ||
+ WQ_FREEZABLE | WQ_PERCPU, 0)) ||
#ifndef BCH_WRITE_REF_DEBUG
percpu_ref_init(&c->writes, bch2_writes_disabled,
PERCPU_REF_INIT_DEAD, GFP_KERNEL) ||
diff --git a/fs/btrfs/async-thread.c b/fs/btrfs/async-thread.c
index f3bffe08b290..0a84d86a942d 100644
--- a/fs/btrfs/async-thread.c
+++ b/fs/btrfs/async-thread.c
@@ -109,8 +109,7 @@ struct btrfs_workqueue *btrfs_alloc_workqueue(struct btrfs_fs_info *fs_info,
ret->thresh = thresh;
}
- ret->normal_wq = alloc_workqueue("btrfs-%s", flags, ret->current_active,
- name);
+ ret->normal_wq = alloc_workqueue("btrfs-%s", flags, ret->current_active, name);
if (!ret->normal_wq) {
kfree(ret);
return NULL;
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 3dd555db3d32..f817b29a43de 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -1963,7 +1963,7 @@ static int btrfs_init_workqueues(struct btrfs_fs_info *fs_info)
{
u32 max_active = fs_info->thread_pool_size;
unsigned int flags = WQ_MEM_RECLAIM | WQ_FREEZABLE | WQ_UNBOUND;
- unsigned int ordered_flags = WQ_MEM_RECLAIM | WQ_FREEZABLE;
+ unsigned int ordered_flags = WQ_MEM_RECLAIM | WQ_FREEZABLE | WQ_PERCPU;
fs_info->workers =
btrfs_alloc_workqueue(fs_info, "worker", flags, max_active, 16);
diff --git a/fs/ceph/super.c b/fs/ceph/super.c
index f3951253e393..a0302a004157 100644
--- a/fs/ceph/super.c
+++ b/fs/ceph/super.c
@@ -862,7 +862,7 @@ static struct ceph_fs_client *create_fs_client(struct ceph_mount_options *fsopt,
fsc->inode_wq = alloc_workqueue("ceph-inode", WQ_UNBOUND, 0);
if (!fsc->inode_wq)
goto fail_client;
- fsc->cap_wq = alloc_workqueue("ceph-cap", 0, 1);
+ fsc->cap_wq = alloc_workqueue("ceph-cap", WQ_PERCPU, 1);
if (!fsc->cap_wq)
goto fail_inode_wq;
diff --git a/fs/dlm/lowcomms.c b/fs/dlm/lowcomms.c
index 70abd4da17a6..6ced1fa90209 100644
--- a/fs/dlm/lowcomms.c
+++ b/fs/dlm/lowcomms.c
@@ -1702,7 +1702,7 @@ static int work_start(void)
return -ENOMEM;
}
- process_workqueue = alloc_workqueue("dlm_process", WQ_HIGHPRI | WQ_BH, 0);
+ process_workqueue = alloc_workqueue("dlm_process", WQ_HIGHPRI | WQ_BH | WQ_PERCPU, 0);
if (!process_workqueue) {
log_print("can't start dlm_process");
destroy_workqueue(io_workqueue);
diff --git a/fs/dlm/main.c b/fs/dlm/main.c
index 4887c8a05318..a44d16da7187 100644
--- a/fs/dlm/main.c
+++ b/fs/dlm/main.c
@@ -52,7 +52,7 @@ static int __init init_dlm(void)
if (error)
goto out_user;
- dlm_wq = alloc_workqueue("dlm_wq", 0, 0);
+ dlm_wq = alloc_workqueue("dlm_wq", WQ_PERCPU, 0);
if (!dlm_wq) {
error = -ENOMEM;
goto out_plock;
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index cf51a265bf27..4b1a53a3266b 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -1180,7 +1180,7 @@ void cgroup_writeback_umount(struct super_block *sb)
static int __init cgroup_writeback_init(void)
{
- isw_wq = alloc_workqueue("inode_switch_wbs", 0, 0);
+ isw_wq = alloc_workqueue("inode_switch_wbs", WQ_PERCPU, 0);
if (!isw_wq)
return -ENOMEM;
return 0;
diff --git a/fs/gfs2/main.c b/fs/gfs2/main.c
index 0727f60ad028..9d65719353fa 100644
--- a/fs/gfs2/main.c
+++ b/fs/gfs2/main.c
@@ -151,7 +151,8 @@ static int __init init_gfs2_fs(void)
error = -ENOMEM;
gfs2_recovery_wq = alloc_workqueue("gfs2_recovery",
- WQ_MEM_RECLAIM | WQ_FREEZABLE, 0);
+ WQ_MEM_RECLAIM | WQ_FREEZABLE | WQ_PERCPU,
+ 0);
if (!gfs2_recovery_wq)
goto fail_wq1;
@@ -160,7 +161,7 @@ static int __init init_gfs2_fs(void)
if (!gfs2_control_wq)
goto fail_wq2;
- gfs2_freeze_wq = alloc_workqueue("gfs2_freeze", 0, 0);
+ gfs2_freeze_wq = alloc_workqueue("gfs2_freeze", WQ_PERCPU, 0);
if (!gfs2_freeze_wq)
goto fail_wq3;
diff --git a/fs/gfs2/ops_fstype.c b/fs/gfs2/ops_fstype.c
index e83d293c3614..0dccb5882ef6 100644
--- a/fs/gfs2/ops_fstype.c
+++ b/fs/gfs2/ops_fstype.c
@@ -1189,13 +1189,15 @@ static int gfs2_fill_super(struct super_block *sb, struct fs_context *fc)
error = -ENOMEM;
sdp->sd_glock_wq = alloc_workqueue("gfs2-glock/%s",
- WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_FREEZABLE, 0,
+ WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_FREEZABLE | WQ_PERCPU,
+ 0,
sdp->sd_fsname);
if (!sdp->sd_glock_wq)
goto fail_free;
sdp->sd_delete_wq = alloc_workqueue("gfs2-delete/%s",
- WQ_MEM_RECLAIM | WQ_FREEZABLE, 0, sdp->sd_fsname);
+ WQ_MEM_RECLAIM | WQ_FREEZABLE | WQ_PERCPU, 0,
+ sdp->sd_fsname);
if (!sdp->sd_delete_wq)
goto fail_glock_wq;
diff --git a/fs/ocfs2/dlm/dlmdomain.c b/fs/ocfs2/dlm/dlmdomain.c
index 2018501b2249..2347a50f079b 100644
--- a/fs/ocfs2/dlm/dlmdomain.c
+++ b/fs/ocfs2/dlm/dlmdomain.c
@@ -1876,7 +1876,8 @@ static int dlm_join_domain(struct dlm_ctxt *dlm)
dlm_debug_init(dlm);
snprintf(wq_name, O2NM_MAX_NAME_LEN, "dlm_wq-%s", dlm->name);
- dlm->dlm_worker = alloc_workqueue(wq_name, WQ_MEM_RECLAIM, 0);
+ dlm->dlm_worker = alloc_workqueue(wq_name, WQ_MEM_RECLAIM | WQ_PERCPU,
+ 0);
if (!dlm->dlm_worker) {
status = -ENOMEM;
mlog_errno(status);
diff --git a/fs/ocfs2/dlmfs/dlmfs.c b/fs/ocfs2/dlmfs/dlmfs.c
index 5130ec44e5e1..0b730535b2c8 100644
--- a/fs/ocfs2/dlmfs/dlmfs.c
+++ b/fs/ocfs2/dlmfs/dlmfs.c
@@ -595,7 +595,8 @@ static int __init init_dlmfs_fs(void)
}
cleanup_inode = 1;
- user_dlm_worker = alloc_workqueue("user_dlm", WQ_MEM_RECLAIM, 0);
+ user_dlm_worker = alloc_workqueue("user_dlm",
+ WQ_MEM_RECLAIM | WQ_PERCPU, 0);
if (!user_dlm_worker) {
status = -ENOMEM;
goto bail;
diff --git a/fs/smb/client/cifsfs.c b/fs/smb/client/cifsfs.c
index a08c42363ffc..3d3a76fa7210 100644
--- a/fs/smb/client/cifsfs.c
+++ b/fs/smb/client/cifsfs.c
@@ -1883,7 +1883,9 @@ init_cifs(void)
cifs_dbg(VFS, "dir_cache_timeout set to max of 65000 seconds\n");
}
- cifsiod_wq = alloc_workqueue("cifsiod", WQ_FREEZABLE|WQ_MEM_RECLAIM, 0);
+ cifsiod_wq = alloc_workqueue("cifsiod",
+ WQ_FREEZABLE | WQ_MEM_RECLAIM | WQ_PERCPU,
+ 0);
if (!cifsiod_wq) {
rc = -ENOMEM;
goto out_clean_proc;
@@ -1911,28 +1913,32 @@ init_cifs(void)
}
cifsoplockd_wq = alloc_workqueue("cifsoplockd",
- WQ_FREEZABLE|WQ_MEM_RECLAIM, 0);
+ WQ_FREEZABLE | WQ_MEM_RECLAIM | WQ_PERCPU,
+ 0);
if (!cifsoplockd_wq) {
rc = -ENOMEM;
goto out_destroy_fileinfo_put_wq;
}
deferredclose_wq = alloc_workqueue("deferredclose",
- WQ_FREEZABLE|WQ_MEM_RECLAIM, 0);
+ WQ_FREEZABLE | WQ_MEM_RECLAIM | WQ_PERCPU,
+ 0);
if (!deferredclose_wq) {
rc = -ENOMEM;
goto out_destroy_cifsoplockd_wq;
}
serverclose_wq = alloc_workqueue("serverclose",
- WQ_FREEZABLE|WQ_MEM_RECLAIM, 0);
+ WQ_FREEZABLE | WQ_MEM_RECLAIM | WQ_PERCPU,
+ 0);
if (!serverclose_wq) {
rc = -ENOMEM;
goto out_destroy_deferredclose_wq;
}
cfid_put_wq = alloc_workqueue("cfid_put_wq",
- WQ_FREEZABLE|WQ_MEM_RECLAIM, 0);
+ WQ_FREEZABLE | WQ_MEM_RECLAIM | WQ_PERCPU,
+ 0);
if (!cfid_put_wq) {
rc = -ENOMEM;
goto out_destroy_serverclose_wq;
diff --git a/fs/smb/server/ksmbd_work.c b/fs/smb/server/ksmbd_work.c
index 72b00ca6e455..4a71f46d7020 100644
--- a/fs/smb/server/ksmbd_work.c
+++ b/fs/smb/server/ksmbd_work.c
@@ -78,7 +78,7 @@ int ksmbd_work_pool_init(void)
int ksmbd_workqueue_init(void)
{
- ksmbd_wq = alloc_workqueue("ksmbd-io", 0, 0);
+ ksmbd_wq = alloc_workqueue("ksmbd-io", WQ_PERCPU, 0);
if (!ksmbd_wq)
return -ENOMEM;
return 0;
diff --git a/fs/smb/server/transport_rdma.c b/fs/smb/server/transport_rdma.c
index 4998df04ab95..43b7062335fa 100644
--- a/fs/smb/server/transport_rdma.c
+++ b/fs/smb/server/transport_rdma.c
@@ -2198,7 +2198,8 @@ int ksmbd_rdma_init(void)
* for lack of credits
*/
smb_direct_wq = alloc_workqueue("ksmbd-smb_direct-wq",
- WQ_HIGHPRI | WQ_MEM_RECLAIM, 0);
+ WQ_HIGHPRI | WQ_MEM_RECLAIM | WQ_PERCPU,
+ 0);
if (!smb_direct_wq)
return -ENOMEM;
diff --git a/fs/super.c b/fs/super.c
index 97a17f9d9023..0a9af48f30dd 100644
--- a/fs/super.c
+++ b/fs/super.c
@@ -2174,7 +2174,8 @@ int sb_init_dio_done_wq(struct super_block *sb)
{
struct workqueue_struct *old;
struct workqueue_struct *wq = alloc_workqueue("dio/%s",
- WQ_MEM_RECLAIM, 0,
+ WQ_MEM_RECLAIM | WQ_PERCPU,
+ 0,
sb->s_id);
if (!wq)
return -ENOMEM;
diff --git a/fs/verity/verify.c b/fs/verity/verify.c
index 4fcad0825a12..b8f53d1cfd20 100644
--- a/fs/verity/verify.c
+++ b/fs/verity/verify.c
@@ -357,7 +357,7 @@ void __init fsverity_init_workqueue(void)
* latency on ARM64.
*/
fsverity_read_workqueue = alloc_workqueue("fsverity_read_queue",
- WQ_HIGHPRI,
+ WQ_HIGHPRI | WQ_PERCPU,
num_online_cpus());
if (!fsverity_read_workqueue)
panic("failed to allocate fsverity_read_queue");
diff --git a/fs/xfs/xfs_log.c b/fs/xfs/xfs_log.c
index 6493bdb57351..3fecb066eeb3 100644
--- a/fs/xfs/xfs_log.c
+++ b/fs/xfs/xfs_log.c
@@ -1489,8 +1489,7 @@ xlog_alloc_log(
log->l_iclog->ic_prev = prev_iclog; /* re-write 1st prev ptr */
log->l_ioend_workqueue = alloc_workqueue("xfs-log/%s",
- XFS_WQFLAGS(WQ_FREEZABLE | WQ_MEM_RECLAIM |
- WQ_HIGHPRI),
+ XFS_WQFLAGS(WQ_FREEZABLE | WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_PERCPU),
0, mp->m_super->s_id);
if (!log->l_ioend_workqueue)
goto out_free_iclog;
diff --git a/fs/xfs/xfs_mru_cache.c b/fs/xfs/xfs_mru_cache.c
index d0f5b403bdbe..152032f68013 100644
--- a/fs/xfs/xfs_mru_cache.c
+++ b/fs/xfs/xfs_mru_cache.c
@@ -293,7 +293,8 @@ int
xfs_mru_cache_init(void)
{
xfs_mru_reap_wq = alloc_workqueue("xfs_mru_cache",
- XFS_WQFLAGS(WQ_MEM_RECLAIM | WQ_FREEZABLE), 1);
+ XFS_WQFLAGS(WQ_MEM_RECLAIM | WQ_FREEZABLE | WQ_PERCPU),
+ 1);
if (!xfs_mru_reap_wq)
return -ENOMEM;
return 0;
diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
index b2dd0c0bf509..38584c5618f4 100644
--- a/fs/xfs/xfs_super.c
+++ b/fs/xfs/xfs_super.c
@@ -565,19 +565,19 @@ xfs_init_mount_workqueues(
struct xfs_mount *mp)
{
mp->m_buf_workqueue = alloc_workqueue("xfs-buf/%s",
- XFS_WQFLAGS(WQ_FREEZABLE | WQ_MEM_RECLAIM),
+ XFS_WQFLAGS(WQ_FREEZABLE | WQ_MEM_RECLAIM | WQ_PERCPU),
1, mp->m_super->s_id);
if (!mp->m_buf_workqueue)
goto out;
mp->m_unwritten_workqueue = alloc_workqueue("xfs-conv/%s",
- XFS_WQFLAGS(WQ_FREEZABLE | WQ_MEM_RECLAIM),
+ XFS_WQFLAGS(WQ_FREEZABLE | WQ_MEM_RECLAIM | WQ_PERCPU),
0, mp->m_super->s_id);
if (!mp->m_unwritten_workqueue)
goto out_destroy_buf;
mp->m_reclaim_workqueue = alloc_workqueue("xfs-reclaim/%s",
- XFS_WQFLAGS(WQ_FREEZABLE | WQ_MEM_RECLAIM),
+ XFS_WQFLAGS(WQ_FREEZABLE | WQ_MEM_RECLAIM | WQ_PERCPU),
0, mp->m_super->s_id);
if (!mp->m_reclaim_workqueue)
goto out_destroy_unwritten;
@@ -589,13 +589,14 @@ xfs_init_mount_workqueues(
goto out_destroy_reclaim;
mp->m_inodegc_wq = alloc_workqueue("xfs-inodegc/%s",
- XFS_WQFLAGS(WQ_FREEZABLE | WQ_MEM_RECLAIM),
+ XFS_WQFLAGS(WQ_FREEZABLE | WQ_MEM_RECLAIM | WQ_PERCPU),
1, mp->m_super->s_id);
if (!mp->m_inodegc_wq)
goto out_destroy_blockgc;
mp->m_sync_workqueue = alloc_workqueue("xfs-sync/%s",
- XFS_WQFLAGS(WQ_FREEZABLE), 0, mp->m_super->s_id);
+ XFS_WQFLAGS(WQ_FREEZABLE | WQ_PERCPU), 0,
+ mp->m_super->s_id);
if (!mp->m_sync_workqueue)
goto out_destroy_inodegc;
@@ -2499,8 +2500,8 @@ xfs_init_workqueues(void)
* AGs in all the filesystems mounted. Hence use the default large
* max_active value for this workqueue.
*/
- xfs_alloc_wq = alloc_workqueue("xfsalloc",
- XFS_WQFLAGS(WQ_MEM_RECLAIM | WQ_FREEZABLE), 0);
+ xfs_alloc_wq = alloc_workqueue("xfsalloc", XFS_WQFLAGS(WQ_MEM_RECLAIM | WQ_FREEZABLE | WQ_PERCPU),
+ 0);
if (!xfs_alloc_wq)
return -ENOMEM;
--
2.49.0
* [PATCH v1 09/10] Workqueue: WQ_PERCPU added to all the remaining users
2025-06-25 10:49 [PATCH v1 00/10] Workqueue: replace system wq and change alloc_workqueue callers Marco Crivellari
` (7 preceding siblings ...)
2025-06-25 10:49 ` [PATCH v1 08/10] Workqueue: fs: " Marco Crivellari
@ 2025-06-25 10:49 ` Marco Crivellari
2025-06-25 10:49 ` [PATCH v1 10/10] [Doc] Workqueue: WQ_UNBOUND doc upgraded Marco Crivellari
2025-07-11 10:32 ` [PATCH v1 00/10] Workqueue: replace system wq and change alloc_workqueue callers Marco Crivellari
10 siblings, 0 replies; 12+ messages in thread
From: Marco Crivellari @ 2025-06-25 10:49 UTC (permalink / raw)
To: linux-kernel
Cc: Tejun Heo, Lai Jiangshan, Thomas Gleixner, Frederic Weisbecker,
Sebastian Andrzej Siewior, Marco Crivellari, Michal Hocko
Currently, if a user enqueues a work item using schedule_delayed_work(), the
used wq is "system_wq" (per-cpu wq), while queue_delayed_work() uses
WORK_CPU_UNBOUND (used when a CPU is not specified). The same applies to
schedule_work(), which uses system_wq, and to queue_work(), which again makes
use of WORK_CPU_UNBOUND.
This lack of consistency cannot be addressed without refactoring the API.
alloc_workqueue() treats all queues as per-CPU by default, while unbound
workqueues must opt in via WQ_UNBOUND.
This default is suboptimal: most workloads benefit from unbound queues,
allowing the scheduler to place worker threads where they’re needed and
reducing noise when CPUs are isolated.
This patch adds a new WQ_PERCPU flag to explicitly request the use of
the per-CPU behavior. Both flags coexist for one release cycle to allow
callers to transition their calls.
Once migration is complete, WQ_UNBOUND can be removed and unbound will
become the implicit default.
With the introduction of the WQ_PERCPU flag (equivalent to !WQ_UNBOUND),
any alloc_workqueue() caller that doesn’t explicitly specify WQ_UNBOUND
must now use WQ_PERCPU.
All existing users have been updated accordingly.
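For readers not following the include/linux/workqueue.h and kernel/workqueue.c
changes in this patch, here is a rough sketch of the rule being established;
the bit value and the check are illustrative assumptions, not the exact code
added by the hunks:

    /* Illustrative only; real values come from include/linux/workqueue.h. */
    #define SKETCH_WQ_UNBOUND  (1 << 1)  /* existing: opt in to unbound pools */
    #define SKETCH_WQ_PERCPU   (1 << 8)  /* new: explicitly request per-CPU pools */

    /* During the transition, every alloc_workqueue() caller is expected to
     * set exactly one of the two placement flags.
     */
    static bool sketch_placement_is_explicit(unsigned int flags)
    {
            bool unbound = flags & SKETCH_WQ_UNBOUND;
            bool percpu = flags & SKETCH_WQ_PERCPU;

            return unbound != percpu;
    }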
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
---
block/bio-integrity-auto.c | 5 ++-
block/bio.c | 3 +-
block/blk-core.c | 3 +-
block/blk-throttle.c | 3 +-
block/blk-zoned.c | 3 +-
crypto/cryptd.c | 3 +-
drivers/acpi/ec.c | 3 +-
drivers/acpi/osl.c | 4 +-
drivers/acpi/thermal.c | 3 +-
drivers/ata/libata-sff.c | 3 +-
drivers/base/core.c | 2 +-
drivers/block/aoe/aoemain.c | 2 +-
drivers/block/rbd.c | 2 +-
drivers/block/rnbd/rnbd-clt.c | 2 +-
drivers/block/sunvdc.c | 2 +-
drivers/block/virtio_blk.c | 2 +-
drivers/bus/mhi/ep/main.c | 2 +-
drivers/char/tpm/tpm-dev-common.c | 3 +-
drivers/char/xillybus/xillybus_core.c | 2 +-
drivers/char/xillybus/xillyusb.c | 4 +-
drivers/cpufreq/tegra194-cpufreq.c | 3 +-
drivers/crypto/atmel-i2c.c | 2 +-
drivers/crypto/cavium/nitrox/nitrox_mbx.c | 2 +-
drivers/crypto/intel/qat/qat_common/adf_aer.c | 4 +-
drivers/crypto/intel/qat/qat_common/adf_isr.c | 3 +-
.../crypto/intel/qat/qat_common/adf_sriov.c | 3 +-
.../crypto/intel/qat/qat_common/adf_vf_isr.c | 3 +-
drivers/firewire/core-transaction.c | 3 +-
drivers/firewire/ohci.c | 3 +-
drivers/gpu/drm/amd/amdkfd/kfd_process.c | 3 +-
drivers/gpu/drm/bridge/analogix/anx7625.c | 3 +-
.../drm/i915/display/intel_display_driver.c | 3 +-
drivers/gpu/drm/i915/i915_driver.c | 3 +-
.../gpu/drm/i915/selftests/i915_sw_fence.c | 2 +-
.../gpu/drm/i915/selftests/mock_gem_device.c | 2 +-
drivers/gpu/drm/nouveau/nouveau_drm.c | 2 +-
drivers/gpu/drm/nouveau/nouveau_sched.c | 3 +-
drivers/gpu/drm/radeon/radeon_display.c | 3 +-
drivers/gpu/drm/xe/xe_device.c | 4 +-
drivers/gpu/drm/xe/xe_ggtt.c | 2 +-
drivers/gpu/drm/xe/xe_hw_engine_group.c | 3 +-
drivers/gpu/drm/xe/xe_sriov.c | 2 +-
drivers/greybus/operation.c | 2 +-
drivers/hid/hid-nintendo.c | 3 +-
drivers/hv/mshv_eventfd.c | 2 +-
drivers/i3c/master.c | 2 +-
drivers/infiniband/core/cm.c | 2 +-
drivers/infiniband/core/device.c | 4 +-
drivers/infiniband/hw/hfi1/init.c | 3 +-
drivers/infiniband/hw/hfi1/opfn.c | 3 +-
drivers/infiniband/hw/mlx4/cm.c | 2 +-
drivers/infiniband/sw/rdmavt/cq.c | 3 +-
drivers/infiniband/ulp/iser/iscsi_iser.c | 2 +-
drivers/infiniband/ulp/isert/ib_isert.c | 2 +-
drivers/infiniband/ulp/rtrs/rtrs-clt.c | 2 +-
drivers/infiniband/ulp/rtrs/rtrs-srv.c | 2 +-
drivers/input/mouse/psmouse-smbus.c | 2 +-
drivers/isdn/capi/kcapi.c | 2 +-
drivers/md/bcache/btree.c | 3 +-
drivers/md/bcache/super.c | 10 +++--
drivers/md/bcache/writeback.c | 2 +-
drivers/md/dm-bufio.c | 3 +-
drivers/md/dm-cache-target.c | 3 +-
drivers/md/dm-clone-target.c | 3 +-
drivers/md/dm-crypt.c | 6 ++-
drivers/md/dm-delay.c | 4 +-
drivers/md/dm-integrity.c | 15 ++++---
drivers/md/dm-kcopyd.c | 3 +-
drivers/md/dm-log-userspace-base.c | 3 +-
drivers/md/dm-mpath.c | 5 ++-
drivers/md/dm-raid1.c | 5 ++-
drivers/md/dm-snap-persistent.c | 3 +-
drivers/md/dm-stripe.c | 2 +-
drivers/md/dm-verity-target.c | 4 +-
drivers/md/dm-writecache.c | 3 +-
drivers/md/dm.c | 3 +-
drivers/md/md.c | 4 +-
drivers/media/pci/ddbridge/ddbridge-core.c | 2 +-
.../platform/mediatek/mdp3/mtk-mdp3-core.c | 6 ++-
drivers/message/fusion/mptbase.c | 7 +++-
drivers/mmc/core/block.c | 3 +-
drivers/mmc/host/omap.c | 2 +-
drivers/net/can/spi/hi311x.c | 3 +-
drivers/net/can/spi/mcp251x.c | 3 +-
.../net/ethernet/cavium/liquidio/lio_core.c | 2 +-
.../net/ethernet/cavium/liquidio/lio_main.c | 8 ++--
.../ethernet/cavium/liquidio/lio_vf_main.c | 3 +-
.../cavium/liquidio/request_manager.c | 2 +-
.../cavium/liquidio/response_manager.c | 3 +-
.../net/ethernet/freescale/dpaa2/dpaa2-eth.c | 2 +-
.../hisilicon/hns3/hns3pf/hclge_main.c | 3 +-
drivers/net/ethernet/intel/fm10k/fm10k_main.c | 2 +-
drivers/net/ethernet/intel/i40e/i40e_main.c | 2 +-
.../net/ethernet/marvell/octeontx2/af/cgx.c | 2 +-
.../marvell/octeontx2/af/mcs_rvu_if.c | 2 +-
.../ethernet/marvell/octeontx2/af/rvu_cgx.c | 2 +-
.../ethernet/marvell/octeontx2/af/rvu_rep.c | 2 +-
.../marvell/octeontx2/nic/cn10k_ipsec.c | 3 +-
.../ethernet/marvell/prestera/prestera_main.c | 2 +-
.../ethernet/marvell/prestera/prestera_pci.c | 2 +-
drivers/net/ethernet/mellanox/mlxsw/core.c | 4 +-
drivers/net/ethernet/netronome/nfp/nfp_main.c | 2 +-
drivers/net/ethernet/qlogic/qed/qed_main.c | 3 +-
drivers/net/ethernet/wiznet/w5100.c | 2 +-
drivers/net/fjes/fjes_main.c | 5 ++-
drivers/net/wireguard/device.c | 6 ++-
drivers/net/wireless/ath/ath6kl/usb.c | 2 +-
.../net/wireless/marvell/libertas/if_sdio.c | 3 +-
.../net/wireless/marvell/libertas/if_spi.c | 3 +-
.../net/wireless/marvell/libertas_tf/main.c | 2 +-
drivers/net/wireless/quantenna/qtnfmac/core.c | 3 +-
drivers/net/wireless/realtek/rtlwifi/base.c | 2 +-
drivers/net/wireless/realtek/rtw88/usb.c | 3 +-
drivers/net/wireless/silabs/wfx/main.c | 2 +-
drivers/net/wireless/st/cw1200/bh.c | 4 +-
drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c | 3 +-
drivers/net/wwan/wwan_hwsim.c | 2 +-
drivers/nvme/host/tcp.c | 2 +
drivers/nvme/target/core.c | 5 ++-
drivers/nvme/target/fc.c | 6 +--
drivers/nvme/target/tcp.c | 2 +-
drivers/pci/endpoint/functions/pci-epf-mhi.c | 2 +-
drivers/pci/endpoint/functions/pci-epf-ntb.c | 5 ++-
drivers/pci/endpoint/functions/pci-epf-test.c | 3 +-
drivers/pci/endpoint/functions/pci-epf-vntb.c | 5 ++-
drivers/pci/hotplug/pnv_php.c | 3 +-
drivers/pci/hotplug/shpchp_core.c | 3 +-
.../platform/surface/surface_acpi_notify.c | 2 +-
drivers/power/supply/ab8500_btemp.c | 3 +-
drivers/power/supply/ipaq_micro_battery.c | 3 +-
drivers/rapidio/rio.c | 2 +-
drivers/s390/char/tape_3590.c | 2 +-
drivers/scsi/be2iscsi/be_main.c | 3 +-
drivers/scsi/bnx2fc/bnx2fc_fcoe.c | 2 +-
drivers/scsi/device_handler/scsi_dh_alua.c | 2 +-
drivers/scsi/fcoe/fcoe.c | 2 +-
drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c | 3 +-
drivers/scsi/lpfc/lpfc_init.c | 2 +-
drivers/scsi/pm8001/pm8001_init.c | 2 +-
drivers/scsi/qedf/qedf_main.c | 15 ++++---
drivers/scsi/qedi/qedi_main.c | 2 +-
drivers/scsi/qla2xxx/qla_os.c | 2 +-
drivers/scsi/qla2xxx/qla_target.c | 2 +-
drivers/scsi/qla2xxx/tcm_qla2xxx.c | 2 +-
drivers/scsi/qla4xxx/ql4_os.c | 3 +-
drivers/scsi/scsi_transport_fc.c | 7 ++--
drivers/soc/fsl/qbman/qman.c | 2 +-
drivers/staging/greybus/sdio.c | 2 +-
drivers/target/target_core_transport.c | 4 +-
drivers/target/target_core_xcopy.c | 2 +-
drivers/target/tcm_fc/tfc_conf.c | 2 +-
drivers/usb/core/hub.c | 2 +-
drivers/usb/gadget/function/f_hid.c | 3 +-
drivers/usb/storage/uas.c | 2 +-
drivers/usb/typec/anx7411.c | 3 +-
drivers/vdpa/vdpa_user/vduse_dev.c | 3 +-
drivers/virt/acrn/irqfd.c | 3 +-
drivers/virtio/virtio_balloon.c | 3 +-
drivers/xen/privcmd.c | 3 +-
include/linux/workqueue.h | 4 +-
kernel/bpf/cgroup.c | 3 +-
kernel/cgroup/cgroup-v1.c | 2 +-
kernel/cgroup/cgroup.c | 2 +-
kernel/padata.c | 5 ++-
kernel/power/main.c | 2 +-
kernel/rcu/tree.c | 4 +-
kernel/workqueue.c | 41 ++++++++++++++-----
virt/kvm/eventfd.c | 2 +-
168 files changed, 333 insertions(+), 221 deletions(-)
diff --git a/block/bio-integrity-auto.c b/block/bio-integrity-auto.c
index e524c609be50..b23432f19a1e 100644
--- a/block/bio-integrity-auto.c
+++ b/block/bio-integrity-auto.c
@@ -182,8 +182,9 @@ static int __init blk_integrity_auto_init(void)
* kintegrityd won't block much but may burn a lot of CPU cycles.
* Make it highpri CPU intensive wq with max concurrency of 1.
*/
- kintegrityd_wq = alloc_workqueue("kintegrityd", WQ_MEM_RECLAIM |
- WQ_HIGHPRI | WQ_CPU_INTENSIVE, 1);
+ kintegrityd_wq = alloc_workqueue("kintegrityd",
+ WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_CPU_INTENSIVE | WQ_PERCPU,
+ 1);
if (!kintegrityd_wq)
panic("Failed to create kintegrityd\n");
return 0;
diff --git a/block/bio.c b/block/bio.c
index 4e6c85a33d74..b2a782465cec 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1715,7 +1715,8 @@ int bioset_init(struct bio_set *bs,
if (flags & BIOSET_NEED_RESCUER) {
bs->rescue_workqueue = alloc_workqueue("bioset",
- WQ_MEM_RECLAIM, 0);
+ WQ_MEM_RECLAIM | WQ_PERCPU,
+ 0);
if (!bs->rescue_workqueue)
goto bad;
}
diff --git a/block/blk-core.c b/block/blk-core.c
index e8cc270a453f..d2d7d54a4db8 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1272,7 +1272,8 @@ int __init blk_dev_init(void)
/* used for unplugging and affects IO latency/throughput - HIGHPRI */
kblockd_workqueue = alloc_workqueue("kblockd",
- WQ_MEM_RECLAIM | WQ_HIGHPRI, 0);
+ WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_PERCPU,
+ 0);
if (!kblockd_workqueue)
panic("Failed to create kblockd\n");
diff --git a/block/blk-throttle.c b/block/blk-throttle.c
index d6dd2e047874..a6dd9f0d631f 100644
--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ -1719,7 +1719,8 @@ void blk_throtl_exit(struct gendisk *disk)
static int __init throtl_init(void)
{
- kthrotld_workqueue = alloc_workqueue("kthrotld", WQ_MEM_RECLAIM, 0);
+ kthrotld_workqueue = alloc_workqueue("kthrotld",
+ WQ_MEM_RECLAIM | WQ_PERCPU, 0);
if (!kthrotld_workqueue)
panic("Failed to create kthrotld\n");
diff --git a/block/blk-zoned.c b/block/blk-zoned.c
index 0c77244a35c9..1d614b715158 100644
--- a/block/blk-zoned.c
+++ b/block/blk-zoned.c
@@ -1362,7 +1362,8 @@ static int disk_alloc_zone_resources(struct gendisk *disk,
goto free_hash;
disk->zone_wplugs_wq =
- alloc_workqueue("%s_zwplugs", WQ_MEM_RECLAIM | WQ_HIGHPRI,
+ alloc_workqueue("%s_zwplugs",
+ WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_PERCPU,
pool_size, disk->disk_name);
if (!disk->zone_wplugs_wq)
goto destroy_pool;
diff --git a/crypto/cryptd.c b/crypto/cryptd.c
index 31d022d47f7a..eaf970086f8d 100644
--- a/crypto/cryptd.c
+++ b/crypto/cryptd.c
@@ -1109,7 +1109,8 @@ static int __init cryptd_init(void)
{
int err;
- cryptd_wq = alloc_workqueue("cryptd", WQ_MEM_RECLAIM | WQ_CPU_INTENSIVE,
+ cryptd_wq = alloc_workqueue("cryptd",
+ WQ_MEM_RECLAIM | WQ_CPU_INTENSIVE | WQ_PERCPU,
1);
if (!cryptd_wq)
return -ENOMEM;
diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c
index 8db09d81918f..f2b72bf8fa65 100644
--- a/drivers/acpi/ec.c
+++ b/drivers/acpi/ec.c
@@ -2273,7 +2273,8 @@ static int acpi_ec_init_workqueues(void)
ec_wq = alloc_ordered_workqueue("kec", 0);
if (!ec_query_wq)
- ec_query_wq = alloc_workqueue("kec_query", 0, ec_max_queries);
+ ec_query_wq = alloc_workqueue("kec_query", WQ_PERCPU,
+ ec_max_queries);
if (!ec_wq || !ec_query_wq) {
acpi_ec_destroy_workqueues();
diff --git a/drivers/acpi/osl.c b/drivers/acpi/osl.c
index a79a5d47bdb8..05393a7315fe 100644
--- a/drivers/acpi/osl.c
+++ b/drivers/acpi/osl.c
@@ -1694,8 +1694,8 @@ acpi_status __init acpi_os_initialize(void)
acpi_status __init acpi_os_initialize1(void)
{
- kacpid_wq = alloc_workqueue("kacpid", 0, 1);
- kacpi_notify_wq = alloc_workqueue("kacpi_notify", 0, 0);
+ kacpid_wq = alloc_workqueue("kacpid", WQ_PERCPU, 1);
+ kacpi_notify_wq = alloc_workqueue("kacpi_notify", WQ_PERCPU, 0);
kacpi_hotplug_wq = alloc_ordered_workqueue("kacpi_hotplug", 0);
BUG_ON(!kacpid_wq);
BUG_ON(!kacpi_notify_wq);
diff --git a/drivers/acpi/thermal.c b/drivers/acpi/thermal.c
index 0c874186f8ae..9f5a2f288d32 100644
--- a/drivers/acpi/thermal.c
+++ b/drivers/acpi/thermal.c
@@ -1060,7 +1060,8 @@ static int __init acpi_thermal_init(void)
}
acpi_thermal_pm_queue = alloc_workqueue("acpi_thermal_pm",
- WQ_HIGHPRI | WQ_MEM_RECLAIM, 0);
+ WQ_HIGHPRI | WQ_MEM_RECLAIM | WQ_PERCPU,
+ 0);
if (!acpi_thermal_pm_queue)
return -ENODEV;
diff --git a/drivers/ata/libata-sff.c b/drivers/ata/libata-sff.c
index 5a46c066abc3..ae6879fa868d 100644
--- a/drivers/ata/libata-sff.c
+++ b/drivers/ata/libata-sff.c
@@ -3199,7 +3199,8 @@ void ata_sff_port_init(struct ata_port *ap)
int __init ata_sff_init(void)
{
- ata_sff_wq = alloc_workqueue("ata_sff", WQ_MEM_RECLAIM, WQ_MAX_ACTIVE);
+ ata_sff_wq = alloc_workqueue("ata_sff", WQ_MEM_RECLAIM | WQ_PERCPU,
+ WQ_MAX_ACTIVE);
if (!ata_sff_wq)
return -ENOMEM;
diff --git a/drivers/base/core.c b/drivers/base/core.c
index d2f9d3a59d6b..fa43d02c56c1 100644
--- a/drivers/base/core.c
+++ b/drivers/base/core.c
@@ -4115,7 +4115,7 @@ int __init devices_init(void)
sysfs_dev_char_kobj = kobject_create_and_add("char", dev_kobj);
if (!sysfs_dev_char_kobj)
goto char_kobj_err;
- device_link_wq = alloc_workqueue("device_link_wq", 0, 0);
+ device_link_wq = alloc_workqueue("device_link_wq", WQ_PERCPU, 0);
if (!device_link_wq)
goto wq_err;
diff --git a/drivers/block/aoe/aoemain.c b/drivers/block/aoe/aoemain.c
index cdf6e4041bb9..3b21750038ee 100644
--- a/drivers/block/aoe/aoemain.c
+++ b/drivers/block/aoe/aoemain.c
@@ -44,7 +44,7 @@ aoe_init(void)
{
int ret;
- aoe_wq = alloc_workqueue("aoe_wq", 0, 0);
+ aoe_wq = alloc_workqueue("aoe_wq", WQ_PERCPU, 0);
if (!aoe_wq)
return -ENOMEM;
diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index faafd7ff43d6..af0e21149dbc 100644
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -7389,7 +7389,7 @@ static int __init rbd_init(void)
* The number of active work items is limited by the number of
* rbd devices * queue depth, so leave @max_active at default.
*/
- rbd_wq = alloc_workqueue(RBD_DRV_NAME, WQ_MEM_RECLAIM, 0);
+ rbd_wq = alloc_workqueue(RBD_DRV_NAME, WQ_MEM_RECLAIM | WQ_PERCPU, 0);
if (!rbd_wq) {
rc = -ENOMEM;
goto err_out_slab;
diff --git a/drivers/block/rnbd/rnbd-clt.c b/drivers/block/rnbd/rnbd-clt.c
index 15627417f12e..b3a0470f9e80 100644
--- a/drivers/block/rnbd/rnbd-clt.c
+++ b/drivers/block/rnbd/rnbd-clt.c
@@ -1809,7 +1809,7 @@ static int __init rnbd_client_init(void)
unregister_blkdev(rnbd_client_major, "rnbd");
return err;
}
- rnbd_clt_wq = alloc_workqueue("rnbd_clt_wq", 0, 0);
+ rnbd_clt_wq = alloc_workqueue("rnbd_clt_wq", WQ_PERCPU, 0);
if (!rnbd_clt_wq) {
pr_err("Failed to load module, alloc_workqueue failed.\n");
rnbd_clt_destroy_sysfs_files();
diff --git a/drivers/block/sunvdc.c b/drivers/block/sunvdc.c
index 442546b05df8..851763e5dd18 100644
--- a/drivers/block/sunvdc.c
+++ b/drivers/block/sunvdc.c
@@ -1215,7 +1215,7 @@ static int __init vdc_init(void)
{
int err;
- sunvdc_wq = alloc_workqueue("sunvdc", 0, 0);
+ sunvdc_wq = alloc_workqueue("sunvdc", WQ_PERCPU, 0);
if (!sunvdc_wq)
return -ENOMEM;
diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 7cffea01d868..a5a48f976a20 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -1683,7 +1683,7 @@ static int __init virtio_blk_init(void)
{
int error;
- virtblk_wq = alloc_workqueue("virtio-blk", 0, 0);
+ virtblk_wq = alloc_workqueue("virtio-blk", WQ_PERCPU, 0);
if (!virtblk_wq)
return -ENOMEM;
diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index b3eafcf2a2c5..bee0b794c3e3 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -1507,7 +1507,7 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
INIT_WORK(&mhi_cntrl->cmd_ring_work, mhi_ep_cmd_ring_worker);
INIT_WORK(&mhi_cntrl->ch_ring_work, mhi_ep_ch_ring_worker);
- mhi_cntrl->wq = alloc_workqueue("mhi_ep_wq", 0, 0);
+ mhi_cntrl->wq = alloc_workqueue("mhi_ep_wq", WQ_PERCPU, 0);
if (!mhi_cntrl->wq) {
ret = -ENOMEM;
goto err_destroy_ring_item_cache;
diff --git a/drivers/char/tpm/tpm-dev-common.c b/drivers/char/tpm/tpm-dev-common.c
index 11deaf538e87..2b46db0fd4b3 100644
--- a/drivers/char/tpm/tpm-dev-common.c
+++ b/drivers/char/tpm/tpm-dev-common.c
@@ -275,7 +275,8 @@ void tpm_common_release(struct file *file, struct file_priv *priv)
int __init tpm_dev_common_init(void)
{
- tpm_dev_wq = alloc_workqueue("tpm_dev_wq", WQ_MEM_RECLAIM, 0);
+ tpm_dev_wq = alloc_workqueue("tpm_dev_wq", WQ_MEM_RECLAIM | WQ_PERCPU,
+ 0);
return !tpm_dev_wq ? -ENOMEM : 0;
}
diff --git a/drivers/char/xillybus/xillybus_core.c b/drivers/char/xillybus/xillybus_core.c
index 11b7c4749274..3ecd589a22b1 100644
--- a/drivers/char/xillybus/xillybus_core.c
+++ b/drivers/char/xillybus/xillybus_core.c
@@ -1974,7 +1974,7 @@ EXPORT_SYMBOL(xillybus_endpoint_remove);
static int __init xillybus_init(void)
{
- xillybus_wq = alloc_workqueue(xillyname, 0, 0);
+ xillybus_wq = alloc_workqueue(xillyname, WQ_PERCPU, 0);
if (!xillybus_wq)
return -ENOMEM;
diff --git a/drivers/char/xillybus/xillyusb.c b/drivers/char/xillybus/xillyusb.c
index 45771b1a3716..2a29e2be0296 100644
--- a/drivers/char/xillybus/xillyusb.c
+++ b/drivers/char/xillybus/xillyusb.c
@@ -2163,7 +2163,7 @@ static int xillyusb_probe(struct usb_interface *interface,
spin_lock_init(&xdev->error_lock);
xdev->in_counter = 0;
xdev->in_bytes_left = 0;
- xdev->workq = alloc_workqueue(xillyname, WQ_HIGHPRI, 0);
+ xdev->workq = alloc_workqueue(xillyname, WQ_HIGHPRI | WQ_PERCPU, 0);
if (!xdev->workq) {
dev_err(&interface->dev, "Failed to allocate work queue\n");
@@ -2275,7 +2275,7 @@ static int __init xillyusb_init(void)
{
int rc = 0;
- wakeup_wq = alloc_workqueue(xillyname, 0, 0);
+ wakeup_wq = alloc_workqueue(xillyname, WQ_PERCPU, 0);
if (!wakeup_wq)
return -ENOMEM;
diff --git a/drivers/cpufreq/tegra194-cpufreq.c b/drivers/cpufreq/tegra194-cpufreq.c
index 9b4f516f313e..695599e1001f 100644
--- a/drivers/cpufreq/tegra194-cpufreq.c
+++ b/drivers/cpufreq/tegra194-cpufreq.c
@@ -750,7 +750,8 @@ static int tegra194_cpufreq_probe(struct platform_device *pdev)
if (IS_ERR(bpmp))
return PTR_ERR(bpmp);
- read_counters_wq = alloc_workqueue("read_counters_wq", __WQ_LEGACY, 1);
+ read_counters_wq = alloc_workqueue("read_counters_wq",
+ __WQ_LEGACY | WQ_PERCPU, 1);
if (!read_counters_wq) {
dev_err(&pdev->dev, "fail to create_workqueue\n");
err = -EINVAL;
diff --git a/drivers/crypto/atmel-i2c.c b/drivers/crypto/atmel-i2c.c
index a895e4289efa..9688d116d07e 100644
--- a/drivers/crypto/atmel-i2c.c
+++ b/drivers/crypto/atmel-i2c.c
@@ -402,7 +402,7 @@ EXPORT_SYMBOL(atmel_i2c_probe);
static int __init atmel_i2c_init(void)
{
- atmel_wq = alloc_workqueue("atmel_wq", 0, 0);
+ atmel_wq = alloc_workqueue("atmel_wq", WQ_PERCPU, 0);
return atmel_wq ? 0 : -ENOMEM;
}
diff --git a/drivers/crypto/cavium/nitrox/nitrox_mbx.c b/drivers/crypto/cavium/nitrox/nitrox_mbx.c
index d4e06999af9b..a6a76e50ba84 100644
--- a/drivers/crypto/cavium/nitrox/nitrox_mbx.c
+++ b/drivers/crypto/cavium/nitrox/nitrox_mbx.c
@@ -192,7 +192,7 @@ int nitrox_mbox_init(struct nitrox_device *ndev)
}
/* allocate pf2vf response workqueue */
- ndev->iov.pf2vf_wq = alloc_workqueue("nitrox_pf2vf", 0, 0);
+ ndev->iov.pf2vf_wq = alloc_workqueue("nitrox_pf2vf", WQ_PERCPU, 0);
if (!ndev->iov.pf2vf_wq) {
kfree(ndev->iov.vfdev);
ndev->iov.vfdev = NULL;
diff --git a/drivers/crypto/intel/qat/qat_common/adf_aer.c b/drivers/crypto/intel/qat/qat_common/adf_aer.c
index 4cb8bd83f570..fabd852f1708 100644
--- a/drivers/crypto/intel/qat/qat_common/adf_aer.c
+++ b/drivers/crypto/intel/qat/qat_common/adf_aer.c
@@ -276,11 +276,11 @@ int adf_notify_fatal_error(struct adf_accel_dev *accel_dev)
int adf_init_aer(void)
{
device_reset_wq = alloc_workqueue("qat_device_reset_wq",
- WQ_MEM_RECLAIM, 0);
+ WQ_MEM_RECLAIM | WQ_PERCPU, 0);
if (!device_reset_wq)
return -EFAULT;
- device_sriov_wq = alloc_workqueue("qat_device_sriov_wq", 0, 0);
+ device_sriov_wq = alloc_workqueue("qat_device_sriov_wq", WQ_PERCPU, 0);
if (!device_sriov_wq) {
destroy_workqueue(device_reset_wq);
device_reset_wq = NULL;
diff --git a/drivers/crypto/intel/qat/qat_common/adf_isr.c b/drivers/crypto/intel/qat/qat_common/adf_isr.c
index cae1aee5479a..7381e0570540 100644
--- a/drivers/crypto/intel/qat/qat_common/adf_isr.c
+++ b/drivers/crypto/intel/qat/qat_common/adf_isr.c
@@ -384,7 +384,8 @@ EXPORT_SYMBOL_GPL(adf_isr_resource_alloc);
*/
int __init adf_init_misc_wq(void)
{
- adf_misc_wq = alloc_workqueue("qat_misc_wq", WQ_MEM_RECLAIM, 0);
+ adf_misc_wq = alloc_workqueue("qat_misc_wq",
+ WQ_MEM_RECLAIM | WQ_PERCPU, 0);
return !adf_misc_wq ? -ENOMEM : 0;
}
diff --git a/drivers/crypto/intel/qat/qat_common/adf_sriov.c b/drivers/crypto/intel/qat/qat_common/adf_sriov.c
index c75d0b6cb0ad..0afa8d42c220 100644
--- a/drivers/crypto/intel/qat/qat_common/adf_sriov.c
+++ b/drivers/crypto/intel/qat/qat_common/adf_sriov.c
@@ -300,7 +300,8 @@ EXPORT_SYMBOL_GPL(adf_sriov_configure);
int __init adf_init_pf_wq(void)
{
/* Workqueue for PF2VF responses */
- pf2vf_resp_wq = alloc_workqueue("qat_pf2vf_resp_wq", WQ_MEM_RECLAIM, 0);
+ pf2vf_resp_wq = alloc_workqueue("qat_pf2vf_resp_wq",
+ WQ_MEM_RECLAIM | WQ_PERCPU, 0);
return !pf2vf_resp_wq ? -ENOMEM : 0;
}
diff --git a/drivers/crypto/intel/qat/qat_common/adf_vf_isr.c b/drivers/crypto/intel/qat/qat_common/adf_vf_isr.c
index a4636ec9f9ca..d0fef20a3df4 100644
--- a/drivers/crypto/intel/qat/qat_common/adf_vf_isr.c
+++ b/drivers/crypto/intel/qat/qat_common/adf_vf_isr.c
@@ -299,7 +299,8 @@ EXPORT_SYMBOL_GPL(adf_flush_vf_wq);
*/
int __init adf_init_vf_wq(void)
{
- adf_vf_stop_wq = alloc_workqueue("adf_vf_stop_wq", WQ_MEM_RECLAIM, 0);
+ adf_vf_stop_wq = alloc_workqueue("adf_vf_stop_wq",
+ WQ_MEM_RECLAIM | WQ_PERCPU, 0);
return !adf_vf_stop_wq ? -EFAULT : 0;
}
diff --git a/drivers/firewire/core-transaction.c b/drivers/firewire/core-transaction.c
index b0f9ef6ac6df..f07b8a13a201 100644
--- a/drivers/firewire/core-transaction.c
+++ b/drivers/firewire/core-transaction.c
@@ -1327,7 +1327,8 @@ static int __init fw_core_init(void)
{
int ret;
- fw_workqueue = alloc_workqueue("firewire", WQ_MEM_RECLAIM, 0);
+ fw_workqueue = alloc_workqueue("firewire", WQ_MEM_RECLAIM | WQ_PERCPU,
+ 0);
if (!fw_workqueue)
return -ENOMEM;
diff --git a/drivers/firewire/ohci.c b/drivers/firewire/ohci.c
index edaedd156a6d..2b721cca366c 100644
--- a/drivers/firewire/ohci.c
+++ b/drivers/firewire/ohci.c
@@ -3941,7 +3941,8 @@ static struct pci_driver fw_ohci_pci_driver = {
static int __init fw_ohci_init(void)
{
- selfid_workqueue = alloc_workqueue(KBUILD_MODNAME, WQ_MEM_RECLAIM, 0);
+ selfid_workqueue = alloc_workqueue(KBUILD_MODNAME,
+ WQ_MEM_RECLAIM | WQ_PERCPU, 0);
if (!selfid_workqueue)
return -ENOMEM;
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process.c b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
index 7c0c24732481..2cb9088f67cc 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
@@ -690,7 +690,8 @@ void kfd_procfs_del_queue(struct queue *q)
int kfd_process_create_wq(void)
{
if (!kfd_process_wq)
- kfd_process_wq = alloc_workqueue("kfd_process_wq", 0, 0);
+ kfd_process_wq = alloc_workqueue("kfd_process_wq", WQ_PERCPU,
+ 0);
if (!kfd_restore_wq)
kfd_restore_wq = alloc_ordered_workqueue("kfd_restore_wq",
WQ_FREEZABLE);
diff --git a/drivers/gpu/drm/bridge/analogix/anx7625.c b/drivers/gpu/drm/bridge/analogix/anx7625.c
index 0b97b66de577..bc06ea7c4eb1 100644
--- a/drivers/gpu/drm/bridge/analogix/anx7625.c
+++ b/drivers/gpu/drm/bridge/analogix/anx7625.c
@@ -2659,7 +2659,8 @@ static int anx7625_i2c_probe(struct i2c_client *client)
if (platform->pdata.intp_irq) {
INIT_WORK(&platform->work, anx7625_work_func);
platform->workqueue = alloc_workqueue("anx7625_work",
- WQ_FREEZABLE | WQ_MEM_RECLAIM, 1);
+ WQ_FREEZABLE | WQ_MEM_RECLAIM | WQ_PERCPU,
+ 1);
if (!platform->workqueue) {
DRM_DEV_ERROR(dev, "fail to create work queue\n");
ret = -ENOMEM;
diff --git a/drivers/gpu/drm/i915/display/intel_display_driver.c b/drivers/gpu/drm/i915/display/intel_display_driver.c
index 31740a677dd8..ccfdbe26232c 100644
--- a/drivers/gpu/drm/i915/display/intel_display_driver.c
+++ b/drivers/gpu/drm/i915/display/intel_display_driver.c
@@ -243,7 +243,8 @@ int intel_display_driver_probe_noirq(struct intel_display *display)
display->wq.modeset = alloc_ordered_workqueue("i915_modeset", 0);
display->wq.flip = alloc_workqueue("i915_flip", WQ_HIGHPRI |
WQ_UNBOUND, WQ_UNBOUND_MAX_ACTIVE);
- display->wq.cleanup = alloc_workqueue("i915_cleanup", WQ_HIGHPRI, 0);
+ display->wq.cleanup = alloc_workqueue("i915_cleanup", WQ_HIGHPRI |
+ WQ_PERCPU, 0);
intel_mode_config_init(display);
diff --git a/drivers/gpu/drm/i915/i915_driver.c b/drivers/gpu/drm/i915/i915_driver.c
index 79b98ba4104e..32edb27f6af6 100644
--- a/drivers/gpu/drm/i915/i915_driver.c
+++ b/drivers/gpu/drm/i915/i915_driver.c
@@ -144,7 +144,8 @@ static int i915_workqueues_init(struct drm_i915_private *dev_priv)
* to be scheduled on the system_percpu_wq before moving to a driver
* instance due deprecation of flush_scheduled_work().
*/
- dev_priv->unordered_wq = alloc_workqueue("i915-unordered", 0, 0);
+ dev_priv->unordered_wq = alloc_workqueue("i915-unordered", WQ_PERCPU,
+ 0);
if (dev_priv->unordered_wq == NULL)
goto out_free_dp_wq;
diff --git a/drivers/gpu/drm/i915/selftests/i915_sw_fence.c b/drivers/gpu/drm/i915/selftests/i915_sw_fence.c
index 8f5ce71fa453..b81d65c77458 100644
--- a/drivers/gpu/drm/i915/selftests/i915_sw_fence.c
+++ b/drivers/gpu/drm/i915/selftests/i915_sw_fence.c
@@ -526,7 +526,7 @@ static int test_ipc(void *arg)
struct workqueue_struct *wq;
int ret = 0;
- wq = alloc_workqueue("i1915-selftest", 0, 0);
+ wq = alloc_workqueue("i1915-selftest", WQ_PERCPU, 0);
if (wq == NULL)
return -ENOMEM;
diff --git a/drivers/gpu/drm/i915/selftests/mock_gem_device.c b/drivers/gpu/drm/i915/selftests/mock_gem_device.c
index a77e5b26542c..a55f24240fe0 100644
--- a/drivers/gpu/drm/i915/selftests/mock_gem_device.c
+++ b/drivers/gpu/drm/i915/selftests/mock_gem_device.c
@@ -214,7 +214,7 @@ struct drm_i915_private *mock_gem_device(void)
if (!i915->wq)
goto err_drv;
- i915->unordered_wq = alloc_workqueue("mock-unordered", 0, 0);
+ i915->unordered_wq = alloc_workqueue("mock-unordered", WQ_PERCPU, 0);
if (!i915->unordered_wq)
goto err_wq;
diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c
index e154d08857c5..6fefe051993c 100644
--- a/drivers/gpu/drm/nouveau/nouveau_drm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_drm.c
@@ -626,7 +626,7 @@ nouveau_drm_device_init(struct nouveau_drm *drm)
struct drm_device *dev = drm->dev;
int ret;
- drm->sched_wq = alloc_workqueue("nouveau_sched_wq_shared", 0,
+ drm->sched_wq = alloc_workqueue("nouveau_sched_wq_shared", WQ_PERCPU,
WQ_MAX_ACTIVE);
if (!drm->sched_wq)
return -ENOMEM;
diff --git a/drivers/gpu/drm/nouveau/nouveau_sched.c b/drivers/gpu/drm/nouveau/nouveau_sched.c
index d326e55d2d24..85b25bffefd8 100644
--- a/drivers/gpu/drm/nouveau/nouveau_sched.c
+++ b/drivers/gpu/drm/nouveau/nouveau_sched.c
@@ -415,7 +415,8 @@ nouveau_sched_init(struct nouveau_sched *sched, struct nouveau_drm *drm,
int ret;
if (!wq) {
- wq = alloc_workqueue("nouveau_sched_wq_%d", 0, WQ_MAX_ACTIVE,
+ wq = alloc_workqueue("nouveau_sched_wq_%d", WQ_PERCPU,
+ WQ_MAX_ACTIVE,
current->pid);
if (!wq)
return -ENOMEM;
diff --git a/drivers/gpu/drm/radeon/radeon_display.c b/drivers/gpu/drm/radeon/radeon_display.c
index 8f5f8abcb1b4..d18aeeb38085 100644
--- a/drivers/gpu/drm/radeon/radeon_display.c
+++ b/drivers/gpu/drm/radeon/radeon_display.c
@@ -687,7 +687,8 @@ static void radeon_crtc_init(struct drm_device *dev, int index)
if (radeon_crtc == NULL)
return;
- radeon_crtc->flip_queue = alloc_workqueue("radeon-crtc", WQ_HIGHPRI, 0);
+ radeon_crtc->flip_queue = alloc_workqueue("radeon-crtc",
+ WQ_HIGHPRI | WQ_PERCPU, 0);
if (!radeon_crtc->flip_queue) {
kfree(radeon_crtc);
return;
diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
index 00191227bc95..52b4f0dd827c 100644
--- a/drivers/gpu/drm/xe/xe_device.c
+++ b/drivers/gpu/drm/xe/xe_device.c
@@ -475,8 +475,8 @@ struct xe_device *xe_device_create(struct pci_dev *pdev,
xe->preempt_fence_wq = alloc_ordered_workqueue("xe-preempt-fence-wq",
WQ_MEM_RECLAIM);
xe->ordered_wq = alloc_ordered_workqueue("xe-ordered-wq", 0);
- xe->unordered_wq = alloc_workqueue("xe-unordered-wq", 0, 0);
- xe->destroy_wq = alloc_workqueue("xe-destroy-wq", 0, 0);
+ xe->unordered_wq = alloc_workqueue("xe-unordered-wq", WQ_PERCPU, 0);
+ xe->destroy_wq = alloc_workqueue("xe-destroy-wq", WQ_PERCPU, 0);
if (!xe->ordered_wq || !xe->unordered_wq ||
!xe->preempt_fence_wq || !xe->destroy_wq) {
/*
diff --git a/drivers/gpu/drm/xe/xe_ggtt.c b/drivers/gpu/drm/xe/xe_ggtt.c
index 5fcb2b4c2c13..9c625c191be2 100644
--- a/drivers/gpu/drm/xe/xe_ggtt.c
+++ b/drivers/gpu/drm/xe/xe_ggtt.c
@@ -246,7 +246,7 @@ int xe_ggtt_init_early(struct xe_ggtt *ggtt)
else
ggtt->pt_ops = &xelp_pt_ops;
- ggtt->wq = alloc_workqueue("xe-ggtt-wq", 0, WQ_MEM_RECLAIM);
+ ggtt->wq = alloc_workqueue("xe-ggtt-wq", WQ_PERCPU, WQ_MEM_RECLAIM);
drm_mm_init(&ggtt->mm, xe_wopcm_size(xe),
ggtt->size - xe_wopcm_size(xe));
diff --git a/drivers/gpu/drm/xe/xe_hw_engine_group.c b/drivers/gpu/drm/xe/xe_hw_engine_group.c
index 2d68c5b5262a..fae2bab4c25e 100644
--- a/drivers/gpu/drm/xe/xe_hw_engine_group.c
+++ b/drivers/gpu/drm/xe/xe_hw_engine_group.c
@@ -57,7 +57,8 @@ hw_engine_group_alloc(struct xe_device *xe)
if (!group)
return ERR_PTR(-ENOMEM);
- group->resume_wq = alloc_workqueue("xe-resume-lr-jobs-wq", 0, 0);
+ group->resume_wq = alloc_workqueue("xe-resume-lr-jobs-wq", WQ_PERCPU,
+ 0);
if (!group->resume_wq)
return ERR_PTR(-ENOMEM);
diff --git a/drivers/gpu/drm/xe/xe_sriov.c b/drivers/gpu/drm/xe/xe_sriov.c
index a0eab44c0e76..6e6eb437a802 100644
--- a/drivers/gpu/drm/xe/xe_sriov.c
+++ b/drivers/gpu/drm/xe/xe_sriov.c
@@ -119,7 +119,7 @@ int xe_sriov_init(struct xe_device *xe)
xe_sriov_vf_init_early(xe);
xe_assert(xe, !xe->sriov.wq);
- xe->sriov.wq = alloc_workqueue("xe-sriov-wq", 0, 0);
+ xe->sriov.wq = alloc_workqueue("xe-sriov-wq", WQ_PERCPU, 0);
if (!xe->sriov.wq)
return -ENOMEM;
diff --git a/drivers/greybus/operation.c b/drivers/greybus/operation.c
index f6beeebf974c..3f3c07fb62aa 100644
--- a/drivers/greybus/operation.c
+++ b/drivers/greybus/operation.c
@@ -1237,7 +1237,7 @@ int __init gb_operation_init(void)
goto err_destroy_message_cache;
gb_operation_completion_wq = alloc_workqueue("greybus_completion",
- 0, 0);
+ WQ_PERCPU, 0);
if (!gb_operation_completion_wq)
goto err_destroy_operation_cache;
diff --git a/drivers/hid/hid-nintendo.c b/drivers/hid/hid-nintendo.c
index 839d5bcd72b1..b7b981fce6a1 100644
--- a/drivers/hid/hid-nintendo.c
+++ b/drivers/hid/hid-nintendo.c
@@ -2647,7 +2647,8 @@ static int nintendo_hid_probe(struct hid_device *hdev,
init_waitqueue_head(&ctlr->wait);
spin_lock_init(&ctlr->lock);
ctlr->rumble_queue = alloc_workqueue("hid-nintendo-rumble_wq",
- WQ_FREEZABLE | WQ_MEM_RECLAIM, 0);
+ WQ_FREEZABLE | WQ_MEM_RECLAIM | WQ_PERCPU,
+ 0);
if (!ctlr->rumble_queue) {
ret = -ENOMEM;
goto err;
diff --git a/drivers/hv/mshv_eventfd.c b/drivers/hv/mshv_eventfd.c
index 8dd22be2ca0b..91386f236e25 100644
--- a/drivers/hv/mshv_eventfd.c
+++ b/drivers/hv/mshv_eventfd.c
@@ -592,7 +592,7 @@ static void mshv_irqfd_release(struct mshv_partition *pt)
int mshv_irqfd_wq_init(void)
{
- irqfd_cleanup_wq = alloc_workqueue("mshv-irqfd-cleanup", 0, 0);
+ irqfd_cleanup_wq = alloc_workqueue("mshv-irqfd-cleanup", WQ_PERCPU, 0);
if (!irqfd_cleanup_wq)
return -ENOMEM;
diff --git a/drivers/i3c/master.c b/drivers/i3c/master.c
index fd81871609d9..58c8713f0040 100644
--- a/drivers/i3c/master.c
+++ b/drivers/i3c/master.c
@@ -2851,7 +2851,7 @@ int i3c_master_register(struct i3c_master_controller *master,
if (ret)
goto err_put_dev;
- master->wq = alloc_workqueue("%s", 0, 0, dev_name(parent));
+ master->wq = alloc_workqueue("%s", WQ_PERCPU, 0, dev_name(parent));
if (!master->wq) {
ret = -ENOMEM;
goto err_put_dev;
diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index 142170473e75..5956cd8291a1 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -4521,7 +4521,7 @@ static int __init ib_cm_init(void)
get_random_bytes(&cm.random_id_operand, sizeof cm.random_id_operand);
INIT_LIST_HEAD(&cm.timewait_list);
- cm.wq = alloc_workqueue("ib_cm", 0, 1);
+ cm.wq = alloc_workqueue("ib_cm", WQ_PERCPU, 1);
if (!cm.wq) {
ret = -ENOMEM;
goto error2;
diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
index b4e3e4beb7f4..e9b536983b3b 100644
--- a/drivers/infiniband/core/device.c
+++ b/drivers/infiniband/core/device.c
@@ -2976,7 +2976,7 @@ static int __init ib_core_init(void)
{
int ret = -ENOMEM;
- ib_wq = alloc_workqueue("infiniband", 0, 0);
+ ib_wq = alloc_workqueue("infiniband", WQ_PERCPU, 0);
if (!ib_wq)
return -ENOMEM;
@@ -2986,7 +2986,7 @@ static int __init ib_core_init(void)
goto err;
ib_comp_wq = alloc_workqueue("ib-comp-wq",
- WQ_HIGHPRI | WQ_MEM_RECLAIM | WQ_SYSFS, 0);
+ WQ_HIGHPRI | WQ_MEM_RECLAIM | WQ_SYSFS | WQ_PERCPU, 0);
if (!ib_comp_wq)
goto err_unbound;
diff --git a/drivers/infiniband/hw/hfi1/init.c b/drivers/infiniband/hw/hfi1/init.c
index b35f92e7d865..0d7797495ddf 100644
--- a/drivers/infiniband/hw/hfi1/init.c
+++ b/drivers/infiniband/hw/hfi1/init.c
@@ -745,8 +745,7 @@ static int create_workqueues(struct hfi1_devdata *dd)
ppd->hfi1_wq =
alloc_workqueue(
"hfi%d_%d",
- WQ_SYSFS | WQ_HIGHPRI | WQ_CPU_INTENSIVE |
- WQ_MEM_RECLAIM,
+ WQ_SYSFS | WQ_HIGHPRI | WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM | WQ_PERCPU,
HFI1_MAX_ACTIVE_WORKQUEUE_ENTRIES,
dd->unit, pidx);
if (!ppd->hfi1_wq)
diff --git a/drivers/infiniband/hw/hfi1/opfn.c b/drivers/infiniband/hw/hfi1/opfn.c
index 370a5a8eaa71..68c1cdbc90c1 100644
--- a/drivers/infiniband/hw/hfi1/opfn.c
+++ b/drivers/infiniband/hw/hfi1/opfn.c
@@ -305,8 +305,7 @@ void opfn_trigger_conn_request(struct rvt_qp *qp, u32 bth1)
int opfn_init(void)
{
opfn_wq = alloc_workqueue("hfi_opfn",
- WQ_SYSFS | WQ_HIGHPRI | WQ_CPU_INTENSIVE |
- WQ_MEM_RECLAIM,
+ WQ_SYSFS | WQ_HIGHPRI | WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM | WQ_PERCPU,
HFI1_MAX_ACTIVE_WORKQUEUE_ENTRIES);
if (!opfn_wq)
return -ENOMEM;
diff --git a/drivers/infiniband/hw/mlx4/cm.c b/drivers/infiniband/hw/mlx4/cm.c
index 12b481d138cf..03aacd526860 100644
--- a/drivers/infiniband/hw/mlx4/cm.c
+++ b/drivers/infiniband/hw/mlx4/cm.c
@@ -591,7 +591,7 @@ void mlx4_ib_cm_paravirt_clean(struct mlx4_ib_dev *dev, int slave)
int mlx4_ib_cm_init(void)
{
- cm_wq = alloc_workqueue("mlx4_ib_cm", 0, 0);
+ cm_wq = alloc_workqueue("mlx4_ib_cm", WQ_PERCPU, 0);
if (!cm_wq)
return -ENOMEM;
diff --git a/drivers/infiniband/sw/rdmavt/cq.c b/drivers/infiniband/sw/rdmavt/cq.c
index 0ca2743f1075..e7835ca70e2b 100644
--- a/drivers/infiniband/sw/rdmavt/cq.c
+++ b/drivers/infiniband/sw/rdmavt/cq.c
@@ -518,7 +518,8 @@ int rvt_poll_cq(struct ib_cq *ibcq, int num_entries, struct ib_wc *entry)
*/
int rvt_driver_cq_init(void)
{
- comp_vector_wq = alloc_workqueue("%s", WQ_HIGHPRI | WQ_CPU_INTENSIVE,
+ comp_vector_wq = alloc_workqueue("%s",
+ WQ_HIGHPRI | WQ_CPU_INTENSIVE | WQ_PERCPU,
0, "rdmavt_cq");
if (!comp_vector_wq)
return -ENOMEM;
diff --git a/drivers/infiniband/ulp/iser/iscsi_iser.c b/drivers/infiniband/ulp/iser/iscsi_iser.c
index a5be6f1ba12b..eb99c0f65ca9 100644
--- a/drivers/infiniband/ulp/iser/iscsi_iser.c
+++ b/drivers/infiniband/ulp/iser/iscsi_iser.c
@@ -1033,7 +1033,7 @@ static int __init iser_init(void)
mutex_init(&ig.connlist_mutex);
INIT_LIST_HEAD(&ig.connlist);
- release_wq = alloc_workqueue("release workqueue", 0, 0);
+ release_wq = alloc_workqueue("release workqueue", WQ_PERCPU, 0);
if (!release_wq) {
iser_err("failed to allocate release workqueue\n");
err = -ENOMEM;
diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c
index 42977a5326ee..af811d060cc8 100644
--- a/drivers/infiniband/ulp/isert/ib_isert.c
+++ b/drivers/infiniband/ulp/isert/ib_isert.c
@@ -2613,7 +2613,7 @@ static struct iscsit_transport iser_target_transport = {
static int __init isert_init(void)
{
- isert_login_wq = alloc_workqueue("isert_login_wq", 0, 0);
+ isert_login_wq = alloc_workqueue("isert_login_wq", WQ_PERCPU, 0);
if (!isert_login_wq) {
isert_err("Unable to allocate isert_login_wq\n");
return -ENOMEM;
diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
index 71387811b281..40fd2b695160 100644
--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
@@ -3187,7 +3187,7 @@ static int __init rtrs_client_init(void)
pr_err("Failed to create rtrs-client dev class\n");
return ret;
}
- rtrs_wq = alloc_workqueue("rtrs_client_wq", 0, 0);
+ rtrs_wq = alloc_workqueue("rtrs_client_wq", WQ_PERCPU, 0);
if (!rtrs_wq) {
class_unregister(&rtrs_clt_dev_class);
return -ENOMEM;
diff --git a/drivers/infiniband/ulp/rtrs/rtrs-srv.c b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
index ef4abdea3c2d..780a98b2ded9 100644
--- a/drivers/infiniband/ulp/rtrs/rtrs-srv.c
+++ b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
@@ -2321,7 +2321,7 @@ static int __init rtrs_server_init(void)
if (err)
goto out_err;
- rtrs_wq = alloc_workqueue("rtrs_server_wq", 0, 0);
+ rtrs_wq = alloc_workqueue("rtrs_server_wq", WQ_PERCPU, 0);
if (!rtrs_wq) {
err = -ENOMEM;
goto out_dev_class;
diff --git a/drivers/input/mouse/psmouse-smbus.c b/drivers/input/mouse/psmouse-smbus.c
index 93420f07b7d0..5d6a4909ccbf 100644
--- a/drivers/input/mouse/psmouse-smbus.c
+++ b/drivers/input/mouse/psmouse-smbus.c
@@ -299,7 +299,7 @@ int __init psmouse_smbus_module_init(void)
{
int error;
- psmouse_smbus_wq = alloc_workqueue("psmouse-smbus", 0, 0);
+ psmouse_smbus_wq = alloc_workqueue("psmouse-smbus", WQ_PERCPU, 0);
if (!psmouse_smbus_wq)
return -ENOMEM;
diff --git a/drivers/isdn/capi/kcapi.c b/drivers/isdn/capi/kcapi.c
index c5d13bdc239b..e8f7e52354bc 100644
--- a/drivers/isdn/capi/kcapi.c
+++ b/drivers/isdn/capi/kcapi.c
@@ -907,7 +907,7 @@ int __init kcapi_init(void)
{
int err;
- kcapi_wq = alloc_workqueue("kcapi", 0, 0);
+ kcapi_wq = alloc_workqueue("kcapi", WQ_PERCPU, 0);
if (!kcapi_wq)
return -ENOMEM;
diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
index ed40d8600656..1d2213944441 100644
--- a/drivers/md/bcache/btree.c
+++ b/drivers/md/bcache/btree.c
@@ -2836,7 +2836,8 @@ void bch_btree_exit(void)
int __init bch_btree_init(void)
{
- btree_io_wq = alloc_workqueue("bch_btree_io", WQ_MEM_RECLAIM, 0);
+ btree_io_wq = alloc_workqueue("bch_btree_io",
+ WQ_MEM_RECLAIM | WQ_PERCPU, 0);
if (!btree_io_wq)
return -ENOMEM;
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index de0a8e5f5c49..481d61a67032 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -1933,7 +1933,8 @@ struct cache_set *bch_cache_set_alloc(struct cache_sb *sb)
if (!c->uuids)
goto err;
- c->moving_gc_wq = alloc_workqueue("bcache_gc", WQ_MEM_RECLAIM, 0);
+ c->moving_gc_wq = alloc_workqueue("bcache_gc",
+ WQ_MEM_RECLAIM | WQ_PERCPU, 0);
if (!c->moving_gc_wq)
goto err;
@@ -2867,7 +2868,7 @@ static int __init bcache_init(void)
if (bch_btree_init())
goto err;
- bcache_wq = alloc_workqueue("bcache", WQ_MEM_RECLAIM, 0);
+ bcache_wq = alloc_workqueue("bcache", WQ_MEM_RECLAIM | WQ_PERCPU, 0);
if (!bcache_wq)
goto err;
@@ -2880,11 +2881,12 @@ static int __init bcache_init(void)
*
* We still want to user our own queue to not congest the `system_percpu_wq`.
*/
- bch_flush_wq = alloc_workqueue("bch_flush", 0, 0);
+ bch_flush_wq = alloc_workqueue("bch_flush", WQ_PERCPU, 0);
if (!bch_flush_wq)
goto err;
- bch_journal_wq = alloc_workqueue("bch_journal", WQ_MEM_RECLAIM, 0);
+ bch_journal_wq = alloc_workqueue("bch_journal",
+ WQ_MEM_RECLAIM | WQ_PERCPU, 0);
if (!bch_journal_wq)
goto err;
diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
index 453efbbdc8ee..01969ec07c1a 100644
--- a/drivers/md/bcache/writeback.c
+++ b/drivers/md/bcache/writeback.c
@@ -1079,7 +1079,7 @@ void bch_cached_dev_writeback_init(struct cached_dev *dc)
int bch_cached_dev_writeback_start(struct cached_dev *dc)
{
dc->writeback_write_wq = alloc_workqueue("bcache_writeback_wq",
- WQ_MEM_RECLAIM, 0);
+ WQ_MEM_RECLAIM | WQ_PERCPU, 0);
if (!dc->writeback_write_wq)
return -ENOMEM;
diff --git a/drivers/md/dm-bufio.c b/drivers/md/dm-bufio.c
index 9c8ed65cd87e..6c6ee8d62485 100644
--- a/drivers/md/dm-bufio.c
+++ b/drivers/md/dm-bufio.c
@@ -2933,7 +2933,8 @@ static int __init dm_bufio_init(void)
__cache_size_refresh();
mutex_unlock(&dm_bufio_clients_lock);
- dm_bufio_wq = alloc_workqueue("dm_bufio_cache", WQ_MEM_RECLAIM, 0);
+ dm_bufio_wq = alloc_workqueue("dm_bufio_cache",
+ WQ_MEM_RECLAIM | WQ_PERCPU, 0);
if (!dm_bufio_wq)
return -ENOMEM;
diff --git a/drivers/md/dm-cache-target.c b/drivers/md/dm-cache-target.c
index a10d75a562db..7bad7cc87d37 100644
--- a/drivers/md/dm-cache-target.c
+++ b/drivers/md/dm-cache-target.c
@@ -2533,7 +2533,8 @@ static int cache_create(struct cache_args *ca, struct cache **result)
goto bad;
}
- cache->wq = alloc_workqueue("dm-" DM_MSG_PREFIX, WQ_MEM_RECLAIM, 0);
+ cache->wq = alloc_workqueue("dm-" DM_MSG_PREFIX,
+ WQ_MEM_RECLAIM | WQ_PERCPU, 0);
if (!cache->wq) {
*error = "could not create workqueue for metadata object";
goto bad;
diff --git a/drivers/md/dm-clone-target.c b/drivers/md/dm-clone-target.c
index e956d980672c..b25845e36274 100644
--- a/drivers/md/dm-clone-target.c
+++ b/drivers/md/dm-clone-target.c
@@ -1877,7 +1877,8 @@ static int clone_ctr(struct dm_target *ti, unsigned int argc, char **argv)
clone->hydration_offset = 0;
atomic_set(&clone->hydrations_in_flight, 0);
- clone->wq = alloc_workqueue("dm-" DM_MSG_PREFIX, WQ_MEM_RECLAIM, 0);
+ clone->wq = alloc_workqueue("dm-" DM_MSG_PREFIX,
+ WQ_MEM_RECLAIM | WQ_PERCPU, 0);
if (!clone->wq) {
ti->error = "Failed to allocate workqueue";
r = -ENOMEM;
diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index 9dfdb63220d7..518a8ba49a76 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -3420,7 +3420,9 @@ static int crypt_ctr(struct dm_target *ti, unsigned int argc, char **argv)
if (test_bit(DM_CRYPT_HIGH_PRIORITY, &cc->flags))
common_wq_flags |= WQ_HIGHPRI;
- cc->io_queue = alloc_workqueue("kcryptd_io-%s-%d", common_wq_flags, 1, devname, wq_id);
+ cc->io_queue = alloc_workqueue("kcryptd_io-%s-%d",
+ common_wq_flags | WQ_PERCPU, 1,
+ devname, wq_id);
if (!cc->io_queue) {
ti->error = "Couldn't create kcryptd io queue";
goto bad;
@@ -3428,7 +3430,7 @@ static int crypt_ctr(struct dm_target *ti, unsigned int argc, char **argv)
if (test_bit(DM_CRYPT_SAME_CPU, &cc->flags)) {
cc->crypt_queue = alloc_workqueue("kcryptd-%s-%d",
- common_wq_flags | WQ_CPU_INTENSIVE,
+ common_wq_flags | WQ_CPU_INTENSIVE | WQ_PERCPU,
1, devname, wq_id);
} else {
/*
diff --git a/drivers/md/dm-delay.c b/drivers/md/dm-delay.c
index d4cf0ac2a7aa..a6e6c485f01f 100644
--- a/drivers/md/dm-delay.c
+++ b/drivers/md/dm-delay.c
@@ -279,7 +279,9 @@ static int delay_ctr(struct dm_target *ti, unsigned int argc, char **argv)
} else {
timer_setup(&dc->delay_timer, handle_delayed_timer, 0);
INIT_WORK(&dc->flush_expired_bios, flush_expired_bios);
- dc->kdelayd_wq = alloc_workqueue("kdelayd", WQ_MEM_RECLAIM, 0);
+ dc->kdelayd_wq = alloc_workqueue("kdelayd",
+ WQ_MEM_RECLAIM | WQ_PERCPU,
+ 0);
if (!dc->kdelayd_wq) {
ret = -EINVAL;
DMERR("Couldn't start kdelayd");
diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
index 2a283feb3319..3420a5a06d02 100644
--- a/drivers/md/dm-integrity.c
+++ b/drivers/md/dm-integrity.c
@@ -4818,7 +4818,8 @@ static int dm_integrity_ctr(struct dm_target *ti, unsigned int argc, char **argv
}
ic->metadata_wq = alloc_workqueue("dm-integrity-metadata",
- WQ_MEM_RECLAIM, METADATA_WORKQUEUE_MAX_ACTIVE);
+ WQ_MEM_RECLAIM | WQ_PERCPU,
+ METADATA_WORKQUEUE_MAX_ACTIVE);
if (!ic->metadata_wq) {
ti->error = "Cannot allocate workqueue";
r = -ENOMEM;
@@ -4836,7 +4837,8 @@ static int dm_integrity_ctr(struct dm_target *ti, unsigned int argc, char **argv
goto bad;
}
- ic->offload_wq = alloc_workqueue("dm-integrity-offload", WQ_MEM_RECLAIM,
+ ic->offload_wq = alloc_workqueue("dm-integrity-offload",
+ WQ_MEM_RECLAIM | WQ_PERCPU,
METADATA_WORKQUEUE_MAX_ACTIVE);
if (!ic->offload_wq) {
ti->error = "Cannot allocate workqueue";
@@ -4844,7 +4846,8 @@ static int dm_integrity_ctr(struct dm_target *ti, unsigned int argc, char **argv
goto bad;
}
- ic->commit_wq = alloc_workqueue("dm-integrity-commit", WQ_MEM_RECLAIM, 1);
+ ic->commit_wq = alloc_workqueue("dm-integrity-commit",
+ WQ_MEM_RECLAIM | WQ_PERCPU, 1);
if (!ic->commit_wq) {
ti->error = "Cannot allocate workqueue";
r = -ENOMEM;
@@ -4853,7 +4856,8 @@ static int dm_integrity_ctr(struct dm_target *ti, unsigned int argc, char **argv
INIT_WORK(&ic->commit_work, integrity_commit);
if (ic->mode == 'J' || ic->mode == 'B') {
- ic->writer_wq = alloc_workqueue("dm-integrity-writer", WQ_MEM_RECLAIM, 1);
+ ic->writer_wq = alloc_workqueue("dm-integrity-writer",
+ WQ_MEM_RECLAIM | WQ_PERCPU, 1);
if (!ic->writer_wq) {
ti->error = "Cannot allocate workqueue";
r = -ENOMEM;
@@ -5025,7 +5029,8 @@ static int dm_integrity_ctr(struct dm_target *ti, unsigned int argc, char **argv
}
if (ic->internal_hash) {
- ic->recalc_wq = alloc_workqueue("dm-integrity-recalc", WQ_MEM_RECLAIM, 1);
+ ic->recalc_wq = alloc_workqueue("dm-integrity-recalc",
+ WQ_MEM_RECLAIM | WQ_PERCPU, 1);
if (!ic->recalc_wq) {
ti->error = "Cannot allocate workqueue";
r = -ENOMEM;
diff --git a/drivers/md/dm-kcopyd.c b/drivers/md/dm-kcopyd.c
index 6ea75436a433..cec9a60227b6 100644
--- a/drivers/md/dm-kcopyd.c
+++ b/drivers/md/dm-kcopyd.c
@@ -934,7 +934,8 @@ struct dm_kcopyd_client *dm_kcopyd_client_create(struct dm_kcopyd_throttle *thro
goto bad_slab;
INIT_WORK(&kc->kcopyd_work, do_work);
- kc->kcopyd_wq = alloc_workqueue("kcopyd", WQ_MEM_RECLAIM, 0);
+ kc->kcopyd_wq = alloc_workqueue("kcopyd", WQ_MEM_RECLAIM | WQ_PERCPU,
+ 0);
if (!kc->kcopyd_wq) {
r = -ENOMEM;
goto bad_workqueue;
diff --git a/drivers/md/dm-log-userspace-base.c b/drivers/md/dm-log-userspace-base.c
index 9fbb4b48fb2b..607436804a8b 100644
--- a/drivers/md/dm-log-userspace-base.c
+++ b/drivers/md/dm-log-userspace-base.c
@@ -299,7 +299,8 @@ static int userspace_ctr(struct dm_dirty_log *log, struct dm_target *ti,
}
if (lc->integrated_flush) {
- lc->dmlog_wq = alloc_workqueue("dmlogd", WQ_MEM_RECLAIM, 0);
+ lc->dmlog_wq = alloc_workqueue("dmlogd",
+ WQ_MEM_RECLAIM | WQ_PERCPU, 0);
if (!lc->dmlog_wq) {
DMERR("couldn't start dmlogd");
r = -ENOMEM;
diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c
index 6c98f4ae5ea9..a0f6de8040c5 100644
--- a/drivers/md/dm-mpath.c
+++ b/drivers/md/dm-mpath.c
@@ -2205,7 +2205,8 @@ static int __init dm_multipath_init(void)
{
int r = -ENOMEM;
- kmultipathd = alloc_workqueue("kmpathd", WQ_MEM_RECLAIM, 0);
+ kmultipathd = alloc_workqueue("kmpathd", WQ_MEM_RECLAIM | WQ_PERCPU,
+ 0);
if (!kmultipathd) {
DMERR("failed to create workqueue kmpathd");
goto bad_alloc_kmultipathd;
@@ -2224,7 +2225,7 @@ static int __init dm_multipath_init(void)
goto bad_alloc_kmpath_handlerd;
}
- dm_mpath_wq = alloc_workqueue("dm_mpath_wq", 0, 0);
+ dm_mpath_wq = alloc_workqueue("dm_mpath_wq", WQ_PERCPU, 0);
if (!dm_mpath_wq) {
DMERR("failed to create workqueue dm_mpath_wq");
goto bad_alloc_dm_mpath_wq;
diff --git a/drivers/md/dm-raid1.c b/drivers/md/dm-raid1.c
index 9e615b4f1f5e..3e773591d1c6 100644
--- a/drivers/md/dm-raid1.c
+++ b/drivers/md/dm-raid1.c
@@ -1129,7 +1129,8 @@ static int mirror_ctr(struct dm_target *ti, unsigned int argc, char **argv)
ti->num_discard_bios = 1;
ti->per_io_data_size = sizeof(struct dm_raid1_bio_record);
- ms->kmirrord_wq = alloc_workqueue("kmirrord", WQ_MEM_RECLAIM, 0);
+ ms->kmirrord_wq = alloc_workqueue("kmirrord",
+ WQ_MEM_RECLAIM | WQ_PERCPU, 0);
if (!ms->kmirrord_wq) {
DMERR("couldn't start kmirrord");
r = -ENOMEM;
@@ -1501,7 +1502,7 @@ static int __init dm_mirror_init(void)
{
int r;
- dm_raid1_wq = alloc_workqueue("dm_raid1_wq", 0, 0);
+ dm_raid1_wq = alloc_workqueue("dm_raid1_wq", WQ_PERCPU, 0);
if (!dm_raid1_wq) {
DMERR("Failed to alloc workqueue");
return -ENOMEM;
diff --git a/drivers/md/dm-snap-persistent.c b/drivers/md/dm-snap-persistent.c
index 568d10842b1f..0e13d60bfdd1 100644
--- a/drivers/md/dm-snap-persistent.c
+++ b/drivers/md/dm-snap-persistent.c
@@ -871,7 +871,8 @@ static int persistent_ctr(struct dm_exception_store *store, char *options)
atomic_set(&ps->pending_count, 0);
ps->callbacks = NULL;
- ps->metadata_wq = alloc_workqueue("ksnaphd", WQ_MEM_RECLAIM, 0);
+ ps->metadata_wq = alloc_workqueue("ksnaphd",
+ WQ_MEM_RECLAIM | WQ_PERCPU, 0);
if (!ps->metadata_wq) {
DMERR("couldn't start header metadata update thread");
r = -ENOMEM;
diff --git a/drivers/md/dm-stripe.c b/drivers/md/dm-stripe.c
index a1b7535c508a..4241992228a6 100644
--- a/drivers/md/dm-stripe.c
+++ b/drivers/md/dm-stripe.c
@@ -485,7 +485,7 @@ int __init dm_stripe_init(void)
{
int r;
- dm_stripe_wq = alloc_workqueue("dm_stripe_wq", 0, 0);
+ dm_stripe_wq = alloc_workqueue("dm_stripe_wq", WQ_PERCPU, 0);
if (!dm_stripe_wq)
return -ENOMEM;
r = dm_register_target(&stripe_target);
diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c
index 3c427f18a04b..50e59a161486 100644
--- a/drivers/md/dm-verity-target.c
+++ b/drivers/md/dm-verity-target.c
@@ -1665,7 +1665,9 @@ static int verity_ctr(struct dm_target *ti, unsigned int argc, char **argv)
* will fall-back to using it for error handling (or if the bufio cache
* doesn't have required hashes).
*/
- v->verify_wq = alloc_workqueue("kverityd", WQ_MEM_RECLAIM | WQ_HIGHPRI, 0);
+ v->verify_wq = alloc_workqueue("kverityd",
+ WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_PERCPU,
+ 0);
if (!v->verify_wq) {
ti->error = "Cannot allocate workqueue";
r = -ENOMEM;
diff --git a/drivers/md/dm-writecache.c b/drivers/md/dm-writecache.c
index d6a04a57472d..8a50e7e88f2e 100644
--- a/drivers/md/dm-writecache.c
+++ b/drivers/md/dm-writecache.c
@@ -2276,7 +2276,8 @@ static int writecache_ctr(struct dm_target *ti, unsigned int argc, char **argv)
goto bad;
}
- wc->writeback_wq = alloc_workqueue("writecache-writeback", WQ_MEM_RECLAIM, 1);
+ wc->writeback_wq = alloc_workqueue("writecache-writeback",
+ WQ_MEM_RECLAIM | WQ_PERCPU, 1);
if (!wc->writeback_wq) {
r = -ENOMEM;
ti->error = "Could not allocate writeback workqueue";
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 5ab7574c0c76..84b2746d7672 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -2344,7 +2344,8 @@ static struct mapped_device *alloc_dev(int minor)
format_dev_t(md->name, MKDEV(_major, minor));
- md->wq = alloc_workqueue("kdmflush/%s", WQ_MEM_RECLAIM, 0, md->name);
+ md->wq = alloc_workqueue("kdmflush/%s", WQ_MEM_RECLAIM | WQ_PERCPU, 0,
+ md->name);
if (!md->wq)
goto bad;
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 9daa78c5fe33..1f0b047618f7 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -9929,11 +9929,11 @@ static int __init md_init(void)
{
int ret = -ENOMEM;
- md_wq = alloc_workqueue("md", WQ_MEM_RECLAIM, 0);
+ md_wq = alloc_workqueue("md", WQ_MEM_RECLAIM | WQ_PERCPU, 0);
if (!md_wq)
goto err_wq;
- md_misc_wq = alloc_workqueue("md_misc", 0, 0);
+ md_misc_wq = alloc_workqueue("md_misc", WQ_PERCPU, 0);
if (!md_misc_wq)
goto err_misc_wq;
diff --git a/drivers/media/pci/ddbridge/ddbridge-core.c b/drivers/media/pci/ddbridge/ddbridge-core.c
index 40e6c873c36d..d240e291ba4f 100644
--- a/drivers/media/pci/ddbridge/ddbridge-core.c
+++ b/drivers/media/pci/ddbridge/ddbridge-core.c
@@ -3430,7 +3430,7 @@ int ddb_init_ddbridge(void)
if (ddb_class_create() < 0)
return -1;
- ddb_wq = alloc_workqueue("ddbridge", 0, 0);
+ ddb_wq = alloc_workqueue("ddbridge", WQ_PERCPU, 0);
if (!ddb_wq)
return ddb_exit_ddbridge(1, -1);
diff --git a/drivers/media/platform/mediatek/mdp3/mtk-mdp3-core.c b/drivers/media/platform/mediatek/mdp3/mtk-mdp3-core.c
index f571f561f070..fa6af1cc5eba 100644
--- a/drivers/media/platform/mediatek/mdp3/mtk-mdp3-core.c
+++ b/drivers/media/platform/mediatek/mdp3/mtk-mdp3-core.c
@@ -274,14 +274,16 @@ static int mdp_probe(struct platform_device *pdev)
goto err_free_mutex;
}
- mdp->job_wq = alloc_workqueue(MDP_MODULE_NAME, WQ_FREEZABLE, 0);
+ mdp->job_wq = alloc_workqueue(MDP_MODULE_NAME,
+ WQ_FREEZABLE | WQ_PERCPU, 0);
if (!mdp->job_wq) {
dev_err(dev, "Unable to create job workqueue\n");
ret = -ENOMEM;
goto err_deinit_comp;
}
- mdp->clock_wq = alloc_workqueue(MDP_MODULE_NAME "-clock", WQ_FREEZABLE,
+ mdp->clock_wq = alloc_workqueue(MDP_MODULE_NAME "-clock",
+ WQ_FREEZABLE | WQ_PERCPU,
0);
if (!mdp->clock_wq) {
dev_err(dev, "Unable to create clock workqueue\n");
diff --git a/drivers/message/fusion/mptbase.c b/drivers/message/fusion/mptbase.c
index 738bc4e60a18..e60a8d3947c9 100644
--- a/drivers/message/fusion/mptbase.c
+++ b/drivers/message/fusion/mptbase.c
@@ -1857,7 +1857,8 @@ mpt_attach(struct pci_dev *pdev, const struct pci_device_id *id)
INIT_DELAYED_WORK(&ioc->fault_reset_work, mpt_fault_reset_work);
ioc->reset_work_q =
- alloc_workqueue("mpt_poll_%d", WQ_MEM_RECLAIM, 0, ioc->id);
+ alloc_workqueue("mpt_poll_%d", WQ_MEM_RECLAIM | WQ_PERCPU, 0,
+ ioc->id);
if (!ioc->reset_work_q) {
printk(MYIOC_s_ERR_FMT "Insufficient memory to add adapter!\n",
ioc->name);
@@ -1984,7 +1985,9 @@ mpt_attach(struct pci_dev *pdev, const struct pci_device_id *id)
INIT_LIST_HEAD(&ioc->fw_event_list);
spin_lock_init(&ioc->fw_event_lock);
- ioc->fw_event_q = alloc_workqueue("mpt/%d", WQ_MEM_RECLAIM, 0, ioc->id);
+ ioc->fw_event_q = alloc_workqueue("mpt/%d",
+ WQ_MEM_RECLAIM | WQ_PERCPU, 0,
+ ioc->id);
if (!ioc->fw_event_q) {
printk(MYIOC_s_ERR_FMT "Insufficient memory to add adapter!\n",
ioc->name);
diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 4830628510e6..744444fc26db 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -3325,7 +3325,8 @@ static int mmc_blk_probe(struct mmc_card *card)
mmc_fixup_device(card, mmc_blk_fixups);
card->complete_wq = alloc_workqueue("mmc_complete",
- WQ_MEM_RECLAIM | WQ_HIGHPRI, 0);
+ WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_PERCPU,
+ 0);
if (!card->complete_wq) {
pr_err("Failed to create mmc completion workqueue");
return -ENOMEM;
diff --git a/drivers/mmc/host/omap.c b/drivers/mmc/host/omap.c
index c50617d03709..ec890aa0c7b2 100644
--- a/drivers/mmc/host/omap.c
+++ b/drivers/mmc/host/omap.c
@@ -1483,7 +1483,7 @@ static int mmc_omap_probe(struct platform_device *pdev)
host->nr_slots = pdata->nr_slots;
host->reg_shift = (mmc_omap7xx() ? 1 : 2);
- host->mmc_omap_wq = alloc_workqueue("mmc_omap", 0, 0);
+ host->mmc_omap_wq = alloc_workqueue("mmc_omap", WQ_PERCPU, 0);
if (!host->mmc_omap_wq) {
ret = -ENOMEM;
goto err_plat_cleanup;
diff --git a/drivers/net/can/spi/hi311x.c b/drivers/net/can/spi/hi311x.c
index 09ae218315d7..96f23311b4ee 100644
--- a/drivers/net/can/spi/hi311x.c
+++ b/drivers/net/can/spi/hi311x.c
@@ -770,7 +770,8 @@ static int hi3110_open(struct net_device *net)
goto out_close;
}
- priv->wq = alloc_workqueue("hi3110_wq", WQ_FREEZABLE | WQ_MEM_RECLAIM,
+ priv->wq = alloc_workqueue("hi3110_wq",
+ WQ_FREEZABLE | WQ_MEM_RECLAIM | WQ_PERCPU,
0);
if (!priv->wq) {
ret = -ENOMEM;
diff --git a/drivers/net/can/spi/mcp251x.c b/drivers/net/can/spi/mcp251x.c
index ec5c64006a16..ec8c9193c4e4 100644
--- a/drivers/net/can/spi/mcp251x.c
+++ b/drivers/net/can/spi/mcp251x.c
@@ -1365,7 +1365,8 @@ static int mcp251x_can_probe(struct spi_device *spi)
if (ret)
goto out_clk;
- priv->wq = alloc_workqueue("mcp251x_wq", WQ_FREEZABLE | WQ_MEM_RECLAIM,
+ priv->wq = alloc_workqueue("mcp251x_wq",
+ WQ_FREEZABLE | WQ_MEM_RECLAIM | WQ_PERCPU,
0);
if (!priv->wq) {
ret = -ENOMEM;
diff --git a/drivers/net/ethernet/cavium/liquidio/lio_core.c b/drivers/net/ethernet/cavium/liquidio/lio_core.c
index 674c54831875..215dac201b4a 100644
--- a/drivers/net/ethernet/cavium/liquidio/lio_core.c
+++ b/drivers/net/ethernet/cavium/liquidio/lio_core.c
@@ -472,7 +472,7 @@ int setup_rx_oom_poll_fn(struct net_device *netdev)
q_no = lio->linfo.rxpciq[q].s.q_no;
wq = &lio->rxq_status_wq[q_no];
wq->wq = alloc_workqueue("rxq-oom-status",
- WQ_MEM_RECLAIM, 0);
+ WQ_MEM_RECLAIM | WQ_PERCPU, 0);
if (!wq->wq) {
dev_err(&oct->pci_dev->dev, "unable to create cavium rxq oom status wq\n");
return -ENOMEM;
diff --git a/drivers/net/ethernet/cavium/liquidio/lio_main.c b/drivers/net/ethernet/cavium/liquidio/lio_main.c
index 1d79f6eaa41f..8e2fcec26ea1 100644
--- a/drivers/net/ethernet/cavium/liquidio/lio_main.c
+++ b/drivers/net/ethernet/cavium/liquidio/lio_main.c
@@ -526,7 +526,8 @@ static inline int setup_link_status_change_wq(struct net_device *netdev)
struct octeon_device *oct = lio->oct_dev;
lio->link_status_wq.wq = alloc_workqueue("link-status",
- WQ_MEM_RECLAIM, 0);
+ WQ_MEM_RECLAIM | WQ_PERCPU,
+ 0);
if (!lio->link_status_wq.wq) {
dev_err(&oct->pci_dev->dev, "unable to create cavium link status wq\n");
return -1;
@@ -659,7 +660,8 @@ static inline int setup_sync_octeon_time_wq(struct net_device *netdev)
struct octeon_device *oct = lio->oct_dev;
lio->sync_octeon_time_wq.wq =
- alloc_workqueue("update-octeon-time", WQ_MEM_RECLAIM, 0);
+ alloc_workqueue("update-octeon-time",
+ WQ_MEM_RECLAIM | WQ_PERCPU, 0);
if (!lio->sync_octeon_time_wq.wq) {
dev_err(&oct->pci_dev->dev, "Unable to create wq to update octeon time\n");
return -1;
@@ -1734,7 +1736,7 @@ static inline int setup_tx_poll_fn(struct net_device *netdev)
struct octeon_device *oct = lio->oct_dev;
lio->txq_status_wq.wq = alloc_workqueue("txq-status",
- WQ_MEM_RECLAIM, 0);
+ WQ_MEM_RECLAIM | WQ_PERCPU, 0);
if (!lio->txq_status_wq.wq) {
dev_err(&oct->pci_dev->dev, "unable to create cavium txq status wq\n");
return -1;
diff --git a/drivers/net/ethernet/cavium/liquidio/lio_vf_main.c b/drivers/net/ethernet/cavium/liquidio/lio_vf_main.c
index 62c2eadc33e3..3230dff5ba05 100644
--- a/drivers/net/ethernet/cavium/liquidio/lio_vf_main.c
+++ b/drivers/net/ethernet/cavium/liquidio/lio_vf_main.c
@@ -304,7 +304,8 @@ static int setup_link_status_change_wq(struct net_device *netdev)
struct octeon_device *oct = lio->oct_dev;
lio->link_status_wq.wq = alloc_workqueue("link-status",
- WQ_MEM_RECLAIM, 0);
+ WQ_MEM_RECLAIM | WQ_PERCPU,
+ 0);
if (!lio->link_status_wq.wq) {
dev_err(&oct->pci_dev->dev, "unable to create cavium link status wq\n");
return -1;
diff --git a/drivers/net/ethernet/cavium/liquidio/request_manager.c b/drivers/net/ethernet/cavium/liquidio/request_manager.c
index de8a6ce86ad7..8b8e9953c4ee 100644
--- a/drivers/net/ethernet/cavium/liquidio/request_manager.c
+++ b/drivers/net/ethernet/cavium/liquidio/request_manager.c
@@ -132,7 +132,7 @@ int octeon_init_instr_queue(struct octeon_device *oct,
oct->fn_list.setup_iq_regs(oct, iq_no);
oct->check_db_wq[iq_no].wq = alloc_workqueue("check_iq_db",
- WQ_MEM_RECLAIM,
+ WQ_MEM_RECLAIM | WQ_PERCPU,
0);
if (!oct->check_db_wq[iq_no].wq) {
vfree(iq->request_list);
diff --git a/drivers/net/ethernet/cavium/liquidio/response_manager.c b/drivers/net/ethernet/cavium/liquidio/response_manager.c
index 861050966e18..de1a8335b545 100644
--- a/drivers/net/ethernet/cavium/liquidio/response_manager.c
+++ b/drivers/net/ethernet/cavium/liquidio/response_manager.c
@@ -39,7 +39,8 @@ int octeon_setup_response_list(struct octeon_device *oct)
}
spin_lock_init(&oct->cmd_resp_wqlock);
- oct->dma_comp_wq.wq = alloc_workqueue("dma-comp", WQ_MEM_RECLAIM, 0);
+ oct->dma_comp_wq.wq = alloc_workqueue("dma-comp",
+ WQ_MEM_RECLAIM | WQ_PERCPU, 0);
if (!oct->dma_comp_wq.wq) {
dev_err(&oct->pci_dev->dev, "failed to create wq thread\n");
return -ENOMEM;
diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
index 29886a8ba73f..c689993a3cb0 100644
--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
@@ -4844,7 +4844,7 @@ static int dpaa2_eth_probe(struct fsl_mc_device *dpni_dev)
priv->tx_tstamp_type = HWTSTAMP_TX_OFF;
priv->rx_tstamp = false;
- priv->dpaa2_ptp_wq = alloc_workqueue("dpaa2_ptp_wq", 0, 0);
+ priv->dpaa2_ptp_wq = alloc_workqueue("dpaa2_ptp_wq", WQ_PERCPU, 0);
if (!priv->dpaa2_ptp_wq) {
err = -ENOMEM;
goto err_wq_alloc;
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
index 3e28a08934ab..b3c06bb3d6be 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
@@ -12906,7 +12906,8 @@ static int __init hclge_init(void)
{
pr_info("%s is initializing\n", HCLGE_NAME);
- hclge_wq = alloc_workqueue("%s", WQ_UNBOUND, 0, HCLGE_NAME);
+ hclge_wq = alloc_workqueue("%s", WQ_UNBOUND, 0,
+ HCLGE_NAME);
if (!hclge_wq) {
pr_err("%s: failed to create workqueue\n", HCLGE_NAME);
return -ENOMEM;
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_main.c b/drivers/net/ethernet/intel/fm10k/fm10k_main.c
index 142f07ca8bc0..b8c15b837fda 100644
--- a/drivers/net/ethernet/intel/fm10k/fm10k_main.c
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_main.c
@@ -37,7 +37,7 @@ static int __init fm10k_init_module(void)
pr_info("%s\n", fm10k_copyright);
/* create driver workqueue */
- fm10k_workqueue = alloc_workqueue("%s", WQ_MEM_RECLAIM, 0,
+ fm10k_workqueue = alloc_workqueue("%s", WQ_MEM_RECLAIM | WQ_PERCPU, 0,
fm10k_driver_name);
if (!fm10k_workqueue)
return -ENOMEM;
diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
index 120d68654e3f..73d9416803f7 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
@@ -16690,7 +16690,7 @@ static int __init i40e_init_module(void)
* since we need to be able to guarantee forward progress even under
* memory pressure.
*/
- i40e_wq = alloc_workqueue("%s", 0, 0, i40e_driver_name);
+ i40e_wq = alloc_workqueue("%s", WQ_PERCPU, 0, i40e_driver_name);
if (!i40e_wq) {
pr_err("%s: Failed to create workqueue\n", i40e_driver_name);
return -ENOMEM;
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
index 0b27a695008b..524ff869a91b 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
@@ -1955,7 +1955,7 @@ static int cgx_probe(struct pci_dev *pdev, const struct pci_device_id *id)
/* init wq for processing linkup requests */
INIT_WORK(&cgx->cgx_cmd_work, cgx_lmac_linkup_work);
- cgx->cgx_cmd_workq = alloc_workqueue("cgx_cmd_workq", 0, 0);
+ cgx->cgx_cmd_workq = alloc_workqueue("cgx_cmd_workq", WQ_PERCPU, 0);
if (!cgx->cgx_cmd_workq) {
dev_err(dev, "alloc workqueue failed for cgx cmd");
err = -ENOMEM;
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c b/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c
index 655dd4726d36..2b0cf25ba517 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c
@@ -911,7 +911,7 @@ int rvu_mcs_init(struct rvu *rvu)
/* Initialize the wq for handling mcs interrupts */
INIT_LIST_HEAD(&rvu->mcs_intrq_head);
INIT_WORK(&rvu->mcs_intr_work, mcs_intr_handler_task);
- rvu->mcs_intr_wq = alloc_workqueue("mcs_intr_wq", 0, 0);
+ rvu->mcs_intr_wq = alloc_workqueue("mcs_intr_wq", WQ_PERCPU, 0);
if (!rvu->mcs_intr_wq) {
dev_err(rvu->dev, "mcs alloc workqueue failed\n");
return -ENOMEM;
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
index 992fa0b82e8d..ddae82ee8ccc 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
@@ -313,7 +313,7 @@ static int cgx_lmac_event_handler_init(struct rvu *rvu)
spin_lock_init(&rvu->cgx_evq_lock);
INIT_LIST_HEAD(&rvu->cgx_evq_head);
INIT_WORK(&rvu->cgx_evh_work, cgx_evhandler_task);
- rvu->cgx_evh_wq = alloc_workqueue("rvu_evh_wq", 0, 0);
+ rvu->cgx_evh_wq = alloc_workqueue("rvu_evh_wq", WQ_PERCPU, 0);
if (!rvu->cgx_evh_wq) {
dev_err(rvu->dev, "alloc workqueue failed");
return -ENOMEM;
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c
index 052ae5923e3a..258557978ab2 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c
@@ -375,7 +375,7 @@ int rvu_rep_install_mcam_rules(struct rvu *rvu)
spin_lock_init(&rvu->rep_evtq_lock);
INIT_LIST_HEAD(&rvu->rep_evtq_head);
INIT_WORK(&rvu->rep_evt_work, rvu_rep_wq_handler);
- rvu->rep_evt_wq = alloc_workqueue("rep_evt_wq", 0, 0);
+ rvu->rep_evt_wq = alloc_workqueue("rep_evt_wq", WQ_PERCPU, 0);
if (!rvu->rep_evt_wq) {
dev_err(rvu->dev, "REP workqueue allocation failed\n");
return -ENOMEM;
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
index fc59e50bafce..0fdc12b345be 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
@@ -798,7 +798,8 @@ int cn10k_ipsec_init(struct net_device *netdev)
pf->ipsec.sa_size = sa_size;
INIT_WORK(&pf->ipsec.sa_work, cn10k_ipsec_sa_wq_handler);
- pf->ipsec.sa_workq = alloc_workqueue("cn10k_ipsec_sa_workq", 0, 0);
+ pf->ipsec.sa_workq = alloc_workqueue("cn10k_ipsec_sa_workq",
+ WQ_PERCPU, 0);
if (!pf->ipsec.sa_workq) {
netdev_err(pf->netdev, "SA alloc workqueue failed\n");
return -ENOMEM;
diff --git a/drivers/net/ethernet/marvell/prestera/prestera_main.c b/drivers/net/ethernet/marvell/prestera/prestera_main.c
index 71ffb55d1fc4..65e7ef033bde 100644
--- a/drivers/net/ethernet/marvell/prestera/prestera_main.c
+++ b/drivers/net/ethernet/marvell/prestera/prestera_main.c
@@ -1500,7 +1500,7 @@ EXPORT_SYMBOL(prestera_device_unregister);
static int __init prestera_module_init(void)
{
- prestera_wq = alloc_workqueue("prestera", 0, 0);
+ prestera_wq = alloc_workqueue("prestera", WQ_PERCPU, 0);
if (!prestera_wq)
return -ENOMEM;
diff --git a/drivers/net/ethernet/marvell/prestera/prestera_pci.c b/drivers/net/ethernet/marvell/prestera/prestera_pci.c
index 35857dc19542..982a477ebb7f 100644
--- a/drivers/net/ethernet/marvell/prestera/prestera_pci.c
+++ b/drivers/net/ethernet/marvell/prestera/prestera_pci.c
@@ -898,7 +898,7 @@ static int prestera_pci_probe(struct pci_dev *pdev,
dev_info(fw->dev.dev, "Prestera FW is ready\n");
- fw->wq = alloc_workqueue("prestera_fw_wq", WQ_HIGHPRI, 1);
+ fw->wq = alloc_workqueue("prestera_fw_wq", WQ_HIGHPRI | WQ_PERCPU, 1);
if (!fw->wq) {
err = -ENOMEM;
goto err_wq_alloc;
diff --git a/drivers/net/ethernet/mellanox/mlxsw/core.c b/drivers/net/ethernet/mellanox/mlxsw/core.c
index 2bb2b77351bd..8a5d47a846c6 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/core.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/core.c
@@ -886,7 +886,7 @@ static int mlxsw_emad_init(struct mlxsw_core *mlxsw_core)
if (!(mlxsw_core->bus->features & MLXSW_BUS_F_TXRX))
return 0;
- emad_wq = alloc_workqueue("mlxsw_core_emad", 0, 0);
+ emad_wq = alloc_workqueue("mlxsw_core_emad", WQ_PERCPU, 0);
if (!emad_wq)
return -ENOMEM;
mlxsw_core->emad_wq = emad_wq;
@@ -3381,7 +3381,7 @@ static int __init mlxsw_core_module_init(void)
if (err)
return err;
- mlxsw_wq = alloc_workqueue(mlxsw_core_driver_name, 0, 0);
+ mlxsw_wq = alloc_workqueue(mlxsw_core_driver_name, WQ_PERCPU, 0);
if (!mlxsw_wq) {
err = -ENOMEM;
goto err_alloc_workqueue;
diff --git a/drivers/net/ethernet/netronome/nfp/nfp_main.c b/drivers/net/ethernet/netronome/nfp/nfp_main.c
index 71301dbd8fb5..48390b2fd44d 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_main.c
+++ b/drivers/net/ethernet/netronome/nfp/nfp_main.c
@@ -797,7 +797,7 @@ static int nfp_pci_probe(struct pci_dev *pdev,
pf->pdev = pdev;
pf->dev_info = dev_info;
- pf->wq = alloc_workqueue("nfp-%s", 0, 2, pci_name(pdev));
+ pf->wq = alloc_workqueue("nfp-%s", WQ_PERCPU, 2, pci_name(pdev));
if (!pf->wq) {
err = -ENOMEM;
goto err_pci_priv_unset;
diff --git a/drivers/net/ethernet/qlogic/qed/qed_main.c b/drivers/net/ethernet/qlogic/qed/qed_main.c
index 886061d7351a..d4685ad4b169 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_main.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_main.c
@@ -1214,7 +1214,8 @@ static int qed_slowpath_wq_start(struct qed_dev *cdev)
hwfn = &cdev->hwfns[i];
hwfn->slowpath_wq = alloc_workqueue("slowpath-%02x:%02x.%02x",
- 0, 0, cdev->pdev->bus->number,
+ WQ_PERCPU, 0,
+ cdev->pdev->bus->number,
PCI_SLOT(cdev->pdev->devfn),
hwfn->abs_pf_id);
diff --git a/drivers/net/ethernet/wiznet/w5100.c b/drivers/net/ethernet/wiznet/w5100.c
index b77f096eaf99..c5424d882135 100644
--- a/drivers/net/ethernet/wiznet/w5100.c
+++ b/drivers/net/ethernet/wiznet/w5100.c
@@ -1142,7 +1142,7 @@ int w5100_probe(struct device *dev, const struct w5100_ops *ops,
if (err < 0)
goto err_register;
- priv->xfer_wq = alloc_workqueue("%s", WQ_MEM_RECLAIM, 0,
+ priv->xfer_wq = alloc_workqueue("%s", WQ_MEM_RECLAIM | WQ_PERCPU, 0,
netdev_name(ndev));
if (!priv->xfer_wq) {
err = -ENOMEM;
diff --git a/drivers/net/fjes/fjes_main.c b/drivers/net/fjes/fjes_main.c
index 4a4ed2ccf72f..b63965d9a1ba 100644
--- a/drivers/net/fjes/fjes_main.c
+++ b/drivers/net/fjes/fjes_main.c
@@ -1364,14 +1364,15 @@ static int fjes_probe(struct platform_device *plat_dev)
adapter->force_reset = false;
adapter->open_guard = false;
- adapter->txrx_wq = alloc_workqueue(DRV_NAME "/txrx", WQ_MEM_RECLAIM, 0);
+ adapter->txrx_wq = alloc_workqueue(DRV_NAME "/txrx",
+ WQ_MEM_RECLAIM | WQ_PERCPU, 0);
if (unlikely(!adapter->txrx_wq)) {
err = -ENOMEM;
goto err_free_netdev;
}
adapter->control_wq = alloc_workqueue(DRV_NAME "/control",
- WQ_MEM_RECLAIM, 0);
+ WQ_MEM_RECLAIM | WQ_PERCPU, 0);
if (unlikely(!adapter->control_wq)) {
err = -ENOMEM;
goto err_free_txrx_wq;
diff --git a/drivers/net/wireguard/device.c b/drivers/net/wireguard/device.c
index 3ffeeba5dccf..f6cc68e433ee 100644
--- a/drivers/net/wireguard/device.c
+++ b/drivers/net/wireguard/device.c
@@ -333,7 +333,8 @@ static int wg_newlink(struct net_device *dev,
goto err_free_peer_hashtable;
wg->handshake_receive_wq = alloc_workqueue("wg-kex-%s",
- WQ_CPU_INTENSIVE | WQ_FREEZABLE, 0, dev->name);
+ WQ_CPU_INTENSIVE | WQ_FREEZABLE | WQ_PERCPU, 0,
+ dev->name);
if (!wg->handshake_receive_wq)
goto err_free_index_hashtable;
@@ -343,7 +344,8 @@ static int wg_newlink(struct net_device *dev,
goto err_destroy_handshake_receive;
wg->packet_crypt_wq = alloc_workqueue("wg-crypt-%s",
- WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM, 0, dev->name);
+ WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM | WQ_PERCPU, 0,
+ dev->name);
if (!wg->packet_crypt_wq)
goto err_destroy_handshake_send;
diff --git a/drivers/net/wireless/ath/ath6kl/usb.c b/drivers/net/wireless/ath/ath6kl/usb.c
index 5220809841a6..e281bfe40fa7 100644
--- a/drivers/net/wireless/ath/ath6kl/usb.c
+++ b/drivers/net/wireless/ath/ath6kl/usb.c
@@ -637,7 +637,7 @@ static struct ath6kl_usb *ath6kl_usb_create(struct usb_interface *interface)
ar_usb = kzalloc(sizeof(struct ath6kl_usb), GFP_KERNEL);
if (ar_usb == NULL)
return NULL;
- ar_usb->wq = alloc_workqueue("ath6kl_wq", 0, 0);
+ ar_usb->wq = alloc_workqueue("ath6kl_wq", WQ_PERCPU, 0);
if (!ar_usb->wq) {
kfree(ar_usb);
return NULL;
diff --git a/drivers/net/wireless/marvell/libertas/if_sdio.c b/drivers/net/wireless/marvell/libertas/if_sdio.c
index 524034699972..1e29e80cad61 100644
--- a/drivers/net/wireless/marvell/libertas/if_sdio.c
+++ b/drivers/net/wireless/marvell/libertas/if_sdio.c
@@ -1181,7 +1181,8 @@ static int if_sdio_probe(struct sdio_func *func,
spin_lock_init(&card->lock);
INIT_LIST_HEAD(&card->packets);
- card->workqueue = alloc_workqueue("libertas_sdio", WQ_MEM_RECLAIM, 0);
+ card->workqueue = alloc_workqueue("libertas_sdio",
+ WQ_MEM_RECLAIM | WQ_PERCPU, 0);
if (unlikely(!card->workqueue)) {
ret = -ENOMEM;
goto err_queue;
diff --git a/drivers/net/wireless/marvell/libertas/if_spi.c b/drivers/net/wireless/marvell/libertas/if_spi.c
index b722a6587fd3..699bae8971f8 100644
--- a/drivers/net/wireless/marvell/libertas/if_spi.c
+++ b/drivers/net/wireless/marvell/libertas/if_spi.c
@@ -1153,7 +1153,8 @@ static int if_spi_probe(struct spi_device *spi)
priv->fw_ready = 1;
/* Initialize interrupt handling stuff. */
- card->workqueue = alloc_workqueue("libertas_spi", WQ_MEM_RECLAIM, 0);
+ card->workqueue = alloc_workqueue("libertas_spi",
+ WQ_MEM_RECLAIM | WQ_PERCPU, 0);
if (!card->workqueue) {
err = -ENOMEM;
goto remove_card;
diff --git a/drivers/net/wireless/marvell/libertas_tf/main.c b/drivers/net/wireless/marvell/libertas_tf/main.c
index a57a11be57d8..1fc4b8c6e079 100644
--- a/drivers/net/wireless/marvell/libertas_tf/main.c
+++ b/drivers/net/wireless/marvell/libertas_tf/main.c
@@ -708,7 +708,7 @@ EXPORT_SYMBOL_GPL(lbtf_bcn_sent);
static int __init lbtf_init_module(void)
{
lbtf_deb_enter(LBTF_DEB_MAIN);
- lbtf_wq = alloc_workqueue("libertastf", WQ_MEM_RECLAIM, 0);
+ lbtf_wq = alloc_workqueue("libertastf", WQ_MEM_RECLAIM | WQ_PERCPU, 0);
if (lbtf_wq == NULL) {
printk(KERN_ERR "libertastf: couldn't create workqueue\n");
return -ENOMEM;
diff --git a/drivers/net/wireless/quantenna/qtnfmac/core.c b/drivers/net/wireless/quantenna/qtnfmac/core.c
index 825b05dd3271..38af6cdc2843 100644
--- a/drivers/net/wireless/quantenna/qtnfmac/core.c
+++ b/drivers/net/wireless/quantenna/qtnfmac/core.c
@@ -714,7 +714,8 @@ int qtnf_core_attach(struct qtnf_bus *bus)
goto error;
}
- bus->hprio_workqueue = alloc_workqueue("QTNF_HPRI", WQ_HIGHPRI, 0);
+ bus->hprio_workqueue = alloc_workqueue("QTNF_HPRI",
+ WQ_HIGHPRI | WQ_PERCPU, 0);
if (!bus->hprio_workqueue) {
pr_err("failed to alloc high prio workqueue\n");
ret = -ENOMEM;
diff --git a/drivers/net/wireless/realtek/rtlwifi/base.c b/drivers/net/wireless/realtek/rtlwifi/base.c
index 6189edc1d8d7..30d295f65602 100644
--- a/drivers/net/wireless/realtek/rtlwifi/base.c
+++ b/drivers/net/wireless/realtek/rtlwifi/base.c
@@ -445,7 +445,7 @@ static int _rtl_init_deferred_work(struct ieee80211_hw *hw)
struct rtl_priv *rtlpriv = rtl_priv(hw);
struct workqueue_struct *wq;
- wq = alloc_workqueue("%s", 0, 0, rtlpriv->cfg->name);
+ wq = alloc_workqueue("%s", WQ_PERCPU, 0, rtlpriv->cfg->name);
if (!wq)
return -ENOMEM;
diff --git a/drivers/net/wireless/realtek/rtw88/usb.c b/drivers/net/wireless/realtek/rtw88/usb.c
index c8092fa0d9f1..8338bfa0522e 100644
--- a/drivers/net/wireless/realtek/rtw88/usb.c
+++ b/drivers/net/wireless/realtek/rtw88/usb.c
@@ -909,7 +909,8 @@ static int rtw_usb_init_rx(struct rtw_dev *rtwdev)
struct sk_buff *rx_skb;
int i;
- rtwusb->rxwq = alloc_workqueue("rtw88_usb: rx wq", WQ_BH, 0);
+ rtwusb->rxwq = alloc_workqueue("rtw88_usb: rx wq", WQ_BH | WQ_PERCPU,
+ 0);
if (!rtwusb->rxwq) {
rtw_err(rtwdev, "failed to create RX work queue\n");
return -ENOMEM;
diff --git a/drivers/net/wireless/silabs/wfx/main.c b/drivers/net/wireless/silabs/wfx/main.c
index a61128debbad..dda36e41eed1 100644
--- a/drivers/net/wireless/silabs/wfx/main.c
+++ b/drivers/net/wireless/silabs/wfx/main.c
@@ -364,7 +364,7 @@ int wfx_probe(struct wfx_dev *wdev)
wdev->pdata.gpio_wakeup = NULL;
wdev->poll_irq = true;
- wdev->bh_wq = alloc_workqueue("wfx_bh_wq", WQ_HIGHPRI, 0);
+ wdev->bh_wq = alloc_workqueue("wfx_bh_wq", WQ_HIGHPRI | WQ_PERCPU, 0);
if (!wdev->bh_wq)
return -ENOMEM;
diff --git a/drivers/net/wireless/st/cw1200/bh.c b/drivers/net/wireless/st/cw1200/bh.c
index 3b4ded2ac801..3f07f4e1deee 100644
--- a/drivers/net/wireless/st/cw1200/bh.c
+++ b/drivers/net/wireless/st/cw1200/bh.c
@@ -54,8 +54,8 @@ int cw1200_register_bh(struct cw1200_common *priv)
int err = 0;
/* Realtime workqueue */
priv->bh_workqueue = alloc_workqueue("cw1200_bh",
- WQ_MEM_RECLAIM | WQ_HIGHPRI
- | WQ_CPU_INTENSIVE, 1);
+ WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_CPU_INTENSIVE | WQ_PERCPU,
+ 1);
if (!priv->bh_workqueue)
return -ENOMEM;
diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c
index 6a7a26085fc7..2310493203d3 100644
--- a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c
+++ b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c
@@ -1085,7 +1085,8 @@ static void t7xx_dpmaif_bat_release_work(struct work_struct *work)
int t7xx_dpmaif_bat_rel_wq_alloc(struct dpmaif_ctrl *dpmaif_ctrl)
{
dpmaif_ctrl->bat_release_wq = alloc_workqueue("dpmaif_bat_release_work_queue",
- WQ_MEM_RECLAIM, 1);
+ WQ_MEM_RECLAIM | WQ_PERCPU,
+ 1);
if (!dpmaif_ctrl->bat_release_wq)
return -ENOMEM;
diff --git a/drivers/net/wwan/wwan_hwsim.c b/drivers/net/wwan/wwan_hwsim.c
index b02befd1b6fb..733688cd4607 100644
--- a/drivers/net/wwan/wwan_hwsim.c
+++ b/drivers/net/wwan/wwan_hwsim.c
@@ -509,7 +509,7 @@ static int __init wwan_hwsim_init(void)
if (wwan_hwsim_devsnum < 0 || wwan_hwsim_devsnum > 128)
return -EINVAL;
- wwan_wq = alloc_workqueue("wwan_wq", 0, 0);
+ wwan_wq = alloc_workqueue("wwan_wq", WQ_PERCPU, 0);
if (!wwan_wq)
return -ENOMEM;
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 26c459f0198d..4cfdb56ed930 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -3022,6 +3022,8 @@ static int __init nvme_tcp_init_module(void)
if (wq_unbound)
wq_flags |= WQ_UNBOUND;
+ else
+ wq_flags |= WQ_PERCPU;
nvme_tcp_wq = alloc_workqueue("nvme_tcp_wq", wq_flags, 0);
if (!nvme_tcp_wq)
diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index 71f8d06998d6..9b0ea6b98a3d 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -1896,12 +1896,13 @@ static int __init nvmet_init(void)
if (!nvmet_bvec_cache)
return -ENOMEM;
- zbd_wq = alloc_workqueue("nvmet-zbd-wq", WQ_MEM_RECLAIM, 0);
+ zbd_wq = alloc_workqueue("nvmet-zbd-wq", WQ_MEM_RECLAIM | WQ_PERCPU,
+ 0);
if (!zbd_wq)
goto out_destroy_bvec_cache;
buffered_io_wq = alloc_workqueue("nvmet-buffered-io-wq",
- WQ_MEM_RECLAIM, 0);
+ WQ_MEM_RECLAIM | WQ_PERCPU, 0);
if (!buffered_io_wq)
goto out_free_zbd_work_queue;
diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
index 7318b736d414..29462766773a 100644
--- a/drivers/nvme/target/fc.c
+++ b/drivers/nvme/target/fc.c
@@ -795,9 +795,9 @@ nvmet_fc_alloc_target_queue(struct nvmet_fc_tgt_assoc *assoc,
if (!queue)
return NULL;
- queue->work_q = alloc_workqueue("ntfc%d.%d.%d", 0, 0,
- assoc->tgtport->fc_target_port.port_num,
- assoc->a_id, qid);
+ queue->work_q = alloc_workqueue("ntfc%d.%d.%d", WQ_PERCPU, 0,
+ assoc->tgtport->fc_target_port.port_num,
+ assoc->a_id, qid);
if (!queue->work_q)
goto out_free_queue;
diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
index f2d0c920269b..cf9435c5fa6c 100644
--- a/drivers/nvme/target/tcp.c
+++ b/drivers/nvme/target/tcp.c
@@ -2233,7 +2233,7 @@ static int __init nvmet_tcp_init(void)
int ret;
nvmet_tcp_wq = alloc_workqueue("nvmet_tcp_wq",
- WQ_MEM_RECLAIM | WQ_HIGHPRI, 0);
+ WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_PERCPU, 0);
if (!nvmet_tcp_wq)
return -ENOMEM;
diff --git a/drivers/pci/endpoint/functions/pci-epf-mhi.c b/drivers/pci/endpoint/functions/pci-epf-mhi.c
index 6643a88c7a0c..27de533f0571 100644
--- a/drivers/pci/endpoint/functions/pci-epf-mhi.c
+++ b/drivers/pci/endpoint/functions/pci-epf-mhi.c
@@ -686,7 +686,7 @@ static int pci_epf_mhi_dma_init(struct pci_epf_mhi *epf_mhi)
goto err_release_tx;
}
- epf_mhi->dma_wq = alloc_workqueue("pci_epf_mhi_dma_wq", 0, 0);
+ epf_mhi->dma_wq = alloc_workqueue("pci_epf_mhi_dma_wq", WQ_PERCPU, 0);
if (!epf_mhi->dma_wq) {
ret = -ENOMEM;
goto err_release_rx;
diff --git a/drivers/pci/endpoint/functions/pci-epf-ntb.c b/drivers/pci/endpoint/functions/pci-epf-ntb.c
index e01a98e74d21..5e4ae7ef6f05 100644
--- a/drivers/pci/endpoint/functions/pci-epf-ntb.c
+++ b/drivers/pci/endpoint/functions/pci-epf-ntb.c
@@ -2124,8 +2124,9 @@ static int __init epf_ntb_init(void)
{
int ret;
- kpcintb_workqueue = alloc_workqueue("kpcintb", WQ_MEM_RECLAIM |
- WQ_HIGHPRI, 0);
+ kpcintb_workqueue = alloc_workqueue("kpcintb",
+ WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_PERCPU,
+ 0);
ret = pci_epf_register_driver(&epf_ntb_driver);
if (ret) {
destroy_workqueue(kpcintb_workqueue);
diff --git a/drivers/pci/endpoint/functions/pci-epf-test.c b/drivers/pci/endpoint/functions/pci-epf-test.c
index 50eb4106369f..416d792c03b2 100644
--- a/drivers/pci/endpoint/functions/pci-epf-test.c
+++ b/drivers/pci/endpoint/functions/pci-epf-test.c
@@ -1036,7 +1036,8 @@ static int __init pci_epf_test_init(void)
int ret;
kpcitest_workqueue = alloc_workqueue("kpcitest",
- WQ_MEM_RECLAIM | WQ_HIGHPRI, 0);
+ WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_PERCPU,
+ 0);
if (!kpcitest_workqueue) {
pr_err("Failed to allocate the kpcitest work queue\n");
return -ENOMEM;
diff --git a/drivers/pci/endpoint/functions/pci-epf-vntb.c b/drivers/pci/endpoint/functions/pci-epf-vntb.c
index 874cb097b093..7ef693f38c30 100644
--- a/drivers/pci/endpoint/functions/pci-epf-vntb.c
+++ b/drivers/pci/endpoint/functions/pci-epf-vntb.c
@@ -1434,8 +1434,9 @@ static int __init epf_ntb_init(void)
{
int ret;
- kpcintb_workqueue = alloc_workqueue("kpcintb", WQ_MEM_RECLAIM |
- WQ_HIGHPRI, 0);
+ kpcintb_workqueue = alloc_workqueue("kpcintb",
+ WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_PERCPU,
+ 0);
ret = pci_epf_register_driver(&epf_ntb_driver);
if (ret) {
destroy_workqueue(kpcintb_workqueue);
diff --git a/drivers/pci/hotplug/pnv_php.c b/drivers/pci/hotplug/pnv_php.c
index 573a41869c15..4fe2bb1b7768 100644
--- a/drivers/pci/hotplug/pnv_php.c
+++ b/drivers/pci/hotplug/pnv_php.c
@@ -844,7 +844,8 @@ static void pnv_php_init_irq(struct pnv_php_slot *php_slot, int irq)
int ret;
/* Allocate workqueue */
- php_slot->wq = alloc_workqueue("pciehp-%s", 0, 0, php_slot->name);
+ php_slot->wq = alloc_workqueue("pciehp-%s", WQ_PERCPU, 0,
+ php_slot->name);
if (!php_slot->wq) {
SLOT_WARN(php_slot, "Cannot alloc workqueue\n");
pnv_php_disable_irq(php_slot, true);
diff --git a/drivers/pci/hotplug/shpchp_core.c b/drivers/pci/hotplug/shpchp_core.c
index 0c341453afc6..56308515ecba 100644
--- a/drivers/pci/hotplug/shpchp_core.c
+++ b/drivers/pci/hotplug/shpchp_core.c
@@ -80,7 +80,8 @@ static int init_slots(struct controller *ctrl)
slot->device = ctrl->slot_device_offset + i;
slot->number = ctrl->first_slot + (ctrl->slot_num_inc * i);
- slot->wq = alloc_workqueue("shpchp-%d", 0, 0, slot->number);
+ slot->wq = alloc_workqueue("shpchp-%d", WQ_PERCPU, 0,
+ slot->number);
if (!slot->wq) {
retval = -ENOMEM;
goto error_slot;
diff --git a/drivers/platform/surface/surface_acpi_notify.c b/drivers/platform/surface/surface_acpi_notify.c
index 3b30cfe3466b..a9dcb0bbe90e 100644
--- a/drivers/platform/surface/surface_acpi_notify.c
+++ b/drivers/platform/surface/surface_acpi_notify.c
@@ -862,7 +862,7 @@ static int __init san_init(void)
{
int ret;
- san_wq = alloc_workqueue("san_wq", 0, 0);
+ san_wq = alloc_workqueue("san_wq", WQ_PERCPU, 0);
if (!san_wq)
return -ENOMEM;
ret = platform_driver_register(&surface_acpi_notify);
diff --git a/drivers/power/supply/ab8500_btemp.c b/drivers/power/supply/ab8500_btemp.c
index b00c84fbc33c..e5202a7b6209 100644
--- a/drivers/power/supply/ab8500_btemp.c
+++ b/drivers/power/supply/ab8500_btemp.c
@@ -667,7 +667,8 @@ static int ab8500_btemp_bind(struct device *dev, struct device *master,
/* Create a work queue for the btemp */
di->btemp_wq =
- alloc_workqueue("ab8500_btemp_wq", WQ_MEM_RECLAIM, 0);
+ alloc_workqueue("ab8500_btemp_wq", WQ_MEM_RECLAIM | WQ_PERCPU,
+ 0);
if (di->btemp_wq == NULL) {
dev_err(dev, "failed to create work queue\n");
return -ENOMEM;
diff --git a/drivers/power/supply/ipaq_micro_battery.c b/drivers/power/supply/ipaq_micro_battery.c
index 7e0568a5353f..ff8573a5ca6d 100644
--- a/drivers/power/supply/ipaq_micro_battery.c
+++ b/drivers/power/supply/ipaq_micro_battery.c
@@ -232,7 +232,8 @@ static int micro_batt_probe(struct platform_device *pdev)
return -ENOMEM;
mb->micro = dev_get_drvdata(pdev->dev.parent);
- mb->wq = alloc_workqueue("ipaq-battery-wq", WQ_MEM_RECLAIM, 0);
+ mb->wq = alloc_workqueue("ipaq-battery-wq",
+ WQ_MEM_RECLAIM | WQ_PERCPU, 0);
if (!mb->wq)
return -ENOMEM;
diff --git a/drivers/rapidio/rio.c b/drivers/rapidio/rio.c
index 9544b8ee0c96..8b7fbfbbe70e 100644
--- a/drivers/rapidio/rio.c
+++ b/drivers/rapidio/rio.c
@@ -2097,7 +2097,7 @@ int rio_init_mports(void)
* TODO: Implement restart of discovery process for all or
* individual discovering mports.
*/
- rio_wq = alloc_workqueue("riodisc", 0, 0);
+ rio_wq = alloc_workqueue("riodisc", WQ_PERCPU, 0);
if (!rio_wq) {
pr_err("RIO: unable allocate rio_wq\n");
goto no_disc;
diff --git a/drivers/s390/char/tape_3590.c b/drivers/s390/char/tape_3590.c
index 0d484fe43d7e..aee11fece701 100644
--- a/drivers/s390/char/tape_3590.c
+++ b/drivers/s390/char/tape_3590.c
@@ -1670,7 +1670,7 @@ tape_3590_init(void)
DBF_EVENT(3, "3590 init\n");
- tape_3590_wq = alloc_workqueue("tape_3590", 0, 0);
+ tape_3590_wq = alloc_workqueue("tape_3590", WQ_PERCPU, 0);
if (!tape_3590_wq)
return -ENOMEM;
diff --git a/drivers/scsi/be2iscsi/be_main.c b/drivers/scsi/be2iscsi/be_main.c
index 7d1b767d87fb..1a3ba0293716 100644
--- a/drivers/scsi/be2iscsi/be_main.c
+++ b/drivers/scsi/be2iscsi/be_main.c
@@ -5633,7 +5633,8 @@ static int beiscsi_dev_probe(struct pci_dev *pcidev,
phba->ctrl.mcc_alloc_index = phba->ctrl.mcc_free_index = 0;
- phba->wq = alloc_workqueue("beiscsi_%02x_wq", WQ_MEM_RECLAIM, 1,
+ phba->wq = alloc_workqueue("beiscsi_%02x_wq",
+ WQ_MEM_RECLAIM | WQ_PERCPU, 1,
phba->shost->host_no);
if (!phba->wq) {
beiscsi_log(phba, KERN_ERR, BEISCSI_LOG_INIT,
diff --git a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c b/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
index de6574cccf58..3a9c429d1eb6 100644
--- a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
+++ b/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
@@ -2695,7 +2695,7 @@ static int __init bnx2fc_mod_init(void)
if (rc)
goto detach_ft;
- bnx2fc_wq = alloc_workqueue("bnx2fc", 0, 0);
+ bnx2fc_wq = alloc_workqueue("bnx2fc", WQ_PERCPU, 0);
if (!bnx2fc_wq) {
rc = -ENOMEM;
goto release_bt;
diff --git a/drivers/scsi/device_handler/scsi_dh_alua.c b/drivers/scsi/device_handler/scsi_dh_alua.c
index 1bf5948d1188..6fd89ae33059 100644
--- a/drivers/scsi/device_handler/scsi_dh_alua.c
+++ b/drivers/scsi/device_handler/scsi_dh_alua.c
@@ -1300,7 +1300,7 @@ static int __init alua_init(void)
{
int r;
- kaluad_wq = alloc_workqueue("kaluad", WQ_MEM_RECLAIM, 0);
+ kaluad_wq = alloc_workqueue("kaluad", WQ_MEM_RECLAIM | WQ_PERCPU, 0);
if (!kaluad_wq)
return -ENOMEM;
diff --git a/drivers/scsi/fcoe/fcoe.c b/drivers/scsi/fcoe/fcoe.c
index b911fdb387f3..0f749ae781d6 100644
--- a/drivers/scsi/fcoe/fcoe.c
+++ b/drivers/scsi/fcoe/fcoe.c
@@ -2458,7 +2458,7 @@ static int __init fcoe_init(void)
unsigned int cpu;
int rc = 0;
- fcoe_wq = alloc_workqueue("fcoe", 0, 0);
+ fcoe_wq = alloc_workqueue("fcoe", WQ_PERCPU, 0);
if (!fcoe_wq)
return -ENOMEM;
diff --git a/drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c b/drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c
index 9e42230e42b8..cde265752e0d 100644
--- a/drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c
+++ b/drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c
@@ -3533,7 +3533,8 @@ static int ibmvscsis_probe(struct vio_dev *vdev,
init_completion(&vscsi->wait_idle);
init_completion(&vscsi->unconfig);
- vscsi->work_q = alloc_workqueue("ibmvscsis%s", WQ_MEM_RECLAIM, 1,
+ vscsi->work_q = alloc_workqueue("ibmvscsis%s",
+ WQ_MEM_RECLAIM | WQ_PERCPU, 1,
dev_name(&vdev->dev));
if (!vscsi->work_q) {
rc = -ENOMEM;
diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
index 90021653e59e..becd7f081e5f 100644
--- a/drivers/scsi/lpfc/lpfc_init.c
+++ b/drivers/scsi/lpfc/lpfc_init.c
@@ -7938,7 +7938,7 @@ lpfc_sli4_driver_resource_setup(struct lpfc_hba *phba)
/* Allocate all driver workqueues here */
/* The lpfc_wq workqueue for deferred irq use */
- phba->wq = alloc_workqueue("lpfc_wq", WQ_MEM_RECLAIM, 0);
+ phba->wq = alloc_workqueue("lpfc_wq", WQ_MEM_RECLAIM | WQ_PERCPU, 0);
if (!phba->wq)
return -ENOMEM;
diff --git a/drivers/scsi/pm8001/pm8001_init.c b/drivers/scsi/pm8001/pm8001_init.c
index 599410bcdfea..9f00fc10dbaf 100644
--- a/drivers/scsi/pm8001/pm8001_init.c
+++ b/drivers/scsi/pm8001/pm8001_init.c
@@ -1533,7 +1533,7 @@ static int __init pm8001_init(void)
if (pm8001_use_tasklet && !pm8001_use_msix)
pm8001_use_tasklet = false;
- pm8001_wq = alloc_workqueue("pm80xx", 0, 0);
+ pm8001_wq = alloc_workqueue("pm80xx", WQ_PERCPU, 0);
if (!pm8001_wq)
goto err;
diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c
index 436bd29d5eba..df016beaa789 100644
--- a/drivers/scsi/qedf/qedf_main.c
+++ b/drivers/scsi/qedf/qedf_main.c
@@ -3374,7 +3374,8 @@ static int __qedf_probe(struct pci_dev *pdev, int mode)
QEDF_INFO(&(qedf->dbg_ctx), QEDF_LOG_INFO, "qedf->io_mempool=%p.\n",
qedf->io_mempool);
- qedf->link_update_wq = alloc_workqueue("qedf_%u_link", WQ_MEM_RECLAIM,
+ qedf->link_update_wq = alloc_workqueue("qedf_%u_link",
+ WQ_MEM_RECLAIM | WQ_PERCPU,
1, qedf->lport->host->host_no);
INIT_DELAYED_WORK(&qedf->link_update, qedf_handle_link_update);
INIT_DELAYED_WORK(&qedf->link_recovery, qedf_link_recovery);
@@ -3585,7 +3586,8 @@ static int __qedf_probe(struct pci_dev *pdev, int mode)
ether_addr_copy(params.ll2_mac_address, qedf->mac);
/* Start LL2 processing thread */
- qedf->ll2_recv_wq = alloc_workqueue("qedf_%d_ll2", WQ_MEM_RECLAIM, 1,
+ qedf->ll2_recv_wq = alloc_workqueue("qedf_%d_ll2",
+ WQ_MEM_RECLAIM | WQ_PERCPU, 1,
host->host_no);
if (!qedf->ll2_recv_wq) {
QEDF_ERR(&(qedf->dbg_ctx), "Failed to LL2 workqueue.\n");
@@ -3628,7 +3630,8 @@ static int __qedf_probe(struct pci_dev *pdev, int mode)
}
qedf->timer_work_queue = alloc_workqueue("qedf_%u_timer",
- WQ_MEM_RECLAIM, 1, qedf->lport->host->host_no);
+ WQ_MEM_RECLAIM | WQ_PERCPU, 1,
+ qedf->lport->host->host_no);
if (!qedf->timer_work_queue) {
QEDF_ERR(&(qedf->dbg_ctx), "Failed to start timer "
"workqueue.\n");
@@ -3641,7 +3644,8 @@ static int __qedf_probe(struct pci_dev *pdev, int mode)
sprintf(host_buf, "qedf_%u_dpc",
qedf->lport->host->host_no);
qedf->dpc_wq =
- alloc_workqueue("%s", WQ_MEM_RECLAIM, 1, host_buf);
+ alloc_workqueue("%s", WQ_MEM_RECLAIM | WQ_PERCPU, 1,
+ host_buf);
}
INIT_DELAYED_WORK(&qedf->recovery_work, qedf_recovery_handler);
@@ -4177,7 +4181,8 @@ static int __init qedf_init(void)
goto err3;
}
- qedf_io_wq = alloc_workqueue("%s", WQ_MEM_RECLAIM, 1, "qedf_io_wq");
+ qedf_io_wq = alloc_workqueue("%s", WQ_MEM_RECLAIM | WQ_PERCPU, 1,
+ "qedf_io_wq");
if (!qedf_io_wq) {
QEDF_ERR(NULL, "Could not create qedf_io_wq.\n");
goto err4;
diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c
index e87885cc701c..4cccb62639e0 100644
--- a/drivers/scsi/qedi/qedi_main.c
+++ b/drivers/scsi/qedi/qedi_main.c
@@ -2776,7 +2776,7 @@ static int __qedi_probe(struct pci_dev *pdev, int mode)
}
qedi->offload_thread = alloc_workqueue("qedi_ofld%d",
- WQ_MEM_RECLAIM,
+ WQ_MEM_RECLAIM | WQ_PERCPU,
1, qedi->shost->host_no);
if (!qedi->offload_thread) {
QEDI_ERR(&qedi->dbg_ctx,
diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
index 87eeb8607b60..cecbeb16f54e 100644
--- a/drivers/scsi/qla2xxx/qla_os.c
+++ b/drivers/scsi/qla2xxx/qla_os.c
@@ -3409,7 +3409,7 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
"req->req_q_in=%p req->req_q_out=%p rsp->rsp_q_in=%p rsp->rsp_q_out=%p.\n",
req->req_q_in, req->req_q_out, rsp->rsp_q_in, rsp->rsp_q_out);
- ha->wq = alloc_workqueue("qla2xxx_wq", WQ_MEM_RECLAIM, 0);
+ ha->wq = alloc_workqueue("qla2xxx_wq", WQ_MEM_RECLAIM | WQ_PERCPU, 0);
if (unlikely(!ha->wq)) {
ret = -ENOMEM;
goto probe_failed;
diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c
index 11eadb3bd36e..5ebe89d548a8 100644
--- a/drivers/scsi/qla2xxx/qla_target.c
+++ b/drivers/scsi/qla2xxx/qla_target.c
@@ -7300,7 +7300,7 @@ int __init qlt_init(void)
goto out_plogi_cachep;
}
- qla_tgt_wq = alloc_workqueue("qla_tgt_wq", 0, 0);
+ qla_tgt_wq = alloc_workqueue("qla_tgt_wq", WQ_PERCPU, 0);
if (!qla_tgt_wq) {
ql_log(ql_log_fatal, NULL, 0xe06f,
"alloc_workqueue for qla_tgt_wq failed\n");
diff --git a/drivers/scsi/qla2xxx/tcm_qla2xxx.c b/drivers/scsi/qla2xxx/tcm_qla2xxx.c
index ceaf1c7b1d17..79374bf5548d 100644
--- a/drivers/scsi/qla2xxx/tcm_qla2xxx.c
+++ b/drivers/scsi/qla2xxx/tcm_qla2xxx.c
@@ -1884,7 +1884,7 @@ static int tcm_qla2xxx_register_configfs(void)
goto out_fabric;
tcm_qla2xxx_free_wq = alloc_workqueue("tcm_qla2xxx_free",
- WQ_MEM_RECLAIM, 0);
+ WQ_MEM_RECLAIM | WQ_PERCPU, 0);
if (!tcm_qla2xxx_free_wq) {
ret = -ENOMEM;
goto out_fabric_npiv;
diff --git a/drivers/scsi/qla4xxx/ql4_os.c b/drivers/scsi/qla4xxx/ql4_os.c
index d540d66e6ffc..7ab3b27c8793 100644
--- a/drivers/scsi/qla4xxx/ql4_os.c
+++ b/drivers/scsi/qla4xxx/ql4_os.c
@@ -8815,7 +8815,8 @@ static int qla4xxx_probe_adapter(struct pci_dev *pdev,
}
INIT_WORK(&ha->dpc_work, qla4xxx_do_dpc);
- ha->task_wq = alloc_workqueue("qla4xxx_%lu_task", WQ_MEM_RECLAIM, 1,
+ ha->task_wq = alloc_workqueue("qla4xxx_%lu_task",
+ WQ_MEM_RECLAIM | WQ_PERCPU, 1,
ha->host_no);
if (!ha->task_wq) {
ql4_printk(KERN_WARNING, ha, "Unable to start task thread!\n");
diff --git a/drivers/scsi/scsi_transport_fc.c b/drivers/scsi/scsi_transport_fc.c
index 082f76e76721..e750682893b6 100644
--- a/drivers/scsi/scsi_transport_fc.c
+++ b/drivers/scsi/scsi_transport_fc.c
@@ -441,13 +441,14 @@ static int fc_host_setup(struct transport_container *tc, struct device *dev,
fc_host->next_vport_number = 0;
fc_host->npiv_vports_inuse = 0;
- fc_host->work_q = alloc_workqueue("fc_wq_%d", 0, 0, shost->host_no);
+ fc_host->work_q = alloc_workqueue("fc_wq_%d", WQ_PERCPU, 0,
+ shost->host_no);
if (!fc_host->work_q)
return -ENOMEM;
fc_host->dev_loss_tmo = fc_dev_loss_tmo;
- fc_host->devloss_work_q = alloc_workqueue("fc_dl_%d", 0, 0,
- shost->host_no);
+ fc_host->devloss_work_q = alloc_workqueue("fc_dl_%d", WQ_PERCPU, 0,
+ shost->host_no);
if (!fc_host->devloss_work_q) {
destroy_workqueue(fc_host->work_q);
fc_host->work_q = NULL;
diff --git a/drivers/soc/fsl/qbman/qman.c b/drivers/soc/fsl/qbman/qman.c
index 4dc8aba33d9b..a4890542b933 100644
--- a/drivers/soc/fsl/qbman/qman.c
+++ b/drivers/soc/fsl/qbman/qman.c
@@ -1073,7 +1073,7 @@ EXPORT_SYMBOL(qman_portal_set_iperiod);
int qman_wq_alloc(void)
{
- qm_portal_wq = alloc_workqueue("qman_portal_wq", 0, 1);
+ qm_portal_wq = alloc_workqueue("qman_portal_wq", WQ_PERCPU, 1);
if (!qm_portal_wq)
return -ENOMEM;
return 0;
diff --git a/drivers/staging/greybus/sdio.c b/drivers/staging/greybus/sdio.c
index 5326ea372b24..12c36a5e1d8c 100644
--- a/drivers/staging/greybus/sdio.c
+++ b/drivers/staging/greybus/sdio.c
@@ -806,7 +806,7 @@ static int gb_sdio_probe(struct gbphy_device *gbphy_dev,
mutex_init(&host->lock);
spin_lock_init(&host->xfer);
- host->mrq_workqueue = alloc_workqueue("mmc-%s", 0, 1,
+ host->mrq_workqueue = alloc_workqueue("mmc-%s", WQ_PERCPU, 1,
dev_name(&gbphy_dev->dev));
if (!host->mrq_workqueue) {
ret = -ENOMEM;
diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
index 05d29201b730..cb0758a19973 100644
--- a/drivers/target/target_core_transport.c
+++ b/drivers/target/target_core_transport.c
@@ -126,12 +126,12 @@ int init_se_kmem_caches(void)
}
target_completion_wq = alloc_workqueue("target_completion",
- WQ_MEM_RECLAIM, 0);
+ WQ_MEM_RECLAIM | WQ_PERCPU, 0);
if (!target_completion_wq)
goto out_free_lba_map_mem_cache;
target_submission_wq = alloc_workqueue("target_submission",
- WQ_MEM_RECLAIM, 0);
+ WQ_MEM_RECLAIM | WQ_PERCPU, 0);
if (!target_submission_wq)
goto out_free_completion_wq;
diff --git a/drivers/target/target_core_xcopy.c b/drivers/target/target_core_xcopy.c
index 877ce58c0a70..93534a6e14b7 100644
--- a/drivers/target/target_core_xcopy.c
+++ b/drivers/target/target_core_xcopy.c
@@ -462,7 +462,7 @@ static const struct target_core_fabric_ops xcopy_pt_tfo = {
int target_xcopy_setup_pt(void)
{
- xcopy_wq = alloc_workqueue("xcopy_wq", WQ_MEM_RECLAIM, 0);
+ xcopy_wq = alloc_workqueue("xcopy_wq", WQ_MEM_RECLAIM | WQ_PERCPU, 0);
if (!xcopy_wq) {
pr_err("Unable to allocate xcopy_wq\n");
return -ENOMEM;
diff --git a/drivers/target/tcm_fc/tfc_conf.c b/drivers/target/tcm_fc/tfc_conf.c
index 639fc358ed0f..f686d95d3273 100644
--- a/drivers/target/tcm_fc/tfc_conf.c
+++ b/drivers/target/tcm_fc/tfc_conf.c
@@ -250,7 +250,7 @@ static struct se_portal_group *ft_add_tpg(struct se_wwn *wwn, const char *name)
tpg->lport_wwn = ft_wwn;
INIT_LIST_HEAD(&tpg->lun_list);
- wq = alloc_workqueue("tcm_fc", 0, 1);
+ wq = alloc_workqueue("tcm_fc", WQ_PERCPU, 1);
if (!wq) {
kfree(tpg);
return NULL;
diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
index 0e1dd6ef60a7..c06357d31ce7 100644
--- a/drivers/usb/core/hub.c
+++ b/drivers/usb/core/hub.c
@@ -6038,7 +6038,7 @@ int usb_hub_init(void)
* device was gone before the EHCI controller had handed its port
* over to the companion full-speed controller.
*/
- hub_wq = alloc_workqueue("usb_hub_wq", WQ_FREEZABLE, 0);
+ hub_wq = alloc_workqueue("usb_hub_wq", WQ_FREEZABLE | WQ_PERCPU, 0);
if (hub_wq)
return 0;
diff --git a/drivers/usb/gadget/function/f_hid.c b/drivers/usb/gadget/function/f_hid.c
index 740311c4fa24..637ee8de8c00 100644
--- a/drivers/usb/gadget/function/f_hid.c
+++ b/drivers/usb/gadget/function/f_hid.c
@@ -1254,8 +1254,7 @@ static int hidg_bind(struct usb_configuration *c, struct usb_function *f)
INIT_WORK(&hidg->work, get_report_workqueue_handler);
hidg->workqueue = alloc_workqueue("report_work",
- WQ_FREEZABLE |
- WQ_MEM_RECLAIM,
+ WQ_FREEZABLE | WQ_MEM_RECLAIM | WQ_PERCPU,
1);
if (!hidg->workqueue) {
diff --git a/drivers/usb/storage/uas.c b/drivers/usb/storage/uas.c
index 4ed0dc19afe0..0657f5f7a51f 100644
--- a/drivers/usb/storage/uas.c
+++ b/drivers/usb/storage/uas.c
@@ -1265,7 +1265,7 @@ static int __init uas_init(void)
{
int rv;
- workqueue = alloc_workqueue("uas", WQ_MEM_RECLAIM, 0);
+ workqueue = alloc_workqueue("uas", WQ_MEM_RECLAIM | WQ_PERCPU, 0);
if (!workqueue)
return -ENOMEM;
diff --git a/drivers/usb/typec/anx7411.c b/drivers/usb/typec/anx7411.c
index 0ae0a5ee3fae..2e8ae1d2faf9 100644
--- a/drivers/usb/typec/anx7411.c
+++ b/drivers/usb/typec/anx7411.c
@@ -1516,8 +1516,7 @@ static int anx7411_i2c_probe(struct i2c_client *client)
INIT_WORK(&plat->work, anx7411_work_func);
plat->workqueue = alloc_workqueue("anx7411_work",
- WQ_FREEZABLE |
- WQ_MEM_RECLAIM,
+ WQ_FREEZABLE | WQ_MEM_RECLAIM | WQ_PERCPU,
1);
if (!plat->workqueue) {
dev_err(dev, "fail to create work queue\n");
diff --git a/drivers/vdpa/vdpa_user/vduse_dev.c b/drivers/vdpa/vdpa_user/vduse_dev.c
index 6a9a37351310..958a1454de2c 100644
--- a/drivers/vdpa/vdpa_user/vduse_dev.c
+++ b/drivers/vdpa/vdpa_user/vduse_dev.c
@@ -2172,7 +2172,8 @@ static int vduse_init(void)
if (!vduse_irq_wq)
goto err_wq;
- vduse_irq_bound_wq = alloc_workqueue("vduse-irq-bound", WQ_HIGHPRI, 0);
+ vduse_irq_bound_wq = alloc_workqueue("vduse-irq-bound",
+ WQ_HIGHPRI | WQ_PERCPU, 0);
if (!vduse_irq_bound_wq)
goto err_bound_wq;
diff --git a/drivers/virt/acrn/irqfd.c b/drivers/virt/acrn/irqfd.c
index b7da24ca1475..7dfc2b3d39cb 100644
--- a/drivers/virt/acrn/irqfd.c
+++ b/drivers/virt/acrn/irqfd.c
@@ -208,7 +208,8 @@ int acrn_irqfd_init(struct acrn_vm *vm)
{
INIT_LIST_HEAD(&vm->irqfds);
mutex_init(&vm->irqfds_lock);
- vm->irqfd_wq = alloc_workqueue("acrn_irqfd-%u", 0, 0, vm->vmid);
+ vm->irqfd_wq = alloc_workqueue("acrn_irqfd-%u", WQ_PERCPU, 0,
+ vm->vmid);
if (!vm->irqfd_wq)
return -ENOMEM;
diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
index 89da052f4f68..d26fd2d910ac 100644
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_balloon.c
@@ -987,7 +987,8 @@ static int virtballoon_probe(struct virtio_device *vdev)
goto out_del_vqs;
}
vb->balloon_wq = alloc_workqueue("balloon-wq",
- WQ_FREEZABLE | WQ_CPU_INTENSIVE, 0);
+ WQ_FREEZABLE | WQ_CPU_INTENSIVE | WQ_PERCPU,
+ 0);
if (!vb->balloon_wq) {
err = -ENOMEM;
goto out_del_vqs;
diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index 13a10f3294a8..db29356cb92e 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -1093,7 +1093,8 @@ static long privcmd_ioctl_irqfd(struct file *file, void __user *udata)
static int privcmd_irqfd_init(void)
{
- irqfd_cleanup_wq = alloc_workqueue("privcmd-irqfd-cleanup", 0, 0);
+ irqfd_cleanup_wq = alloc_workqueue("privcmd-irqfd-cleanup", WQ_PERCPU,
+ 0);
if (!irqfd_cleanup_wq)
return -ENOMEM;
diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
index 90258f228ea5..904f33655cff 100644
--- a/include/linux/workqueue.h
+++ b/include/linux/workqueue.h
@@ -409,7 +409,7 @@ enum wq_flags {
__WQ_LEGACY = 1 << 18, /* internal: create*_workqueue() */
/* BH wq only allows the following flags */
- __WQ_BH_ALLOWS = WQ_BH | WQ_HIGHPRI,
+ __WQ_BH_ALLOWS = WQ_BH | WQ_HIGHPRI | WQ_PERCPU,
};
enum wq_consts {
@@ -568,7 +568,7 @@ alloc_workqueue_lockdep_map(const char *fmt, unsigned int flags, int max_active,
alloc_workqueue(fmt, WQ_UNBOUND | __WQ_ORDERED | (flags), 1, ##args)
#define create_workqueue(name) \
- alloc_workqueue("%s", __WQ_LEGACY | WQ_MEM_RECLAIM, 1, (name))
+ alloc_workqueue("%s", __WQ_LEGACY | WQ_MEM_RECLAIM | WQ_PERCPU, 1, (name))
#define create_freezable_workqueue(name) \
alloc_workqueue("%s", __WQ_LEGACY | WQ_FREEZABLE | WQ_UNBOUND | \
WQ_MEM_RECLAIM, 1, (name))
diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
index b8699ec4d766..f3da9400c178 100644
--- a/kernel/bpf/cgroup.c
+++ b/kernel/bpf/cgroup.c
@@ -34,7 +34,8 @@ static struct workqueue_struct *cgroup_bpf_destroy_wq;
static int __init cgroup_bpf_wq_init(void)
{
- cgroup_bpf_destroy_wq = alloc_workqueue("cgroup_bpf_destroy", 0, 1);
+ cgroup_bpf_destroy_wq = alloc_workqueue("cgroup_bpf_destroy",
+ WQ_PERCPU, 1);
if (!cgroup_bpf_destroy_wq)
panic("Failed to alloc workqueue for cgroup bpf destroy.\n");
return 0;
diff --git a/kernel/cgroup/cgroup-v1.c b/kernel/cgroup/cgroup-v1.c
index fa24c032ed6f..779d586e191c 100644
--- a/kernel/cgroup/cgroup-v1.c
+++ b/kernel/cgroup/cgroup-v1.c
@@ -1321,7 +1321,7 @@ static int __init cgroup1_wq_init(void)
* Cap @max_active to 1 too.
*/
cgroup_pidlist_destroy_wq = alloc_workqueue("cgroup_pidlist_destroy",
- 0, 1);
+ WQ_PERCPU, 1);
BUG_ON(!cgroup_pidlist_destroy_wq);
return 0;
}
diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index 1e39355194fd..54a66cf0cef9 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -6281,7 +6281,7 @@ static int __init cgroup_wq_init(void)
* We would prefer to do this in cgroup_init() above, but that
* is called before init_workqueues(): so leave this until after.
*/
- cgroup_destroy_wq = alloc_workqueue("cgroup_destroy", 0, 1);
+ cgroup_destroy_wq = alloc_workqueue("cgroup_destroy", WQ_PERCPU, 1);
BUG_ON(!cgroup_destroy_wq);
return 0;
}
diff --git a/kernel/padata.c b/kernel/padata.c
index 76b39fc8b326..26cc9b748b3d 100644
--- a/kernel/padata.c
+++ b/kernel/padata.c
@@ -1030,8 +1030,9 @@ struct padata_instance *padata_alloc(const char *name)
cpus_read_lock();
- pinst->serial_wq = alloc_workqueue("%s_serial", WQ_MEM_RECLAIM |
- WQ_CPU_INTENSIVE, 1, name);
+ pinst->serial_wq = alloc_workqueue("%s_serial",
+ WQ_MEM_RECLAIM | WQ_CPU_INTENSIVE | WQ_PERCPU,
+ 1, name);
if (!pinst->serial_wq)
goto err_put_cpus;
diff --git a/kernel/power/main.c b/kernel/power/main.c
index 6254814d4817..eb55ef540032 100644
--- a/kernel/power/main.c
+++ b/kernel/power/main.c
@@ -1012,7 +1012,7 @@ EXPORT_SYMBOL_GPL(pm_wq);
static int __init pm_start_workqueue(void)
{
- pm_wq = alloc_workqueue("pm", WQ_FREEZABLE, 0);
+ pm_wq = alloc_workqueue("pm", WQ_FREEZABLE | WQ_PERCPU, 0);
return pm_wq ? 0 : -ENOMEM;
}
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 659f83e71048..e763c3d1e851 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -4829,10 +4829,10 @@ void __init rcu_init(void)
rcutree_online_cpu(cpu);
/* Create workqueue for Tree SRCU and for expedited GPs. */
- rcu_gp_wq = alloc_workqueue("rcu_gp", WQ_MEM_RECLAIM, 0);
+ rcu_gp_wq = alloc_workqueue("rcu_gp", WQ_MEM_RECLAIM | WQ_PERCPU, 0);
WARN_ON(!rcu_gp_wq);
- sync_wq = alloc_workqueue("sync_wq", WQ_MEM_RECLAIM, 0);
+ sync_wq = alloc_workqueue("sync_wq", WQ_MEM_RECLAIM | WQ_PERCPU, 0);
WARN_ON(!sync_wq);
/* Fill in default value for rcutree.qovld boot parameter. */
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 89839eebb359..c2593868c5f1 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -5675,8 +5675,26 @@ static struct workqueue_struct *__alloc_workqueue(const char *fmt,
}
/* see the comment above the definition of WQ_POWER_EFFICIENT */
- if ((flags & WQ_POWER_EFFICIENT) && wq_power_efficient)
- flags |= WQ_UNBOUND;
+ if (flags & WQ_POWER_EFFICIENT) {
+ if (wq_power_efficient)
+ flags |= WQ_UNBOUND;
+ else
+ flags |= WQ_PERCPU;
+ }
+
+ /* one among WQ_UNBOUND and WQ_PERCPU should always be present */
+ if ((flags & WQ_UNBOUND) && (flags & WQ_PERCPU)) {
+ pr_warn_once("WQ_UNBOUND is the complement of the new WQ_PERCPU, "
+ "only one of them should be present. Fall back to the old behavior, removing WQ_UNBOUND.\n");
+
+ flags &= (~WQ_UNBOUND);
+ }
+
+ if (!(flags & WQ_PERCPU) && !(flags & WQ_UNBOUND)) {
+ pr_warn_once("One between WQ_PERCPU and WQ_UNBOUND should be present. WQ_PERCPU will be added to keep the old behavior.\n");
+
+ flags |= WQ_PERCPU;
+ }
/* allocate wq and format name */
if (flags & WQ_UNBOUND)
@@ -7819,22 +7837,23 @@ void __init workqueue_init_early(void)
ordered_wq_attrs[i] = attrs;
}
- system_wq = alloc_workqueue("events", 0, 0);
- system_percpu_wq = alloc_workqueue("events", 0, 0);
- system_highpri_wq = alloc_workqueue("events_highpri", WQ_HIGHPRI, 0);
- system_long_wq = alloc_workqueue("events_long", 0, 0);
+ system_wq = alloc_workqueue("events", WQ_PERCPU, 0);
+ system_percpu_wq = alloc_workqueue("events", WQ_PERCPU, 0);
+ system_highpri_wq = alloc_workqueue("events_highpri",
+ WQ_HIGHPRI | WQ_PERCPU, 0);
+ system_long_wq = alloc_workqueue("events_long", WQ_PERCPU, 0);
system_unbound_wq = alloc_workqueue("events_unbound", WQ_UNBOUND, WQ_MAX_ACTIVE);
system_dfl_wq = alloc_workqueue("events_unbound", WQ_UNBOUND, WQ_MAX_ACTIVE);
system_freezable_wq = alloc_workqueue("events_freezable",
- WQ_FREEZABLE, 0);
+ WQ_FREEZABLE | WQ_PERCPU, 0);
system_power_efficient_wq = alloc_workqueue("events_power_efficient",
WQ_POWER_EFFICIENT, 0);
system_freezable_power_efficient_wq = alloc_workqueue("events_freezable_pwr_efficient",
- WQ_FREEZABLE | WQ_POWER_EFFICIENT,
- 0);
- system_bh_wq = alloc_workqueue("events_bh", WQ_BH, 0);
+ WQ_FREEZABLE | WQ_POWER_EFFICIENT, 0);
+ system_bh_wq = alloc_workqueue("events_bh", WQ_BH | WQ_PERCPU, 0);
system_bh_highpri_wq = alloc_workqueue("events_bh_highpri",
- WQ_BH | WQ_HIGHPRI, 0);
+ WQ_BH | WQ_HIGHPRI | WQ_PERCPU, 0);
+
BUG_ON(!system_wq || !system_percpu_wq || !system_highpri_wq || !system_long_wq ||
!system_unbound_wq || !system_dfl_wq || !system_freezable_wq ||
!system_power_efficient_wq ||
diff --git a/virt/kvm/eventfd.c b/virt/kvm/eventfd.c
index 11e5d1e3f12e..4f0bdd67edb2 100644
--- a/virt/kvm/eventfd.c
+++ b/virt/kvm/eventfd.c
@@ -662,7 +662,7 @@ bool kvm_notify_irqfd_resampler(struct kvm *kvm,
*/
int kvm_irqfd_init(void)
{
- irqfd_cleanup_wq = alloc_workqueue("kvm-irqfd-cleanup", 0, 0);
+ irqfd_cleanup_wq = alloc_workqueue("kvm-irqfd-cleanup", WQ_PERCPU, 0);
if (!irqfd_cleanup_wq)
return -ENOMEM;
--
2.49.0
* [PATCH v1 10/10] [Doc] Workqueue: WQ_UNBOUND doc upgraded
2025-06-25 10:49 [PATCH v1 00/10] Workqueue: replace system wq and change alloc_workqueue callers Marco Crivellari
` (8 preceding siblings ...)
2025-06-25 10:49 ` [PATCH v1 09/10] Workqueue: WQ_PERCPU added to all the remaining users Marco Crivellari
@ 2025-06-25 10:49 ` Marco Crivellari
2025-07-11 10:32 ` [PATCH v1 00/10] Workqueue: replace system wq and change alloc_workqueue callers Marco Crivellari
10 siblings, 0 replies; 12+ messages in thread
From: Marco Crivellari @ 2025-06-25 10:49 UTC (permalink / raw)
To: linux-kernel
Cc: Tejun Heo, Lai Jiangshan, Thomas Gleixner, Frederic Weisbecker,
Sebastian Andrzej Siewior, Marco Crivellari, Michal Hocko
Doc upgraded to mention future removal of WQ_UNBOUND.
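For illustration only (this sketch is not part of the patch and the identifiers are hypothetical), the convention the series documents looks roughly like this for a caller without locality requirements:

#include <linux/workqueue.h>

static struct workqueue_struct *example_wq;	/* hypothetical driver workqueue */

static int __init example_init(void)
{
	/*
	 * State the locality requirement explicitly: WQ_PERCPU when per-CPU
	 * execution is really needed, WQ_UNBOUND otherwise (until unbound
	 * becomes the default and the flag goes away).
	 */
	example_wq = alloc_workqueue("example_wq", WQ_UNBOUND, 0);
	if (!example_wq)
		return -ENOMEM;
	return 0;
}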
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
---
Documentation/core-api/workqueue.rst | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/Documentation/core-api/workqueue.rst b/Documentation/core-api/workqueue.rst
index 165ca73e8351..c8ece1c38808 100644
--- a/Documentation/core-api/workqueue.rst
+++ b/Documentation/core-api/workqueue.rst
@@ -206,6 +206,10 @@ resources, scheduled and executed.
* Long running CPU intensive workloads which can be better
managed by the system scheduler.
+ **Note:** This flag will be removed in the future; work items
+ that don't need to be bound to a specific CPU should not
+ use this flag.
+
``WQ_FREEZABLE``
A freezable wq participates in the freeze phase of the system
suspend operations. Work items on the wq are drained and no
--
2.49.0
* Re: [PATCH v1 00/10] Workqueue: replace system wq and change alloc_workqueue callers
2025-06-25 10:49 [PATCH v1 00/10] Workqueue: replace system wq and change alloc_workqueue callers Marco Crivellari
` (9 preceding siblings ...)
2025-06-25 10:49 ` [PATCH v1 10/10] [Doc] Workqueue: WQ_UNBOUND doc upgraded Marco Crivellari
@ 2025-07-11 10:32 ` Marco Crivellari
10 siblings, 0 replies; 12+ messages in thread
From: Marco Crivellari @ 2025-07-11 10:32 UTC (permalink / raw)
To: linux-kernel
Cc: Tejun Heo, Lai Jiangshan, Thomas Gleixner, Frederic Weisbecker,
Sebastian Andrzej Siewior, Michal Hocko, Alexander Viro,
Andrew Morton, Christian Brauner, David S. Miller, Eric Dumazet,
Jakub Kicinski, Paolo Abeni
Hi,
Gentle ping.
On Wed, Jun 25, 2025 at 12:49 PM Marco Crivellari
<marco.crivellari@suse.com> wrote:
>
> Hi!
>
> Below is a summary of a discussion about the Workqueue API and cpu isolation
> considerations. Details and more information are available here:
>
> "workqueue: Always use wq_select_unbound_cpu() for WORK_CPU_UNBOUND."
> https://lore.kernel.org/all/20250221112003.1dSuoGyc@linutronix.de/
>
> === Current situation: problems ===
>
> Let's consider a nohz_full system with isolated CPUs: wq_unbound_cpumask is
> set to the housekeeping CPUs, for !WQ_UNBOUND the local CPU is selected.
>
> This leads to different scenarios if a work item is scheduled on an isolated
> CPU where "delay" value is 0 or greater then 0:
> schedule_delayed_work(, 0);
>
> This will be handled by __queue_work() that will queue the work item on the
> current local (isolated) CPU, while:
>
> schedule_delayed_work(, 1);
>
> Will move the timer on an housekeeping CPU, and schedule the work there.
>
> Currently if a user enqueue a work item using schedule_delayed_work() the
> used wq is "system_wq" (per-cpu wq) while queue_delayed_work() use
> WORK_CPU_UNBOUND (used when a cpu is not specified). The same applies to
> schedule_work() that is using system_wq and queue_work(), that makes use
> again of WORK_CPU_UNBOUND.
>
> This lack of consistentcy cannot be addressed without refactoring the API.
>
> === Plan and future plans ===
>
> This patchset is the first stone on a refactoring needed in order to
> address the points aforementioned; it will have a positive impact also
> on the cpu isolation, in the long term, moving away percpu workqueue in
> favor to an unbound model.
>
> These are the main steps:
> 1) API refactoring (that this patch is introducing)
> - Make more clear and uniform the system wq names, both per-cpu and
> unbound. This to avoid any possible confusion on what should be
> used.
>
> - Introduction of WQ_PERCPU: this flag is the complement of WQ_UNBOUND,
> introduced in this patchset and used on all the callers that are not
> currently using WQ_UNBOUND.
>
> WQ_UNBOUND will be removed in a future release cycle.
>
> Most users don't need to be per-cpu, because they don't have
> locality requirements, because of that, a next future step will be
> make "unbound" the default behavior.
>
> 2) Check who really needs to be per-cpu
> - Remove the WQ_PERCPU flag when is not strictly required.
>
> 3) Add a new API (prefer local cpu)
> - There are users that don't require a local execution, like mentioned
> above; despite that, local execution yeld to performance gain.
>
> This new API will prefer the local execution, without requiring it.
>
> === Introduced Changes by this patchset ===
>
> 1) [P 1-2-3] replace use of system_wq with system_percpu_wq per subsys:
>
> system_wq is a per-CPU workqueue, but his name is not clear.
> system_unbound_wq is to be used when locality is not required.
>
> Because of that, system_wq has been renamed in system_percpu_wq in the
> following subsystem: mm, net, fs (details on the next section).
>
> 2) [P 4-5] replace remaining system_wq and system_unbound_wq
>
> system_unbound_wq is to be used when locality is not required.
> Because of that, system_unbound_wq has been renamed in system_dfl_wq.
>
> The old wqs are still around, but if one is used in queue_work(), queue_delayed_work()
> or mod_delayed_work(), a pr_warn_once() is printed and the wq used is
> automatically assigned to the new default (system_dfl_wq or system_percpu_wq).
>
> 3) [P 6-7-8] add WQ_PERCPU to alloc_workqueue() users (per subsystem)
>
> Every alloc_workqueue() caller should use either WQ_PERCPU or
> WQ_UNBOUND. This is currently enforced with a warning if both or
> neither of them is present at the same time.
>
> These patches introduce WQ_PERCPU in the following subsystems:
> fs, net, mm (details on the next section).
>
> WQ_UNBOUND will be removed in a future release cycle.
>
> 4) [P 9] add WQ_PERCPU to remaining alloc_workqueue() users
>
> Every alloc_workqueue() caller should use either WQ_PERCPU or
> WQ_UNBOUND. This is currently enforced with a warning if both or
> neither of them is present at the same time.
>
> WQ_UNBOUND will be removed in a future release cycle.
>
> 5) [P 10] upgraded WQ_UNBOUND documentation
>
> Added a note about the WQ_UNBOUND removal in a next release cycle.
>
> === For fs, mm, net Maintainers ===
>
> If you agree with these changes, one option is to pull the preparation changes from
> Tejun's wq branch [1].
>
> As an alternative, the patches can be routed through a wq branch.
>
> The preparation changes are described in the present cover letter, under the
> "main steps" section. The changes done in summary are:
>
> - add system_percpu_wq and system_dfl_wq, for now without replacing the older wq(s)
> (system_unbound_wq and system_wq).
> - add WQ_PERCPU flag, currently without removing WQ_UNBOUND; it will be removed
> in a future release cycle.
>
> You can find the aforementioned changes reading:
>
> ("Workqueue: add WQ_PERCPU, system_dfl_wq and system_percpu_wq")
> https://lore.kernel.org/all/20250614133531.76742-1-marco.crivellari@suse.com/
>
>
> - [1] git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git WQ_PERCPU
>
>
> Marco Crivellari (10):
> Workqueue: net: replace use of system_wq with system_percpu_wq
> Workqueue: mm: replace use of system_wq with system_percpu_wq
> Workqueue: fs: replace use of system_wq with system_percpu_wq
> Workqueue: replace use of system_wq with system_percpu_wq
> Workqueue: replace use of system_unbound_wq with system_dfl_wq
> Workqueue: net: WQ_PERCPU added to alloc_workqueue users
> Workqueue: mm: WQ_PERCPU added to alloc_workqueue users
> Workqueue: fs: WQ_PERCPU added to alloc_workqueue users
> Workqueue: WQ_PERCPU added to all the remaining users
> [Doc] Workqueue: WQ_UNBOUND doc upgraded
>
> Documentation/core-api/workqueue.rst | 4 ++
> arch/s390/kernel/diag/diag324.c | 4 +-
> arch/s390/kernel/hiperdispatch.c | 2 +-
> block/bio-integrity-auto.c | 5 +-
> block/bio.c | 3 +-
> block/blk-core.c | 3 +-
> block/blk-throttle.c | 3 +-
> block/blk-zoned.c | 3 +-
> crypto/cryptd.c | 3 +-
> drivers/accel/ivpu/ivpu_hw_btrs.c | 2 +-
> drivers/accel/ivpu/ivpu_ipc.c | 2 +-
> drivers/accel/ivpu/ivpu_job.c | 2 +-
> drivers/accel/ivpu/ivpu_mmu.c | 2 +-
> drivers/accel/ivpu/ivpu_pm.c | 4 +-
> drivers/acpi/ec.c | 3 +-
> drivers/acpi/osl.c | 6 +-
> drivers/acpi/scan.c | 2 +-
> drivers/acpi/thermal.c | 3 +-
> drivers/ata/libata-sff.c | 3 +-
> drivers/base/core.c | 2 +-
> drivers/base/dd.c | 2 +-
> drivers/base/devcoredump.c | 2 +-
> drivers/block/aoe/aoemain.c | 2 +-
> drivers/block/nbd.c | 2 +-
> drivers/block/rbd.c | 2 +-
> drivers/block/rnbd/rnbd-clt.c | 2 +-
> drivers/block/sunvdc.c | 4 +-
> drivers/block/virtio_blk.c | 2 +-
> drivers/block/zram/zram_drv.c | 2 +-
> drivers/bus/mhi/ep/main.c | 2 +-
> drivers/char/random.c | 8 +--
> drivers/char/tpm/tpm-dev-common.c | 3 +-
> drivers/char/xillybus/xillybus_core.c | 2 +-
> drivers/char/xillybus/xillyusb.c | 4 +-
> drivers/cpufreq/tegra194-cpufreq.c | 3 +-
> drivers/crypto/atmel-i2c.c | 2 +-
> drivers/crypto/cavium/nitrox/nitrox_mbx.c | 2 +-
> drivers/crypto/intel/qat/qat_common/adf_aer.c | 4 +-
> drivers/crypto/intel/qat/qat_common/adf_isr.c | 3 +-
> .../crypto/intel/qat/qat_common/adf_sriov.c | 3 +-
> .../crypto/intel/qat/qat_common/adf_vf_isr.c | 3 +-
> drivers/cxl/pci.c | 2 +-
> drivers/extcon/extcon-intel-int3496.c | 4 +-
> drivers/firewire/core-transaction.c | 3 +-
> drivers/firewire/ohci.c | 3 +-
> drivers/gpio/gpiolib-cdev.c | 4 +-
> drivers/gpu/drm/amd/amdgpu/aldebaran.c | 2 +-
> drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 6 +-
> drivers/gpu/drm/amd/amdgpu/amdgpu_reset.c | 2 +-
> drivers/gpu/drm/amd/amdkfd/kfd_process.c | 3 +-
> drivers/gpu/drm/bridge/analogix/anx7625.c | 3 +-
> drivers/gpu/drm/bridge/ite-it6505.c | 2 +-
> drivers/gpu/drm/bridge/ti-tfp410.c | 2 +-
> drivers/gpu/drm/drm_atomic_helper.c | 6 +-
> drivers/gpu/drm/drm_probe_helper.c | 2 +-
> drivers/gpu/drm/drm_self_refresh_helper.c | 2 +-
> drivers/gpu/drm/exynos/exynos_hdmi.c | 2 +-
> .../drm/i915/display/intel_display_driver.c | 3 +-
> .../drm/i915/display/intel_display_power.c | 2 +-
> drivers/gpu/drm/i915/display/intel_tc.c | 4 +-
> drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c | 2 +-
> drivers/gpu/drm/i915/gt/uc/intel_guc.c | 4 +-
> drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c | 4 +-
> .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 6 +-
> drivers/gpu/drm/i915/i915_active.c | 2 +-
> drivers/gpu/drm/i915/i915_driver.c | 5 +-
> drivers/gpu/drm/i915/i915_drv.h | 2 +-
> drivers/gpu/drm/i915/i915_sw_fence_work.c | 2 +-
> drivers/gpu/drm/i915/i915_vma_resource.c | 2 +-
> drivers/gpu/drm/i915/pxp/intel_pxp.c | 2 +-
> drivers/gpu/drm/i915/pxp/intel_pxp_irq.c | 2 +-
> .../gpu/drm/i915/selftests/i915_sw_fence.c | 2 +-
> .../gpu/drm/i915/selftests/mock_gem_device.c | 2 +-
> drivers/gpu/drm/nouveau/dispnv50/disp.c | 2 +-
> drivers/gpu/drm/nouveau/nouveau_drm.c | 2 +-
> drivers/gpu/drm/nouveau/nouveau_sched.c | 3 +-
> drivers/gpu/drm/radeon/radeon_display.c | 3 +-
> .../gpu/drm/rockchip/dw_hdmi_qp-rockchip.c | 4 +-
> drivers/gpu/drm/rockchip/rockchip_drm_vop.c | 2 +-
> drivers/gpu/drm/scheduler/sched_main.c | 2 +-
> drivers/gpu/drm/tilcdc/tilcdc_crtc.c | 2 +-
> drivers/gpu/drm/vc4/vc4_hdmi.c | 4 +-
> drivers/gpu/drm/xe/xe_devcoredump.c | 2 +-
> drivers/gpu/drm/xe/xe_device.c | 4 +-
> drivers/gpu/drm/xe/xe_execlist.c | 2 +-
> drivers/gpu/drm/xe/xe_ggtt.c | 2 +-
> drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c | 6 +-
> drivers/gpu/drm/xe/xe_guc_ct.c | 4 +-
> drivers/gpu/drm/xe/xe_hw_engine_group.c | 3 +-
> drivers/gpu/drm/xe/xe_oa.c | 2 +-
> drivers/gpu/drm/xe/xe_pt.c | 2 +-
> drivers/gpu/drm/xe/xe_sriov.c | 2 +-
> drivers/gpu/drm/xe/xe_vm.c | 4 +-
> drivers/greybus/operation.c | 2 +-
> drivers/hid/hid-nintendo.c | 3 +-
> drivers/hte/hte.c | 2 +-
> drivers/hv/mshv_eventfd.c | 2 +-
> drivers/i3c/master.c | 2 +-
> drivers/iio/adc/pac1934.c | 2 +-
> drivers/infiniband/core/cm.c | 2 +-
> drivers/infiniband/core/device.c | 4 +-
> drivers/infiniband/core/ucma.c | 2 +-
> drivers/infiniband/hw/hfi1/init.c | 3 +-
> drivers/infiniband/hw/hfi1/opfn.c | 3 +-
> drivers/infiniband/hw/mlx4/cm.c | 2 +-
> drivers/infiniband/hw/mlx5/odp.c | 4 +-
> drivers/infiniband/sw/rdmavt/cq.c | 3 +-
> drivers/infiniband/ulp/iser/iscsi_iser.c | 2 +-
> drivers/infiniband/ulp/isert/ib_isert.c | 2 +-
> drivers/infiniband/ulp/rtrs/rtrs-clt.c | 2 +-
> drivers/infiniband/ulp/rtrs/rtrs-srv.c | 2 +-
> drivers/input/keyboard/gpio_keys.c | 2 +-
> drivers/input/misc/palmas-pwrbutton.c | 2 +-
> drivers/input/mouse/psmouse-smbus.c | 2 +-
> drivers/input/mouse/synaptics_i2c.c | 8 +--
> drivers/isdn/capi/kcapi.c | 2 +-
> drivers/leds/trigger/ledtrig-input-events.c | 2 +-
> drivers/md/bcache/btree.c | 3 +-
> drivers/md/bcache/super.c | 30 +++++-----
> drivers/md/bcache/writeback.c | 2 +-
> drivers/md/dm-bufio.c | 3 +-
> drivers/md/dm-cache-target.c | 3 +-
> drivers/md/dm-clone-target.c | 3 +-
> drivers/md/dm-crypt.c | 6 +-
> drivers/md/dm-delay.c | 4 +-
> drivers/md/dm-integrity.c | 15 +++--
> drivers/md/dm-kcopyd.c | 3 +-
> drivers/md/dm-log-userspace-base.c | 3 +-
> drivers/md/dm-mpath.c | 5 +-
> drivers/md/dm-raid1.c | 5 +-
> drivers/md/dm-snap-persistent.c | 3 +-
> drivers/md/dm-stripe.c | 2 +-
> drivers/md/dm-verity-target.c | 4 +-
> drivers/md/dm-writecache.c | 3 +-
> drivers/md/dm.c | 3 +-
> drivers/md/md.c | 4 +-
> drivers/media/pci/ddbridge/ddbridge-core.c | 2 +-
> .../platform/mediatek/mdp3/mtk-mdp3-core.c | 6 +-
> .../platform/synopsys/hdmirx/snps_hdmirx.c | 8 +--
> drivers/message/fusion/mptbase.c | 7 ++-
> drivers/mmc/core/block.c | 3 +-
> drivers/mmc/host/mtk-sd.c | 4 +-
> drivers/mmc/host/omap.c | 2 +-
> drivers/net/can/spi/hi311x.c | 3 +-
> drivers/net/can/spi/mcp251x.c | 3 +-
> .../net/ethernet/cavium/liquidio/lio_core.c | 2 +-
> .../net/ethernet/cavium/liquidio/lio_main.c | 8 ++-
> .../ethernet/cavium/liquidio/lio_vf_main.c | 3 +-
> .../cavium/liquidio/request_manager.c | 2 +-
> .../cavium/liquidio/response_manager.c | 3 +-
> .../net/ethernet/freescale/dpaa2/dpaa2-eth.c | 2 +-
> .../hisilicon/hns3/hns3pf/hclge_main.c | 3 +-
> drivers/net/ethernet/intel/fm10k/fm10k_main.c | 2 +-
> drivers/net/ethernet/intel/i40e/i40e_main.c | 2 +-
> .../net/ethernet/marvell/octeontx2/af/cgx.c | 2 +-
> .../marvell/octeontx2/af/mcs_rvu_if.c | 2 +-
> .../ethernet/marvell/octeontx2/af/rvu_cgx.c | 2 +-
> .../ethernet/marvell/octeontx2/af/rvu_rep.c | 2 +-
> .../marvell/octeontx2/nic/cn10k_ipsec.c | 3 +-
> .../ethernet/marvell/prestera/prestera_main.c | 2 +-
> .../ethernet/marvell/prestera/prestera_pci.c | 2 +-
> drivers/net/ethernet/mellanox/mlxsw/core.c | 4 +-
> drivers/net/ethernet/netronome/nfp/nfp_main.c | 2 +-
> drivers/net/ethernet/qlogic/qed/qed_main.c | 3 +-
> drivers/net/ethernet/sfc/efx_channels.c | 2 +-
> drivers/net/ethernet/sfc/siena/efx_channels.c | 2 +-
> drivers/net/ethernet/wiznet/w5100.c | 2 +-
> drivers/net/fjes/fjes_main.c | 5 +-
> drivers/net/macvlan.c | 2 +-
> drivers/net/netdevsim/dev.c | 6 +-
> drivers/net/phy/sfp.c | 12 ++--
> drivers/net/wireguard/device.c | 6 +-
> drivers/net/wireless/ath/ath6kl/usb.c | 2 +-
> drivers/net/wireless/intel/ipw2x00/ipw2100.c | 6 +-
> drivers/net/wireless/intel/ipw2x00/ipw2200.c | 2 +-
> drivers/net/wireless/intel/iwlwifi/fw/dbg.c | 4 +-
> .../net/wireless/intel/iwlwifi/iwl-trans.h | 2 +-
> drivers/net/wireless/intel/iwlwifi/mvm/tdls.c | 6 +-
> .../net/wireless/marvell/libertas/if_sdio.c | 3 +-
> .../net/wireless/marvell/libertas/if_spi.c | 3 +-
> .../net/wireless/marvell/libertas_tf/main.c | 2 +-
> .../net/wireless/mediatek/mt76/mt7921/init.c | 2 +-
> .../net/wireless/mediatek/mt76/mt7925/init.c | 2 +-
> drivers/net/wireless/quantenna/qtnfmac/core.c | 3 +-
> drivers/net/wireless/realtek/rtlwifi/base.c | 2 +-
> drivers/net/wireless/realtek/rtw88/usb.c | 3 +-
> drivers/net/wireless/silabs/wfx/main.c | 2 +-
> drivers/net/wireless/st/cw1200/bh.c | 4 +-
> drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c | 3 +-
> drivers/net/wwan/wwan_hwsim.c | 2 +-
> drivers/nvdimm/security.c | 4 +-
> drivers/nvme/host/tcp.c | 2 +
> drivers/nvme/target/admin-cmd.c | 2 +-
> drivers/nvme/target/core.c | 5 +-
> drivers/nvme/target/fabrics-cmd-auth.c | 2 +-
> drivers/nvme/target/fc.c | 6 +-
> drivers/nvme/target/tcp.c | 2 +-
> drivers/pci/endpoint/functions/pci-epf-mhi.c | 2 +-
> drivers/pci/endpoint/functions/pci-epf-ntb.c | 5 +-
> drivers/pci/endpoint/functions/pci-epf-test.c | 3 +-
> drivers/pci/endpoint/functions/pci-epf-vntb.c | 5 +-
> drivers/pci/endpoint/pci-ep-cfs.c | 2 +-
> drivers/pci/hotplug/pnv_php.c | 3 +-
> drivers/pci/hotplug/shpchp_core.c | 3 +-
> drivers/phy/allwinner/phy-sun4i-usb.c | 14 ++---
> .../platform/cznic/turris-omnia-mcu-gpio.c | 2 +-
> .../surface/aggregator/ssh_packet_layer.c | 2 +-
> .../surface/aggregator/ssh_request_layer.c | 2 +-
> .../platform/surface/surface_acpi_notify.c | 2 +-
> drivers/platform/x86/gpd-pocket-fan.c | 4 +-
> .../x86/x86-android-tablets/vexia_atla10_ec.c | 2 +-
> drivers/power/supply/ab8500_btemp.c | 3 +-
> drivers/power/supply/bq2415x_charger.c | 2 +-
> drivers/power/supply/bq24190_charger.c | 2 +-
> drivers/power/supply/bq27xxx_battery.c | 6 +-
> drivers/power/supply/ipaq_micro_battery.c | 3 +-
> drivers/power/supply/rk817_charger.c | 6 +-
> drivers/power/supply/ucs1002_power.c | 2 +-
> drivers/power/supply/ug3105_battery.c | 6 +-
> drivers/rapidio/rio.c | 2 +-
> drivers/ras/cec.c | 2 +-
> drivers/regulator/irq_helpers.c | 2 +-
> drivers/regulator/qcom-labibb-regulator.c | 4 +-
> drivers/s390/char/tape_3590.c | 2 +-
> drivers/scsi/be2iscsi/be_main.c | 3 +-
> drivers/scsi/bnx2fc/bnx2fc_fcoe.c | 2 +-
> drivers/scsi/device_handler/scsi_dh_alua.c | 2 +-
> drivers/scsi/fcoe/fcoe.c | 2 +-
> drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c | 3 +-
> drivers/scsi/lpfc/lpfc_init.c | 2 +-
> drivers/scsi/pm8001/pm8001_init.c | 2 +-
> drivers/scsi/qedf/qedf_main.c | 15 +++--
> drivers/scsi/qedi/qedi_main.c | 2 +-
> drivers/scsi/qla2xxx/qla_os.c | 4 +-
> drivers/scsi/qla2xxx/qla_target.c | 2 +-
> drivers/scsi/qla2xxx/tcm_qla2xxx.c | 2 +-
> drivers/scsi/qla4xxx/ql4_os.c | 3 +-
> drivers/scsi/scsi_transport_fc.c | 7 ++-
> drivers/scsi/scsi_transport_iscsi.c | 2 +-
> drivers/soc/fsl/qbman/qman.c | 2 +-
> drivers/soc/xilinx/zynqmp_power.c | 6 +-
> drivers/staging/greybus/sdio.c | 2 +-
> drivers/target/sbp/sbp_target.c | 8 +--
> drivers/target/target_core_transport.c | 4 +-
> drivers/target/target_core_xcopy.c | 2 +-
> drivers/target/tcm_fc/tfc_conf.c | 2 +-
> drivers/thunderbolt/tb.c | 2 +-
> drivers/tty/serial/8250/8250_dw.c | 4 +-
> drivers/tty/tty_buffer.c | 8 +--
> drivers/usb/core/hub.c | 2 +-
> drivers/usb/dwc3/gadget.c | 2 +-
> drivers/usb/gadget/function/f_hid.c | 3 +-
> drivers/usb/host/xhci-dbgcap.c | 8 +--
> drivers/usb/host/xhci-ring.c | 2 +-
> drivers/usb/storage/uas.c | 2 +-
> drivers/usb/typec/anx7411.c | 3 +-
> drivers/vdpa/vdpa_user/vduse_dev.c | 3 +-
> drivers/virt/acrn/irqfd.c | 3 +-
> drivers/virtio/virtio_balloon.c | 3 +-
> drivers/xen/events/events_base.c | 6 +-
> drivers/xen/privcmd.c | 3 +-
> fs/afs/callback.c | 4 +-
> fs/afs/main.c | 4 +-
> fs/afs/write.c | 2 +-
> fs/aio.c | 2 +-
> fs/bcachefs/btree_write_buffer.c | 2 +-
> fs/bcachefs/io_read.c | 12 ++--
> fs/bcachefs/journal_io.c | 2 +-
> fs/bcachefs/super.c | 10 ++--
> fs/btrfs/async-thread.c | 3 +-
> fs/btrfs/block-group.c | 2 +-
> fs/btrfs/disk-io.c | 2 +-
> fs/btrfs/extent_map.c | 2 +-
> fs/btrfs/space-info.c | 4 +-
> fs/btrfs/zoned.c | 2 +-
> fs/ceph/super.c | 2 +-
> fs/dlm/lowcomms.c | 2 +-
> fs/dlm/main.c | 2 +-
> fs/ext4/mballoc.c | 2 +-
> fs/fs-writeback.c | 4 +-
> fs/fuse/dev.c | 2 +-
> fs/fuse/inode.c | 2 +-
> fs/gfs2/main.c | 5 +-
> fs/gfs2/ops_fstype.c | 6 +-
> fs/netfs/objects.c | 2 +-
> fs/netfs/read_collect.c | 2 +-
> fs/netfs/write_collect.c | 2 +-
> fs/nfs/namespace.c | 2 +-
> fs/nfs/nfs4renewd.c | 2 +-
> fs/nfsd/filecache.c | 2 +-
> fs/notify/mark.c | 4 +-
> fs/ocfs2/dlm/dlmdomain.c | 3 +-
> fs/ocfs2/dlmfs/dlmfs.c | 3 +-
> fs/quota/dquot.c | 2 +-
> fs/smb/client/cifsfs.c | 16 +++--
> fs/smb/server/ksmbd_work.c | 2 +-
> fs/smb/server/transport_rdma.c | 3 +-
> fs/super.c | 3 +-
> fs/verity/verify.c | 2 +-
> fs/xfs/xfs_log.c | 3 +-
> fs/xfs/xfs_mru_cache.c | 3 +-
> fs/xfs/xfs_super.c | 15 ++---
> include/drm/gpu_scheduler.h | 2 +-
> include/linux/closure.h | 2 +-
> include/linux/workqueue.h | 60 ++++++++++++++-----
> io_uring/io_uring.c | 4 +-
> kernel/bpf/cgroup.c | 5 +-
> kernel/bpf/cpumap.c | 2 +-
> kernel/bpf/helpers.c | 4 +-
> kernel/bpf/memalloc.c | 2 +-
> kernel/bpf/syscall.c | 2 +-
> kernel/cgroup/cgroup-v1.c | 2 +-
> kernel/cgroup/cgroup.c | 4 +-
> kernel/module/dups.c | 4 +-
> kernel/padata.c | 9 +--
> kernel/power/main.c | 2 +-
> kernel/rcu/tasks.h | 4 +-
> kernel/rcu/tree.c | 4 +-
> kernel/sched/core.c | 4 +-
> kernel/sched/ext.c | 4 +-
> kernel/smp.c | 2 +-
> kernel/trace/trace_events_user.c | 2 +-
> kernel/umh.c | 2 +-
> kernel/workqueue.c | 45 ++++++++++----
> mm/backing-dev.c | 6 +-
> mm/kfence/core.c | 6 +-
> mm/memcontrol.c | 4 +-
> mm/slub.c | 3 +-
> mm/vmstat.c | 3 +-
> net/bridge/br_cfm.c | 6 +-
> net/bridge/br_mrp.c | 8 +--
> net/ceph/messenger.c | 3 +-
> net/ceph/mon_client.c | 2 +-
> net/core/link_watch.c | 4 +-
> net/core/skmsg.c | 2 +-
> net/core/sock_diag.c | 2 +-
> net/devlink/core.c | 2 +-
> net/ipv4/inet_fragment.c | 2 +-
> net/netfilter/nf_conntrack_ecache.c | 2 +-
> net/openvswitch/dp_notify.c | 2 +-
> net/rds/ib_rdma.c | 3 +-
> net/rfkill/input.c | 2 +-
> net/rxrpc/rxperf.c | 2 +-
> net/smc/af_smc.c | 6 +-
> net/smc/smc_core.c | 4 +-
> net/tls/tls_device.c | 2 +-
> net/unix/garbage.c | 2 +-
> net/vmw_vsock/af_vsock.c | 2 +-
> net/vmw_vsock/virtio_transport.c | 2 +-
> net/vmw_vsock/vsock_loopback.c | 2 +-
> net/wireless/core.c | 4 +-
> net/wireless/sysfs.c | 2 +-
> rust/kernel/workqueue.rs | 12 ++--
> sound/soc/codecs/aw88081.c | 2 +-
> sound/soc/codecs/aw88166.c | 2 +-
> sound/soc/codecs/aw88261.c | 2 +-
> sound/soc/codecs/aw88395/aw88395.c | 2 +-
> sound/soc/codecs/aw88399.c | 2 +-
> sound/soc/codecs/cs42l43-jack.c | 6 +-
> sound/soc/codecs/cs42l43.c | 4 +-
> sound/soc/codecs/es8326.c | 12 ++--
> sound/soc/codecs/rt5663.c | 6 +-
> sound/soc/codecs/wm_adsp.c | 2 +-
> sound/soc/intel/boards/sof_es8336.c | 2 +-
> sound/soc/sof/intel/cnl.c | 2 +-
> sound/soc/sof/intel/hda-ipc.c | 2 +-
> virt/kvm/eventfd.c | 2 +-
> 367 files changed, 750 insertions(+), 587 deletions(-)
>
> --
> 2.49.0
>
--
Marco Crivellari
L3 Support Engineer, Technology & Product
marco.crivellari@suse.com
Thread overview: 12+ messages
2025-06-25 10:49 [PATCH v1 00/10] Workqueue: replace system wq and change alloc_workqueue callers Marco Crivellari
2025-06-25 10:49 ` [PATCH v1 01/10] Workqueue: net: replace use of system_wq with system_percpu_wq Marco Crivellari
2025-06-25 10:49 ` [PATCH v1 02/10] Workqueue: mm: " Marco Crivellari
2025-06-25 10:49 ` [PATCH v1 03/10] Workqueue: fs: " Marco Crivellari
2025-06-25 10:49 ` [PATCH v1 04/10] Workqueue: " Marco Crivellari
2025-06-25 10:49 ` [PATCH v1 05/10] Workqueue: replace use of system_unbound_wq with system_dfl_wq Marco Crivellari
2025-06-25 10:49 ` [PATCH v1 06/10] Workqueue: net: WQ_PERCPU added to alloc_workqueue users Marco Crivellari
2025-06-25 10:49 ` [PATCH v1 07/10] Workqueue: mm: " Marco Crivellari
2025-06-25 10:49 ` [PATCH v1 08/10] Workqueue: fs: " Marco Crivellari
2025-06-25 10:49 ` [PATCH v1 09/10] Workqueue: WQ_PERCPU added to all the remaining users Marco Crivellari
2025-06-25 10:49 ` [PATCH v1 10/10] [Doc] Workqueue: WQ_UNBOUND doc upgraded Marco Crivellari
2025-07-11 10:32 ` [PATCH v1 00/10] Workqueue: replace system wq and change alloc_workqueue callers Marco Crivellari