From: Aaron Tomlin <atomlin@atomlin.com>
To: axboe@kernel.dk, kbusch@kernel.org, hch@lst.de, sagi@grimberg.me,
mst@redhat.com
Cc: atomlin@atomlin.com, aacraid@microsemi.com,
James.Bottomley@HansenPartnership.com,
martin.petersen@oracle.com, liyihang9@h-partners.com,
kashyap.desai@broadcom.com, sumit.saxena@broadcom.com,
shivasharan.srikanteshwara@broadcom.com,
chandrakanth.patil@broadcom.com, sathya.prakash@broadcom.com,
sreekanth.reddy@broadcom.com,
suganath-prabu.subramani@broadcom.com, ranjan.kumar@broadcom.com,
jinpu.wang@cloud.ionos.com, tglx@kernel.org, mingo@redhat.com,
peterz@infradead.org, juri.lelli@redhat.com,
vincent.guittot@linaro.org, akpm@linux-foundation.org,
maz@kernel.org, ruanjinjie@huawei.com, bigeasy@linutronix.de,
yphbchou0911@gmail.com, wagi@kernel.org, frederic@kernel.org,
longman@redhat.com, chenridong@huawei.com, hare@suse.de,
kch@nvidia.com, ming.lei@redhat.com, steve@abita.co,
sean@ashe.io, chjohnst@gmail.com, neelx@suse.com,
mproche@gmail.com, linux-block@vger.kernel.org,
linux-kernel@vger.kernel.org, virtualization@lists.linux.dev,
linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
megaraidlinux.pdl@broadcom.com, mpi3mr-linuxdrv.pdl@broadcom.com,
MPT-FusionLinux.pdl@broadcom.com
Subject: [PATCH v10 12/13] genirq/affinity: Restrict managed IRQ affinity to housekeeping CPUs
Date: Wed, 1 Apr 2026 18:23:11 -0400 [thread overview]
Message-ID: <20260401222312.772334-13-atomlin@atomlin.com> (raw)
In-Reply-To: <20260401222312.772334-1-atomlin@atomlin.com>

At present, the managed interrupt spreading algorithm distributes vectors
across all available CPUs within a given node or system. On systems
employing CPU isolation (e.g. "isolcpus=io_queue"), this behaviour
defeats the primary purpose of isolation by routing hardware interrupts
(such as NVMe completion-queue interrupts) directly to isolated CPUs.

Update irq_create_affinity_masks() to respect the housekeeping CPU mask.
Introduce irq_spread_hk_filter() to intersect each calculated affinity
mask with the HK_TYPE_IO_QUEUE mask, thereby keeping managed interrupts
off isolated CPUs.

To ensure strict isolation whilst guaranteeing a valid routing
destination:

  1. Fallback mechanism: Should the spreading logic assign a vector
     exclusively to isolated CPUs (resulting in an empty intersection),
     the filter safely falls back to the system's online housekeeping
     CPUs.

  2. Hotplug safety: The fallback utilises data_race(cpu_online_mask)
     instead of allocating a local cpumask snapshot. This avoids
     CONFIG_CPUMASK_OFFSTACK stack-bloat hazards on high-core-count
     systems. Furthermore, it prevents deadlocks with concurrent CPU
     hotplug operations (e.g. during storage driver error recovery) by
     eliminating the need to hold the CPU hotplug read lock.

  3. Fast-path optimisation: The filtering logic is executed only when
     housekeeping is enabled, ensuring zero overhead for standard
     configurations.

Signed-off-by: Aaron Tomlin <atomlin@atomlin.com>
---
kernel/irq/affinity.c | 26 +++++++++++++++++++++++++-
1 file changed, 25 insertions(+), 1 deletion(-)
diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
index 076a5ef1e306..dd9e7f5fbdec 100644
--- a/kernel/irq/affinity.c
+++ b/kernel/irq/affinity.c
@@ -8,6 +8,24 @@
#include <linux/slab.h>
#include <linux/cpu.h>
#include <linux/group_cpus.h>
+#include <linux/sched/isolation.h>
+
+/**
+ * irq_spread_hk_filter - Restrict an interrupt affinity mask to housekeeping CPUs
+ * @mask: The interrupt affinity mask to filter (in/out)
+ * @hk_mask: The system's housekeeping CPU mask
+ *
+ * Intersects @mask with @hk_mask to keep interrupts off isolated CPUs.
+ * If this intersection is empty (meaning all targeted CPUs were isolated),
+ * it falls back to the online housekeeping CPUs to guarantee a valid
+ * routing destination.
+ */
+static void irq_spread_hk_filter(struct cpumask *mask,
+ const struct cpumask *hk_mask)
+{
+ if (!cpumask_and(mask, mask, hk_mask))
+ cpumask_and(mask, hk_mask, data_race(cpu_online_mask));
+}

static void default_calc_sets(struct irq_affinity *affd, unsigned int affvecs)
{
@@ -27,6 +45,8 @@ irq_create_affinity_masks(unsigned int nvecs, struct irq_affinity *affd)
{
unsigned int affvecs, curvec, usedvecs, i;
struct irq_affinity_desc *masks = NULL;
+ const struct cpumask *hk_mask = housekeeping_cpumask(HK_TYPE_IO_QUEUE);
+ bool hk_enabled = housekeeping_enabled(HK_TYPE_IO_QUEUE);

/*
* Determine the number of vectors which need interrupt affinities
@@ -83,8 +103,12 @@ irq_create_affinity_masks(unsigned int nvecs, struct irq_affinity *affd)
return NULL;
}

- for (int j = 0; j < nr_masks; j++)
+ for (int j = 0; j < nr_masks; j++) {
cpumask_copy(&masks[curvec + j].mask, &result[j]);
+ if (hk_enabled)
+ irq_spread_hk_filter(&masks[curvec + j].mask,
+ hk_mask);
+ }

kfree(result);
curvec += nr_masks;
--
2.51.0