* [RFC PATCH v2 1/1] scsi: mpi3mr: Introduce smp_affinity_enable module parameter
2025-06-01 1:40 [RFC PATCH v2 0/1] " Aaron Tomlin
@ 2025-06-01 1:40 ` Aaron Tomlin
2025-06-16 20:51 ` Martin K. Petersen
2025-06-17 7:18 ` John Garry
0 siblings, 2 replies; 13+ messages in thread
From: Aaron Tomlin @ 2025-06-01 1:40 UTC (permalink / raw)
To: mpi3mr-linuxdrv.pdl
Cc: kashyap.desai, sumit.saxena, sreekanth.reddy, James.Bottomley,
martin.petersen, atomlin, linux-scsi, linux-kernel
This patch introduces a new module parameter, "smp_affinity_enable", to
control whether the system-wide default IRQ affinity mask (set with the
kernel boot-time parameter "irqaffinity") is applied to the driver's
MSI-X interrupts. By default, the driver uses managed interrupt
affinity, so the default IRQ affinity mask is not respected. Setting
smp_affinity_enable to 0 disables this behaviour and consequently
prevents the automatic affinity assignment of the MSI-X IRQs.
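For example (assuming the driver is built as a module), the managed
affinity behaviour can be disabled at load time with:
  modprobe mpi3mr smp_affinity_enable=0
or, for a built-in driver, with "mpi3mr.smp_affinity_enable=0" on the
kernel command line.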
Signed-off-by: Aaron Tomlin <atomlin@atomlin.com>
---
drivers/scsi/mpi3mr/mpi3mr.h | 1 +
drivers/scsi/mpi3mr/mpi3mr_fw.c | 14 ++++++++++++--
drivers/scsi/mpi3mr/mpi3mr_os.c | 14 +++++++++++---
3 files changed, 24 insertions(+), 5 deletions(-)
diff --git a/drivers/scsi/mpi3mr/mpi3mr.h b/drivers/scsi/mpi3mr/mpi3mr.h
index 9bbc7cb98ca3..82a0c1dd2f59 100644
--- a/drivers/scsi/mpi3mr/mpi3mr.h
+++ b/drivers/scsi/mpi3mr/mpi3mr.h
@@ -1378,6 +1378,7 @@ struct mpi3mr_ioc {
u32 num_tb_segs;
struct dma_pool *trace_buf_pool;
struct segments *trace_buf;
+ bool smp_affinity_enable;
};
diff --git a/drivers/scsi/mpi3mr/mpi3mr_fw.c b/drivers/scsi/mpi3mr/mpi3mr_fw.c
index 1d7901a8f0e4..9cbe1744213d 100644
--- a/drivers/scsi/mpi3mr/mpi3mr_fw.c
+++ b/drivers/scsi/mpi3mr/mpi3mr_fw.c
@@ -22,6 +22,10 @@ static int poll_queues;
module_param(poll_queues, int, 0444);
MODULE_PARM_DESC(poll_queues, "Number of queues for io_uring poll mode. (Range 1 - 126)");
+static int smp_affinity_enable = 1;
+module_param(smp_affinity_enable, int, 0444);
+MODULE_PARM_DESC(smp_affinity_enable, "SMP affinity feature enable/disable Default: enable(1)");
+
#if defined(writeq) && defined(CONFIG_64BIT)
static inline void mpi3mr_writeq(__u64 b, volatile void __iomem *addr)
{
@@ -821,6 +825,7 @@ static int mpi3mr_setup_isr(struct mpi3mr_ioc *mrioc, u8 setup_one)
int retval;
int i;
struct irq_affinity desc = { .pre_vectors = 1, .post_vectors = 1 };
+ struct irq_affinity *descp = &desc;
if (mrioc->is_intr_info_set)
return 0;
@@ -852,10 +857,13 @@ static int mpi3mr_setup_isr(struct mpi3mr_ioc *mrioc, u8 setup_one)
desc.post_vectors = mrioc->requested_poll_qcount;
min_vec = desc.pre_vectors + desc.post_vectors;
- irq_flags |= PCI_IRQ_AFFINITY | PCI_IRQ_ALL_TYPES;
+ if (mrioc->smp_affinity_enable)
+ irq_flags |= PCI_IRQ_AFFINITY | PCI_IRQ_ALL_TYPES;
+ else
+ descp = NULL;
retval = pci_alloc_irq_vectors_affinity(mrioc->pdev,
- min_vec, max_vectors, irq_flags, &desc);
+ min_vec, max_vectors, irq_flags, descp);
if (retval < 0) {
ioc_err(mrioc, "cannot allocate irq vectors, ret %d\n",
@@ -4233,6 +4241,8 @@ int mpi3mr_init_ioc(struct mpi3mr_ioc *mrioc)
goto out_failed_noretry;
}
+ mrioc->smp_affinity_enable = smp_affinity_enable ? true : false;
+
retval = mpi3mr_setup_isr(mrioc, 1);
if (retval) {
ioc_err(mrioc, "Failed to setup ISR error %d\n",
diff --git a/drivers/scsi/mpi3mr/mpi3mr_os.c b/drivers/scsi/mpi3mr/mpi3mr_os.c
index ce444efd859e..6ea73cf7579b 100644
--- a/drivers/scsi/mpi3mr/mpi3mr_os.c
+++ b/drivers/scsi/mpi3mr/mpi3mr_os.c
@@ -4064,6 +4064,9 @@ static void mpi3mr_map_queues(struct Scsi_Host *shost)
int i, qoff, offset;
struct blk_mq_queue_map *map = NULL;
+ if (shost->nr_hw_queues == 1)
+ return;
+
offset = mrioc->op_reply_q_offset;
for (i = 0, qoff = 0; i < HCTX_MAX_TYPES; i++) {
@@ -5422,8 +5425,6 @@ mpi3mr_probe(struct pci_dev *pdev, const struct pci_device_id *id)
shost->max_channel = 0;
shost->max_id = 0xFFFFFFFF;
- shost->host_tagset = 1;
-
if (prot_mask >= 0)
scsi_host_set_prot(shost, prot_mask);
else {
@@ -5471,7 +5472,14 @@ mpi3mr_probe(struct pci_dev *pdev, const struct pci_device_id *id)
goto init_ioc_failed;
}
- shost->nr_hw_queues = mrioc->num_op_reply_q;
+ shost->host_tagset = 0;
+ shost->nr_hw_queues = 1;
+
+ if (mrioc->smp_affinity_enable) {
+ shost->nr_hw_queues = mrioc->num_op_reply_q;
+ shost->host_tagset = 1;
+ }
+
if (mrioc->active_poll_qcount)
shost->nr_maps = 3;
--
2.49.0
* Re: [RFC PATCH v2 1/1] scsi: mpi3mr: Introduce smp_affinity_enable module parameter
2025-06-01 1:40 ` [RFC PATCH v2 1/1] " Aaron Tomlin
@ 2025-06-16 20:51 ` Martin K. Petersen
2025-06-17 7:18 ` John Garry
1 sibling, 0 replies; 13+ messages in thread
From: Martin K. Petersen @ 2025-06-16 20:51 UTC (permalink / raw)
To: Aaron Tomlin
Cc: mpi3mr-linuxdrv.pdl, kashyap.desai, sumit.saxena, sreekanth.reddy,
James.Bottomley, martin.petersen, linux-scsi, linux-kernel
> This patch introduces a new module parameter, "smp_affinity_enable", to
> control whether the system-wide default IRQ affinity mask (set with the
> kernel boot-time parameter "irqaffinity") is applied to the driver's
> MSI-X interrupts. By default, the driver uses managed interrupt
> affinity, so the default IRQ affinity mask is not respected. Setting
> smp_affinity_enable to 0 disables this behaviour and consequently
> prevents the automatic affinity assignment of the MSI-X IRQs.
Broadcom: Please review!
--
Martin K. Petersen
* Re: [RFC PATCH v2 1/1] scsi: mpi3mr: Introduce smp_affinity_enable module parameter
2025-06-01 1:40 ` [RFC PATCH v2 1/1] " Aaron Tomlin
2025-06-16 20:51 ` Martin K. Petersen
@ 2025-06-17 7:18 ` John Garry
1 sibling, 0 replies; 13+ messages in thread
From: John Garry @ 2025-06-17 7:18 UTC (permalink / raw)
To: Aaron Tomlin, mpi3mr-linuxdrv.pdl
Cc: kashyap.desai, sumit.saxena, sreekanth.reddy, James.Bottomley,
martin.petersen, linux-scsi, linux-kernel
On 01/06/2025 02:40, Aaron Tomlin wrote:
> This patch introduces a new module parameter, "smp_affinity_enable", to
> control whether the system-wide default IRQ affinity mask (set with the
> kernel boot-time parameter "irqaffinity") is applied to the driver's
> MSI-X interrupts. By default, the driver uses managed interrupt
> affinity, so the default IRQ affinity mask is not respected. Setting
> smp_affinity_enable to 0 disables this behaviour and consequently
> prevents the automatic affinity assignment of the MSI-X IRQs.
You have given no substantial motivation for this change.
In the cover letter you have, "I noticed that the Linux MegaRAID driver
for SAS based RAID controllers has the same aforementioned module
parameter ..." and "I suspect it would be useful ...".
* Re: [RFC PATCH v2 1/1] scsi: mpi3mr: Introduce smp_affinity_enable module parameter
@ 2025-06-17 16:34 Sean A.
2025-06-18 6:49 ` John Garry
0 siblings, 1 reply; 13+ messages in thread
From: Sean A. @ 2025-06-17 16:34 UTC (permalink / raw)
To: john.g.garry@oracle.com
Cc: James.Bottomley@hansenpartnership.com, atomlin@atomlin.com,
kashyap.desai@broadcom.com, linux-kernel@vger.kernel.org,
linux-scsi@vger.kernel.org, martin.petersen@oracle.com,
mpi3mr-linuxdrv.pdl@broadcom.com, sreekanth.reddy@broadcom.com,
sumit.saxena@broadcom.com
On 17 Jun 2025, John Garry wrote:
> You have given no substantial motivation for this change
From my perspective, workloads exist (defense, telecom, finance, RT, etc.) that prefer not to be interrupted, and developers may opt to utilize CPU isolation and other mechanisms to reduce the likelihood of being pre-empted, evicted, etc. This includes steering interrupts away from an isolated set of cores. Also, while this isn't based on any actual benchmarking, it would seem that forcing your way onto every core in a 192-core system and refusing to move might be needlessly greedy, or even detrimental to performance if most of the core set is NUMA-foreign to the storage controller. One should be able to make placement decisions to protect app threads from interruption and to ensure the interrupt handler has a sleepy, local core to play with without lighting up a bunch of interconnect paths on the way.
More generally, I believe interfaces like /proc/irq/$irq/smp_affinity[_list] should be allowed to work as expected, and things like irqbalance should also be able to do their jobs unless there's a good (documented) reason they should not.
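For instance (the IRQ number and CPU list here are purely illustrative), one would expect to be able to steer a given vector onto the housekeeping cores with something like:
  echo 0-3 > /proc/irq/145/smp_affinity_list
and to have that either take effect or fail with a clearly documented reason.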
SA
* Re: [RFC PATCH v2 1/1] scsi: mpi3mr: Introduce smp_affinity_enable module parameter
2025-06-17 16:34 [RFC PATCH v2 1/1] scsi: mpi3mr: Introduce smp_affinity_enable module parameter Sean A.
@ 2025-06-18 6:49 ` John Garry
2025-06-18 15:53 ` Sean A.
` (2 more replies)
0 siblings, 3 replies; 13+ messages in thread
From: John Garry @ 2025-06-18 6:49 UTC (permalink / raw)
To: Sean A.
Cc: James.Bottomley@hansenpartnership.com, atomlin@atomlin.com,
kashyap.desai@broadcom.com, linux-kernel@vger.kernel.org,
linux-scsi@vger.kernel.org, martin.petersen@oracle.com,
mpi3mr-linuxdrv.pdl@broadcom.com, sreekanth.reddy@broadcom.com,
sumit.saxena@broadcom.com
On 17/06/2025 17:34, Sean A. wrote:
>
> On 17 Jun 2025, John Garry wrote:
>> You have given no substantial motivation for this change
>
> From my perspective, workloads exist (defense, telecom, finance, RT, etc.) that prefer not to be interrupted, and developers may opt to utilize CPU isolation and other mechanisms to reduce the likelihood of being pre-empted, evicted, etc. This includes steering interrupts away from an isolated set of cores. Also, while this isn't based on any actual benchmarking, it would seem that forcing your way onto every core in a 192-core system and refusing to move might be needlessly greedy, or even detrimental to performance if most of the core set is NUMA-foreign to the storage controller. One should be able to make placement decisions to protect app threads from interruption and to ensure the interrupt handler has a sleepy, local core to play with without lighting up a bunch of interconnect paths on the way.
>
> More generally, I believe interfaces like /proc/irq/$irq/smp_affinity[_list] should be allowed to work as expected, and things like irqbalance should also be able to do their jobs unless there's a good (documented) reason they should not.
There is a good reason. Some of these storage controllers have hundreds
of MSI-Xs - typically one per CPU. If you offline CPUs, those interrupts
need to be migrated to target other CPUs. And for architectures like
x86, CPUs can only handle a finite and relatively modest number of
interrupts (being targeted). That is why managed interrupts are used
(which this module parameter would disable for this controller).
BTW, if you use taskset to set the affinity of a process and ensure that
/sys/block/xxx/queue/rq_affinity is set so that we complete on same CPU
as submitted, then I thought that this would ensure that interrupts are
not bothering other CPUs.
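Roughly (an untested sketch; the device name and CPU list are just
placeholders):
  # complete requests on the CPU that submitted them
  echo 2 > /sys/block/sdX/queue/rq_affinity
  # confine the I/O-generating application to CPUs 0-3
  taskset -c 0-3 <application>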
* Re: [RFC PATCH v2 1/1] scsi: mpi3mr: Introduce smp_affinity_enable module parameter
2025-06-18 6:49 ` John Garry
@ 2025-06-18 15:53 ` Sean A.
2025-06-19 19:35 ` Aaron Tomlin
2025-06-23 5:17 ` Christoph Hellwig
2 siblings, 0 replies; 13+ messages in thread
From: Sean A. @ 2025-06-18 15:53 UTC (permalink / raw)
To: John Garry
Cc: James.Bottomley@hansenpartnership.com, atomlin@atomlin.com,
kashyap.desai@broadcom.com, linux-kernel@vger.kernel.org,
linux-scsi@vger.kernel.org, martin.petersen@oracle.com,
mpi3mr-linuxdrv.pdl@broadcom.com, sreekanth.reddy@broadcom.com,
sumit.saxena@broadcom.com
Thank you, we'll certainly look into rq_affinity. We do isolate with managed_irq, so I did not expect to see these spanning the isolated core set.
Every other driver we use honors isolation+managed_irq, or exposes tunables (as proposed in parent) to afford some control over these behaviors for people like us. I realize we are in the minority here; there is tangible impact to our sort of business from an increase in interrupt rates on critical cores across a population of machines at scale. It would be good to know if this was a conscious decision by the maintainers to prioritize their controller's performance or a simple omission so that we can decide whether to continue pursuing this vs researching other [vendor] options.
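For context, the kind of setup in question is roughly the following set of boot parameters (CPU ranges illustrative, not our exact configuration):
  isolcpus=managed_irq,domain,4-47 nohz_full=4-47 irqaffinity=0-3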
SA
On Wednesday, June 18th, 2025 at 2:49 AM, John Garry <john.g.garry@oracle.com> wrote:
>
>
> On 17/06/2025 17:34, Sean A. wrote:
>
> > On 17 Jun 2025, John Garry wrote:
> >
> > > You have given no substantial motivation for this change
> >
> > From my perspective, workloads exist (defense, telecom, finance, RT, etc.) that prefer not to be interrupted, and developers may opt to utilize CPU isolation and other mechanisms to reduce the likelihood of being pre-empted, evicted, etc. This includes steering interrupts away from an isolated set of cores. Also, while this isn't based on any actual benchmarking, it would seem that forcing your way onto every core in a 192-core system and refusing to move might be needlessly greedy, or even detrimental to performance if most of the core set is NUMA-foreign to the storage controller. One should be able to make placement decisions to protect app threads from interruption and to ensure the interrupt handler has a sleepy, local core to play with without lighting up a bunch of interconnect paths on the way.
> >
> > More generally, I believe interfaces like /proc/irq/$irq/smp_affinity[_list] should be allowed to work as expected, and things like irqbalance should also be able to do their jobs unless there's a good (documented) reason they should not.
>
>
> There is a good reason. Some of these storage controllers have hundreds
> of MSI-Xs - typically one per CPU. If you offline CPUs, those interrupts
> need to be migrated to target other CPUs. And for architectures like
> x86, CPUs can only handle a finite and relatively modest number of
> interrupts (being targeted). That is why managed interrupts are used
> (which this module parameter would disable for this controller).
>
> BTW, if you use taskset to set the affinity of a process and ensure that
> /sys/block/xxx/queue/rq_affinity is set so that we complete on same CPU
> as submitted, then I thought that this would ensure that interrupts are
> not bothering other CPUs.
* Re: [RFC PATCH v2 1/1] scsi: mpi3mr: Introduce smp_affinity_enable module parameter
2025-06-18 6:49 ` John Garry
2025-06-18 15:53 ` Sean A.
@ 2025-06-19 19:35 ` Aaron Tomlin
2025-06-20 10:43 ` John Garry
2025-06-23 5:17 ` Christoph Hellwig
2 siblings, 1 reply; 13+ messages in thread
From: Aaron Tomlin @ 2025-06-19 19:35 UTC (permalink / raw)
To: John Garry
Cc: Sean A., James.Bottomley@hansenpartnership.com,
kashyap.desai@broadcom.com, linux-kernel@vger.kernel.org,
linux-scsi@vger.kernel.org, martin.petersen@oracle.com,
mpi3mr-linuxdrv.pdl@broadcom.com, sreekanth.reddy@broadcom.com,
sumit.saxena@broadcom.com
On Wed, Jun 18, 2025 at 07:49:16AM +0100, John Garry wrote:
> BTW, if you use taskset to set the affinity of a process and ensure that
> /sys/block/xxx/queue/rq_affinity is set so that we complete on same CPU as
> submitted, then I thought that this would ensure that interrupts are not
> bothering other CPUs.
Hi John,
I'm trying to understand this better. If I'm not mistaken, modifying
/sys/block/[device]/queue/rq_affinity impacts where requests are processed.
Could you clarify how this would prevent an IRQ from being delivered to an
isolated CPU?
--
Aaron Tomlin
* Re: [RFC PATCH v2 1/1] scsi: mpi3mr: Introduce smp_affinity_enable module parameter
2025-06-19 19:35 ` Aaron Tomlin
@ 2025-06-20 10:43 ` John Garry
0 siblings, 0 replies; 13+ messages in thread
From: John Garry @ 2025-06-20 10:43 UTC (permalink / raw)
To: Aaron Tomlin
Cc: Sean A., James.Bottomley@hansenpartnership.com,
kashyap.desai@broadcom.com, linux-kernel@vger.kernel.org,
linux-scsi@vger.kernel.org, martin.petersen@oracle.com,
mpi3mr-linuxdrv.pdl@broadcom.com, sreekanth.reddy@broadcom.com,
sumit.saxena@broadcom.com
On 19/06/2025 20:35, Aaron Tomlin wrote:
> On Wed, Jun 18, 2025 at 07:49:16AM +0100, John Garry wrote:
>> BTW, if you use taskset to set the affinity of a process and ensure that
>> /sys/block/xxx/queue/rq_affinity is set so that we complete on same CPU as
>> submitted, then I thought that this would ensure that interrupts are not
>> bothering other CPUs.
> Hi John,
>
> I'm trying to understand this better. If I'm not mistaken, modifying
> /sys/block/[device]/queue/rq_affinity impacts where requests are processed.
> Could you clarify how this would prevent an IRQ from being delivered to an
> isolated CPU?
If you echo 2 > /sys/block/[device]/queue/rq_affinity, then completions
will occur on the same CPU on which they originated. And through taskset,
if you keep the processes generating traffic on a certain group of CPUs,
then other CPUs should not see any loading from those same processes.
I am assuming that there is a one-to-one CPU <-> HW queue relationship.
That's my unverified idea...
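A quick way to sanity check that assumption (device name is a
placeholder):
  # per hardware queue CPU mapping
  grep . /sys/block/sdX/mq/*/cpu_list
  # which CPUs the controller's MSI-X vectors have been firing on
  grep mpi3mr /proc/interrupts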
* Re: [RFC PATCH v2 1/1] scsi: mpi3mr: Introduce smp_affinity_enable module parameter
2025-06-18 6:49 ` John Garry
2025-06-18 15:53 ` Sean A.
2025-06-19 19:35 ` Aaron Tomlin
@ 2025-06-23 5:17 ` Christoph Hellwig
2025-06-24 14:29 ` Daniel Wagner
2 siblings, 1 reply; 13+ messages in thread
From: Christoph Hellwig @ 2025-06-23 5:17 UTC (permalink / raw)
To: John Garry, Daniel Wagner
Cc: Sean A., James.Bottomley@hansenpartnership.com,
atomlin@atomlin.com, kashyap.desai@broadcom.com,
linux-kernel@vger.kernel.org, linux-scsi@vger.kernel.org,
martin.petersen@oracle.com, mpi3mr-linuxdrv.pdl@broadcom.com,
sreekanth.reddy@broadcom.com, sumit.saxena@broadcom.com
On Wed, Jun 18, 2025 at 07:49:16AM +0100, John Garry wrote:
> BTW, if you use taskset to set the affinity of a process and ensure that
> /sys/block/xxx/queue/rq_affinity is set so that we complete on same CPU as
> submitted, then I thought that this would ensure that interrupts are not
> bothering other CPUs.
The RT folks want to not even have interrupts on the application CPUs.
That's perfectly reasonable and a common request, which is why doing
driver hacks as in this patch and many others is so completely insane.
Instead we need common functionality for that. The core irq layer has
already added support for this for managed interrupts, and Daniel has
been working on the blk-mq side for a while.
* Re: [RFC PATCH v2 1/1] scsi: mpi3mr: Introduce smp_affinity_enable module parameter
2025-06-23 5:17 ` Christoph Hellwig
@ 2025-06-24 14:29 ` Daniel Wagner
2025-06-25 13:03 ` Aaron Tomlin
0 siblings, 1 reply; 13+ messages in thread
From: Daniel Wagner @ 2025-06-24 14:29 UTC (permalink / raw)
To: Christoph Hellwig
Cc: John Garry, Daniel Wagner, Sean A.,
James.Bottomley@hansenpartnership.com, atomlin@atomlin.com,
kashyap.desai@broadcom.com, linux-kernel@vger.kernel.org,
linux-scsi@vger.kernel.org, martin.petersen@oracle.com,
mpi3mr-linuxdrv.pdl@broadcom.com, sreekanth.reddy@broadcom.com,
sumit.saxena@broadcom.com
On Sun, Jun 22, 2025 at 10:17:51PM -0700, Christoph Hellwig wrote:
> On Wed, Jun 18, 2025 at 07:49:16AM +0100, John Garry wrote:
> > BTW, if you use taskset to set the affinity of a process and ensure that
> > /sys/block/xxx/queue/rq_affinity is set so that we complete on same CPU as
> > submitted, then I thought that this would ensure that interrupts are not
> > bothering other CPUs.
>
> The RT folks want to not even have interrupts on the application CPUs.
> That's perfectly reasonable and a common request, which is why doing
> driver hacks as in this patch and many others is so completely insane.
> Instead we need common functionality for that. The core irq layer has
> already added support for this for managed interrupts, and Daniel has
> been working on the blk-mq side for a while.
Indeed, I am in the process of finishing the next version of the
isolcpus support in the block layer. I hope to send the next version
out this week.
* Re: [RFC PATCH v2 1/1] scsi: mpi3mr: Introduce smp_affinity_enable module parameter
2025-06-24 14:29 ` Daniel Wagner
@ 2025-06-25 13:03 ` Aaron Tomlin
2025-07-01 13:54 ` Daniel Wagner
0 siblings, 1 reply; 13+ messages in thread
From: Aaron Tomlin @ 2025-06-25 13:03 UTC (permalink / raw)
To: Daniel Wagner
Cc: Christoph Hellwig, John Garry, Daniel Wagner, Sean A.,
James.Bottomley@hansenpartnership.com, kashyap.desai@broadcom.com,
linux-kernel@vger.kernel.org, linux-scsi@vger.kernel.org,
martin.petersen@oracle.com, mpi3mr-linuxdrv.pdl@broadcom.com,
sreekanth.reddy@broadcom.com, sumit.saxena@broadcom.com
On Tue, Jun 24, 2025 at 04:29:58PM +0200, Daniel Wagner wrote:
> On Sun, Jun 22, 2025 at 10:17:51PM -0700, Christoph Hellwig wrote:
> > On Wed, Jun 18, 2025 at 07:49:16AM +0100, John Garry wrote:
> > > BTW, if you use taskset to set the affinity of a process and ensure that
> > > /sys/block/xxx/queue/rq_affinity is set so that we complete on same CPU as
> > > submitted, then I thought that this would ensure that interrupts are not
> > > bothering other CPUs.
> >
> > The RT folks want to not even have interrupts on the application CPUs.
> > That's perfectly reasonable and a common request, which is why doing
> > driver hacks as in this patch and many others is so completely insane.
> > Instead we need common functionality for that. The core irq layer has
> > already added support for this for managed interrupts, and Daniel has
> > been working on the blk-mq side for a while.
>
> Indeed, I am in the process of finishing the next version of the
> isolcpus support in the block layer. I hope to send the next version
> out this week.
Hi Christoph, Daniel,
Understood. I agree, common functionality is indeed preferred.
Daniel, I look forward to your submission.
Kind regards,
--
Aaron Tomlin
* Re: [RFC PATCH v2 1/1] scsi: mpi3mr: Introduce smp_affinity_enable module parameter
2025-06-25 13:03 ` Aaron Tomlin
@ 2025-07-01 13:54 ` Daniel Wagner
2025-07-02 0:30 ` Aaron Tomlin
0 siblings, 1 reply; 13+ messages in thread
From: Daniel Wagner @ 2025-07-01 13:54 UTC (permalink / raw)
To: Aaron Tomlin
Cc: Christoph Hellwig, John Garry, Daniel Wagner, Sean A.,
James.Bottomley@hansenpartnership.com, kashyap.desai@broadcom.com,
linux-kernel@vger.kernel.org, linux-scsi@vger.kernel.org,
martin.petersen@oracle.com, mpi3mr-linuxdrv.pdl@broadcom.com,
sreekanth.reddy@broadcom.com, sumit.saxena@broadcom.com
On Wed, Jun 25, 2025 at 09:03:42AM -0400, Aaron Tomlin wrote:
> Understood. I agree, common functionality is indeed preferred.
> Daniel, I look forward to your submission.
Sorry for the delay. I found a few bugs in the new CPU queue mapping
code and it took a while to debug and fix them all. I should have it
ready to post by tomorrow. Currently, my brain is overheating due to the
summer :)
FWIW, the last outstanding issue is that the qla2xxx driver allocates
queues for SCSI and reuses a subset of these queues for NVMe. So the IRQ
allocation is done for the SCSI queues, e.g. 16 queues, but the NVMe
code limits it to 8 queues. Currently, there is a disconnect between the
IRQ mapping and the CPU mapping code. The solution here is to use the
irq_get_affinity function instead of creating a new map based only on
the housekeeping cpumask.
* Re: [RFC PATCH v2 1/1] scsi: mpi3mr: Introduce smp_affinity_enable module parameter
2025-07-01 13:54 ` Daniel Wagner
@ 2025-07-02 0:30 ` Aaron Tomlin
0 siblings, 0 replies; 13+ messages in thread
From: Aaron Tomlin @ 2025-07-02 0:30 UTC (permalink / raw)
To: Daniel Wagner
Cc: Christoph Hellwig, John Garry, Daniel Wagner, Sean A.,
James.Bottomley@hansenpartnership.com, kashyap.desai@broadcom.com,
linux-kernel@vger.kernel.org, linux-scsi@vger.kernel.org,
martin.petersen@oracle.com, mpi3mr-linuxdrv.pdl@broadcom.com,
sreekanth.reddy@broadcom.com, sumit.saxena@broadcom.com
On Tue, Jul 01, 2025 at 03:54:43PM +0200, Daniel Wagner wrote:
> On Wed, Jun 25, 2025 at 09:03:42AM -0400, Aaron Tomlin wrote:
> > Understood. I agree, common functionality is indeed preferred.
> > Daniel, I look forward to your submission.
>
> Sorry for the delay. I found a few bugs in the new CPU queue mapping
> code and it took a while to debug and fix them all. I should have it
> ready to post by tomorrow. Currently, my brain is overheating due to
> the summer :)
>
> FWIW, the last outstanding issue is that the qla2xxx driver allocates
> queues for SCSI and reuses a subset of these queues for NVMe. So the
> IRQ allocation is done for the SCSI queues, e.g. 16 queues, but the
> NVMe code limits it to 8 queues. Currently, there is a disconnect
> between the IRQ mapping and the CPU mapping code. The solution here is
> to use the irq_get_affinity function instead of creating a new map
> based only on the housekeeping cpumask.
Hi Daniel,
No problem, and thank you for the update. Excellent! I look forward to
reviewing it.
Kind regards,
--
Aaron Tomlin