From: Daniel Wagner <wagi@kernel.org>
To: Jens Axboe <axboe@kernel.dk>, Bjorn Helgaas <bhelgaas@google.com>,
"Michael S. Tsirkin" <mst@redhat.com>,
Jason Wang <jasowang@redhat.com>,
"Martin K. Petersen" <martin.petersen@oracle.com>,
Keith Busch <kbusch@kernel.org>, Christoph Hellwig <hch@lst.de>,
Sagi Grimberg <sagi@grimberg.me>
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-pci@vger.kernel.org, virtualization@lists.linux.dev,
linux-scsi@vger.kernel.org, megaraidlinux.pdl@broadcom.com,
mpi3mr-linuxdrv.pdl@broadcom.com,
MPT-FusionLinux.pdl@broadcom.com, storagedev@microchip.com,
linux-nvme@lists.infradead.org, Daniel Wagner <dwagner@suse.de>,
20240912-do-not-overwrite-pci-mapping-v1-1-85724b6cec49@suse.de
Subject: [PATCH 3/6] scsi: hisi_sas: replace blk_mq_pci_map_queues with blk_mq_hctx_map_queues
Date: Fri, 13 Sep 2024 09:42:01 +0200
Message-ID: <20240913-refactor-blk-affinity-helpers-v1-3-8e058f77af12@suse.de>
In-Reply-To: <20240913-refactor-blk-affinity-helpers-v1-0-8e058f77af12@suse.de>
From: Daniel Wagner <dwagner@suse.de>

Replace all users of blk_mq_pci_map_queues with the more generic
blk_mq_hctx_map_queues. This is in preparation for retiring
blk_mq_pci_map_queues.

For hisi_sas_v2_hw.c we have to provide a driver-specific callback for
retrieving the affinity, because pci_get_blk_mq_affinity uses
pci_irq_get_affinity and not irq_get_affinity_mask. But at least we can
replace the open-coded loop with blk_mq_hctx_map_queues.

Signed-off-by: Daniel Wagner <dwagner@suse.de>
---
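Note: the generic helper itself is introduced in patch 1/6 and is not part
of this diff. The sketch below only illustrates the shape the call sites in
this patch assume, inferred from the open-coded loop removed from
hisi_sas_v2_hw.c; the callback typedef name and the exact signature are
assumptions and may differ from the actual series:

/*
 * Hypothetical sketch (not the actual patch 1/6 code): map each hardware
 * queue to the CPUs of the affinity mask returned by a driver-provided
 * callback. Relies on kernel-internal types from <linux/blk-mq.h> and
 * <linux/cpumask.h>.
 */
typedef const struct cpumask *(get_queue_affinity_fn)(void *dev_data,
						      int offset, int queue);

static void blk_mq_hctx_map_queues(struct blk_mq_queue_map *qmap,
				   void *dev_data, int offset,
				   get_queue_affinity_fn *get_queue_affinity)
{
	const struct cpumask *mask;
	unsigned int queue, cpu;

	for (queue = 0; queue < qmap->nr_queues; queue++) {
		/* Ask the driver (irq-based, PCI, virtio, ...) for the mask. */
		mask = get_queue_affinity(dev_data, offset, queue);
		if (!mask)
			continue;

		/* Mirror the removed v2_hw loop: map every CPU in the mask. */
		for_each_cpu(cpu, mask)
			qmap->mq_map[cpu] = qmap->queue_offset + queue;
	}
}

With a shape like this, the v2_hw and v3_hw call sites below differ only in
the callback they pass: an irq_get_affinity_mask based one for v2, and
pci_get_blk_mq_affinity for v3.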
drivers/scsi/hisi_sas/hisi_sas.h | 1 -
drivers/scsi/hisi_sas/hisi_sas_v2_hw.c | 20 ++++++++++----------
drivers/scsi/hisi_sas/hisi_sas_v3_hw.c | 5 +++--
3 files changed, 13 insertions(+), 13 deletions(-)
diff --git a/drivers/scsi/hisi_sas/hisi_sas.h b/drivers/scsi/hisi_sas/hisi_sas.h
index d223f482488f..010479a354ee 100644
--- a/drivers/scsi/hisi_sas/hisi_sas.h
+++ b/drivers/scsi/hisi_sas/hisi_sas.h
@@ -9,7 +9,6 @@
#include <linux/acpi.h>
#include <linux/blk-mq.h>
-#include <linux/blk-mq-pci.h>
#include <linux/clk.h>
#include <linux/debugfs.h>
#include <linux/dmapool.h>
diff --git a/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
index 342d75f12051..31be34f23164 100644
--- a/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
+++ b/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
@@ -3549,21 +3549,21 @@ static const struct attribute_group *sdev_groups_v2_hw[] = {
NULL
};
+static const struct cpumask *hisi_hba_get_queue_affinity(void *dev_data,
+ int offset, int queue)
+{
+ struct hisi_hba *hba = dev_data;
+
+ return irq_get_affinity_mask(hba->irq_map[offset + queue]);
+}
+
static void map_queues_v2_hw(struct Scsi_Host *shost)
{
struct hisi_hba *hisi_hba = shost_priv(shost);
struct blk_mq_queue_map *qmap = &shost->tag_set.map[HCTX_TYPE_DEFAULT];
- const struct cpumask *mask;
- unsigned int queue, cpu;
- for (queue = 0; queue < qmap->nr_queues; queue++) {
- mask = irq_get_affinity_mask(hisi_hba->irq_map[96 + queue]);
- if (!mask)
- continue;
-
- for_each_cpu(cpu, mask)
- qmap->mq_map[cpu] = qmap->queue_offset + queue;
- }
+ blk_mq_hctx_map_queues(qmap, hisi_hba, CQ0_IRQ_INDEX,
+ hisi_hba_get_queue_affinity);
}
static const struct scsi_host_template sht_v2_hw = {
diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
index feda9b54b443..1576eee943ba 100644
--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
@@ -3322,8 +3322,9 @@ static void hisi_sas_map_queues(struct Scsi_Host *shost)
if (i == HCTX_TYPE_POLL)
blk_mq_map_queues(qmap);
else
- blk_mq_pci_map_queues(qmap, hisi_hba->pci_dev,
- BASE_VECTORS_V3_HW);
+ blk_mq_hctx_map_queues(qmap, hisi_hba->pci_dev,
+ BASE_VECTORS_V3_HW,
+ pci_get_blk_mq_affinity);
qoff += qmap->nr_queues;
}
}
--
2.46.0