From: Sagi Grimberg <sagi@grimberg.me>
To: Jens Axboe <axboe@kernel.dk>,
linux-nvme@lists.infradead.org, linux-block@vger.kernel.org,
linux-rdma@vger.kernel.org
Cc: Christoph Hellwig <hch@lst.de>,
Or Gerlitz <ogerlitz@mellanox.com>,
Saeed Mahameed <saeedm@mellanox.com>,
Leon Romanovsky <leonro@mellanox.com>
Subject: [PATCH v2 6/6] nvme-rdma: use intelligent affinity based queue mappings
Date: Thu, 6 Apr 2017 13:36:38 +0300
Message-ID: <1491474998-16423-7-git-send-email-sagi@grimberg.me>
In-Reply-To: <1491474998-16423-1-git-send-email-sagi@grimberg.me>

Use the generic block layer affinity mapping helper. Also, limit
nr_hw_queues to the rdma device's number of irq vectors, as we don't
really need more.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
---
drivers/nvme/host/rdma.c | 29 ++++++++++++++++++++++-------
1 file changed, 22 insertions(+), 7 deletions(-)
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 4aae363943e3..22334b6e8fc3 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -19,6 +19,7 @@
#include <linux/string.h>
#include <linux/atomic.h>
#include <linux/blk-mq.h>
+#include <linux/blk-mq-rdma.h>
#include <linux/types.h>
#include <linux/list.h>
#include <linux/mutex.h>
@@ -496,14 +497,10 @@ static int nvme_rdma_create_queue_ib(struct nvme_rdma_queue *queue,
queue->device = dev;
/*
- * The admin queue is barely used once the controller is live, so don't
- * bother to spread it out.
+ * Spread I/O queue completion vectors according to their queue index.
+ * Admin queues can always go on completion vector 0.
*/
- if (idx == 0)
- comp_vector = 0;
- else
- comp_vector = idx % ibdev->num_comp_vectors;
-
+ comp_vector = idx == 0 ? idx : idx - 1;
/* +1 for ib_stop_cq */
queue->ib_cq = ib_alloc_cq(dev->dev, queue,
@@ -645,10 +642,20 @@ static int nvme_rdma_connect_io_queues(struct nvme_rdma_ctrl *ctrl)
static int nvme_rdma_init_io_queues(struct nvme_rdma_ctrl *ctrl)
{
struct nvmf_ctrl_options *opts = ctrl->ctrl.opts;
+ struct ib_device *ibdev = ctrl->device->dev;
unsigned int nr_io_queues;
int i, ret;
nr_io_queues = min(opts->nr_io_queues, num_online_cpus());
+
+ /*
+ * We map queues according to the device irq vectors for
+ * optimal locality, so we don't need more queues than
+ * completion vectors.
+ */
+ nr_io_queues = min_t(unsigned int, nr_io_queues,
+ ibdev->num_comp_vectors);
+
ret = nvme_set_queue_count(&ctrl->ctrl, &nr_io_queues);
if (ret)
return ret;
@@ -1523,6 +1530,13 @@ static void nvme_rdma_complete_rq(struct request *rq)
nvme_complete_rq(rq);
}
+static int nvme_rdma_map_queues(struct blk_mq_tag_set *set)
+{
+ struct nvme_rdma_ctrl *ctrl = set->driver_data;
+
+ return blk_mq_rdma_map_queues(set, ctrl->device->dev, 0);
+}
+
static const struct blk_mq_ops nvme_rdma_mq_ops = {
.queue_rq = nvme_rdma_queue_rq,
.complete = nvme_rdma_complete_rq,
@@ -1532,6 +1546,7 @@ static const struct blk_mq_ops nvme_rdma_mq_ops = {
.init_hctx = nvme_rdma_init_hctx,
.poll = nvme_rdma_poll,
.timeout = nvme_rdma_timeout,
+ .map_queues = nvme_rdma_map_queues,
};
static const struct blk_mq_ops nvme_rdma_admin_mq_ops = {
--
2.7.4
Thread overview: 14+ messages
2017-04-06 10:36 [PATCH v2 0/6] Automatic affinity settings for nvme over rdma Sagi Grimberg
2017-04-06 10:36 ` [PATCH v2 1/6] mlx5: convert to generic pci_alloc_irq_vectors Sagi Grimberg
2017-04-06 14:24 ` Leon Romanovsky
2017-04-06 10:36 ` [PATCH v2 2/6] mlx5: move affinity hints assignments to generic code Sagi Grimberg
2017-04-06 14:27 ` Leon Romanovsky
2017-04-06 10:36 ` [PATCH v2 3/6] RDMA/core: expose affinity mappings per completion vector Sagi Grimberg
2017-04-28 22:48 ` Doug Ledford
2017-05-03 8:02 ` Sagi Grimberg
2017-05-03 10:34 ` Håkon Bugge
2017-04-06 10:36 ` [PATCH v2 4/6] mlx5: support ->get_vector_affinity Sagi Grimberg
2017-04-06 14:30 ` Leon Romanovsky
2017-04-06 10:36 ` [PATCH v2 5/6] block: Add rdma affinity based queue mapping helper Sagi Grimberg
2017-04-06 10:36 ` Sagi Grimberg [this message]
2017-04-23 12:01 ` [PATCH v2 0/6] Automatic affinity settings for nvme over rdma Sagi Grimberg