From: Sagi Grimberg <sagi@grimberg.me>
To: linux-nvme@lists.infradead.org, Christoph Hellwig <hch@lst.de>,
Keith Busch <keith.busch@intel.com>
Cc: linux-block@vger.kernel.org
Subject: [PATCH 02/12] nvme-rdma: move admin specific resources to alloc_queue
Date: Tue, 15 Aug 2017 12:52:15 +0300 [thread overview]
Message-ID: <1502790745-12569-3-git-send-email-sagi@grimberg.me> (raw)
In-Reply-To: <1502790745-12569-1-git-send-email-sagi@grimberg.me>

We're working towards making admin queue configuration generic, so move
the RDMA-specific admin resources (device reference, MR page limits, and
the async event SQE) into queue allocation and teardown, keyed off the
queue index that is passed in.
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
---
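Note for reviewers (not part of the patch): a stand-alone, user-space C
sketch of the pattern this change applies, namely that the generic
per-queue alloc/free path special-cases queue 0 so the admin-only
resource (the async event SQE) is set up and torn down there instead of
in an admin-specific configure/destroy sequence. All names below
(demo_ctrl, demo_queue, demo_alloc_queue, ...) are made up for
illustration and do not exist in the driver.

#include <stdio.h>
#include <stdlib.h>

struct demo_queue {
	int idx;
	void *ib_res;		/* stands in for the per-queue RDMA resources */
};

struct demo_ctrl {
	struct demo_queue queues[4];
	void *async_event_sqe;	/* admin-only resource, owned by queue 0 */
};

static int demo_alloc_queue(struct demo_ctrl *ctrl, int idx)
{
	struct demo_queue *queue = &ctrl->queues[idx];

	queue->idx = idx;
	queue->ib_res = malloc(64);
	if (!queue->ib_res)
		return -1;

	if (!idx) {
		/* admin queue: allocate its extra resource here too */
		ctrl->async_event_sqe = malloc(64);
		if (!ctrl->async_event_sqe) {
			free(queue->ib_res);	/* unwind, as the patch does */
			return -1;
		}
	}
	return 0;
}

static void demo_free_queue(struct demo_ctrl *ctrl, int idx)
{
	struct demo_queue *queue = &ctrl->queues[idx];

	if (!idx)
		free(ctrl->async_event_sqe);	/* mirrors the alloc side */
	free(queue->ib_res);
}

int main(void)
{
	struct demo_ctrl ctrl = { 0 };

	if (demo_alloc_queue(&ctrl, 0))
		return 1;
	printf("admin queue allocated with its async event SQE\n");
	demo_free_queue(&ctrl, 0);
	return 0;
}

The point is only that the caller no longer needs an admin-specific
setup/teardown step, so configure_admin_queue can stay generic.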
 drivers/nvme/host/rdma.c | 37 +++++++++++++++++++++----------------
 1 file changed, 21 insertions(+), 16 deletions(-)

diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 10e54f81e3d9..d94b4df364a5 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -544,6 +544,23 @@ static int nvme_rdma_alloc_queue(struct nvme_rdma_ctrl *ctrl,
 		goto out_destroy_cm_id;
 	}
 
+	if (!idx) {
+		ctrl->device = ctrl->queues[0].device;
+		ctrl->max_fr_pages = min_t(u32, NVME_RDMA_MAX_SEGMENTS,
+			ctrl->device->dev->attrs.max_fast_reg_page_list_len);
+		ctrl->ctrl.max_hw_sectors =
+			(ctrl->max_fr_pages - 1) << (PAGE_SHIFT - 9);
+
+		ret = nvme_rdma_alloc_qe(ctrl->queues[0].device->dev,
+			&ctrl->async_event_sqe, sizeof(struct nvme_command),
+			DMA_TO_DEVICE);
+		if (ret) {
+			nvme_rdma_destroy_queue_ib(&ctrl->queues[0]);
+			goto out_destroy_cm_id;
+		}
+
+	}
+
 	clear_bit(NVME_RDMA_Q_DELETING, &queue->flags);
 
 	return 0;
@@ -567,6 +584,10 @@ static void nvme_rdma_free_queue(struct nvme_rdma_queue *queue)
 	if (test_and_set_bit(NVME_RDMA_Q_DELETING, &queue->flags))
 		return;
 
+	if (!nvme_rdma_queue_idx(queue))
+		nvme_rdma_free_qe(queue->device->dev,
+			&queue->ctrl->async_event_sqe,
+			sizeof(struct nvme_command), DMA_TO_DEVICE);
 	nvme_rdma_destroy_queue_ib(queue);
 	rdma_destroy_id(queue->cm_id);
 }
@@ -735,8 +756,6 @@ static struct blk_mq_tag_set *nvme_rdma_alloc_tagset(struct nvme_ctrl *nctrl,
 static void nvme_rdma_destroy_admin_queue(struct nvme_rdma_ctrl *ctrl,
 		bool remove)
 {
-	nvme_rdma_free_qe(ctrl->queues[0].device->dev, &ctrl->async_event_sqe,
-			sizeof(struct nvme_command), DMA_TO_DEVICE);
 	nvme_rdma_stop_queue(&ctrl->queues[0]);
 	if (remove) {
 		blk_cleanup_queue(ctrl->ctrl.admin_q);
@@ -754,11 +773,6 @@ static int nvme_rdma_configure_admin_queue(struct nvme_rdma_ctrl *ctrl,
 	if (error)
 		return error;
 
-	ctrl->device = ctrl->queues[0].device;
-
-	ctrl->max_fr_pages = min_t(u32, NVME_RDMA_MAX_SEGMENTS,
-		ctrl->device->dev->attrs.max_fast_reg_page_list_len);
-
 	if (new) {
 		ctrl->ctrl.admin_tagset = nvme_rdma_alloc_tagset(&ctrl->ctrl, true);
 		if (IS_ERR(ctrl->ctrl.admin_tagset))
@@ -794,19 +808,10 @@ static int nvme_rdma_configure_admin_queue(struct nvme_rdma_ctrl *ctrl,
 	if (error)
 		goto out_cleanup_queue;
 
-	ctrl->ctrl.max_hw_sectors =
-		(ctrl->max_fr_pages - 1) << (PAGE_SHIFT - 9);
-
 	error = nvme_init_identify(&ctrl->ctrl);
 	if (error)
 		goto out_cleanup_queue;
 
-	error = nvme_rdma_alloc_qe(ctrl->queues[0].device->dev,
-		&ctrl->async_event_sqe, sizeof(struct nvme_command),
-		DMA_TO_DEVICE);
-	if (error)
-		goto out_cleanup_queue;
-
 	return 0;
 
 out_cleanup_queue:
--
2.7.4