From: Krishnamraju Eraparaju <krishna2@chelsio.com>
To: Max Gurtovoy <maxg@mellanox.com>
Cc: sagi@grimberg.me, Chaitanya.Kulkarni@wdc.com, bharat@chelsio.com,
nirranjan@chelsio.com, linux-nvme@lists.infradead.org,
jgg@mellanox.com, kbusch@kernel.org, hch@lst.de
Subject: Re: [PATCH 3/3] nvmet-rdma: allocate RW ctxs according to mdts
Date: Thu, 5 Mar 2020 00:48:49 +0530
Message-ID: <20200304191848.GA30485@chelsio.com>
In-Reply-To: <20200304153935.101063-3-maxg@mellanox.com>
Hi Max Gurtovoy,

I just tested this patch series; the issue is not occurring with these
patches.

I have a couple of questions:

- Say both host & target have a max_fr_pages size of 128 pages. Then
  the number of MRs allocated at the target will be twice
  send_queue_size, since NVMET_RDMA_MAX_MDTS corresponds to 256 pages.
  So in this case, as the host can never request an IO larger than
  128 pages, half of the MRs allocated at the target will always be
  left unused (see the sketch after these questions).
  If this is true, will it become a concern in the future when the
  NVMET_RDMA_MAX_MDTS limit is increased but the max_fr_pages size of
  some devices remains at 128 pages?

- Also, would simply passing the optimal mdts (derived from
  max_fr_pages) to the host during controller identification fix this
  issue properly, instead of increasing max_rdma_ctxs by the factor?
  I think the target doesn't need multiple MRs per IO in that case,
  as the host's block-layer max_segments gets tuned to the target's
  mdts.
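
To put numbers on both questions, here is a rough, self-contained sketch
of the arithmetic as I understand it (plain userspace C, not kernel code;
the values are the ones assumed above, not read from any real device):

	#include <stdio.h>

	#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

	int main(void)
	{
		unsigned int max_fr_pages = 128;     /* pages per MR on both sides (assumed) */
		unsigned int mdts_pages = 256;       /* 1 << NVMET_RDMA_MAX_MDTS, per the patch */
		unsigned int send_queue_size = 100;  /* host SQ size from the commit message */

		/* Question 1: MR budget with the factor introduced by this patch */
		unsigned int factor = DIV_ROUND_UP(mdts_pages, max_fr_pages);  /* 2 */
		unsigned int mr_budget = send_queue_size * factor;             /* 200 */

		/* A host limited to max_fr_pages per IO uses at most one MR per IO */
		unsigned int mrs_usable = send_queue_size;                     /* 100 */

		/* Question 2: if the advertised mdts were capped at max_fr_pages
		 * (128 pages, i.e. 512KB with 4KB pages), the factor would be 1
		 * and the existing per-queue budget would already suffice. */
		unsigned int tuned_factor = DIV_ROUND_UP(max_fr_pages, max_fr_pages);

		printf("factor=%u budget=%u usable=%u tuned_factor=%u\n",
		       factor, mr_budget, mrs_usable, tuned_factor);
		return 0;
	}

With these numbers the target allocates 200 MRs while such a host can
have at most 100 MR-backed IOs in flight, which is what prompted
question 1.
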
Please correct me if I'm wrong.
Thanks,
Krishna.
On Wednesday, March 03/04/20, 2020 at 17:39:35 +0200, Max Gurtovoy wrote:
> Current nvmet-rdma code allocates the MR pool budget based on queue size,
> assuming both host and target use the same "max_pages_per_mr" count.
> After limiting the mdts value for RDMA controllers, we know the maximum
> number of MRs per IO operation. Thus, make sure the MR pool will be
> sufficient for the required IO depth and IO size.
>
> That is, say the host's SQ size is 100; then the MR pool budget
> currently allocated at the target will also be 100 MRs. But 100 IO
> WRITE requests with an sg_count of 256 (IO size above 1MB) require
> 200 MRs when the target's "max_pages_per_mr" is 128.
>
> Reported-by: Krishnamraju Eraparaju <krishna2@chelsio.com>
> Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
> ---
> drivers/nvme/target/rdma.c | 6 ++++--
> 1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
> index 5ba76d2..a6c9d11 100644
> --- a/drivers/nvme/target/rdma.c
> +++ b/drivers/nvme/target/rdma.c
> @@ -976,7 +976,7 @@ static int nvmet_rdma_create_queue_ib(struct nvmet_rdma_queue *queue)
> {
> struct ib_qp_init_attr qp_attr;
> struct nvmet_rdma_device *ndev = queue->dev;
> - int comp_vector, nr_cqe, ret, i;
> + int comp_vector, nr_cqe, ret, i, factor;
>
> /*
> * Spread the io queues across completion vectors,
> @@ -1009,7 +1009,9 @@ static int nvmet_rdma_create_queue_ib(struct nvmet_rdma_queue *queue)
> qp_attr.qp_type = IB_QPT_RC;
> /* +1 for drain */
> qp_attr.cap.max_send_wr = queue->send_queue_size + 1;
> - qp_attr.cap.max_rdma_ctxs = queue->send_queue_size;
> + factor = rdma_rw_mr_factor(ndev->device, queue->cm_id->port_num,
> + 1 << NVMET_RDMA_MAX_MDTS);
> + qp_attr.cap.max_rdma_ctxs = queue->send_queue_size * factor;
> qp_attr.cap.max_send_sge = max(ndev->device->attrs.max_sge_rd,
> ndev->device->attrs.max_send_sge);
>
> --
> 1.8.3.1
>
_______________________________________________
linux-nvme mailing list
linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme