From: Max Gurtovoy <mgurtovoy@nvidia.com>
To: Mark Ruijter <mruijter@primelogic.nl>,
	"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>
Subject: Re: SPDK initiators (Vmware 7.x) can not connect to nvmet-rdma.
Date: Fri, 3 Sep 2021 00:36:16 +0300
Message-ID: <2d8e7197-e25d-ba02-8e27-5869a9cf1cfe@nvidia.com>
In-Reply-To: <CA3F5384-4B57-47ED-9DFE-27E80F3D312C@primelogic.nl>


On 8/31/2021 4:42 PM, Mark Ruijter wrote:
> When I connect an SPDK initiator, it tries to connect using 1024 connections.
> The Linux target is unable to handle this situation and returns an error.
>
> Aug 28 14:22:56 crashme kernel: [169366.627010] infiniband mlx5_0: create_qp:2789:(pid 33755): Create QP type 2 failed
> Aug 28 14:22:56 crashme kernel: [169366.627913] nvmet_rdma: failed to create_qp ret= -12
> Aug 28 14:22:56 crashme kernel: [169366.628498] nvmet_rdma: nvmet_rdma_alloc_queue: creating RDMA queue failed (-12).
>
> It is really easy to reproduce the problem, even when not using the SPDK initiator.
>
> Just type:
> nvme connect --transport=rdma --queue-size=1024 --nqn=SOME.NQN --traddr=SOME.IP --trsvcid=XXXX
> While a Linux initiator attempts to set up 64 connections, SPDK attempts to create 1024 connections.

Is that 1024 connections, or is it the queue depth?

How many cores does the initiator have?

Can you give more details on the systems?
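
For what it's worth, the private data that nvmet_rdma_parse_cm_connect_req
reads only carries per-queue sizes (hsqsize/hrqsize), not a number of
connections; the number of I/O queues is negotiated later over the admin
queue. A rough sketch of the connect-request layout, with the field names
taken from the function you pasted below (the reserved-padding size is
from memory, so check include/linux/nvme-rdma.h in your tree):

struct nvme_rdma_cm_req {
        __le16  recfmt;         /* private data format, NVME_RDMA_CM_FMT_1_0 */
        __le16  qid;            /* 0 = admin queue, > 0 = I/O queue */
        __le16  hrqsize;        /* host receive queue size */
        __le16  hsqsize;        /* host send queue size, 0's based */
        u8      rsvd[24];       /* pads the private data to 32 bytes */
};

So an initiator's --queue-size=1024 shows up as hsqsize here, and the
target turns it into recv_queue_size = hsqsize + 1 further down.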

>
> The result is that anything that relies on SPDK, such as VMware 7.x, won't be able to connect.
> Restricting the queues to a depth of 256 solves part of the problem: with that change, SPDK and VMware seem to connect.
> See the code section below. Sadly, VMware declares the path dead afterwards, so I guess this 'fix' needs more work. ;-(
>
> I noticed that someone reported this problem on the SPDK list:
> https://github.com/spdk/spdk/issues/1719
>
> Thanks,
>
> Mark
>
> ---
> static int
> nvmet_rdma_parse_cm_connect_req(struct rdma_conn_param *conn,
>                                  struct nvmet_rdma_queue *queue)
> {
>          struct nvme_rdma_cm_req *req;
>
>          req = (struct nvme_rdma_cm_req *)conn->private_data;
>          if (!req || conn->private_data_len == 0)
>                  return NVME_RDMA_CM_INVALID_LEN;
>
>          if (le16_to_cpu(req->recfmt) != NVME_RDMA_CM_FMT_1_0)
>                  return NVME_RDMA_CM_INVALID_RECFMT;
>
>          queue->host_qid = le16_to_cpu(req->qid);
>
>          /*
>           * req->hsqsize corresponds to our recv queue size plus 1
>           * req->hrqsize corresponds to our send queue size
>           */
>          queue->recv_queue_size = le16_to_cpu(req->hsqsize) + 1;
>          queue->send_queue_size = le16_to_cpu(req->hrqsize);
>          if (!queue->host_qid && queue->recv_queue_size > NVME_AQ_DEPTH) {
>                  pr_info("MARK nvmet_rdma_parse_cm_connect_req return %i\n", NVME_RDMA_CM_INVALID_HSQSIZE);
>                  return NVME_RDMA_CM_INVALID_HSQSIZE;
>          }
>
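> +        /* XXX temporary hack: cap queue sizes at 256 entries */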
> +        if (queue->recv_queue_size > 256)
> +                queue->recv_queue_size = 256;
> +        if (queue->send_queue_size > 256)
> +                queue->send_queue_size = 256;
> +        pr_info("MARK queue->recv_queue_size = %i\n", queue->recv_queue_size);
> +        pr_info("MARK queue->send_queue_size = %i\n", queue->send_queue_size);
>
>          /* XXX: Should we enforce some kind of max for IO queues? */
>          return 0;
> }
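
If what is actually failing is the QP allocation, because hsqsize/hrqsize
translate into more work requests than the device supports per QP (the
create_qp -12 / -ENOMEM above points in that direction), then a clamp
against the device's advertised limit may make more sense than a
hard-coded 256. A rough, untested sketch of that idea (the helper name is
made up, and it would have to be called somewhere the ib_device pointer
is already at hand, e.g. in the queue allocation path, not inside the
parse function itself):

static void nvmet_rdma_clamp_queue_size(struct nvmet_rdma_queue *queue,
                                        struct ib_device *ibdev)
{
        /* max_qp_wr: largest per-QP work request count the device reports */
        int max_wr = ibdev->attrs.max_qp_wr;

        if (queue->recv_queue_size > max_wr)
                queue->recv_queue_size = max_wr;
        if (queue->send_queue_size > max_wr)
                queue->send_queue_size = max_wr;
}

Note that any silent clamp on the target side (this one or the 256 hack)
still leaves the host believing it got the queue depth it asked for,
which may be part of why VMware marks the path dead afterwards.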
>
>
>
