linux-nvme.lists.infradead.org archive mirror
From: sagi@grimberg.me (Sagi Grimberg)
Subject: mlx4_core 0000:07:00.0: swiotlb buffer is full and OOM observed during stress test on reset_controller
Date: Sat, 18 Mar 2017 19:50:59 +0200
Message-ID: <059299cc-7f45-e8eb-f1b1-7da2cf49cf5a@grimberg.me> (raw)
In-Reply-To: <1768681609.3995777.1489837916289.JavaMail.zimbra@redhat.com>


> Hi Sagi
> With this patch, the OOM cannot be reproduced now.
>
> But there is another problem: the reset operation [1] failed at iteration 1007.
> [1]
> echo 1 >/sys/block/nvme0n1/device/reset_controller
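
For reference, the stress test behind that iteration count is presumably a
loop around the quoted sysfs write. A minimal reproducer sketch follows; the
device path comes from the quote, while the loop bound and error reporting
are illustrative assumptions, not taken from the report:

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	/* Path quoted in the report; adjust for the controller under test. */
	const char *attr = "/sys/block/nvme0n1/device/reset_controller";

	for (int i = 1; i <= 2000; i++) {	/* iteration count is arbitrary */
		int fd = open(attr, O_WRONLY);

		if (fd < 0 || write(fd, "1", 1) != 1) {
			fprintf(stderr, "reset failed at iteration %d: %s\n",
				i, strerror(errno));
			if (fd >= 0)
				close(fd);
			return 1;
		}
		close(fd);
	}
	printf("completed all reset iterations\n");
	return 0;
}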

We can relax this a bit by only flushing on admin queue accepts, and
also allow the host more time to establish a connection.

Does this help?
--
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 47a479f26e5d..e1db1736823f 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -34,7 +34,7 @@
  #include "fabrics.h"


-#define NVME_RDMA_CONNECT_TIMEOUT_MS   1000            /* 1 second */
+#define NVME_RDMA_CONNECT_TIMEOUT_MS   5000            /* 5 seconds */

  #define NVME_RDMA_MAX_SEGMENT_SIZE     0xffffff        /* 24-bit SGL field */

diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index ecc4fe862561..88bb5814c264 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -1199,6 +1199,11 @@ static int nvmet_rdma_queue_connect(struct rdma_cm_id *cm_id,
         }
         queue->port = cm_id->context;

+       if (queue->host_qid == 0) {
+               /* Let inflight controller teardown complete */
+               flush_scheduled_work();
+       }
+
         ret = nvmet_rdma_cm_accept(cm_id, queue, &event->param.conn);
         if (ret)
                 goto release_queue;
--
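
For reference, a minimal sketch of the workqueue interaction the new target
hunk relies on. The function names below are invented for illustration; only
the workqueue calls mirror the patch. Controller teardown is presumably
deferred with schedule_work() (flush_scheduled_work() only flushes the system
workqueue), so flushing on an admin-queue (host_qid == 0) accept waits for
any in-flight teardown to release its resources first:

#include <linux/workqueue.h>

/* Stand-in for the deferred controller teardown (illustrative only). */
static void ctrl_teardown_work(struct work_struct *w)
{
	/* release the old association's queues, MRs and CM IDs here */
}
static DECLARE_WORK(teardown_work, ctrl_teardown_work);

static void start_ctrl_teardown(void)
{
	schedule_work(&teardown_work);	/* runs later on the system workqueue */
}

static int accept_admin_queue(void)
{
	/*
	 * host_qid == 0 means a new association is starting: block here
	 * until previously scheduled teardown work has finished, so the
	 * new controller does not race with the old one's resource release.
	 */
	flush_scheduled_work();
	return 0;
}

Flushing only for the admin queue keeps I/O queue accepts off that path,
which is the "relax" mentioned above.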

Thread overview: 23+ messages
     [not found] <1908657724.31179983.1488539944957.JavaMail.zimbra@redhat.com>
2017-03-03 11:55 ` mlx4_core 0000:07:00.0: swiotlb buffer is full and OOM observed during stress test on reset_controller Yi Zhang
2017-03-05  8:12   ` Leon Romanovsky
2017-03-08 15:48     ` Christoph Hellwig
2017-03-09  8:42       ` Leon Romanovsky
2017-03-09  8:46     ` Leon Romanovsky
2017-03-09 10:33       ` Yi Zhang
2017-03-06 11:23   ` Sagi Grimberg
2017-03-09  4:20     ` Yi Zhang
2017-03-09 11:42       ` Max Gurtovoy
2017-03-10  8:12         ` Yi Zhang
2017-03-10 16:52       ` Leon Romanovsky
2017-03-12 18:16         ` Max Gurtovoy
2017-03-14 13:35           ` Yi Zhang
2017-03-14 16:52             ` Max Gurtovoy
2017-03-15  7:48               ` Yi Zhang
2017-03-16 16:51                 ` Sagi Grimberg
2017-03-18 11:51                   ` Yi Zhang
2017-03-18 17:50                     ` Sagi Grimberg [this message]
2017-03-19  7:01                   ` Leon Romanovsky
2017-05-18 17:01                     ` Yi Zhang
2017-05-19 16:17                       ` Yi Zhang
2017-06-04 15:49                         ` Sagi Grimberg
2017-06-15  8:45                           ` Yi Zhang
