linux-nvme.lists.infradead.org archive mirror
From: swise@opengridcomputing.com (Steve Wise)
Subject: crash on device removal
Date: Wed, 13 Jul 2016 10:05:05 -0500	[thread overview]
Message-ID: <005a01d1dd17$f008a620$d019f260$@opengridcomputing.com> (raw)
In-Reply-To: <578611AA.4000103@grimberg.me>

> On 12/07/16 19:34, Steve Wise wrote:
> > Hey Christoph,
> >
> > I see a crash when shutting down an nvme host node via 'reboot' that
> > has one target device attached.  The shutdown causes iw_cxgb4 to be
> > removed, which triggers the device removal logic in the nvmf rdma
> > transport.  The crash is here:
> >
> > (gdb) list *nvme_rdma_free_qe+0x18
> > 0x1e8 is in nvme_rdma_free_qe (drivers/nvme/host/rdma.c:196).
> > 191     }
> > 192
> > 193     static void nvme_rdma_free_qe(struct ib_device *ibdev, struct nvme_rdma_qe *qe,
> > 194                     size_t capsule_size, enum dma_data_direction dir)
> > 195     {
> > 196             ib_dma_unmap_single(ibdev, qe->dma, capsule_size, dir);
> > 197             kfree(qe->data);
> > 198     }
> > 199
> > 200     static int nvme_rdma_alloc_qe(struct ib_device *ibdev, struct nvme_rdma_qe *qe,
> >
> > Apparently qe is NULL.
> >
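To make the faulting spot concrete: a purely defensive guard in
nvme_rdma_free_qe() would look roughly like the below.  This is only a
sketch to show where the NULL deref happens; it would paper over
whatever is handing us a NULL/stale qe, not fix it:

    static void nvme_rdma_free_qe(struct ib_device *ibdev,
                    struct nvme_rdma_qe *qe,
                    size_t capsule_size, enum dma_data_direction dir)
    {
            /* sketch only: bail out on a NULL or already-freed qe */
            if (!qe || !qe->data)
                    return;
            ib_dma_unmap_single(ibdev, qe->dma, capsule_size, dir);
            kfree(qe->data);
            qe->data = NULL;        /* make a second free harmless */
    }
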
> > Looking at the device removal path, the logic appears correct (see
> > nvme_rdma_device_unplug() and the nice function comment :) ).  I'm
> > wondering if, concurrently with the host device removal path cleaning
> > up queues, the target is disconnecting all of its queues due to the
> > first disconnect event from the host, causing some cleanup race on the
> > host side?  Although, since the removal path is executing in the cma
> > event handler upcall, I don't think another thread would be handling a
> > disconnect event.  Maybe the qp async event handler flow?
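
If two paths really are racing to tear down the same queue, one way to
serialize them would be a one-shot "deleting" bit, along these lines.
(Hypothetical sketch: the NVME_RDMA_Q_DELETING flag and the flags word
in struct nvme_rdma_queue are made up here, and the two helpers stand
in for whatever the teardown path actually calls.)

    /* hypothetical: let only the first teardown path proceed */
    enum nvme_rdma_queue_flags {
            NVME_RDMA_Q_DELETING = 0,
    };

    static void nvme_rdma_stop_and_free_queue(struct nvme_rdma_queue *queue)
    {
            /* later callers see the bit already set and back off */
            if (test_and_set_bit(NVME_RDMA_Q_DELETING, &queue->flags))
                    return;
            nvme_rdma_stop_queue(queue);    /* assumed helper */
            nvme_rdma_free_queue(queue);    /* assumed helper */
    }
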
> 
> Hey Steve,
> 
> I never got this error (but I didn't test with cxgb4; did this happen
> with mlx4/5?).

Hey Sagi, I don't see this with mlx4. 

> Can you track which qe it is? Is it a request qe? Is it a rsp qe? Is
> it the async qe?
> 

It happens due to the call to nvme_rdma_destroy_queue_ib() in
nvme_rdma_device_unplug().  So it is queue->rsp_ring.
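
That would be consistent with the rsp_ring being torn down twice.  One
way to make nvme_rdma_destroy_queue_ib() idempotent would be to drop
the pointer after the free, roughly like this (sketch only; the
nvme_rdma_free_ring() arguments are copied from the driver as I read
it):

    /* sketch: free the ring once; a second pass becomes a no-op */
    if (queue->rsp_ring) {
            nvme_rdma_free_ring(ibdev, queue->rsp_ring, queue->queue_size,
                            sizeof(struct nvme_completion), DMA_FROM_DEVICE);
            queue->rsp_ring = NULL;
    }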


> Also, it would be beneficial to know which queue handled the event
> (admin/io)?

How do I know this?
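
Would printing the queue's index from the removal path be enough?
Assuming the admin queue is ctrl->queues[0] as in the rest of the
driver, something like:

    /* sketch: index 0 is the admin queue, everything else is an io queue */
    pr_info("nvme_rdma: unplug on queue %d (%s)\n",
            (int)(queue - queue->ctrl->queues),
            queue == &queue->ctrl->queues[0] ? "admin" : "io");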

Steve.


Thread overview: 10+ messages
2016-07-12 16:34 crash on device removal Steve Wise
2016-07-12 20:40 ` Ming Lin
2016-07-12 21:09   ` Steve Wise
2016-07-12 21:47     ` Ming Lin
2016-07-12 22:17       ` Steve Wise
2016-07-13 10:06     ` Sagi Grimberg
2016-07-13 10:02 ` Sagi Grimberg
2016-07-13 15:05   ` Steve Wise [this message]
2016-07-13 16:17     ` Steve Wise
     [not found] <00cc01d1dc5b$51c7fa90$f557efb0$@opengridcomputing.com>
2016-07-12 16:38 ` Steve Wise
