From: swise@opengridcomputing.com (Steve Wise)
Subject: crash on device removal
Date: Tue, 12 Jul 2016 17:17:03 -0500
Message-ID: <013f01d1dc8b$1df9cc90$59ed65b0$@opengridcomputing.com>
In-Reply-To: <1468360076.10045.0.camel@ssi>
> On Tue, 2016-07-12 at 16:09 -0500, Steve Wise wrote:
> > > On Tue, 2016-07-12 at 11:34 -0500, Steve Wise wrote:
> > > > Hey Christoph,
> > > >
> > > > I see a crash when shutting down an nvme host node via 'reboot' that
> > > > has 1 target device attached. The shutdown causes iw_cxgb4 to be
> > > > removed, which triggers the device removal logic in the nvmf rdma
> > > > transport. The crash is here:
> > > >
> > > > (gdb) list *nvme_rdma_free_qe+0x18
> > > > 0x1e8 is in nvme_rdma_free_qe (drivers/nvme/host/rdma.c:196).
> > > > 191 }
> > > > 192
> > > > 193 static void nvme_rdma_free_qe(struct ib_device *ibdev, struct
> > > > nvme_rdma_qe *qe,
> > > > 194 size_t capsule_size, enum dma_data_direction dir)
> > > > 195 {
> > > > 196 ib_dma_unmap_single(ibdev, qe->dma, capsule_size, dir);
> > > > 197 kfree(qe->data);
> > > > 198 }
> > > > 199
> > > > 200 static int nvme_rdma_alloc_qe(struct ib_device *ibdev, struct
> > > > nvme_rdma_qe *qe,
> > > >
> > > > Apparently qe is NULL.
> > > >
> > > > Looking at the device removal path, the logic appears correct (see
> > > > nvme_rdma_device_unplug() and the nice function comment :) ). I'm
> > > > wondering if, concurrently with the host device removal path cleaning
> > > > up queues, the target is disconnecting all of its queues due to the
> > > > first disconnect event from the host, causing some cleanup race on the
> > > > host side? Although, since the removal path executes in the cma event
> > > > handler upcall, I don't think another thread would be handling a
> > > > disconnect event. Maybe the qp async event handler flow?
> > > >
> > > > Thoughts?
> > >
> > > We actually missed a kref_get in nvme_get_ns_from_disk().
> > >
> > > This should fix it. Could you help to verify?
> > >
> > > diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> > > index 4babdf0..b146f52 100644
> > > --- a/drivers/nvme/host/core.c
> > > +++ b/drivers/nvme/host/core.c
> > > @@ -183,6 +183,8 @@ static struct nvme_ns *nvme_get_ns_from_disk(struct gendisk *disk)
> > > }
> > > spin_unlock(&dev_list_lock);
> > >
> > > + kref_get(&ns->ctrl->kref);
> > > +
> > > return ns;
> > >
> > > fail_put_ns:
> >
> > Hey Ming. This avoids the crash in nvme_rdma_free_qe(), but now I see
> > another crash:
> >
> > [ 975.633436] nvme nvme0: new ctrl: NQN "nqn.2014-08.org.nvmexpress.discovery", addr 10.0.1.14:4420
> > [ 978.463636] nvme nvme0: creating 32 I/O queues.
> > [ 979.187826] nvme nvme0: new ctrl: NQN "testnqn", addr 10.0.1.14:4420
> > [ 987.778287] nvme nvme0: Got rdma device removal event, deleting ctrl
> > [ 987.882202] BUG: unable to handle kernel paging request at ffff880e770e01f8
> > [ 987.890024] IP: [<ffffffffa03a1a46>] __ib_process_cq+0x46/0xc0 [ib_core]
> >
> > This looks like another problem with freeing the tag sets before stopping
> > the QP. I thought we fixed that once and for all, but perhaps there is
> > some other path we missed. :(
>
> Sorry, the previous patch was wrong.
> Here is the right one.
>
> diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
> index 1ad47c5..f13e3a6 100644
> --- a/drivers/nvme/host/fabrics.c
> +++ b/drivers/nvme/host/fabrics.c
> @@ -845,6 +845,7 @@ static ssize_t nvmf_dev_write(struct file *file, const char __user *ubuf,
> goto out_unlock;
> }
>
> + kref_get(&ctrl->kref);
> seq_file->private = ctrl;
>
> out_unlock:
I still see the ib_poll_cq crash with this patch. I'm adding new debug to figure out the flush issue...
Thread overview: 10+ messages
2016-07-12 16:34 crash on device removal Steve Wise
2016-07-12 20:40 ` Ming Lin
2016-07-12 21:09 ` Steve Wise
2016-07-12 21:47 ` Ming Lin
2016-07-12 22:17 ` Steve Wise [this message]
2016-07-13 10:06 ` Sagi Grimberg
2016-07-13 10:02 ` Sagi Grimberg
2016-07-13 15:05 ` Steve Wise
2016-07-13 16:17 ` Steve Wise
[not found] <00cc01d1dc5b$51c7fa90$f557efb0$@opengridcomputing.com>
2016-07-12 16:38 ` Steve Wise