From: sagi@grimberg.me (Sagi Grimberg)
Subject: target crash / host hang with nvme-all.3 branch of nvme-fabrics
Date: Wed, 22 Jun 2016 13:22:49 +0300
Message-ID: <576A66F9.6030100@grimberg.me>
In-Reply-To: <20160621160134.GA8428@lst.de>


>>> to false because the queue is on the local list, and now we have thread 1
>>> and 2 racing to disconnect the queue.
>>
>> But the list removal and the list_empty evaluation are still under a mutex,
>> isn't that sufficient to avoid the race?
>
> If only one side takes the lock it's not very helpful.  We can
> execute nvmet_rdma_queue_disconnect from the CM handler at the
> same time that the queue is on the to be removed list, which creates
> two issues: a) we manipulate local del_list without any knowledge
> of the thread calling nvmet_rdma_delete_ctrl, leading to potential
> list corruption

It's not racy, because the two call sites are mutually exclusive
(calling nvmet_rdma_queue_disconnect, which is protected by a mutex),
and the iteration on del_list uses the _safe variant. That said, I'm
not sure how safe that really is, given that del_list is a local list
and another context may still manipulate the entries on it.

However, note that this is not the case here. The report did not
include a CM thread in the mix.
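
Just to make sure we are arguing about the same code, here is a rough
sketch of the two paths as I read them. This is paraphrased from
memory rather than copied from the tree - the global mutex/list names
(nvmet_rdma_queue_mutex, nvmet_rdma_queue_list) and the match on
nvme_sq.ctrl are shorthand of mine; only the locking structure is the
point:

/* path 1: controller teardown, collects queues on a local list */
static void nvmet_rdma_delete_ctrl(struct nvmet_ctrl *ctrl)
{
        struct nvmet_rdma_queue *queue, *next;
        LIST_HEAD(del_list);

        mutex_lock(&nvmet_rdma_queue_mutex);
        list_for_each_entry_safe(queue, next, &nvmet_rdma_queue_list,
                        queue_list) {
                if (queue->nvme_sq.ctrl == ctrl)
                        list_move_tail(&queue->queue_list, &del_list);
        }
        mutex_unlock(&nvmet_rdma_queue_mutex);

        /* the local list is walked without the mutex held */
        list_for_each_entry_safe(queue, next, &del_list, queue_list)
                __nvmet_rdma_queue_disconnect(queue);
}

/* path 2: CM disconnect/error event */
static void nvmet_rdma_queue_disconnect(struct nvmet_rdma_queue *queue)
{
        bool disconnect = false;

        mutex_lock(&nvmet_rdma_queue_mutex);
        if (!list_empty(&queue->queue_list)) {
                /*
                 * Note for the discussion above: after path 1 has moved
                 * the queue to its local del_list the entry is still
                 * linked, so list_empty() evaluates to false here and
                 * the CM path tears the queue down as well.
                 */
                list_del_init(&queue->queue_list);
                disconnect = true;
        }
        mutex_unlock(&nvmet_rdma_queue_mutex);

        if (disconnect)
                __nvmet_rdma_queue_disconnect(queue);
}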

> , and b) we can call into __nvmet_rdma_queue_disconnect
> concurrently.  As you pointed out we still have the per-queue
> state_lock inside __nvmet_rdma_queue_disconnect so b) probably
> is harmless at the moment as long as the queue hasn't been freed
> yet by one of the racing threads, which is fairly unlikely.

I don't think we can end up with a use-after-free condition: we
terminate both the CM events and the list removal before the queue is
freed, so I don't see how that can happen.
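
For reference, my mental model of the per-queue state_lock you refer
to is roughly the following (again a sketch from memory, with the
state machine collapsed to the one transition that matters here; the
enum and field names may not match the tree exactly):

static void __nvmet_rdma_queue_disconnect(struct nvmet_rdma_queue *queue)
{
        unsigned long flags;
        bool disconnect = false;

        spin_lock_irqsave(&queue->state_lock, flags);
        if (queue->state != NVMET_RDMA_Q_DISCONNECTING) {
                queue->state = NVMET_RDMA_Q_DISCONNECTING;
                disconnect = true;
        }
        spin_unlock_irqrestore(&queue->state_lock, flags);

        /*
         * A second, concurrent caller observes DISCONNECTING and turns
         * into a no-op, which is why b) stays harmless as long as the
         * queue itself has not been freed underneath us.
         */
        if (disconnect) {
                rdma_disconnect(queue->cm_id);
                schedule_work(&queue->release_work);
        }
}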

> Either way - using list_empty to check if something is still alive due
> to being linked in a list and using a local dispose list simply don't
> mix.  Both are useful patterns on their own, but should not be mixed.

I guess you are correct. I just don't like lock manipulations inside
list iterations; they can be an opening for future bugs, as they
require a very strict understanding of what is going on...
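
To spell out what I mean: reworking it along the lines you suggest
would presumably look something like the sketch below (not tested,
same made-up shorthand as in my earlier sketch). Unlinking under the
mutex keeps list_empty() a reliable "someone else already owns the
teardown" test, but the price is the unlock-and-restart dance inside
the iteration - exactly the kind of thing I worry will be easy to get
wrong later:

static void nvmet_rdma_delete_ctrl(struct nvmet_ctrl *ctrl)
{
        struct nvmet_rdma_queue *queue;

restart:
        mutex_lock(&nvmet_rdma_queue_mutex);
        list_for_each_entry(queue, &nvmet_rdma_queue_list, queue_list) {
                if (queue->nvme_sq.ctrl == ctrl) {
                        /* unlink while still holding the mutex */
                        list_del_init(&queue->queue_list);
                        mutex_unlock(&nvmet_rdma_queue_mutex);
                        __nvmet_rdma_queue_disconnect(queue);
                        /* the list may have changed, start over */
                        goto restart;
                }
        }
        mutex_unlock(&nvmet_rdma_queue_mutex);
}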
