Linux-NVME Archive on lore.kernel.org
From: Hannes Reinecke <hare@suse.de>
To: Sagi Grimberg <sagi@grimberg.me>, Christoph Hellwig <hch@lst.de>
Cc: Keith Busch <kbusch@kernel.org>,
	Anton Eidelman <anton@lightbitslabs.com>,
	linux-nvme <linux-nvme@lists.infradead.org>
Subject: Re: nvme deadlock with ANA
Date: Thu, 2 Apr 2020 17:30:34 +0200
Message-ID: <4ec0c3ba-398d-0922-87f4-4b0a99a79abb@suse.de>
In-Reply-To: <7fce512e-deb6-2357-d627-d1a698a8269b@grimberg.me>

On 4/2/20 5:24 PM, Sagi Grimberg wrote:
> 
>>> I want to consult with you guys on a deadlock condition I'm able to
>>> hit with a test that incorporate controller reconnect, ana updates
>>> and live I/O with timeouts.
>>>
>>> This is true for NVMe/TCP, but can also happen in the rdma or pci
>>> drivers.
>>>
>>> The deadlock combines 4 flows in parallel:
>>> - ns scanning (triggered from reconnect)
>>> - request timeout
>>> - ANA update (triggered from reconnect)
>>> - FS I/O coming into the mpath device
>>>
>>> (1) ns scanning triggers disk revalidation -> update disk info ->
>>>      freeze queue -> but blocked, why?
>>
>> What does -> but blocked mean?
> 
> It is blocked and cannot complete, because of (2)
> 
>>> (2) timeout handler references the q_usage_counter -> but blocks in
>>>      the timeout handler, why?
>>
>> The timeout handler obviously needs to keep the queue alive while
>> running.  We could think of doing a try_get, though?
> 
> It is keeping the queue alive; that is not the issue. It is blocked in
> the driver .timeout() handler (i.e. nvme_tcp_timeout).
> 
> The reason it is blocked and cannot make forward progress is that the
> driver timeout handler calls nvme_stop_queues(), which blocks because
> it takes namespaces_rwsem...
> 
> There is a circular dependency chain here that deadlocks.

Can't you simply call 'nvme_reset_ctrl()'?
Seems to work reasonably well on the FC side, so I wonder what's 
different for TCP ...

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

_______________________________________________
linux-nvme mailing list
linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme


Thread overview: 12+ messages
2020-03-26  6:23 nvme deadlock with ANA Sagi Grimberg
2020-03-26  6:29 ` Sagi Grimberg
2020-04-02  7:09   ` Sagi Grimberg
2020-04-02 15:18 ` Christoph Hellwig
2020-04-02 15:24   ` Sagi Grimberg
2020-04-02 15:30     ` Hannes Reinecke [this message]
2020-04-02 15:38       ` Sagi Grimberg
2020-04-02 17:22       ` James Smart
2020-04-02 16:00 ` Keith Busch
2020-04-02 16:08   ` Sagi Grimberg
2020-04-02 16:12   ` Hannes Reinecke
2020-04-02 16:18     ` Sagi Grimberg
