From: Ming Lei <ming.lei@redhat.com>
To: Daniel Wagner <dwagner@suse.de>
Cc: linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org,
James Smart <james.smart@broadcom.com>,
Keith Busch <kbusch@kernel.org>, Jens Axboe <axboe@fb.com>,
Sagi Grimberg <sagi@grimberg.me>
Subject: Re: [PATCH 2/2] nvme-fc: Wait with a timeout for queue to freeze
Date: Tue, 6 Jul 2021 15:29:11 +0800
Message-ID: <YOQGRwLfLaFGqlVA@T590>
In-Reply-To: <20210705162519.qqlklisxcsiopflw@beryllium.lan>
On Mon, Jul 05, 2021 at 06:34:00PM +0200, Daniel Wagner wrote:
> On Tue, Jun 29, 2021 at 09:39:30AM +0800, Ming Lei wrote:
> > Can you investigate a bit into why the hang happens? FC shouldn't use
> > managed IRQs, so the interrupts won't be shut down.
>
> So far I have not been able to figure out why this hangs. In my test
> setup I don't even have to do any I/O; I just toggle the remote port.
>
> grep busy /sys/kernel/debug/block/*/hctx*/tags | grep -v busy=0
>
> and this seems to confirm that there is no I/O in flight.
What is the output of the following command after the hang is triggered,
supposing the hung disk is nvme0n1?

(cd /sys/kernel/debug/block/nvme0n1 && find . -type f -exec grep -aH . {} \;)
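
For context, the freeze path blocks in blk_mq_freeze_queue_wait(), which
just waits for q_usage_counter to drop to zero (block/blk-mq.c, roughly,
in this generation of kernels):

void blk_mq_freeze_queue_wait(struct request_queue *q)
{
	/* sleeps until the queue's usage refcount reaches zero */
	wait_event(q->mq_freeze_wq, percpu_ref_is_zero(&q->q_usage_counter));
}

So if the hang is here, either something still holds a reference to the
queue, or the wakeup on mq_freeze_wq (issued from the percpu_ref release
callback when the count hits zero) never happens.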
>
> So I started to look at the q_usage_counter. The obvious observation
> is that the counter is not 0. The least significant bit is set, thus
> we are in atomic mode.
>
> (gdb) p/x *((struct request_queue*)0xffff8ac992fbef20)->q_usage_counter->data
> $10 = {
> count = {
> counter = 0x8000000000000001
> },
> release = 0xffffffffa02e78b0,
> confirm_switch = 0x0,
> force_atomic = 0x0,
> allow_reinit = 0x1,
> rcu = {
> next = 0x0,
> func = 0x0
> },
> ref = 0xffff8ac992fbef30
> }
>
> I am a bit confused about the percpu-refcount API. My naive
> interpretation is that percpu_ref_is_zero() can't be used while we are
> in atomic mode. But this seems rather strange; I must be missing
> something.
No, percpu_ref_is_zero() is fine to call in atomic mode.
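
In percpu mode a live ref cannot be zero anyway, so it simply returns
false; once the ref has been switched to atomic mode (which the freeze
path does via percpu_ref_kill()), it reads the atomic counter directly.
Simplified from lib/percpu-refcount.c in recent kernels (details vary a
bit by version):

bool percpu_ref_is_zero(struct percpu_ref *ref)
{
	unsigned long __percpu *percpu_count;

	/* still in percpu mode: a live percpu ref can't be zero */
	if (__ref_is_percpu(ref, &percpu_count))
		return false;

	/* atomic (or dead) mode: read the atomic counter directly */
	return !atomic_long_read(&ref->data->count);
}

Note that the top bit of the count you dumped is PERCPU_COUNT_BIAS: it
is added when the switch to atomic mode starts and folded away again
once the per-CPU counts have been collected in the RCU callback, so the
raw value is not directly the number of outstanding references.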
Thanks,
Ming