From: Hannes Reinecke <hare@suse.de>
To: Shinichiro Kawasaki <shinichiro.kawasaki@wdc.com>
Cc: Sagi Grimberg <sagi@grimberg.me>,
"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>,
"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>
Subject: Re: [bug report] nvme/063 failure (tcp transport)
Date: Tue, 20 May 2025 13:45:12 +0200
Message-ID: <f540dd31-75f6-4e1c-9bee-304530984610@suse.de>
In-Reply-To: <3ef6roj5exuktcobnailtjstndhnyyw264y7uwzhtuaaptst5n@gl6id4fhjhcu>
On 5/19/25 04:53, Shinichiro Kawasaki wrote:
> On May 18, 2025 / 13:25, Hannes Reinecke wrote:
>> On 5/18/25 12:01, Sagi Grimberg wrote:
>>>
>>>
>>> On 16/05/2025 15:31, Shinichiro Kawasaki wrote:
>>>> Hello all,
>>>>
>>>> Using the kernel v6.15-rc6 and the latest blktests (git hash
>>>> 613b8377e4d3), I observe that the test case nvme/063 fails with the
>>>> tcp transport. The kernel reported a WARN in blk_mq_unquiesce_queue
>>>> and a KASAN splat in blk_mq_queue_tag_busy_iter [1]. The failure is
>>>> reproduced reliably on my test nodes.
>>>>
>>>> The test case script had a bug, so this failure was not found until
>>>> the bug got fixed. I tried the kernel v6.15-rc1 and observed the same
>>>> failure symptom. This test case cannot be run with the kernel v6.14,
>>>> since it does not have the secure concatenation feature.
>>>>
>>>> Actions toward a fix would be appreciated.
>>>
>>> Hannes, did you encounter this?
>>>
>> No; I would think it's an artifact due to multipath not being enabled.
>> Shin'ichiro, can you reproduce it with CONFIG_NVME_MULTIPATH on?
>
> I tried CONFIG_NVME_MULTIPATH both on and off, and the failure was
> reproduced regardless of the config value. Of note, sometimes it is
> reproduced on the first run of the test case, and sometimes the test
> case needs to be repeated a few times.
>
> FYI, I attached the kernel config which I used to recreate the failure.
Hmm.
Can you check with this patch (on top of the previous one):
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 55569eb7770b..43d86e8c6df3 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -2197,6 +2197,7 @@ static int nvme_tcp_configure_io_queues(struct nvme_ctrl *ctrl, bool new)
 		blk_mq_update_nr_hw_queues(ctrl->tagset,
 			ctrl->queue_count - 1);
 		nvme_unfreeze(ctrl);
+		nvme_quiesce_io_queues(ctrl);
 	}
 
 	/*
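
For reference, my reading of the reported WARN (sketched from
blk_mq_unquiesce_queue() in block/blk-mq.c on current kernels):
unquiescing a queue whose quiesce depth is already zero trips a
WARN_ON_ONCE, so an unbalanced quiesce/unquiesce pairing across the
reset path would produce exactly the splat from the report. The extra
quiesce above should keep that depth balanced.

void blk_mq_unquiesce_queue(struct request_queue *q)
{
	unsigned long flags;
	bool run_queue = false;

	spin_lock_irqsave(&q->queue_lock, flags);
	/* unquiescing a queue that is not quiesced trips this WARN */
	if (WARN_ON_ONCE(q->quiesce_depth <= 0)) {
		;
	} else if (!--q->quiesce_depth) {
		blk_queue_flag_clear(QUEUE_FLAG_QUIESCED, q);
		run_queue = true;
	}
	spin_unlock_irqrestore(&q->queue_lock, flags);

	/* dispatch requests which were inserted during quiescing */
	if (run_queue)
		blk_mq_run_hw_queues(q, true);
}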
Thanks.
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
hare@suse.de +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich