From: Shinichiro Kawasaki <shinichiro.kawasaki@wdc.com>
To: "linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>,
"linux-scsi@vger.kernel.org" <linux-scsi@vger.kernel.org>,
"nbd@other.debian.org" <nbd@other.debian.org>,
"linux-rdma@vger.kernel.org" <linux-rdma@vger.kernel.org>
Subject: blktests failures with v7.0 kernel
Date: Thu, 16 Apr 2026 06:39:52 +0000
Message-ID: <aeCDXI5hY_ivSWm4@shinmob>
Hi all,
I ran the latest blktests (git hash: 255189f0c4e5) with the v7.0 kernel and
observed the 6 failures listed below. Compared with the previous report for the
v7.0-rc1 kernel [1], 2 failures were resolved (blktrace/002, zbd/009) and the
hangs at nvme/058 and nvme/061 are no longer observed. Thank you very much for
the fixes.
[1] https://lore.kernel.org/linux-block/aZ_-cH8euZLySxdD@shinmob/
List of failures
================
#1: nvme/005,063 (tcp transport)
#2: nvme/058 (fc transport)(kmemleak)
#3: nvme/060 (rdma transport)
#4: nvme/061 (rdma transport, siw driver)(kmemleak)
#5: nvme/061 (fc transport)
#6: nbd/002
Failure description
===================
#1: nvme/005,063 (tcp transport)
The test cases nvme/005 and 063 fail for the tcp transport due to a lockdep
WARN involving the three locks q->q_usage_counter, q->elevator_lock and
set->srcu. Refer to the nvme/063 failure report for the v6.16-rc1 kernel [2].
[2] https://lore.kernel.org/linux-block/4fdm37so3o4xricdgfosgmohn63aa7wj3ua4e5vpihoamwg3ui@fq42f5q5t5ic/
#2: nvme/058 (fc transport)(kmemleak)
When the test case nvme/058 is repeated several times for the fc transport on
a kernel with CONFIG_DEBUG_KMEMLEAK enabled, it fails with kmemleak messages.
Refer to the report for v7.0-rc1 [1]. This test case had hung with the
v7.0-rc1 kernel; the hang is no longer observed, but the kmemleak is still
observed.
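For reference, the leak check described above can be sketched as a small shell
helper. The debugfs path below is the standard kmemleak interface; the helper
itself and its name are hypothetical, and the path argument is only there to
make it testable:

```shell
#!/bin/sh
# Hypothetical helper: trigger a kmemleak scan and print any suspected
# leaks. Assumes CONFIG_DEBUG_KMEMLEAK=y and debugfs mounted at
# /sys/kernel/debug; the path can be overridden for testing.
kmemleak_scan() {
	f=${1:-/sys/kernel/debug/kmemleak}
	if [ ! -w "$f" ]; then
		echo "kmemleak not available at $f" >&2
		return 1
	fi
	# Writing "scan" triggers a memory scan; reading the file
	# afterwards lists suspected leaks (empty output = no leaks).
	echo scan > "$f"
	cat "$f"
}
```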
#3: nvme/060 (rdma transport)
When the test case is repeated for the rdma transport around 50 times, it
fails with one of two symptoms. Neither symptom looks like a kernel side
problem; both look like blktests side problems. I will allocate time to look
into them.
[symptom 1]
nvme/060 (tr=rdma) (test nvme fabrics target reset) [failed]
runtime ... 87.444s
--- tests/nvme/060.out 2026-02-20 12:15:11.066947841 +0000
+++ /home/fedora/blktests/results/nodev_tr_rdma/nvme/060.out.bad 2026-02-20 15:06:44.552705787 +0000
@@ -1,2 +1,3 @@
Running nvme/060
+FAIL: nvme connect return error code
Test complete
[symptom 2]
nvme/060 (tr=rdma) (test nvme fabrics target reset) [failed]
runtime ... 22.545s
--- tests/nvme/060.out 2025-08-26 21:28:52.798847739 +0900
+++ /home/shin/Blktests/blktests/results/nodev_tr_rdma/nvme/060.out.bad 2026-02-26 15:20:36.973686247 +0900
@@ -1,2 +1,3 @@
Running nvme/060
+_: line 1: /sys/kernel/debug/nvmet/blktests-subsystem-1/ctrl1/state: No such file or directory
Test complete
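The repeated runs described above can be scripted. A minimal sketch, assuming
blktests' ./check runner and its nvme_trtype environment variable; the
repeat_until_fail helper is hypothetical:

```shell
#!/bin/sh
# Repeat a command until it fails or the iteration budget runs out.
# Usage: repeat_until_fail <count> <command...>
repeat_until_fail() {
	count=$1; shift
	i=1
	while [ "$i" -le "$count" ]; do
		if ! "$@"; then
			echo "failed at iteration $i"
			return 1
		fi
		i=$((i + 1))
	done
	echo "passed all $count iterations"
}

# Assumed blktests invocation (run from the blktests checkout):
#   nvme_trtype=rdma repeat_until_fail 50 ./check nvme/060
```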
#4: nvme/061 (rdma transport, siw driver)(kmemleak)
When the test case nvme/061 is repeated twice for the rdma transport and the
siw driver on the v6.19 kernel with CONFIG_DEBUG_KMEMLEAK enabled, it fails
with a kmemleak message. Refer to the nvme/061 failure report for the v6.19
kernel [3].
[3] https://lore.kernel.org/linux-block/aY7ZBfMjVIhe_wh3@shinmob/
#5: nvme/061 (fc transport)
When the test case nvme/061 is repeated around 50 times for the fc
transport, the test process fails after an Oops and a KASAN null-ptr-deref. It
had hung with the v7.0-rc1 kernel, but it does not hang with the v7.0 kernel :)
Further debugging is required for the Oops and the KASAN null-ptr-deref. Refer
to the report for the v6.19 kernel [1].
#6: nbd/002
The test case nbd/002 fails due to a lockdep WARN involving mm->mmap_lock,
sk_lock-AF_INET6 and fs_reclaim. Refer to the nbd/002 failure report for the
v6.18-rc1 kernel [4].
[4] https://lore.kernel.org/linux-block/ynmi72x5wt5ooljjafebhcarit3pvu6axkslqenikb2p5txe57@ldytqa2t4i2x/