* blktests failures with v6.15 kernel
From: Shinichiro Kawasaki @ 2025-05-29 8:46 UTC
To: linux-block@vger.kernel.org, linux-nvme@lists.infradead.org,
linux-scsi@vger.kernel.org, nbd@other.debian.org,
linux-rdma@vger.kernel.org
Hi all,
I ran the latest blktests (git hash: 283923df5bee) with the v6.15 kernel and
observed the 6 failures listed below. Compared with the previous report for
the v6.15-rc1 kernel [1], 2 failures are no longer observed (the rxe driver
test hang and nvme/037), and 4 new failures are observed (nvme/023, nvme/061
hang and failure, nvme/063 failure).
[1] https://lore.kernel.org/linux-block/x2gnkogq46h66r2fctksnu4yu4wpndkopawbsudq6vqbcgjszu@fjrowpmrran5/
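For reference, the runs can be repeated with something like the following
minimal sketch (the transport list here is assumed; ./check and the
nvme_trtype variable follow blktests conventions):

  # Run the nvme test group once per transport
  for t in loop fc tcp rdma; do
      nvme_trtype=$t ./check nvme
  done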
List of failures
================
#1: nvme/023
#2: nvme/041 (fc transport)
#3: nvme/061 hang (rdma transport, siw driver)
#4: nvme/061 failure (fc transport)
#5: nvme/063 failure (tcp transport)
#6: q_usage_counter WARN during system boot
Failure description
===================
#1: nvme/023
When libnvme is version 1.13 or later and built with liburing, the nvme-cli
command "nvme smart-log" fails for namespace block devices. This makes the
test case nvme/023 fail [2]. A fix in libnvme is expected.
[2] https://lore.kernel.org/linux-nvme/32c3e9ef-ab3c-40b5-989a-7aa323f5d611@flourine.local/T/#m6519ce3e641e7011231d955d9002d1078510e3ee
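For reference, the failing invocation looks like this (device names are
illustrative):

  nvme smart-log /dev/nvme0      # controller character device: works
  nvme smart-log /dev/nvme0n1    # namespace block device: fails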
#2: nvme/041 (fc transport)
The test case nvme/041 fails for the fc transport. Refer to the report for
the v6.12 kernel [3].
[3] https://lore.kernel.org/linux-nvme/6crydkodszx5vq4ieox3jjpwkxtu7mhbohypy24awlo5w7f4k6@to3dcng24rd4/
#3: nvme/061 hang (rdma transport, siw driver)
The new test case nvme/061 revealed a bug in the RDMA core, which causes a
KASAN slab-use-after-free of cm_id_private work objects. A fix patch is
queued for v6.16-rcX [4].
[4] https://lore.kernel.org/linux-rdma/20250510101036.1756439-1-shinichiro.kawasaki@wdc.com/
#4: nvme/061 failure (fc transport)
The test case nvme/061 sometimes fails due to a WARN [5]. Just before the
WARN, the kernel reported "refcount_t: underflow; use-after-free." This
failure can be recreated reliably by repeating the test case 10 times or so.
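For example, a repeat loop such as the following triggers it (the loop count
is illustrative):

  for i in $(seq 1 10); do
      nvme_trtype=fc ./check nvme/061 || break
  done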
I tried v6.15-rcX kernels. With the v6.15-rc1 kernel, the test case always
failed with a different symptom. With the v6.15-rc2 kernel, the test case
passed in most runs, but sometimes failed with the same symptom as v6.15. I
guess the nvme-fc changes in v6.15-rc2 fixed most of the refcounting issue,
but a rare refcounting failure scenario is still left.
#5: nvme/063 failure (tcp transport)
The new test case nvme/063 triggers a WARN in blk_mq_unquiesce_queue() and a
KASAN slab-use-after-free in blk_mq_queue_tag_busy_iter() [6]. Some
debugging effort has been made, but the issue still needs further work.
[6] https://lore.kernel.org/linux-nvme/6mhxskdlbo6fk6hotsffvwriauurqky33dfb3s44mqtr5dsxmf@gywwmnyh3twm/
#6: q_usage_counter WARN during system boot
This is not a blktests failure, but I observe it on the test systems for
blktests: during the system boot process, a lockdep WARN relevant to
q_usage_counter is reported. Refer to the report for v6.15-rc1 [1].
[5] dmesg at nvme/061 failure
[65984.926261] [ T26143] run blktests nvme/061 at 2025-05-29 14:38:34
[65984.980383] [ T26188] loop0: detected capacity change from 0 to 2097152
[65984.995441] [ T26191] nvmet: adding nsid 1 to subsystem blktests-subsystem-1
[65985.050303] [ T23244] nvme nvme1: NVME-FC{0}: create association : host wwpn 0x20001100aa000001 rport wwpn 0x20001100ab000001: NQN "blktests-subsystem-1"
[65985.052545] [ T23343] (NULL device *): {0:0} Association created
[65985.053586] [ T25919] nvmet: Created nvm controller 1 for subsystem blktests-subsystem-1 for NQN nqn.2014-08.org.nvmexpress:uuid:0f01fb42-9f7f-4856-b0b3-51e60b8de349.
[65985.059926] [ T23244] nvme nvme1: NVME-FC{0}: controller connect complete
[65985.061770] [ T26214] nvme nvme1: NVME-FC{0}: new ctrl: NQN "blktests-subsystem-1", hostnqn: nqn.2014-08.org.nvmexpress:uuid:0f01fb42-9f7f-4856-b0b3-51e60b8de349
[65985.125347] [ T23936] nvme nvme2: NVME-FC{1}: create association : host wwpn 0x20001100aa000001 rport wwpn 0x20001100ab000001: NQN "nqn.2014-08.org.nvmexpress.discovery"
[65985.128362] [ T4511] (NULL device *): {0:1} Association created
[65985.130389] [ T23342] nvmet: Created discovery controller 2 for subsystem nqn.2014-08.org.nvmexpress.discovery for NQN nqn.2014-08.org.nvmexpress:uuid:3a8a427d-68a5-4129-8b0f-1a53fd94be80.
[65985.133718] [ T23936] nvme nvme2: NVME-FC{1}: controller connect complete
[65985.134599] [ T26217] nvme nvme2: NVME-FC{1}: new ctrl: NQN "nqn.2014-08.org.nvmexpress.discovery", hostnqn: nqn.2014-08.org.nvmexpress:uuid:3a8a427d-68a5-4129-8b0f-1a53fd94be80
[65985.139708] [ T26217] nvme nvme2: Removing ctrl: NQN "nqn.2014-08.org.nvmexpress.discovery"
[65985.153785] [ T4511] (NULL device *): {0:1} Association deleted
[65985.164940] [ T4511] (NULL device *): {0:1} Association freed
[65985.166099] [ T25142] (NULL device *): Disconnect LS failed: No Association
[65986.133054] [ T4511] nvme nvme1: NVME-FC{0}: io failed due to lldd error -107
[65986.133073] [ T25919] nvme nvme1: NVME-FC{0}: io failed due to lldd error -107
[65986.133502] [ T23343] nvme nvme1: NVME-FC{0}: io failed due to lldd error -107
[65986.133519] [ T23936] nvme nvme1: NVME-FC{0}: transport association event: transport detected io error
[65986.133524] [ T23936] nvme nvme1: NVME-FC{0}: resetting controller
[65986.133530] [ T23936] nvme nvme1: NVME-FC{0}: io failed due to lldd error -107
[65986.133546] [ T15792] block nvme1n1: no usable path - requeuing I/O
[65986.133576] [ T26241] block nvme1n1: no usable path - requeuing I/O
[65986.133925] [ T1217] block nvme1n1: no usable path - requeuing I/O
[65986.145862] [ T23342] (NULL device *): {0:0} Association deleted
[65986.160121] [ T4511] nvme nvme1: NVME-FC{0}: create association : host wwpn 0x20001100aa000001 rport wwpn 0x20001100ab000001: NQN "blktests-subsystem-1"
[65986.162170] [ T4511] (NULL device *): queue 0 connect admin queue failed (-111).
[65986.163062] [ T4511] nvme nvme1: NVME-FC{0}: reset: Reconnect attempt failed (-111)
[65986.163065] [ T4511] nvme nvme1: NVME-FC{0}: Reconnect attempt in 1 seconds
[65986.189933] [ T23342] (NULL device *): {0:0} Association freed
[65986.190779] [ T15160] (NULL device *): Disconnect LS failed: No Association
[65986.191973] [ T23342] ------------[ cut here ]------------
[65986.192759] [ T23342] refcount_t: underflow; use-after-free.
[65986.193537] [ T23342] WARNING: CPU: 3 PID: 23342 at lib/refcount.c:28 refcount_warn_saturate+0xee/0x150
[65986.194436] [ T23342] Modules linked in: nvme_fcloop nvmet_fc nvmet nvme_fc nvme_fabrics chacha_generic chacha20poly1305 tls nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 ip_set nf_tables qrtr sunrpc ppdev 9pnet_virtio 9pnet netfs parport_pc parport i2c_piix4 i2c_smbus e1000 pcspkr fuse loop dm_multipath nfnetlink vsock_loopback vmw_vsock_virtio_transport_common vmw_vsock_vmci_transport vsock vmw_vmci zram bochs drm_client_lib drm_shmem_helper drm_kms_helper xfs nvme drm sym53c8xx scsi_transport_spi nvme_core nvme_keyring serio_raw nvme_auth floppy ata_generic pata_acpi qemu_fw_cfg [last unloaded: nvmet]
[65986.200276] [ T23342] CPU: 3 UID: 0 PID: 23342 Comm: kworker/u16:5 Tainted: G B 6.15.0+ #41 PREEMPT(voluntary)
[65986.201617] [ T23342] Tainted: [B]=BAD_PAGE
[65986.202522] [ T23342] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-3.fc41 04/01/2014
[65986.203723] [ T23342] Workqueue: nvmet-wq nvmet_fc_delete_assoc_work [nvmet_fc]
[65986.204774] [ T23342] RIP: 0010:refcount_warn_saturate+0xee/0x150
[65986.205754] [ T23342] Code: 24 27 3f 03 01 e8 b2 e1 cd fe 0f 0b eb 91 80 3d 13 27 3f 03 00 75 88 48 c7 c7 a0 e8 3c 87 c6 05 03 27 3f 03 01 e8 92 e1 cd fe <0f> 0b e9 6e ff ff ff 80 3d f3 26 3f 03 00 0f 85 61 ff ff ff 48 c7
[65986.208055] [ T23342] RSP: 0018:ffff88811cf37c28 EFLAGS: 00010296
[65986.209072] [ T23342] RAX: 0000000000000000 RBX: ffff888106198440 RCX: 0000000000000000
[65986.210118] [ T23342] RDX: 0000000000000000 RSI: 0000000000000004 RDI: 0000000000000001
[65986.211162] [ T23342] RBP: 0000000000000003 R08: 0000000000000001 R09: ffffed1075c35981
[65986.212215] [ T23342] R10: ffff8883ae1acc0b R11: fffffffffffd4e60 R12: ffff888109d62938
[65986.213268] [ T23342] R13: ffff888106198440 R14: ffff88812cc3883c R15: ffff888106198448
[65986.214361] [ T23342] FS: 0000000000000000(0000) GS:ffff8884245bd000(0000) knlGS:0000000000000000
[65986.215467] [ T23342] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[65986.216458] [ T23342] CR2: 00007f66ec449c00 CR3: 000000012ffcc000 CR4: 00000000000006f0
[65986.217479] [ T23342] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[65986.218476] [ T23342] DR3: 0000000000000000 DR6: 00000000ffff07f0 DR7: 0000000000000400
[65986.219437] [ T23342] Call Trace:
[65986.220202] [ T23342] <TASK>
[65986.220942] [ T23342] nvmet_fc_delete_assoc_work+0xf1/0x2d0 [nvmet_fc]
[65986.221821] [ T23342] process_one_work+0x84f/0x1460
[65986.222663] [ T23342] ? __pfx_process_one_work+0x10/0x10
[65986.223481] [ T23342] ? assign_work+0x16c/0x240
[65986.224301] [ T23342] worker_thread+0x5ef/0xfd0
[65986.225094] [ T23342] ? __kthread_parkme+0xb4/0x200
[65986.225930] [ T23342] ? __pfx_worker_thread+0x10/0x10
[65986.226722] [ T23342] kthread+0x3b0/0x770
[65986.227494] [ T23342] ? __pfx_kthread+0x10/0x10
[65986.228324] [ T23342] ? rcu_is_watching+0x11/0xb0
[65986.229152] [ T23342] ? _raw_spin_unlock_irq+0x24/0x50
[65986.229970] [ T23342] ? rcu_is_watching+0x11/0xb0
[65986.230747] [ T23342] ? __pfx_kthread+0x10/0x10
[65986.231527] [ T23342] ret_from_fork+0x30/0x70
[65986.232295] [ T23342] ? __pfx_kthread+0x10/0x10
[65986.233081] [ T23342] ret_from_fork_asm+0x1a/0x30
[65986.233863] [ T23342] </TASK>
[65986.234571] [ T23342] irq event stamp: 0
[65986.235279] [ T23342] hardirqs last enabled at (0): [<0000000000000000>] 0x0
[65986.236195] [ T23342] hardirqs last disabled at (0): [<ffffffff844f4e98>] copy_process+0x1f08/0x87c0
[65986.237174] [ T23342] softirqs last enabled at (0): [<ffffffff844f4efd>] copy_process+0x1f6d/0x87c0
[65986.238085] [ T23342] softirqs last disabled at (0): [<0000000000000000>] 0x0
[65986.238945] [ T23342] ---[ end trace 0000000000000000 ]---
[65986.243357] [ T26143] nvme nvme1: NVME-FC{0}: controller connectivity lost. Awaiting Reconnect
[65986.258391] [ T26255] nvme_fc: nvme_fc_create_ctrl: nn-0x10001100ab000001:pn-0x20001100ab000001 - nn-0x10001100aa000001:pn-0x20001100aa000001 combination not found
...
* Re: blktests failures with v6.15 kernel
From: Daniel Wagner @ 2025-06-05 13:02 UTC
To: Shinichiro Kawasaki
Cc: linux-block@vger.kernel.org, linux-nvme@lists.infradead.org,
linux-scsi@vger.kernel.org, nbd@other.debian.org,
linux-rdma@vger.kernel.org
Hi,
On Thu, May 29, 2025 at 08:46:35AM +0000, Shinichiro Kawasaki wrote:
> #1: nvme/023
>
> When libnvme is version 1.13 or later and built with liburing, the nvme-cli
> command "nvme smart-log" fails for namespace block devices. This makes the
> test case nvme/023 fail [2]. A fix in libnvme is expected.
>
> [2]
> https://lore.kernel.org/linux-nvme/32c3e9ef-ab3c-40b5-989a-7aa323f5d611@flourine.local/T/#m6519ce3e641e7011231d955d9002d1078510e3ee
Should be fixed now. If you want, I can do another release soon, so the
fix gets packaged up by the distros.
> #2: nvme/041 (fc transport)
>
> The test case nvme/041 fails for the fc transport. Refer to the report for
> the v6.12 kernel [3].
>
> [3]
> https://lore.kernel.org/linux-nvme/6crydkodszx5vq4ieox3jjpwkxtu7mhbohypy24awlo5w7f4k6@to3dcng24rd4/
It's still on my TODO list. Sorry.
> #4: nvme/061 failure (fc transport)
>
> The test case nvme/061 sometimes fails due to a WARN [5]. Just before the
> WARN, the kernel reported "refcount_t: underflow; use-after-free." This
> failure can be recreated reliably by repeating the test case 10 times or so.
>
> I tried v6.15-rcX kernels. With the v6.15-rc1 kernel, the test case always
> failed with a different symptom. With the v6.15-rc2 kernel, the test case
> passed in most runs, but sometimes failed with the same symptom as v6.15. I
> guess the nvme-fc changes in v6.15-rc2 fixed most of the refcounting issue,
> but a rare refcounting failure scenario is still left.
The nvmet-fcloop changes for 6.16 should address this (fingers crossed).
Thanks,
Daniel
* Re: blktests failures with v6.15 kernel
From: Shinichiro Kawasaki @ 2025-06-06 7:54 UTC
To: Daniel Wagner, yizhan@redhat.com
Cc: linux-block@vger.kernel.org, linux-nvme@lists.infradead.org,
linux-scsi@vger.kernel.org, nbd@other.debian.org,
linux-rdma@vger.kernel.org
To+: Yi,
On Jun 05, 2025 / 15:02, Daniel Wagner wrote:
> Hi,
Hi Daniel, thank you for the fixes!
>
> On Thu, May 29, 2025 at 08:46:35AM +0000, Shinichiro Kawasaki wrote:
> > #1: nvme/023
> >
> > When libnvme is version 1.13 or later and built with liburing, the nvme-cli
> > command "nvme smart-log" fails for namespace block devices. This makes the
> > test case nvme/023 fail [2]. A fix in libnvme is expected.
> >
> > [2]
> > https://lore.kernel.org/linux-nvme/32c3e9ef-ab3c-40b5-989a-7aa323f5d611@flourine.local/T/#m6519ce3e641e7011231d955d9002d1078510e3ee
>
> Should be fixed now. If you want, I can do another release soon, so the
> fix gets packaged up by the distros.
As of today, the CKI project keeps reporting the failure:
https://datawarehouse.cki-project.org/kcidb/tests/redhat:1851238698-aarch64-kernel_upt_7
Yi, do you think the new libnvme release will help silence the failure
reports? I'm guessing the release will help Red Hat pick it up and apply it to CKI
blktests runs.
* Re: blktests failures with v6.15 kernel
From: Yi Zhang @ 2025-06-06 14:25 UTC
To: Shinichiro Kawasaki, Daniel Wagner
Cc: linux-block@vger.kernel.org, linux-nvme@lists.infradead.org,
linux-scsi@vger.kernel.org, nbd@other.debian.org,
linux-rdma@vger.kernel.org, Tomas Bzatek
On Fri, Jun 6, 2025 at 3:55 PM Shinichiro Kawasaki
<shinichiro.kawasaki@wdc.com> wrote:
>
> To+: Yi,
>
> On Jun 05, 2025 / 15:02, Daniel Wagner wrote:
> > Hi,
>
> Hi Daniel, thank you for the fixes!
>
> >
> > On Thu, May 29, 2025 at 08:46:35AM +0000, Shinichiro Kawasaki wrote:
> > > #1: nvme/023
> > >
> > > When libnvme is version 1.13 or later and built with liburing, the nvme-cli
> > > command "nvme smart-log" fails for namespace block devices. This makes the
> > > test case nvme/023 fail [2]. A fix in libnvme is expected.
> > >
> > > [2]
> > > https://lore.kernel.org/linux-nvme/32c3e9ef-ab3c-40b5-989a-7aa323f5d611@flourine.local/T/#m6519ce3e641e7011231d955d9002d1078510e3ee
> >
> > Should be fixed now. If you want, I can do another release soon, so the
> > fix gets packaged up by the distros.
>
> As of today, the CKI project keeps reporting the failure:
>
> https://datawarehouse.cki-project.org/kcidb/tests/redhat:1851238698-aarch64-kernel_upt_7
>
> Yi, do you think the new libnvme release will help silence the failure
I've created a CKI issue to track the nvme/023 failure, so the
failure will be waived in future test runs.
> reports? I'm guessing the release will help Red Hat pick it up and apply it to CKI
Yes, if we have a new release of libnvme, our Fedora libnvme
maintainer can build the new one for Fedora. I have also created a Fedora
issue to track it on the libnvme side:
https://bugzilla.redhat.com/show_bug.cgi?id=2370805
> blktests runs.
>
--
Best Regards,
Yi Zhang
* Re: blktests failures with v6.15 kernel
From: Daniel Wagner @ 2025-06-06 14:58 UTC
To: Yi Zhang
Cc: Shinichiro Kawasaki, linux-block@vger.kernel.org,
linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
nbd@other.debian.org, linux-rdma@vger.kernel.org, Tomas Bzatek
On Fri, Jun 06, 2025 at 10:25:25PM +0800, Yi Zhang wrote:
> > As of today, the CKI project keeps reporting the failure:
> >
> > https://datawarehouse.cki-project.org/kcidb/tests/redhat:1851238698-aarch64-kernel_upt_7
> >
> > Yi, do you think the new libnvme release will help silence the failure
>
> I've created a CKI issue to track the nvme/023 failure, so the
> failure will be waived in future test runs.
>
> > reports? I'm guessing the release will help Red Hat pick it up and apply it to CKI
>
> Yes, if we have a new release of libnvme, our Fedora libnvme
> maintainer can build the new one for Fedora. I have also created a Fedora
> issue to track it on the libnvme side.
Sure; as a stopgap solution, just don't build with liburing. In hindsight,
I should have set it to disabled by default; I will do that now.
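A minimal sketch of such a build, assuming libnvme's meson feature option
for liburing is named "liburing":

  meson setup .build -Dliburing=disabled
  meson compile -C .build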
FWIW, the contributor of the io_uring feature stated that it improved
the performance for some workloads. Though I think the whole
integration is sub-optimal, as a new io_uring is created/configured for
each get_log_page call. So it is only going to help for large transfers.
I am currently working on libnvme 2 and I think we can improve this
quite a bit there. But for libnvme 1 I'd recommend disabling
liburing.
* Re: blktests failures with v6.15 kernel
From: Jens Axboe @ 2025-06-06 15:03 UTC
To: Daniel Wagner, Yi Zhang
Cc: Shinichiro Kawasaki, linux-block@vger.kernel.org,
linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
nbd@other.debian.org, linux-rdma@vger.kernel.org, Tomas Bzatek
On 6/6/25 8:58 AM, Daniel Wagner wrote:
> FWIW, the contributor of the io_uring feature stated that it improved
> the performance for some workloads. Though I think the whole
> integration is sub-optimal, as a new io_uring is created/configured for
> each get_log_page call. So it is only going to help for large transfers.
That's crazy... What commit is that?
--
Jens Axboe
* Re: blktests failures with v6.15 kernel
From: Daniel Wagner @ 2025-06-06 15:31 UTC
To: Jens Axboe
Cc: Yi Zhang, Shinichiro Kawasaki, linux-block@vger.kernel.org,
linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
nbd@other.debian.org, linux-rdma@vger.kernel.org, Tomas Bzatek
On Fri, Jun 06, 2025 at 09:03:11AM -0600, Jens Axboe wrote:
> On 6/6/25 8:58 AM, Daniel Wagner wrote:
> > FWIW, the contributor of the io_uring feature stated that it improved
> > the performance for some workloads. Though I think the whole
> > integration is sub-optimal, as a new io_uring is created/configured for
> > each get_log_page call. So it is only going to help for large transfers.
>
> That's crazy... What commit is that?
adee4ed1c8c8 ("ioctl: get_log_page by nvme uring cmd")
    ioctl: get_log_page by nvme uring cmd

    Use io_uring for fetching log pages.

    This showed about a 10% performance improvement for some large log pages.
https://github.com/linux-nvme/libnvme/commit/adee4ed1c8c8
Should I rip it out? I am not really attached to it.