From: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
To: Nilay Shroff <nilay@linux.ibm.com>
Cc: "Daniel Wagner" <dwagner@suse.de>,
	"Chaitanya Kulkarni" <chaitanyak@nvidia.com>,
	"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
	"linux-scsi@vger.kernel.org" <linux-scsi@vger.kernel.org>,
	"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>,
	"lsf-pc@lists.linux-foundation.org"
	<lsf-pc@lists.linux-foundation.org>,
	"Bart Van Assche" <bvanassche@acm.org>,
	"Hannes Reinecke" <hare@suse.de>, hch <hch@lst.de>,
	"Jens Axboe" <axboe@kernel.dk>,
	"sagi@grimberg.me" <sagi@grimberg.me>,
	"tytso@mit.edu" <tytso@mit.edu>,
	"Johannes Thumshirn" <Johannes.Thumshirn@wdc.com>,
	"Christian Brauner" <brauner@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
	"Javier González" <javier@javigon.com>,
	"willy@infradead.org" <willy@infradead.org>,
	"Jan Kara" <jack@suse.cz>,
	"amir73il@gmail.com" <amir73il@gmail.com>,
	"vbabka@suse.cz" <vbabka@suse.cz>,
	"Damien Le Moal" <dlemoal@kernel.org>
Subject: Re: [LSF/MM/BPF ATTEND][LSF/MM/BPF TOPIC] : blktests: status, expansion plan for the storage stack test framework
Date: Mon, 27 Apr 2026 20:50:11 +0900
Message-ID: <ae9KM8mYjUTvlu31@shinmob>
In-Reply-To: <d6282aa7-4673-4bae-a0ff-fbd84f0a610f@linux.ibm.com>

On Apr 23, 2026 / 13:35, Nilay Shroff wrote:
> On 4/21/26 11:49 AM, Shin'ichiro Kawasaki wrote:
> > On Feb 16, 2026 / 00:08, Nilay Shroff wrote:
> > > On 2/13/26 4:53 PM, Shinichiro Kawasaki wrote:
[...]
> > > >   4. Long standing failures make test result reports dirty
> > > >      - I feel lockdep WARNs tend to be left unfixed for a rather long period.
> > > >        How can we gather effort to fix them?
> > > 
> > > I agree regarding lockdep; recently we did see quite a few lockdep splats.
> > > That said, I believe the number has dropped significantly and only a small
> > > set remains. From what I can tell, most of the outstanding lockdep issues
> > > are related to fs-reclaim paths recursing into the block layer while the
> > > queue is frozen. We should be able to resolve most of these soon, or at
> > > least before the conference. If anything is still outstanding after that,
> > > we can discuss it during the conference and work toward addressing it as
> > > quickly as possible.
> > 
> > Taking this chance, I'd like to express my appreciation for the effort to
> > resolve the lockdep issues. It is great that a number of lockdep issues are
> > already fixed. That said, two lockdep issues are still observed with the
> > v7.0 kernel in nvme/005 and nbd/002 [1]. I would like to draw attention to
> > these failures.
> > 
> > [1] https://lore.kernel.org/linux-block/ynmi72x5wt5ooljjafebhcarit3pvu6axkslqenikb2p5txe57@ldytqa2t4i2x/
> > 
> I think the nvme/005 and nbd/002 failures should be addressed by this
> patch: https://lore.kernel.org/all/20260413171628.6204-1-kch@nvidia.com/
> 
> It's currently applied to nvme-7.1 but has not yet reached the mainline kernel.

Ah, I missed that patch. Thanks a lot, Chaitanya!

Today, I applied the nvme fix patch on top of v7.1-rc1 and ran nvme/005 with
the tcp transport. Unfortunately, I still observe the lockdep splat involving
&q->elevator_lock, &q->q_usage_counter(io) and set->srcu [*]. This time the
call chain looks a bit different (cpu_hotplug_lock is involved?).

I also still observe the nbd/002 failure. The nvme fix patch does not touch
nbd, so it is expected that the nbd/002 failure remains.
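
For anyone who wants to reproduce this, the steps are roughly the following
(a sketch assuming a default blktests setup; the test creates the loop-backed
nvmet target by itself):

  # run the nvme/005 reset test over the tcp transport
  cd blktests
  nvme_trtype=tcp ./check nvme/005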


[*]

Apr 27 20:32:07 testnode1 unknown: run blktests nvme/005 at 2026-04-27 20:32:07
Apr 27 20:32:08 testnode1 kernel: loop0: detected capacity change from 0 to 2097152
Apr 27 20:32:08 testnode1 kernel: nvmet: adding nsid 1 to subsystem blktests-subsystem-1
Apr 27 20:32:08 testnode1 kernel: nvmet_tcp: enabling port 0 (127.0.0.1:4420)
Apr 27 20:32:08 testnode1 kernel: nvmet: Created nvm controller 1 for subsystem blktests-subsystem-1 for NQN nqn.2014-08.org.nvmexpress:uuid:0f01fb42-9f7f-4856-b0b3-51e60b8de349.
Apr 27 20:32:08 testnode1 kernel: nvme nvme5: creating 4 I/O queues.
Apr 27 20:32:08 testnode1 kernel: nvme nvme5: mapped 4/0/0 default/read/poll queues.
Apr 27 20:32:08 testnode1 kernel: nvme nvme5: new ctrl: NQN "blktests-subsystem-1", addr 127.0.0.1:4420, hostnqn: nqn.2014-08.org.nvmexpress:uuid:0f01fb42-9f7f-4856-b0b3-51e60b8de349
Apr 27 20:32:08 testnode1 kernel: nvmet: Created nvm controller 2 for subsystem blktests-subsystem-1 for NQN nqn.2014-08.org.nvmexpress:uuid:0f01fb42-9f7f-4856-b0b3-51e60b8de349.
Apr 27 20:32:08 testnode1 kernel: nvme nvme5: creating 4 I/O queues.
Apr 27 20:32:08 testnode1 kernel: nvme nvme5: mapped 4/0/0 default/read/poll queues.
Apr 27 20:32:08 testnode1 kernel: nvme nvme5: Removing ctrl: NQN "blktests-subsystem-1"
Apr 27 20:32:08 testnode1 kernel: 
Apr 27 20:32:08 testnode1 kernel: ======================================================
Apr 27 20:32:08 testnode1 kernel: WARNING: possible circular locking dependency detected
Apr 27 20:32:08 testnode1 kernel: 7.1.0-rc1+ #3 Not tainted
Apr 27 20:32:08 testnode1 kernel: ------------------------------------------------------
Apr 27 20:32:08 testnode1 kernel: nvme/1171 is trying to acquire lock:
Apr 27 20:32:08 testnode1 kernel: ffff888121e8bb98 (set->srcu){.+.+}-{0:0}, at: __synchronize_srcu+0x21/0x2b0
Apr 27 20:32:08 testnode1 kernel: 
                                  but task is already holding lock:
Apr 27 20:32:08 testnode1 kernel: ffff88812ab7bd68 (&q->elevator_lock){+.+.}-{4:4}, at: elevator_change+0x188/0x4f0
Apr 27 20:32:08 testnode1 kernel: 
                                  which lock already depends on the new lock.
Apr 27 20:32:08 testnode1 kernel: 
                                  the existing dependency chain (in reverse order) is:
Apr 27 20:32:08 testnode1 kernel: 
                                  -> #5 (&q->elevator_lock){+.+.}-{4:4}:
Apr 27 20:32:08 testnode1 kernel:        __mutex_lock+0x1ae/0x2600
Apr 27 20:32:08 testnode1 kernel:        elevator_change+0x188/0x4f0
Apr 27 20:32:08 testnode1 kernel:        elv_iosched_store+0x308/0x390
Apr 27 20:32:08 testnode1 kernel:        queue_attr_store+0x23b/0x360
Apr 27 20:32:08 testnode1 kernel:        kernfs_fop_write_iter+0x3d6/0x5e0
Apr 27 20:32:08 testnode1 kernel:        vfs_write+0x52c/0xf80
Apr 27 20:32:08 testnode1 kernel:        ksys_write+0xfb/0x200
Apr 27 20:32:08 testnode1 kernel:        do_syscall_64+0xdd/0x14c0
Apr 27 20:32:08 testnode1 kernel:        entry_SYSCALL_64_after_hwframe+0x76/0x7e
Apr 27 20:32:08 testnode1 kernel: 
                                  -> #4 (&q->q_usage_counter(io)){++++}-{0:0}:
Apr 27 20:32:08 testnode1 kernel:        blk_alloc_queue+0x5b3/0x730
Apr 27 20:32:08 testnode1 kernel:        blk_mq_alloc_queue+0x13f/0x250
Apr 27 20:32:08 testnode1 kernel:        scsi_alloc_sdev+0x84e/0xca0
Apr 27 20:32:08 testnode1 kernel:        scsi_probe_and_add_lun+0x63f/0xc30
Apr 27 20:32:08 testnode1 kernel:        __scsi_add_device+0x1be/0x1f0
Apr 27 20:32:08 testnode1 kernel:        ata_scsi_scan_host+0x139/0x3a0
Apr 27 20:32:08 testnode1 kernel:        async_run_entry_fn+0x93/0x550
Apr 27 20:32:08 testnode1 kernel:        process_one_work+0x8b4/0x1640
Apr 27 20:32:08 testnode1 kernel:        worker_thread+0x606/0xff0
Apr 27 20:32:08 testnode1 kernel:        kthread+0x368/0x460
Apr 27 20:32:08 testnode1 kernel:        ret_from_fork+0x653/0x9d0
Apr 27 20:32:08 testnode1 kernel:        ret_from_fork_asm+0x1a/0x30
Apr 27 20:32:08 testnode1 kernel: 
                                  -> #3 (fs_reclaim){+.+.}-{0:0}:
Apr 27 20:32:08 testnode1 kernel:        fs_reclaim_acquire+0xd5/0x120
Apr 27 20:32:08 testnode1 kernel:        __kmalloc_cache_node_noprof+0x51/0x740
Apr 27 20:32:08 testnode1 kernel:        create_worker+0xfb/0x710
Apr 27 20:32:08 testnode1 kernel:        workqueue_prepare_cpu+0x87/0xe0
Apr 27 20:32:08 testnode1 kernel:        cpuhp_invoke_callback+0x2a7/0x1230
Apr 27 20:32:08 testnode1 kernel:        __cpuhp_invoke_callback_range+0xbd/0x1f0
Apr 27 20:32:08 testnode1 kernel:        _cpu_up+0x2ec/0x700
Apr 27 20:32:08 testnode1 kernel:        cpu_up+0x111/0x190
Apr 27 20:32:08 testnode1 kernel:        cpuhp_bringup_mask+0xd3/0x110
Apr 27 20:32:08 testnode1 kernel:        bringup_nonboot_cpus+0x139/0x170
Apr 27 20:32:08 testnode1 kernel:        smp_init+0x27/0xe0
Apr 27 20:32:08 testnode1 kernel:        kernel_init_freeable+0x445/0x6f0
Apr 27 20:32:08 testnode1 kernel:        kernel_init+0x18/0x150
Apr 27 20:32:08 testnode1 kernel:        ret_from_fork+0x653/0x9d0
Apr 27 20:32:08 testnode1 kernel:        ret_from_fork_asm+0x1a/0x30
Apr 27 20:32:08 testnode1 kernel: 
                                  -> #2 (cpu_hotplug_lock){++++}-{0:0}:
Apr 27 20:32:08 testnode1 kernel:        cpus_read_lock+0x3c/0xe0
Apr 27 20:32:08 testnode1 kernel:        static_key_disable+0x12/0x30
Apr 27 20:32:08 testnode1 kernel:        __inet_hash_connect+0x10f7/0x1a50
Apr 27 20:32:08 testnode1 kernel:        tcp_v4_connect+0xcb0/0x18b0
Apr 27 20:32:08 testnode1 kernel:        __inet_stream_connect+0x349/0xf00
Apr 27 20:32:08 testnode1 kernel:        inet_stream_connect+0x55/0xb0
Apr 27 20:32:08 testnode1 kernel:        kernel_connect+0xdf/0x140
Apr 27 20:32:08 testnode1 kernel:        nvme_tcp_alloc_queue+0xa48/0x1b60 [nvme_tcp]
Apr 27 20:32:08 testnode1 kernel:        nvme_tcp_alloc_admin_queue+0xff/0x440 [nvme_tcp]
Apr 27 20:32:08 testnode1 kernel:        nvme_tcp_setup_ctrl+0x8a/0x830 [nvme_tcp]
Apr 27 20:32:08 testnode1 kernel:        nvme_tcp_create_ctrl+0x834/0xb90 [nvme_tcp]
Apr 27 20:32:08 testnode1 kernel:        nvmf_dev_write+0x3e3/0x800 [nvme_fabrics]
Apr 27 20:32:08 testnode1 kernel:        vfs_write+0x1cc/0xf80
Apr 27 20:32:08 testnode1 kernel:        ksys_write+0xfb/0x200
Apr 27 20:32:08 testnode1 kernel:        do_syscall_64+0xdd/0x14c0
Apr 27 20:32:08 testnode1 kernel:        entry_SYSCALL_64_after_hwframe+0x76/0x7e
Apr 27 20:32:08 testnode1 kernel: 
                                  -> #1 (sk_lock-AF_INET-NVME){+.+.}-{0:0}:
Apr 27 20:32:08 testnode1 kernel:        lock_sock_nested+0x32/0xf0
Apr 27 20:32:08 testnode1 kernel:        tcp_sendmsg+0x1c/0x50
Apr 27 20:32:08 testnode1 kernel:        sock_sendmsg+0x2bd/0x370
Apr 27 20:32:08 testnode1 kernel:        nvme_tcp_try_send_cmd_pdu+0x57f/0xbd0 [nvme_tcp]
Apr 27 20:32:08 testnode1 kernel:        nvme_tcp_try_send+0x1b3/0x9c0 [nvme_tcp]
Apr 27 20:32:08 testnode1 kernel:        nvme_tcp_queue_rq+0xf77/0x1970 [nvme_tcp]
Apr 27 20:32:08 testnode1 kernel:        blk_mq_dispatch_rq_list+0x39b/0x2340
Apr 27 20:32:08 testnode1 kernel:        __blk_mq_sched_dispatch_requests+0x1dd/0x1510
Apr 27 20:32:08 testnode1 kernel:        blk_mq_sched_dispatch_requests+0xa8/0x150
Apr 27 20:32:08 testnode1 kernel:        blk_mq_run_work_fn+0x127/0x2c0
Apr 27 20:32:08 testnode1 kernel:        process_one_work+0x8b4/0x1640
Apr 27 20:32:08 testnode1 kernel:        worker_thread+0x606/0xff0
Apr 27 20:32:08 testnode1 kernel:        kthread+0x368/0x460
Apr 27 20:32:08 testnode1 kernel:        ret_from_fork+0x653/0x9d0
Apr 27 20:32:08 testnode1 kernel:        ret_from_fork_asm+0x1a/0x30
Apr 27 20:32:08 testnode1 kernel: 
                                  -> #0 (set->srcu){.+.+}-{0:0}:
Apr 27 20:32:08 testnode1 kernel:        __lock_acquire+0x14a6/0x2230
Apr 27 20:32:08 testnode1 kernel:        lock_sync+0xbd/0x120
Apr 27 20:32:08 testnode1 kernel:        __synchronize_srcu+0xa1/0x2b0
Apr 27 20:32:08 testnode1 kernel:        elevator_switch+0x2a5/0x680
Apr 27 20:32:08 testnode1 kernel:        elevator_change+0x2d8/0x4f0
Apr 27 20:32:08 testnode1 kernel:        elevator_set_none+0x87/0xd0
Apr 27 20:32:08 testnode1 kernel:        blk_unregister_queue+0x13f/0x2b0
Apr 27 20:32:08 testnode1 kernel:        __del_gendisk+0x263/0x9e0
Apr 27 20:32:08 testnode1 kernel:        del_gendisk+0x102/0x190
Apr 27 20:32:08 testnode1 kernel:        nvme_ns_remove+0x32a/0x900 [nvme_core]
Apr 27 20:32:08 testnode1 kernel:        nvme_remove_namespaces+0x263/0x3b0 [nvme_core]
Apr 27 20:32:08 testnode1 kernel:        nvme_do_delete_ctrl+0xf5/0x160 [nvme_core]
Apr 27 20:32:08 testnode1 kernel:        nvme_delete_ctrl_sync.cold+0x8/0xd [nvme_core]
Apr 27 20:32:08 testnode1 kernel:        nvme_sysfs_delete+0x96/0xc0 [nvme_core]
Apr 27 20:32:08 testnode1 kernel:        kernfs_fop_write_iter+0x3d6/0x5e0
Apr 27 20:32:08 testnode1 kernel:        vfs_write+0x52c/0xf80
Apr 27 20:32:08 testnode1 kernel:        ksys_write+0xfb/0x200
Apr 27 20:32:08 testnode1 kernel:        do_syscall_64+0xdd/0x14c0
Apr 27 20:32:08 testnode1 kernel:        entry_SYSCALL_64_after_hwframe+0x76/0x7e
Apr 27 20:32:08 testnode1 kernel: 
                                  other info that might help us debug this:
Apr 27 20:32:08 testnode1 kernel: Chain exists of:
                                    set->srcu --> &q->q_usage_counter(io) --> &q->elevator_lock
Apr 27 20:32:08 testnode1 kernel:  Possible unsafe locking scenario:
Apr 27 20:32:08 testnode1 kernel:        CPU0                    CPU1
Apr 27 20:32:08 testnode1 kernel:        ----                    ----
Apr 27 20:32:08 testnode1 kernel:   lock(&q->elevator_lock);
Apr 27 20:32:08 testnode1 kernel:                                lock(&q->q_usage_counter(io));
Apr 27 20:32:08 testnode1 kernel:                                lock(&q->elevator_lock);
Apr 27 20:32:08 testnode1 kernel:   sync(set->srcu);
Apr 27 20:32:08 testnode1 kernel: 
                                   *** DEADLOCK ***
Apr 27 20:32:08 testnode1 kernel: 5 locks held by nvme/1171:
Apr 27 20:32:08 testnode1 kernel:  #0: ffff88810868e410 (sb_writers#4){.+.+}-{0:0}, at: ksys_write+0xfb/0x200
Apr 27 20:32:08 testnode1 kernel:  #1: ffff88814e03f080 (&of->mutex#2){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x257/0x5e0
Apr 27 20:32:08 testnode1 kernel:  #2: ffff88814e3f84b8 (kn->active#140){++++}-{0:0}, at: sysfs_remove_file_self+0x61/0xb0
Apr 27 20:32:08 testnode1 kernel:  #3: ffff8881073281c8 (&set->update_nr_hwq_lock){++++}-{4:4}, at: del_gendisk+0xfa/0x190
Apr 27 20:32:08 testnode1 kernel:  #4: ffff88812ab7bd68 (&q->elevator_lock){+.+.}-{4:4}, at: elevator_change+0x188/0x4f0
Apr 27 20:32:08 testnode1 kernel: 
                                  stack backtrace:
Apr 27 20:32:08 testnode1 kernel: CPU: 3 UID: 0 PID: 1171 Comm: nvme Not tainted 7.1.0-rc1+ #3 PREEMPT(full) 
Apr 27 20:32:08 testnode1 kernel: Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.17.0-9.fc43 06/10/2025
Apr 27 20:32:08 testnode1 kernel: Call Trace:
Apr 27 20:32:08 testnode1 kernel:  <TASK>
Apr 27 20:32:08 testnode1 kernel:  dump_stack_lvl+0x6a/0x90
Apr 27 20:32:08 testnode1 kernel:  print_circular_bug.cold+0x185/0x1d0
Apr 27 20:32:08 testnode1 kernel:  check_noncircular+0x148/0x170
Apr 27 20:32:08 testnode1 kernel:  __lock_acquire+0x14a6/0x2230
Apr 27 20:32:08 testnode1 kernel:  lock_sync+0xbd/0x120
Apr 27 20:32:08 testnode1 kernel:  ? __synchronize_srcu+0x21/0x2b0
Apr 27 20:32:08 testnode1 kernel:  ? __synchronize_srcu+0x21/0x2b0
Apr 27 20:32:08 testnode1 kernel:  __synchronize_srcu+0xa1/0x2b0
Apr 27 20:32:08 testnode1 kernel:  ? __pfx___synchronize_srcu+0x10/0x10
Apr 27 20:32:08 testnode1 kernel:  ? kvm_clock_get_cycles+0x14/0x30
Apr 27 20:32:08 testnode1 kernel:  ? ktime_get_mono_fast_ns+0x193/0x490
Apr 27 20:32:08 testnode1 kernel:  ? lockdep_hardirqs_on+0x88/0x130
Apr 27 20:32:08 testnode1 kernel:  ? _raw_spin_unlock_irqrestore+0x4c/0x60
Apr 27 20:32:08 testnode1 kernel:  elevator_switch+0x2a5/0x680
Apr 27 20:32:08 testnode1 kernel:  elevator_change+0x2d8/0x4f0
Apr 27 20:32:08 testnode1 kernel:  elevator_set_none+0x87/0xd0
Apr 27 20:32:08 testnode1 kernel:  ? __pfx_elevator_set_none+0x10/0x10
Apr 27 20:32:08 testnode1 kernel:  ? kobject_put+0x5a/0x4e0
Apr 27 20:32:08 testnode1 kernel:  blk_unregister_queue+0x13f/0x2b0
Apr 27 20:32:08 testnode1 kernel:  __del_gendisk+0x263/0x9e0
Apr 27 20:32:08 testnode1 kernel:  ? down_read+0x13b/0x480
Apr 27 20:32:08 testnode1 kernel:  ? __pfx___del_gendisk+0x10/0x10
Apr 27 20:32:08 testnode1 kernel:  ? __pfx_down_read+0x10/0x10
Apr 27 20:32:08 testnode1 kernel:  ? up_write+0x294/0x510
Apr 27 20:32:08 testnode1 kernel:  del_gendisk+0x102/0x190
Apr 27 20:32:08 testnode1 kernel:  nvme_ns_remove+0x32a/0x900 [nvme_core]
Apr 27 20:32:08 testnode1 kernel:  nvme_remove_namespaces+0x263/0x3b0 [nvme_core]
Apr 27 20:32:08 testnode1 kernel:  ? __pfx_nvme_remove_namespaces+0x10/0x10 [nvme_core]
Apr 27 20:32:08 testnode1 kernel:  nvme_do_delete_ctrl+0xf5/0x160 [nvme_core]
Apr 27 20:32:08 testnode1 kernel:  nvme_delete_ctrl_sync.cold+0x8/0xd [nvme_core]
Apr 27 20:32:08 testnode1 kernel:  nvme_sysfs_delete+0x96/0xc0 [nvme_core]
Apr 27 20:32:08 testnode1 kernel:  ? __pfx_sysfs_kf_write+0x10/0x10
Apr 27 20:32:08 testnode1 kernel:  kernfs_fop_write_iter+0x3d6/0x5e0
Apr 27 20:32:08 testnode1 kernel:  ? __pfx_kernfs_fop_write_iter+0x10/0x10
Apr 27 20:32:08 testnode1 kernel:  vfs_write+0x52c/0xf80
Apr 27 20:32:08 testnode1 kernel:  ? __pfx_vfs_write+0x10/0x10
Apr 27 20:32:08 testnode1 kernel:  ? kasan_save_free_info+0x37/0x70
Apr 27 20:32:08 testnode1 kernel:  ? __kasan_slab_free+0x67/0x80
Apr 27 20:32:08 testnode1 kernel:  ? kmem_cache_free+0x14c/0x670
Apr 27 20:32:08 testnode1 kernel:  ? do_sys_openat2+0xfd/0x170
Apr 27 20:32:08 testnode1 kernel:  ? __x64_sys_openat+0x10a/0x210
Apr 27 20:32:08 testnode1 kernel:  ksys_write+0xfb/0x200
Apr 27 20:32:08 testnode1 kernel:  ? __pfx_ksys_write+0x10/0x10
Apr 27 20:32:08 testnode1 kernel:  do_syscall_64+0xdd/0x14c0
Apr 27 20:32:08 testnode1 kernel:  ? kasan_quarantine_put+0xff/0x220
Apr 27 20:32:08 testnode1 kernel:  ? lockdep_hardirqs_on+0x88/0x130
Apr 27 20:32:08 testnode1 kernel:  ? kasan_quarantine_put+0xff/0x220
Apr 27 20:32:08 testnode1 kernel:  ? kasan_quarantine_put+0xff/0x220
Apr 27 20:32:08 testnode1 kernel:  ? do_sys_openat2+0xfd/0x170
Apr 27 20:32:08 testnode1 kernel:  ? kmem_cache_free+0x14c/0x670
Apr 27 20:32:08 testnode1 kernel:  ? do_sys_openat2+0xfd/0x170
Apr 27 20:32:08 testnode1 kernel:  ? __pfx_do_sys_openat2+0x10/0x10
Apr 27 20:32:08 testnode1 kernel:  ? kmem_cache_free+0x14c/0x670
Apr 27 20:32:08 testnode1 kernel:  ? __x64_sys_openat+0x10a/0x210
Apr 27 20:32:08 testnode1 kernel:  ? entry_SYSCALL_64_after_hwframe+0x76/0x7e
Apr 27 20:32:08 testnode1 kernel:  ? __pfx___x64_sys_openat+0x10/0x10
Apr 27 20:32:08 testnode1 kernel:  ? rcu_is_watching+0x11/0xb0
Apr 27 20:32:08 testnode1 kernel:  ? do_syscall_64+0x1ea/0x14c0
Apr 27 20:32:08 testnode1 kernel:  ? lockdep_hardirqs_on+0x88/0x130
Apr 27 20:32:08 testnode1 kernel:  ? entry_SYSCALL_64_after_hwframe+0x76/0x7e
Apr 27 20:32:08 testnode1 kernel:  ? do_syscall_64+0x208/0x14c0
Apr 27 20:32:08 testnode1 kernel:  ? __pfx___x64_sys_openat+0x10/0x10
Apr 27 20:32:08 testnode1 kernel:  ? __pfx___x64_sys_openat+0x10/0x10
Apr 27 20:32:08 testnode1 kernel:  ? rcu_is_watching+0x11/0xb0
Apr 27 20:32:08 testnode1 kernel:  ? do_syscall_64+0x1ea/0x14c0
Apr 27 20:32:08 testnode1 kernel:  ? lockdep_hardirqs_on+0x88/0x130
Apr 27 20:32:08 testnode1 kernel:  ? do_syscall_64+0x208/0x14c0
Apr 27 20:32:08 testnode1 kernel:  ? do_syscall_64+0x32/0x14c0
Apr 27 20:32:08 testnode1 kernel:  ? preempt_count_add+0x7f/0x190
Apr 27 20:32:08 testnode1 kernel:  ? do_syscall_64+0x5d/0x14c0
Apr 27 20:32:08 testnode1 kernel:  ? do_syscall_64+0x8d/0x14c0
Apr 27 20:32:08 testnode1 kernel:  ? irqentry_exit+0xf1/0x720
Apr 27 20:32:08 testnode1 kernel:  entry_SYSCALL_64_after_hwframe+0x76/0x7e
Apr 27 20:32:08 testnode1 kernel: RIP: 0033:0x7f245cf99c5e
Apr 27 20:32:08 testnode1 kernel: Code: 4d 89 d8 e8 34 bd 00 00 4c 8b 5d f8 41 8b 93 08 03 00 00 59 5e 48 83 f8 fc 74 11 c9 c3 0f 1f 80 00 00 00 00 48 8b 45 10 0f 05 <c9> c3 83 e2 39 83 fa 08 75 e7 e8 13 ff ff ff 0f 1f 00 f3 0f 1e fa
Apr 27 20:32:08 testnode1 kernel: RSP: 002b:00007ffca6d9f6a0 EFLAGS: 00000202 ORIG_RAX: 0000000000000001
Apr 27 20:32:08 testnode1 kernel: RAX: ffffffffffffffda RBX: 00007f245d1639a6 RCX: 00007f245cf99c5e
Apr 27 20:32:08 testnode1 kernel: RDX: 0000000000000001 RSI: 00007f245d1639a6 RDI: 0000000000000003
Apr 27 20:32:08 testnode1 kernel: RBP: 00007ffca6d9f6b0 R08: 0000000000000000 R09: 0000000000000000
Apr 27 20:32:08 testnode1 kernel: R10: 0000000000000000 R11: 0000000000000202 R12: 000000003d0f6860
Apr 27 20:32:08 testnode1 kernel: R13: 000000003d0f8580 R14: 000000003d0f6680 R15: 0000000000000000
Apr 27 20:32:08 testnode1 kernel:  </TASK>
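
As Nilay mentioned above, the remaining splats are mostly rooted in fs-reclaim
recursing into the block layer while a queue is frozen (the fs_reclaim link at
#3 in the chain above). One usual way to break that link is to keep any
allocation done under a frozen queue out of fs reclaim with a noio scope,
along these lines (just a sketch of the pattern, not a concrete patch; "q" is
a struct request_queue pointer here, and the exact freeze/unfreeze API differs
between kernel versions):

	unsigned int memflags;

	/*
	 * Enter a noio allocation scope so that anything allocated while
	 * the queue is frozen cannot enter fs reclaim and recurse back
	 * into the block layer.
	 */
	memflags = memalloc_noio_save();
	blk_mq_freeze_queue(q);

	/* ... work that may allocate while the queue is frozen ... */

	blk_mq_unfreeze_queue(q);
	memalloc_noio_restore(memflags);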

