From: Daniel Wagner <dwagner@suse.de>
To: Keith Busch <kbusch@meta.com>
Cc: linux-block@vger.kernel.org, axboe@kernel.dk,
	linux-nvme@lists.infradead.org, hch@lst.de, sagi@grimberg.me,
	Keith Busch <kbusch@kernel.org>
Subject: Re: [PATCH 0/3] nvme fabrics polling fixes
Date: Wed, 22 Mar 2023 09:48:31 +0100
Message-ID: <20230322084831.qwdarhmrs3uv5y43@carbon.lan>
In-Reply-To: <20230322002350.4038048-1-kbusch@meta.com>

On Tue, Mar 21, 2023 at 05:23:47PM -0700, Keith Busch wrote:
> From: Keith Busch <kbusch@kernel.org>
> 
> I couldn't test the existing tcp or rdma options, so I had to make a
> loop poll option. The last patch fixes the polling queues when used with
> fabrics.
> 
> Note, this depends on a patch I sent earlier today that I should have just
> included in this series:
> 
>   https://lore.kernel.org/linux-block/20230321215001.2655451-1-kbusch@meta.com/T/#u

I've tested this series with

  https://github.com/igaw/blktests/tree/queue-counts

and while everything is fine for rdma, I got a lockdep warning for tcp:


 ======================================================
 WARNING: possible circular locking dependency detected
 6.3.0-rc1+ #15 Tainted: G        W
 ------------------------------------------------------
 kworker/6:0/54 is trying to acquire lock:
 ffff888121d88030 ((work_completion)(&queue->io_work)){+.+.}-{0:0}, at: __flush_work+0xb9/0x170

 but task is already holding lock:
 ffff888100b0fd20 ((work_completion)(&queue->release_work)){+.+.}-{0:0}, at: process_one_work+0x707/0xbc0

 which lock already depends on the new lock.


 the existing dependency chain (in reverse order) is:

 -> #2 ((work_completion)(&queue->release_work)){+.+.}-{0:0}:
        lock_acquire+0x13a/0x310
        process_one_work+0x728/0xbc0
        worker_thread+0x97a/0x1480
        kthread+0x228/0x2b0
        ret_from_fork+0x1f/0x30

 -> #1 ((wq_completion)nvmet-wq){+.+.}-{0:0}:
        lock_acquire+0x13a/0x310
        __flush_workqueue+0x185/0x14e0
        nvmet_tcp_install_queue+0x63/0x270 [nvmet_tcp]
        nvmet_install_queue+0x2b1/0x6a0 [nvmet]
        nvmet_execute_admin_connect+0x381/0x880 [nvmet]
        nvmet_tcp_io_work+0x15e8/0x8f60 [nvmet_tcp]
        process_one_work+0x756/0xbc0
        worker_thread+0x97a/0x1480
        kthread+0x228/0x2b0
        ret_from_fork+0x1f/0x30

 -> #0 ((work_completion)(&queue->io_work)){+.+.}-{0:0}:
        validate_chain+0x19f1/0x6d50
        __lock_acquire+0x122d/0x1e90
        lock_acquire+0x13a/0x310
        __flush_work+0xd5/0x170
        __cancel_work_timer+0x36b/0x470
        nvmet_tcp_release_queue_work+0x25c/0x1000 [nvmet_tcp]
        process_one_work+0x756/0xbc0
        worker_thread+0x97a/0x1480
        kthread+0x228/0x2b0
        ret_from_fork+0x1f/0x30

 other info that might help us debug this:

 Chain exists of:
   (work_completion)(&queue->io_work) --> (wq_completion)nvmet-wq --> (work_completion)(&queue->release_work)

  Possible unsafe locking scenario:

        CPU0                    CPU1
        ----                    ----
   lock((work_completion)(&queue->release_work));
                                lock((wq_completion)nvmet-wq);
                                lock((work_completion)(&queue->release_work));
   lock((work_completion)(&queue->io_work));

  *** DEADLOCK ***

 2 locks held by kworker/6:0/54:
  #0: ffff888109ff6d48 ((wq_completion)nvmet-wq){+.+.}-{0:0}, at: process_one_work+0x6c8/0xbc0
  #1: ffff888100b0fd20 ((work_completion)(&queue->release_work)){+.+.}-{0:0}, at: process_one_work+0x707/0xbc0

 stack backtrace:
 CPU: 6 PID: 54 Comm: kworker/6:0 Tainted: G        W          6.3.0-rc1+ #15 f4d05de834b07d62567d33b70ec70fb0fa06f103
 Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
 Workqueue: nvmet-wq nvmet_tcp_release_queue_work [nvmet_tcp]
 Call Trace:
  <TASK>
  dump_stack_lvl+0x5a/0x80
  check_noncircular+0x2c8/0x390
  ? add_chain_block+0x5e0/0x5e0
  ? ret_from_fork+0x1f/0x30
  ? lockdep_lock+0xd3/0x260
  ? stack_trace_save+0x10a/0x1e0
  ? stack_trace_snprint+0x100/0x100
  ? check_noncircular+0x1a6/0x390
  validate_chain+0x19f1/0x6d50
  ? lockdep_unlock+0x9e/0x1f0
  ? validate_chain+0x15b2/0x6d50
  ? reacquire_held_locks+0x510/0x510
  ? reacquire_held_locks+0x510/0x510
  ? reacquire_held_locks+0x510/0x510
  ? add_lock_to_list+0xbf/0x2c0
  ? lockdep_unlock+0x9e/0x1f0
  ? validate_chain+0x15b2/0x6d50
  ? reacquire_held_locks+0x510/0x510
  ? reacquire_held_locks+0x510/0x510
  ? xfs_buf_find_lock+0xb0/0x430 [xfs e56ce85f3b18232dbd061be3c73dc29bed4ad37b]
  ? reacquire_held_locks+0x510/0x510
  ? validate_chain+0x176/0x6d50
  ? trace_lock_acquired+0x7b/0x180
  ? lock_is_held_type+0x8b/0x110
  ? lock_is_held_type+0x8b/0x110
  ? rcu_read_lock_sched_held+0x34/0x70
  ? reacquire_held_locks+0x510/0x510
  ? xfs_buf_get_map+0xd72/0x11a0 [xfs e56ce85f3b18232dbd061be3c73dc29bed4ad37b]
  ? lock_is_held_type+0x8b/0x110
  ? rcu_read_lock_sched_held+0x34/0x70
  ? trace_xfs_buf_read+0x7c/0x180 [xfs e56ce85f3b18232dbd061be3c73dc29bed4ad37b]
  ? xfs_buf_read_map+0x111/0x700 [xfs e56ce85f3b18232dbd061be3c73dc29bed4ad37b]
  ? lock_is_held_type+0x8b/0x110
  ? lock_is_held_type+0x8b/0x110
  ? xfs_btree_read_buf_block+0x205/0x300 [xfs e56ce85f3b18232dbd061be3c73dc29bed4ad37b]
  ? rcu_read_lock_sched_held+0x34/0x70
  ? trace_xfs_trans_read_buf+0x79/0x170 [xfs e56ce85f3b18232dbd061be3c73dc29bed4ad37b]
  ? xfs_btree_read_buf_block+0x205/0x300 [xfs e56ce85f3b18232dbd061be3c73dc29bed4ad37b]
  ? xfs_trans_read_buf_map+0x303/0x4f0 [xfs e56ce85f3b18232dbd061be3c73dc29bed4ad37b]
  ? trace_xfs_trans_getsb+0x170/0x170 [xfs e56ce85f3b18232dbd061be3c73dc29bed4ad37b]
  ? xfs_verify_fsbno+0x74/0x130 [xfs e56ce85f3b18232dbd061be3c73dc29bed4ad37b]
  ? xfs_btree_ptr_to_daddr+0x19b/0x660 [xfs e56ce85f3b18232dbd061be3c73dc29bed4ad37b]
  ? xfs_buf_set_ref+0x1d/0x50 [xfs e56ce85f3b18232dbd061be3c73dc29bed4ad37b]
  ? xfs_btree_read_buf_block+0x233/0x300 [xfs e56ce85f3b18232dbd061be3c73dc29bed4ad37b]
  ? mark_lock+0x8f/0x320
  ? xfs_btree_readahead+0x250/0x250 [xfs e56ce85f3b18232dbd061be3c73dc29bed4ad37b]
  ? xfs_verify_fsbno+0x74/0x130 [xfs e56ce85f3b18232dbd061be3c73dc29bed4ad37b]
  ? xfs_btree_ptr_to_daddr+0x19b/0x660 [xfs e56ce85f3b18232dbd061be3c73dc29bed4ad37b]
  ? xfs_dio_write_end_io+0x32f/0x3f0 [xfs e56ce85f3b18232dbd061be3c73dc29bed4ad37b]
  ? xfs_dio_write_end_io+0x32f/0x3f0 [xfs e56ce85f3b18232dbd061be3c73dc29bed4ad37b]
  ? xfs_dio_write_end_io+0x32f/0x3f0 [xfs e56ce85f3b18232dbd061be3c73dc29bed4ad37b]
  ? xfs_dio_write_end_io+0x32f/0x3f0 [xfs e56ce85f3b18232dbd061be3c73dc29bed4ad37b]
  ? __module_address+0x86/0x1e0
  ? ret_from_fork+0x1f/0x30
  ? deref_stack_reg+0x17f/0x210
  ? ret_from_fork+0x1f/0x30
  ? unwind_next_frame+0x16b/0x2240
  ? ret_from_fork+0x1f/0x30
  ? stack_trace_save+0x1e0/0x1e0
  ? arch_stack_walk+0xb7/0xf0
  ? lock_is_held_type+0x8b/0x110
  ? find_busiest_group+0x104e/0x2480
  ? load_balance+0x2540/0x2540
  ? stack_trace_save+0x10a/0x1e0
  ? mark_lock+0x8f/0x320
  ? __lock_acquire+0x122d/0x1e90
  ? lock_is_held_type+0x8b/0x110
  ? rcu_lock_acquire+0x30/0x30
  ? xfs_buf_ioend+0x248/0x450 [xfs e56ce85f3b18232dbd061be3c73dc29bed4ad37b]
  ? xfs_buf_ioend+0x248/0x450 [xfs e56ce85f3b18232dbd061be3c73dc29bed4ad37b]
  ? __module_address+0x86/0x1e0
  ? ret_from_fork+0x1f/0x30
  ? deref_stack_reg+0x17f/0x210
  ? ret_from_fork+0x1f/0x30
  ? unwind_next_frame+0x16b/0x2240
  ? stack_trace_save+0x10a/0x1e0
  ? deref_stack_reg+0x17f/0x210
  ? look_up_lock_class+0x65/0x130
  ? register_lock_class+0x5d/0x860
  ? mark_lock+0x8f/0x320
  __lock_acquire+0x122d/0x1e90
  lock_acquire+0x13a/0x310
  ? __flush_work+0xb9/0x170
  ? read_lock_is_recursive+0x10/0x10
  ? lock_is_held_type+0x8b/0x110
  ? rcu_lock_acquire+0x30/0x30
  __flush_work+0xd5/0x170
  ? __flush_work+0xb9/0x170
  ? flush_work+0x10/0x10
  ? lock_is_held_type+0x8b/0x110
  ? __lock_acquire+0x1e90/0x1e90
  ? do_raw_spin_unlock+0x112/0x890
  ? mark_lock+0x8f/0x320
  ? lockdep_hardirqs_on_prepare+0x2d5/0x610
  __cancel_work_timer+0x36b/0x470
  ? cancel_work_sync+0x10/0x10
  ? mark_lock+0x8f/0x320
  ? lockdep_hardirqs_on_prepare+0x2d5/0x610
  ? nvmet_tcp_release_queue_work+0x24d/0x1000 [nvmet_tcp f61749ac066e0812c28869697bc2623872f02bd4]
  ? datagram_poll+0x380/0x380
  nvmet_tcp_release_queue_work+0x25c/0x1000 [nvmet_tcp f61749ac066e0812c28869697bc2623872f02bd4]
  process_one_work+0x756/0xbc0
  ? rescuer_thread+0x13f0/0x13f0
  ? lock_acquired+0x2f2/0x930
  ? worker_thread+0xf55/0x1480
  worker_thread+0x97a/0x1480
  ? rcu_lock_release+0x20/0x20
  kthread+0x228/0x2b0
  ? rcu_lock_release+0x20/0x20
  ? kthread_blkcg+0xa0/0xa0
  ret_from_fork+0x1f/0x30
  </TASK>

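If I read the chain right: lockdep models each work item
((work_completion)) and each workqueue ((wq_completion)) as a pseudo-lock,
so the three frames above form a plain A -> B -> C -> A cycle. A minimal
sketch of the pattern as I understand it (not the actual nvmet-tcp code,
just the two calls the trace points at; names follow the trace):

  static struct workqueue_struct *nvmet_wq;

  struct nvmet_tcp_queue {
          struct work_struct io_work;
          struct work_struct release_work;  /* runs on nvmet_wq */
  };

  /*
   * #1: runs with (work_completion)(&queue->io_work) held and flushes
   * nvmet_wq via nvmet_tcp_install_queue() on the connect path:
   * io_work -> (wq_completion)nvmet-wq
   */
  static void nvmet_tcp_io_work(struct work_struct *w)
  {
          flush_workqueue(nvmet_wq);
  }

  /*
   * #2/#0: runs on nvmet_wq, i.e. with (wq_completion)nvmet-wq and
   * (work_completion)(&queue->release_work) held, then waits
   * synchronously for io_work: release_work -> io_work closes the cycle
   */
  static void nvmet_tcp_release_queue_work(struct work_struct *w)
  {
          struct nvmet_tcp_queue *queue =
                  container_of(w, struct nvmet_tcp_queue, release_work);

          cancel_work_sync(&queue->io_work);
  }

So if release_work ends up waiting for an io_work instance that is itself
waiting for nvmet-wq to drain, and nvmet-wq cannot make progress because
it is the one running release_work, all three wait on each other.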