Linux-NVME Archive on lore.kernel.org
From: Christoph Hellwig <hch@infradead.org>
To: Bart Van Assche <bvanassche@acm.org>
Cc: "linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>
Subject: Re: Recursive locking complaint with nvme-5.13 branch
Date: Thu, 1 Apr 2021 16:37:22 +0100	[thread overview]
Message-ID: <20210401153722.GA1501960@infradead.org> (raw)
In-Reply-To: <f8288fe3-d8d2-5cfe-6aef-410bcd6e6231@acm.org>

On Wed, Mar 31, 2021 at 09:03:36PM -0700, Bart Van Assche wrote:
> Hi,
> 
> If I boot a VM with the nvme-5.13 branch (commit 24e238c92186
> ("nvme: warn of unhandled effects only once")) then the complaint
> shown below is reported. Is this a known issue?

This looks like someone is trying to open an NVMe device as the backing
device for pktcdvd?  In that case this is a different bd_mutex.  But
I'm really curious why systemd would do that.

> 
> Thanks,
> 
> Bart.
> 
> 
> ============================================
> WARNING: possible recursive locking detected
> 5.12.0-rc3-dbg+ #6 Not tainted
> --------------------------------------------
> systemd-udevd/299 is trying to acquire lock:
> ffff88811b1e80a0 (&bdev->bd_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x85/0x350
> 
> but task is already holding lock:
> ffff8881134100a0 (&bdev->bd_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x1a9/0x350
> 
> other info that might help us debug this:
>  Possible unsafe locking scenario:
> 
>        CPU0
>        ----
>   lock(&bdev->bd_mutex);
>   lock(&bdev->bd_mutex);
> 
>  *** DEADLOCK ***
> 
>  May be due to missing lock nesting notation
> 
> 3 locks held by systemd-udevd/299:
>  #0: ffff8881134100a0 (&bdev->bd_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x1a9/0x350
>  #1: ffffffffa10269c8 (pktcdvd_mutex){+.+.}-{3:3}, at: pkt_open+0x22/0x15a [pktcdvd]
>  #2: ffffffffa1025788 (&ctl_mutex#2){+.+.}-{3:3}, at: pkt_open+0x30/0x15a [pktcdvd]
> 
> stack backtrace:
> CPU: 6 PID: 299 Comm: systemd-udevd Not tainted 5.12.0-rc3-dbg+ #6
> Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a-rebuilt.opensuse.org 04/01/2014
> Call Trace:
>  show_stack+0x52/0x58
>  dump_stack+0x9d/0xcf
>  print_deadlock_bug.cold+0x131/0x136
>  validate_chain+0x6d3/0xc70
>  ? check_prev_add+0x11d0/0x11d0
>  __lock_acquire+0x500/0x920
>  ? start_flush_work+0x375/0x510
>  ? __this_cpu_preempt_check+0x13/0x20
>  lock_acquire.part.0+0x117/0x210
>  ? blkdev_get_by_dev+0x85/0x350
>  ? rcu_read_unlock+0x50/0x50
>  ? __this_cpu_preempt_check+0x13/0x20
>  ? lock_is_held_type+0xdb/0x130
>  lock_acquire+0x9b/0x1a0
>  ? blkdev_get_by_dev+0x85/0x350
>  __mutex_lock+0x117/0xb60
>  ? blkdev_get_by_dev+0x85/0x350
>  ? blkdev_get_by_dev+0x85/0x350
>  ? mutex_lock_io_nested+0xa70/0xa70
>  ? __kasan_check_write+0x14/0x20
>  ? __mutex_unlock_slowpath+0xa7/0x290
>  ? __ww_mutex_check_kill+0x160/0x160
>  ? trace_hardirqs_on+0x2b/0x130
>  ? mutex_unlock+0x12/0x20
>  ? disk_block_events+0x92/0xc0
>  mutex_lock_nested+0x1b/0x20
>  blkdev_get_by_dev+0x85/0x350
>  ? __mutex_lock+0x49c/0xb60
>  pkt_open_dev+0x7f/0x370 [pktcdvd]
>  ? pkt_open_write+0x120/0x120 [pktcdvd]
>  ? __ww_mutex_check_kill+0x160/0x160
>  pkt_open+0xfd/0x15a [pktcdvd]
>  __blkdev_get+0xa3/0x450
>  blkdev_get_by_dev+0x1b4/0x350
>  ? __kasan_check_read+0x11/0x20
>  blkdev_open+0xa4/0x120
>  do_dentry_open+0x27d/0x690
>  ? blkdev_get_by_dev+0x350/0x350
>  vfs_open+0x58/0x60
>  do_open+0x316/0x4a0
>  path_openat+0x1b8/0x260
>  ? do_tmpfile+0x160/0x160
>  ? __this_cpu_preempt_check+0x13/0x20
>  do_filp_open+0x12d/0x240
>  ? may_open_dev+0x60/0x60
>  ? __kasan_check_read+0x11/0x20
>  ? do_raw_spin_unlock+0x98/0xf0
>  ? preempt_count_sub+0x18/0xc0
>  ? _raw_spin_unlock+0x2d/0x50
>  do_sys_openat2+0xe9/0x260
>  ? build_open_flags+0x2a0/0x2a0
>  __x64_sys_openat+0xd3/0x130
>  ? __ia32_sys_open+0x110/0x110
>  ? __secure_computing+0x74/0x140
>  ? syscall_trace_enter.constprop.0+0x71/0x230
>  do_syscall_64+0x32/0x80
>  entry_SYSCALL_64_after_hwframe+0x44/0xae
> 
> _______________________________________________
> Linux-nvme mailing list
> Linux-nvme@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-nvme
---end quoted text---


Thread overview: 5+ messages
2021-04-01  4:03 Recursive locking complaint with nvme-5.13 branch Bart Van Assche
2021-04-01 15:37 ` Christoph Hellwig [this message]
2021-04-01 16:09   ` Bart Van Assche
2021-04-01 16:12     ` Christoph Hellwig
2021-04-01 17:20       ` Bart Van Assche
