From: Hannes Reinecke <hare@suse.de>
To: hare@kernel.org, Christoph Hellwig <hch@lst.de>
Cc: Keith Busch <kbusch@kernel.org>, Sagi Grimberg <sagi@grimberg.me>,
	linux-nvme@lists.infradead.org
Subject: Re: [PATCH] nvme-multipath: fix lockdep warning on shutdown
Date: Fri, 24 Jan 2025 09:29:32 +0100
Message-ID: <70dcbf7e-d8ef-41b8-8692-3dca2f350cec@suse.de>
In-Reply-To: <20250124071439.106663-1-hare@kernel.org>

On 1/24/25 08:14, hare@kernel.org wrote:
> From: Hannes Reinecke <hare@kernel.org>
> 
> During shutdown of multipath devices, lockdep complained about a
> potential circular locking dependency:
> 
> WARNING: possible circular locking dependency detected
> (udev-worker)/2792 is trying to acquire lock:
> ffff8881012a4348 ((wq_completion)kblockd){+.+.}-{0:0}, at: touch_wq_lockdep_map+0x26/0x90
> 
> but task is already holding lock:
> ffff88811e4b7cc8 (&disk->open_mutex){+.+.}-{4:4}, at: bdev_release+0x61/0x1a0
> which lock already depends on the new lock.
> 
> the existing dependency chain (in reverse order) is:
> -> #2 (&disk->open_mutex){+.+.}-{4:4}:
>          __mutex_lock+0xa5/0xe00
>          nvme_partition_scan_work+0x31/0x60
>          process_scheduled_works+0x37c/0x6f0
> -> #1 ((work_completion)(&head->partition_scan_work)){+.+.}-{0:0}:
>          process_scheduled_works+0x348/0x6f0
>          worker_thread+0x127/0x2a0
> -> #0 ((wq_completion)kblockd){+.+.}-{0:0}:
>          __lock_acquire+0x11f9/0x1790
>          lock_acquire+0x245/0x2d0
>          touch_wq_lockdep_map+0x3b/0x90
>          __flush_work+0x240/0x4b0
>          nvme_mpath_remove_disk+0x2b/0x50
>          nvme_free_ns_head+0x19/0x90
> 
> The problem is that nvme_mpath_remove_disk() is called with
> disk->open_mutex held, so calling flush_work() on partition_scan_work
> (which itself tries to take disk->open_mutex) can deadlock.
> Fix this by checking for NVME_NSHEAD_DISK_LIVE before trying to take
> disk->open_mutex.
> 
> Fixes: 1f021341eef4 ("nvme-multipath: defer partition scanning")
> Signed-off-by: Hannes Reinecke <hare@kernel.org>
> ---
>   block/blk-ioprio.c                |  6 ++++-
>   drivers/nvme/host/multipath.c     |  2 ++
>   drivers/nvme/target/core.c        | 42 +++++++++++++++----------------
>   drivers/nvme/target/io-cmd-bdev.c |  9 +++++++
>   4 files changed, 37 insertions(+), 22 deletions(-)
> 
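For reference, the check described in the quoted message would look
roughly like the sketch below. This is illustrative only, not the
posted patch: the function body is trimmed to the relevant part, and
it assumes the NVME_NSHEAD_DISK_LIVE bit lives in head->flags as in
the mainline driver.

void nvme_mpath_remove_disk(struct nvme_ns_head *head)
{
	if (!head->disk)
		return;

	/*
	 * If NVME_NSHEAD_DISK_LIVE was never set, partition scanning
	 * never ran, so there is nothing to flush. Flushing here is
	 * what waits on nvme_partition_scan_work(), which takes
	 * disk->open_mutex while bdev_release() in our call chain
	 * already holds it.
	 */
	if (test_bit(NVME_NSHEAD_DISK_LIVE, &head->flags))
		flush_work(&head->partition_scan_work);

	/* other teardown elided */
	put_disk(head->disk);
}
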
Nice analysis, but wrong patch.

Please ignore.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich


Thread overview: 2+ messages
2025-01-24  7:14 [PATCH] nvme-multipath: fix lockdep warning on shutdown hare
2025-01-24  8:29 ` Hannes Reinecke [this message]
