linux-block.vger.kernel.org archive mirror
From: Christoph Hellwig <hch@lst.de>
To: Bart Van Assche <bvanassche@acm.org>
Cc: Ming Lei <ming.lei@redhat.com>, Jens Axboe <axboe@kernel.dk>,
	linux-block@vger.kernel.org, Christoph Hellwig <hch@lst.de>
Subject: Re: [PATCH] block: model freeze & enter queue as rwsem for supporting lockdep
Date: Tue, 22 Oct 2024 08:26:26 +0200	[thread overview]
Message-ID: <20241022062626.GB10573@lst.de> (raw)
In-Reply-To: <46493f6f-850e-459e-a4be-116deb5d3ca0@acm.org>

On Fri, Oct 18, 2024 at 09:57:12AM -0700, Bart Van Assche wrote:
> Thank you for having reported this issue and for having proposed a
> patch. I think the following is missing from the patch description:
> (a) An analysis of which code causes the inconsistent nested lock order.
> (b) A discussion of potential alternatives.
>
> It seems unavoidable to me that some code freezes request queue(s)
> before the limits are updated. Additionally, there is code that freezes
> queues first (sd_revalidate_disk()), executes commands and next updates
> limits. Hence the inconsistent order of freezing queues and obtaining
> limits_lock.
>
> The alternative (entirely untested) solution below has the following
> advantages:
> * No additional information has to be provided to lockdep about the
>   locking order.
> * No new flags are required (SKIP_FREEZE_LOCKDEP).
> * No exceptions are necessary for blk_queue_exit() nor for the NVMe
>   driver.

As Ming mentioned, fixing the lockdep reports is a different problem
from adding the lockdep annotations in the first place.  But I'll chime
in on your proposal to fix the report anyway.

I think requiring the queue to be frozen before we start the queue
limits update is reasonable in general, but a major pain for
sd_revalidate_disk, and unfortunately I don't think your proposed patch
works there as it doesn't protect against concurrent updates from
say sysfs, and also doesn't scale to adding new limits.

One option to deal with this would be to add an update sequence to
the queue_limits.  sd_revalidate_disk would sample it where it
currently starts the update, then drop the lock and query the
device, then take the limits lock again and check the sequence.
If it still matches, it commits the update; otherwise it retries.



Thread overview: 15+ messages
2024-10-18  1:35 [PATCH] block: model freeze & enter queue as rwsem for supporting lockdep Ming Lei
2024-10-18 16:57 ` Bart Van Assche
2024-10-21  2:20   ` Ming Lei
2024-10-22  6:26   ` Christoph Hellwig [this message]
2024-10-18 18:45 ` kernel test robot
2024-10-19 22:46 ` Jens Axboe
2024-10-21 11:17   ` Ming Lei
2024-10-22  6:18 ` Christoph Hellwig
2024-10-22  7:19   ` Peter Zijlstra
2024-10-22  7:21     ` Christoph Hellwig
2024-10-23  3:22   ` Ming Lei
2024-10-23  6:07     ` Christoph Hellwig
2024-10-22 15:05 ` Bart Van Assche
2024-10-23  7:59   ` Ming Lei
2024-10-23 18:05     ` Bart Van Assche
