From: Damien Le Moal <dlemoal@kernel.org>
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Ming Lei <ming.lei@redhat.com>,
Nilay Shroff <nilay@linux.ibm.com>,
linux-block@vger.kernel.org, linux-nvme@lists.infradead.org,
nbd@other.debian.org, virtualization@lists.linux.dev,
linux-scsi@vger.kernel.org, usb-storage@lists.one-eyed-alien.net
Subject: Re: [PATCH 05/10] block: don't update BLK_FEAT_POLL in __blk_mq_update_nr_hw_queues
Date: Mon, 6 Jan 2025 19:55:30 +0900
Message-ID: <1538d5e9-eb59-49a7-90c8-77a290f3a420@kernel.org>
In-Reply-To: <20250106100645.850445-6-hch@lst.de>
On 1/6/25 7:06 PM, Christoph Hellwig wrote:
> When __blk_mq_update_nr_hw_queues changes the number of tag sets, it
> might have to disable poll queues. Currently it does so by adjusting
> the BLK_FEAT_POLL, which is a bit against the intent of features that
> describe hardware / driver capabilities, but more importantly causes
> nasty lock order problems with the broadly held freeze when updating the
> number of hardware queues and the limits lock. Fix this by leaving
> BLK_FEAT_POLL alone, and instead check for the number of sets and poll
> queues in the bio submission and poll handler. While this adds extra
> work to the fast path, the variables are in cache lines used by these
> operations anyway, so it should be cheap enough.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
> block/blk-core.c | 14 +++++++++++---
> block/blk-mq.c | 19 +------------------
> block/blk-mq.h | 6 ++++++
> 3 files changed, 18 insertions(+), 21 deletions(-)
>
> diff --git a/block/blk-core.c b/block/blk-core.c
> index 666efe8fa202..483c14a50d9f 100644
> --- a/block/blk-core.c
> +++ b/block/blk-core.c
> @@ -753,6 +753,15 @@ static blk_status_t blk_validate_atomic_write_op_size(struct request_queue *q,
> return BLK_STS_OK;
> }
>
> +static bool bdev_can_poll(struct block_device *bdev)
> +{
> + struct request_queue *q = bdev_get_queue(bdev);
> +
> + if (queue_is_mq(q))
> + return blk_mq_can_poll(q->tag_set);
> + return q->limits.features & BLK_FEAT_POLL;
> +}
> +
> /**
> * submit_bio_noacct - re-submit a bio to the block device layer for I/O
> * @bio: The bio describing the location in memory and on the device.
> @@ -805,8 +814,7 @@ void submit_bio_noacct(struct bio *bio)
> }
> }
>
> - if (!(q->limits.features & BLK_FEAT_POLL) &&
> - (bio->bi_opf & REQ_POLLED)) {
> + if ((bio->bi_opf & REQ_POLLED) && !bdev_can_poll(bdev)) {
> bio_clear_polled(bio);
> goto not_supported;
> }
> @@ -935,7 +943,7 @@ int bio_poll(struct bio *bio, struct io_comp_batch *iob, unsigned int flags)
> return 0;
>
> q = bdev_get_queue(bdev);
> - if (cookie == BLK_QC_T_NONE || !(q->limits.features & BLK_FEAT_POLL))
> + if (cookie == BLK_QC_T_NONE || !bdev_can_poll(bdev))
> return 0;
>
> blk_flush_plug(current->plug, false);
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index 17f10683d640..0a7f059735fa 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -4321,12 +4321,6 @@ void blk_mq_release(struct request_queue *q)
> blk_mq_sysfs_deinit(q);
> }
>
> -static bool blk_mq_can_poll(struct blk_mq_tag_set *set)
> -{
> - return set->nr_maps > HCTX_TYPE_POLL &&
> - set->map[HCTX_TYPE_POLL].nr_queues;
> -}
> -
> struct request_queue *blk_mq_alloc_queue(struct blk_mq_tag_set *set,
> struct queue_limits *lim, void *queuedata)
> {
> @@ -4336,9 +4330,7 @@ struct request_queue *blk_mq_alloc_queue(struct blk_mq_tag_set *set,
>
> if (!lim)
> lim = &default_lim;
> - lim->features |= BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT;
> - if (blk_mq_can_poll(set))
> - lim->features |= BLK_FEAT_POLL;
> + lim->features |= BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT | BLK_FEAT_POLL;
Why set BLK_FEAT_POLL unconditionally? Doesn't this change the current
default for many devices?
>
> q = blk_alloc_queue(lim, set->numa_node);
> if (IS_ERR(q))
> @@ -5025,8 +5017,6 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
> fallback:
> blk_mq_update_queue_map(set);
> list_for_each_entry(q, &set->tag_list, tag_set_list) {
> - struct queue_limits lim;
> -
> blk_mq_realloc_hw_ctxs(set, q);
>
> if (q->nr_hw_queues != set->nr_hw_queues) {
> @@ -5040,13 +5030,6 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
> set->nr_hw_queues = prev_nr_hw_queues;
> goto fallback;
> }
> - lim = queue_limits_start_update(q);
> - if (blk_mq_can_poll(set))
> - lim.features |= BLK_FEAT_POLL;
> - else
> - lim.features &= ~BLK_FEAT_POLL;
> - if (queue_limits_commit_update(q, &lim) < 0)
> - pr_warn("updating the poll flag failed\n");
> blk_mq_map_swqueue(q);
> }
>
> diff --git a/block/blk-mq.h b/block/blk-mq.h
> index 89a20fffa4b1..ecd7bd7ec609 100644
> --- a/block/blk-mq.h
> +++ b/block/blk-mq.h
> @@ -111,6 +111,12 @@ static inline struct blk_mq_hw_ctx *blk_mq_map_queue(struct request_queue *q,
> return ctx->hctxs[blk_mq_get_hctx_type(opf)];
> }
>
> +static inline bool blk_mq_can_poll(struct blk_mq_tag_set *set)
> +{
> + return set->nr_maps > HCTX_TYPE_POLL &&
> + set->map[HCTX_TYPE_POLL].nr_queues;
> +}
> +
> /*
> * sysfs helpers
> */
--
Damien Le Moal
Western Digital Research