Message-ID: <1538d5e9-eb59-49a7-90c8-77a290f3a420@kernel.org>
Date: Mon, 6 Jan 2025 19:55:30 +0900
Subject: Re: [PATCH 05/10] block: don't update BLK_FEAT_POLL in __blk_mq_update_nr_hw_queues
From: Damien Le Moal
Organization: Western Digital Research
To: Christoph Hellwig, Jens Axboe
Cc: Ming Lei, Nilay Shroff, linux-block@vger.kernel.org,
 linux-nvme@lists.infradead.org, nbd@other.debian.org,
 virtualization@lists.linux.dev, linux-scsi@vger.kernel.org,
 usb-storage@lists.one-eyed-alien.net
References: <20250106100645.850445-1-hch@lst.de> <20250106100645.850445-6-hch@lst.de>
In-Reply-To: <20250106100645.850445-6-hch@lst.de>

On 1/6/25 7:06 PM, Christoph Hellwig wrote:
> When __blk_mq_update_nr_hw_queues changes the number of tag sets, it
> might have to disable poll queues. Currently it does so by adjusting
> the BLK_FEAT_POLL, which is a bit against the intent of features that
> describe hardware / driver capabilities, but more importantly causes
> nasty lock order problems with the broadly held freeze when updating the
> number of hardware queues and the limits lock. Fix this by leaving
> BLK_FEAT_POLL alone, and instead check for the number of sets and poll
> queues in the bio submission and poll handler. While this adds extra
> work to the fast path, the variables are in cache lines used by these
> operations anyway, so it should be cheap enough.
>
> Signed-off-by: Christoph Hellwig
> ---
>  block/blk-core.c | 14 +++++++++++---
>  block/blk-mq.c   | 19 +------------------
>  block/blk-mq.h   |  6 ++++++
>  3 files changed, 18 insertions(+), 21 deletions(-)
>
> diff --git a/block/blk-core.c b/block/blk-core.c
> index 666efe8fa202..483c14a50d9f 100644
> --- a/block/blk-core.c
> +++ b/block/blk-core.c
> @@ -753,6 +753,15 @@ static blk_status_t blk_validate_atomic_write_op_size(struct request_queue *q,
>  	return BLK_STS_OK;
>  }
>  
> +static bool bdev_can_poll(struct block_device *bdev)
> +{
> +	struct request_queue *q = bdev_get_queue(bdev);
> +
> +	if (queue_is_mq(q))
> +		return blk_mq_can_poll(q->tag_set);
> +	return q->limits.features & BLK_FEAT_POLL;
> +}
> +
>  /**
>   * submit_bio_noacct - re-submit a bio to the block device layer for I/O
>   * @bio: The bio describing the location in memory and on the device.
> @@ -805,8 +814,7 @@ void submit_bio_noacct(struct bio *bio)
>  		}
>  	}
>  
> -	if (!(q->limits.features & BLK_FEAT_POLL) &&
> -	    (bio->bi_opf & REQ_POLLED)) {
> +	if ((bio->bi_opf & REQ_POLLED) && !bdev_can_poll(bdev)) {
>  		bio_clear_polled(bio);
>  		goto not_supported;
>  	}
> @@ -935,7 +943,7 @@ int bio_poll(struct bio *bio, struct io_comp_batch *iob, unsigned int flags)
>  		return 0;
>  
>  	q = bdev_get_queue(bdev);
> -	if (cookie == BLK_QC_T_NONE || !(q->limits.features & BLK_FEAT_POLL))
> +	if (cookie == BLK_QC_T_NONE || !bdev_can_poll(bdev))
>  		return 0;
>  
>  	blk_flush_plug(current->plug, false);
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index 17f10683d640..0a7f059735fa 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -4321,12 +4321,6 @@ void blk_mq_release(struct request_queue *q)
>  	blk_mq_sysfs_deinit(q);
>  }
>  
> -static bool blk_mq_can_poll(struct blk_mq_tag_set *set)
> -{
> -	return set->nr_maps > HCTX_TYPE_POLL &&
> -		set->map[HCTX_TYPE_POLL].nr_queues;
> -}
> -
>  struct request_queue *blk_mq_alloc_queue(struct blk_mq_tag_set *set,
>  		struct queue_limits *lim, void *queuedata)
>  {
> @@ -4336,9 +4330,7 @@ struct request_queue *blk_mq_alloc_queue(struct blk_mq_tag_set *set,
>  
>  	if (!lim)
>  		lim = &default_lim;
> -	lim->features |= BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT;
> -	if (blk_mq_can_poll(set))
> -		lim->features |= BLK_FEAT_POLL;
> +	lim->features |= BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT | BLK_FEAT_POLL;

Why set BLK_FEAT_POLL unconditionally? This changes the current default
for many devices, no?
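To make the concern concrete: anything that still keys off the feature
bit instead of the new bdev_can_poll() helper (the sysfs io_poll
attribute, presumably) would now report polling as supported even for a
tag set without poll queues. A minimal sketch of what such a check sees
(the helper name below is mine, for illustration only, not from the
patch):

	/* Illustration only: once blk_mq_alloc_queue() sets
	 * BLK_FEAT_POLL unconditionally, the feature bit no longer
	 * reflects whether poll queues actually exist.
	 */
	static bool feature_bit_reports_poll(struct request_queue *q)
	{
		/* False before this patch for tag sets without a poll
		 * map; always true for blk-mq queues after it.
		 */
		return q->limits.features & BLK_FEAT_POLL;
	}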
>
>  	q = blk_alloc_queue(lim, set->numa_node);
>  	if (IS_ERR(q))
> @@ -5025,8 +5017,6 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
>  fallback:
>  	blk_mq_update_queue_map(set);
>  	list_for_each_entry(q, &set->tag_list, tag_set_list) {
> -		struct queue_limits lim;
> -
>  		blk_mq_realloc_hw_ctxs(set, q);
>  
>  		if (q->nr_hw_queues != set->nr_hw_queues) {
> @@ -5040,13 +5030,6 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
>  			set->nr_hw_queues = prev_nr_hw_queues;
>  			goto fallback;
>  		}
> -		lim = queue_limits_start_update(q);
> -		if (blk_mq_can_poll(set))
> -			lim.features |= BLK_FEAT_POLL;
> -		else
> -			lim.features &= ~BLK_FEAT_POLL;
> -		if (queue_limits_commit_update(q, &lim) < 0)
> -			pr_warn("updating the poll flag failed\n");
>  		blk_mq_map_swqueue(q);
>  	}
>  
> diff --git a/block/blk-mq.h b/block/blk-mq.h
> index 89a20fffa4b1..ecd7bd7ec609 100644
> --- a/block/blk-mq.h
> +++ b/block/blk-mq.h
> @@ -111,6 +111,12 @@ static inline struct blk_mq_hw_ctx *blk_mq_map_queue(struct request_queue *q,
>  	return ctx->hctxs[blk_mq_get_hctx_type(opf)];
>  }
>  
> +static inline bool blk_mq_can_poll(struct blk_mq_tag_set *set)
> +{
> +	return set->nr_maps > HCTX_TYPE_POLL &&
> +		set->map[HCTX_TYPE_POLL].nr_queues;
> +}
> +
>  /*
>   * sysfs helpers
>   */

-- 
Damien Le Moal
Western Digital Research