From mboxrd@z Thu Jan  1 00:00:00 1970
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Jianchao Wang, Ming Lei, Jens Axboe, Giuliano Procida
Subject: [PATCH 4.14 09/38] blk-mq: sync the update nr_hw_queues with blk_mq_queue_tag_busy_iter
Date: Sat, 11 Apr 2020 14:08:53 +0200
Message-Id: <20200411115438.939741034@linuxfoundation.org>
In-Reply-To: <20200411115437.795556138@linuxfoundation.org>
References: <20200411115437.795556138@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailing-List: stable@vger.kernel.org

From: Jianchao Wang

commit f5bbbbe4d63577026f908a809f22f5fd5a90ea1f upstream.

For blk-mq, part_in_flight/rw will invoke blk_mq_in_flight/rw to account
for the in-flight requests. These accessors read queue_hw_ctx and
nr_hw_queues without any protection, so a panic comes up when an update of
nr_hw_queues and blk_mq_in_flight/rw run concurrently.

Before nr_hw_queues is updated, the queue is frozen, so we can use
q_usage_counter to avoid the race. percpu_ref_is_zero is used here so that
we will not miss any in-flight request.
The accesses to nr_hw_queues and queue_hw_ctx in blk_mq_queue_tag_busy_iter
are under an RCU read-side critical section, so __blk_mq_update_nr_hw_queues
can use synchronize_rcu to ensure the zeroed q_usage_counter is globally
visible.

Signed-off-by: Jianchao Wang
Reviewed-by: Ming Lei
Signed-off-by: Jens Axboe
Cc: Giuliano Procida
Signed-off-by: Greg Kroah-Hartman

---
 block/blk-mq-tag.c |   14 +++++++++++++-
 block/blk-mq.c     |    4 ++++
 2 files changed, 17 insertions(+), 1 deletion(-)

--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -334,6 +334,18 @@ void blk_mq_queue_tag_busy_iter(struct r
 	struct blk_mq_hw_ctx *hctx;
 	int i;
 
+	/*
+	 * __blk_mq_update_nr_hw_queues will update the nr_hw_queues and
+	 * queue_hw_ctx after freezing the queue. So we could use
+	 * q_usage_counter to avoid race with it. __blk_mq_update_nr_hw_queues
+	 * will use synchronize_rcu to ensure all of the users go out of the
+	 * critical section below and see zeroed q_usage_counter.
+	 */
+	rcu_read_lock();
+	if (percpu_ref_is_zero(&q->q_usage_counter)) {
+		rcu_read_unlock();
+		return;
+	}
 	queue_for_each_hw_ctx(q, hctx, i) {
 		struct blk_mq_tags *tags = hctx->tags;
 
@@ -349,7 +361,7 @@ void blk_mq_queue_tag_busy_iter(struct r
 		bt_for_each(hctx, &tags->breserved_tags, fn, priv, true);
 		bt_for_each(hctx, &tags->bitmap_tags, fn, priv, false);
 	}
-
+	rcu_read_unlock();
 }
 
 static int bt_alloc(struct sbitmap_queue *bt, unsigned int depth,
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2748,6 +2748,10 @@ static void __blk_mq_update_nr_hw_queues
 
 	list_for_each_entry(q, &set->tag_list, tag_set_list)
 		blk_mq_unfreeze_queue(q);
+	/*
+	 * Sync with blk_mq_queue_tag_busy_iter.
+	 */
+	synchronize_rcu();
 }
 
 void blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set, int nr_hw_queues)
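
A minimal user-space sketch of the pairing this patch establishes may help
readers who are not familiar with the blk-mq internals. It is not kernel
code: percpu_ref and RCU are stood in for by C11 atomics and a plain reader
count, and every identifier below (mock_queue, busy_iter,
update_nr_hw_queues, sketch.c) is hypothetical. It only shows the shape of
the scheme (iterators bail out while the "queue" is frozen; the updater
changes nr_hw_queues only while frozen and then waits for iterators that
are already inside), not a memory-ordering-faithful reimplementation of the
kernel primitives.

/* sketch.c: build with "cc -std=c11 sketch.c" */
#include <stdatomic.h>
#include <stdio.h>

#define MAX_HCTX 8

struct mock_queue {
	atomic_int usage_counter;   /* stands in for q->q_usage_counter     */
	atomic_int readers;         /* stands in for RCU read-side sections */
	int nr_hw_queues;           /* changed only while "frozen"          */
	int hctx_busy[MAX_HCTX];    /* stands in for per-hctx tag state     */
};

/* Roughly the reader side: what blk_mq_queue_tag_busy_iter() does
 * after the patch. */
static void busy_iter(struct mock_queue *q)
{
	atomic_fetch_add(&q->readers, 1);           /* "rcu_read_lock()"    */
	if (atomic_load(&q->usage_counter) == 0) {  /* frozen: an update    */
		atomic_fetch_sub(&q->readers, 1);   /* may be in flight     */
		return;
	}
	for (int i = 0; i < q->nr_hw_queues; i++)
		printf("hctx %d busy=%d\n", i, q->hctx_busy[i]);
	atomic_fetch_sub(&q->readers, 1);           /* "rcu_read_unlock()"  */
}

/* Roughly the updater side: what __blk_mq_update_nr_hw_queues() does. */
static void update_nr_hw_queues(struct mock_queue *q, int nr)
{
	atomic_store(&q->usage_counter, 0);  /* "freeze": iterators now bail */
	q->nr_hw_queues = nr;                /* safe to change the layout    */
	atomic_store(&q->usage_counter, 1);  /* "unfreeze"                   */
	/* models synchronize_rcu(): wait for iterators already inside */
	while (atomic_load(&q->readers) != 0)
		;
}

int main(void)
{
	struct mock_queue q = { .usage_counter = 1, .nr_hw_queues = 2 };

	busy_iter(&q);               /* counter is non-zero, iterates       */
	update_nr_hw_queues(&q, 4);  /* freeze, resize, unfreeze, wait      */
	busy_iter(&q);               /* sees the new nr_hw_queues           */
	return 0;
}

In the real kernel the freeze comes from blk_mq_freeze_queue() draining
q_usage_counter to zero and the wait is synchronize_rcu(); that is why the
iterator's percpu_ref_is_zero() check sits inside an rcu_read_lock()
protected section and the updater calls synchronize_rcu() after unfreezing.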