From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: from mail.linuxfoundation.org ([140.211.169.12]:55572 "EHLO
        mail.linuxfoundation.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
        with ESMTP id S1425045AbeCBJDs (ORCPT );
        Fri, 2 Mar 2018 04:03:48 -0500
From: Greg Kroah-Hartman 
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Wen Xiong ,
        "chenxiang (M)" , Mauricio Faria de Oliveira , Ming Lei ,
        Jens Axboe , Sasha Levin 
Subject: [PATCH 4.14 088/115] block: drain queue before waiting for q_usage_counter becoming zero
Date: Fri, 2 Mar 2018 09:51:31 +0100
Message-Id: <20180302084507.412212701@linuxfoundation.org>
In-Reply-To: <20180302084503.856536800@linuxfoundation.org>
References: <20180302084503.856536800@linuxfoundation.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Sender: stable-owner@vger.kernel.org
List-ID: 

4.14-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Ming Lei 

[ Upstream commit 454be724f6f99cc7e7bbf15067128be9868186c6 ]

We now track legacy requests with .q_usage_counter, as of commit
055f6e18e08f ("block: Make q_usage_counter also track legacy requests"),
but that commit never runs and drains the legacy queue before waiting
for this counter to become zero, so an IO hang is triggered when a disk
is pulled during IO.

This patch fixes the issue by draining requests before waiting for
q_usage_counter to become zero. Both Mauricio and chenxiang reported
the issue and confirmed that this patch fixes it.

Link: https://marc.info/?l=linux-block&m=151192424731797&w=2
Fixes: 055f6e18e08f ("block: Make q_usage_counter also track legacy requests")
Cc: Wen Xiong 
Tested-by: "chenxiang (M)" 
Tested-by: Mauricio Faria de Oliveira 
Signed-off-by: Ming Lei 
Signed-off-by: Jens Axboe 
Signed-off-by: Sasha Levin 
Signed-off-by: Greg Kroah-Hartman 
---
 block/blk-core.c |    9 +++++++--
 block/blk-mq.c   |    2 ++
 block/blk.h      |    2 ++
 3 files changed, 11 insertions(+), 2 deletions(-)

--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -531,6 +531,13 @@ static void __blk_drain_queue(struct req
 	}
 }
 
+void blk_drain_queue(struct request_queue *q)
+{
+	spin_lock_irq(q->queue_lock);
+	__blk_drain_queue(q, true);
+	spin_unlock_irq(q->queue_lock);
+}
+
 /**
  * blk_queue_bypass_start - enter queue bypass mode
  * @q: queue of interest
@@ -655,8 +662,6 @@ void blk_cleanup_queue(struct request_qu
 	 */
 	blk_freeze_queue(q);
 	spin_lock_irq(lock);
-	if (!q->mq_ops)
-		__blk_drain_queue(q, true);
 	queue_flag_set(QUEUE_FLAG_DEAD, q);
 	spin_unlock_irq(lock);
 
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -159,6 +159,8 @@ void blk_freeze_queue(struct request_que
 	 * exported to drivers as the only user for unfreeze is blk_mq.
 	 */
 	blk_freeze_queue_start(q);
+	if (!q->mq_ops)
+		blk_drain_queue(q);
 	blk_mq_freeze_queue_wait(q);
 }
 
--- a/block/blk.h
+++ b/block/blk.h
@@ -362,4 +362,6 @@ static inline void blk_queue_bounce(stru
 	}
 #endif /* CONFIG_BOUNCE */
 
+extern void blk_drain_queue(struct request_queue *q);
+
 #endif /* BLK_INTERNAL_H */
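
For anyone reviewing the ordering change in blk_freeze_queue() above, the
sketch below is a minimal userspace model of why a legacy (!q->mq_ops)
queue has to be drained between blk_freeze_queue_start() and
blk_mq_freeze_queue_wait(): queued legacy requests each hold a reference
on q_usage_counter, so the wait can only finish once those requests have
been dispatched and completed. It is only an illustration built from the
patch description; the file name, function names, plain-int counter and
busy-wait are simplifications, not kernel code.

/* drain_before_wait.c - toy model of the freeze/drain ordering.
 * Hypothetical names; the real queue uses a percpu refcount and sleeps
 * instead of busy-waiting.
 */
#include <stdio.h>

static int usage_counter;   /* stands in for q_usage_counter          */
static int queued_requests; /* legacy requests still sitting on queue */

/* stands in for blk_freeze_queue_start(): new requests are refused
 * from here on (a no-op in this toy model) */
static void freeze_start(void) { }

/* stands in for blk_drain_queue()/__blk_drain_queue(): dispatch and
 * complete everything already queued, dropping one reference each */
static void drain_queue(void)
{
	while (queued_requests > 0) {
		queued_requests--;
		usage_counter--;
	}
}

/* stands in for blk_mq_freeze_queue_wait(): without the drain above,
 * queued legacy requests keep the counter above zero forever and this
 * loop never exits - the IO hang the patch fixes */
static void freeze_wait(void)
{
	while (usage_counter > 0)
		;
}

int main(void)
{
	queued_requests = 3;
	usage_counter = queued_requests;

	freeze_start();
	drain_queue();  /* the step the patch adds for !q->mq_ops   */
	freeze_wait();  /* now returns: the counter reached zero    */

	printf("queue frozen, q_usage_counter = %d\n", usage_counter);
	return 0;
}

Dropping the drain_queue() call reproduces the pre-patch behaviour in
this model: freeze_wait() spins forever, which corresponds to the hang
seen when pulling a disk during IO.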