From mboxrd@z Thu Jan 1 00:00:00 1970
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Sahitya Tummala,
    Sarthak Garg, Adrian Hunter, Ulf Hansson, Sasha Levin
Subject: [PATCH 5.4 069/147] mmc: core: Fix recursive locking issue in CQE recovery path
Date: Mon, 18 May 2020 19:36:32 +0200
Message-Id: <20200518173522.620037806@linuxfoundation.org>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200518173513.009514388@linuxfoundation.org>
References: <20200518173513.009514388@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Sender: stable-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: stable@vger.kernel.org

From: Sarthak Garg

[ Upstream commit 39a22f73744d5baee30b5f134ae2e30b668b66ed ]

Consider the following stack trace:

-001|raw_spin_lock_irqsave
-002|mmc_blk_cqe_complete_rq
-003|__blk_mq_complete_request(inline)
-003|blk_mq_complete_request(rq)
-004|mmc_cqe_timed_out(inline)
-004|mmc_mq_timed_out

mmc_mq_timed_out() acquires the queue lock first. The
mmc_blk_cqe_complete_rq() function then tries to acquire the same queue
lock, which results in recursive locking: the task spins on a lock it
has already acquired, eventually triggering a watchdog bark.

Fix this issue by holding the lock only for the required critical
section.
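For illustration only (not part of the patch): the sketch below models the
deadlock shape in userspace, with an error-checking pthread mutex standing in
for the mq->lock spinlock. The demo_* names are hypothetical and simply mirror
the call chain in the trace above, assuming the pre-fix flow where the timeout
handler still holds the lock when the completion path runs.

/*
 * Userspace analogue of the recursive-locking problem described above.
 * An error-checking mutex stands in for the mq->lock spinlock, so the
 * second lock attempt reports EDEADLK instead of spinning forever.
 * All demo_* names are hypothetical; this is not kernel code.
 */
#include <errno.h>
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t queue_lock;

/* Plays the role of mmc_blk_cqe_complete_rq(): it wants queue_lock. */
static void demo_complete_rq(void)
{
	int err = pthread_mutex_lock(&queue_lock);

	if (err == EDEADLK) {
		printf("recursive lock: completion path re-acquires queue_lock\n");
		return;
	}
	/* ... complete the request ... */
	pthread_mutex_unlock(&queue_lock);
}

/* Plays the role of the pre-fix mmc_mq_timed_out(): it calls into the
 * completion path while still holding queue_lock. */
static void demo_timed_out(void)
{
	pthread_mutex_lock(&queue_lock);
	demo_complete_rq();	/* the same lock is already held */
	pthread_mutex_unlock(&queue_lock);
}

int main(void)
{
	pthread_mutexattr_t attr;

	pthread_mutexattr_init(&attr);
	pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
	pthread_mutex_init(&queue_lock, &attr);

	demo_timed_out();
	return 0;
}

Built with "cc -pthread", the program prints the recursive-lock message; a
kernel spinlock has no such error return, so the CPU simply spins on the lock
until the watchdog fires.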
Cc:
Fixes: 1e8e55b67030 ("mmc: block: Add CQE support")
Suggested-by: Sahitya Tummala
Signed-off-by: Sarthak Garg
Acked-by: Adrian Hunter
Link: https://lore.kernel.org/r/1588868135-31783-1-git-send-email-vbadigan@codeaurora.org
Signed-off-by: Ulf Hansson
Signed-off-by: Sasha Levin
---
 drivers/mmc/core/queue.c | 13 ++++---------
 1 file changed, 4 insertions(+), 9 deletions(-)

diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index 9edc08685e86d..4d1e468d39823 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -107,7 +107,7 @@ static enum blk_eh_timer_return mmc_cqe_timed_out(struct request *req)
 	case MMC_ISSUE_DCMD:
 		if (host->cqe_ops->cqe_timeout(host, mrq, &recovery_needed)) {
 			if (recovery_needed)
-				__mmc_cqe_recovery_notifier(mq);
+				mmc_cqe_recovery_notifier(mrq);
 			return BLK_EH_RESET_TIMER;
 		}
 		/* No timeout (XXX: huh? comment doesn't make much sense) */
@@ -125,18 +125,13 @@ static enum blk_eh_timer_return mmc_mq_timed_out(struct request *req,
 	struct request_queue *q = req->q;
 	struct mmc_queue *mq = q->queuedata;
 	unsigned long flags;
-	int ret;
+	bool ignore_tout;
 
 	spin_lock_irqsave(&mq->lock, flags);
-
-	if (mq->recovery_needed || !mq->use_cqe)
-		ret = BLK_EH_RESET_TIMER;
-	else
-		ret = mmc_cqe_timed_out(req);
-
+	ignore_tout = mq->recovery_needed || !mq->use_cqe;
 	spin_unlock_irqrestore(&mq->lock, flags);
 
-	return ret;
+	return ignore_tout ? BLK_EH_RESET_TIMER : mmc_cqe_timed_out(req);
 }
 
 static void mmc_mq_recovery_handler(struct work_struct *work)
-- 
2.20.1
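As a usage note on the pattern the fix applies, here is the same illustrative
userspace sketch after the change (hypothetical demo_* names, not the kernel
code): the lock is held only long enough to sample the state, and the path
that may take the lock again runs after the unlock, mirroring the ignore_tout
logic in the hunk above.

/*
 * Userspace sketch of the fixed flow: hold the lock only to sample the
 * state, drop it, and only then call the path that may take the lock
 * again.  All demo_* names are hypothetical; this is not kernel code.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
static bool recovery_needed;		/* stands in for mq->recovery_needed */
static bool use_cqe = true;		/* stands in for mq->use_cqe */

static const char *demo_cqe_timed_out(void)
{
	/* Free to take queue_lock itself now that the caller dropped it. */
	pthread_mutex_lock(&queue_lock);
	pthread_mutex_unlock(&queue_lock);
	return "handle CQE timeout";
}

static const char *demo_timed_out_fixed(void)
{
	bool ignore_tout;

	pthread_mutex_lock(&queue_lock);
	ignore_tout = recovery_needed || !use_cqe;	/* the only critical section */
	pthread_mutex_unlock(&queue_lock);

	return ignore_tout ? "reset timer" : demo_cqe_timed_out();
}

int main(void)
{
	printf("%s\n", demo_timed_out_fixed());
	return 0;
}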