From mboxrd@z Thu Jan 1 00:00:00 1970
From: Sasha Levin
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Daniel Wagner, Christoph Hellwig, Keith Busch, Sasha Levin,
	james.smart@broadcom.com, sagi@grimberg.me, kch@nvidia.com,
	linux-nvme@lists.infradead.org
Subject: [PATCH AUTOSEL 6.7 39/44] nvmet-fc: avoid deadlock on delete association path
Date: Wed, 7 Feb 2024 16:21:06 -0500
Message-ID: <20240207212142.1399-39-sashal@kernel.org>
In-Reply-To: <20240207212142.1399-1-sashal@kernel.org>
References: <20240207212142.1399-1-sashal@kernel.org>
X-Mailer: git-send-email 2.43.0
X-stable: review
X-stable-base: Linux 6.7.4

From: Daniel Wagner

[ Upstream commit 710c69dbaccdac312e32931abcb8499c1525d397 ]

When deleting an association, the shutdown path deadlocks because we try
to flush the nvmet_wq nested. Avoid this deadlock by deferring the put
work into its own work item.
Reviewed-by: Christoph Hellwig
Signed-off-by: Daniel Wagner
Signed-off-by: Keith Busch
Signed-off-by: Sasha Levin
---
 drivers/nvme/target/fc.c | 16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
index 1eda4fa0ae06..4793d1c64401 100644
--- a/drivers/nvme/target/fc.c
+++ b/drivers/nvme/target/fc.c
@@ -111,6 +111,8 @@ struct nvmet_fc_tgtport {
 	struct nvmet_fc_port_entry	*pe;
 	struct kref			ref;
 	u32				max_sg_cnt;
+
+	struct work_struct		put_work;
 };
 
 struct nvmet_fc_port_entry {
@@ -247,6 +249,13 @@ static int nvmet_fc_tgt_a_get(struct nvmet_fc_tgt_assoc *assoc);
 static void nvmet_fc_tgt_q_put(struct nvmet_fc_tgt_queue *queue);
 static int nvmet_fc_tgt_q_get(struct nvmet_fc_tgt_queue *queue);
 static void nvmet_fc_tgtport_put(struct nvmet_fc_tgtport *tgtport);
+static void nvmet_fc_put_tgtport_work(struct work_struct *work)
+{
+	struct nvmet_fc_tgtport *tgtport =
+		container_of(work, struct nvmet_fc_tgtport, put_work);
+
+	nvmet_fc_tgtport_put(tgtport);
+}
 static int nvmet_fc_tgtport_get(struct nvmet_fc_tgtport *tgtport);
 static void nvmet_fc_handle_fcp_rqst(struct nvmet_fc_tgtport *tgtport,
 					struct nvmet_fc_fcp_iod *fod);
@@ -358,7 +367,7 @@ __nvmet_fc_finish_ls_req(struct nvmet_fc_ls_req_op *lsop)
 
 	if (!lsop->req_queued) {
 		spin_unlock_irqrestore(&tgtport->lock, flags);
-		goto out_puttgtport;
+		goto out_putwork;
 	}
 
 	list_del(&lsop->lsreq_list);
@@ -371,8 +380,8 @@ __nvmet_fc_finish_ls_req(struct nvmet_fc_ls_req_op *lsop)
 				  (lsreq->rqstlen + lsreq->rsplen),
 				  DMA_BIDIRECTIONAL);
 
-out_puttgtport:
-	nvmet_fc_tgtport_put(tgtport);
+out_putwork:
+	queue_work(nvmet_wq, &tgtport->put_work);
 }
 
 static int
@@ -1397,6 +1406,7 @@ nvmet_fc_register_targetport(struct nvmet_fc_port_info *pinfo,
 	kref_init(&newrec->ref);
 	ida_init(&newrec->assoc_cnt);
 	newrec->max_sg_cnt = template->max_sgl_segments;
+	INIT_WORK(&newrec->put_work, nvmet_fc_put_tgtport_work);
 
 	ret = nvmet_fc_alloc_ls_iodlist(newrec);
 	if (ret) {
-- 
2.43.0