From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on
	aws-us-west-2-korg-lkml-1.web.codeaurora.org
Received: from vger.kernel.org (vger.kernel.org [23.128.96.18])
	by smtp.lore.kernel.org (Postfix) with ESMTP id 6F975C433F5
	for ; Sat,  2 Apr 2022 14:32:09 +0000 (UTC)
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S239878AbiDBOd7 (ORCPT );
	Sat, 2 Apr 2022 10:33:59 -0400
Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40104 "EHLO
	lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK)
	by vger.kernel.org with ESMTP id S1356507AbiDBOc7 (ORCPT );
	Sat, 2 Apr 2022 10:32:59 -0400
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
	by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B8D5D4C420
	for ; Sat,  2 Apr 2022 07:31:07 -0700 (PDT)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by dfw.source.kernel.org (Postfix) with ESMTPS id 56AB5615C4
	for ; Sat,  2 Apr 2022 14:31:07 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 6B87FC340EC;
	Sat,  2 Apr 2022 14:31:06 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org;
	s=korg; t=1648909866;
	bh=rsgAac8e8bbUxshF7Oiajn+JUEYSZS7ELLNe8SSbuSk=;
	h=Subject:To:Cc:From:Date:From;
	b=ZXUiFvX1yZouO+5br6/2AOABOTx0cDlO6fDsubXMJnhVhFk6pL6I/VUJxvTDkbZg+
	 kczUzeW8A/HTxDo0/Yq+x9Ai4tStqGPUdpVFOjUnTE7f1lgEDlIKYhkCL6S+/yhYc5
	 F75OUPLNQdlpdypK5jddCuPoUR0ByNaXzQgf6Gy4=
Subject: FAILED: patch "[PATCH] scsi: qla2xxx: Fix loss of NVMe namespaces after driver" failed to apply to 5.17-stable tree
To: aeasi@marvell.com, himanshu.madhani@oracle.com, martin.petersen@oracle.com,
	mpatalan@redhat.com, njavali@marvell.com
Cc: 
From: 
Date: Sat, 02 Apr 2022 16:30:06 +0200
Message-ID: <1648909806128211@kroah.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=ANSI_X3.4-1968
Content-Transfer-Encoding: 8bit
Precedence: bulk
List-ID: 
X-Mailing-List: stable@vger.kernel.org

The patch below does not apply to the 5.17-stable tree.
If someone wants it applied there, or to any other stable or
longterm tree, then please email the backport, including the original
git commit id to .

thanks,

greg k-h

------------------ original commit in Linus's tree ------------------

>From db212f2eb3fb7f546366777e93c8f54614d39269 Mon Sep 17 00:00:00 2001
From: Arun Easi
Date: Thu, 10 Mar 2022 01:25:54 -0800
Subject: [PATCH] scsi: qla2xxx: Fix loss of NVMe namespaces after driver
 reload test

Driver registration of localport can race when it happens at the
remote port discovery time. Fix this by calling the registration
under a mutex.

Link: https://lore.kernel.org/r/20220310092604.22950-4-njavali@marvell.com
Fixes: e84067d74301 ("scsi: qla2xxx: Add FC-NVMe F/W initialization and transport registration")
Cc: stable@vger.kernel.org
Reported-by: Marco Patalano
Tested-by: Marco Patalano
Reviewed-by: Himanshu Madhani
Signed-off-by: Arun Easi
Signed-off-by: Nilesh Javali
Signed-off-by: Martin K. Petersen

diff --git a/drivers/scsi/qla2xxx/qla_nvme.c b/drivers/scsi/qla2xxx/qla_nvme.c
index 5723082d94d6..3bf5cbd754a7 100644
--- a/drivers/scsi/qla2xxx/qla_nvme.c
+++ b/drivers/scsi/qla2xxx/qla_nvme.c
@@ -782,8 +782,6 @@ int qla_nvme_register_hba(struct scsi_qla_host *vha)
 	ha = vha->hw;
 	tmpl = &qla_nvme_fc_transport;
 
-	WARN_ON(vha->nvme_local_port);
-
 	if (ql2xnvme_queues < MIN_NVME_HW_QUEUES || ql2xnvme_queues > MAX_NVME_HW_QUEUES) {
 		ql_log(ql_log_warn, vha, 0xfffd,
 		    "ql2xnvme_queues=%d is out of range(MIN:%d - MAX:%d). Resetting ql2xnvme_queues to:%d\n",
@@ -797,7 +795,7 @@ int qla_nvme_register_hba(struct scsi_qla_host *vha)
 	    (uint8_t)(ha->max_qpairs ? ha->max_qpairs : 1));
 
 	ql_log(ql_log_info, vha, 0xfffb,
-	     "Number of NVME queues used for this port: %d\n",
+	    "Number of NVME queues used for this port: %d\n",
 	    qla_nvme_fc_transport.max_hw_queues);
 
 	pinfo.node_name = wwn_to_u64(vha->node_name);
@@ -805,13 +803,25 @@ int qla_nvme_register_hba(struct scsi_qla_host *vha)
 	pinfo.port_role = FC_PORT_ROLE_NVME_INITIATOR;
 	pinfo.port_id = vha->d_id.b24;
 
-	ql_log(ql_log_info, vha, 0xffff,
-	    "register_localport: host-traddr=nn-0x%llx:pn-0x%llx on portID:%x\n",
-	    pinfo.node_name, pinfo.port_name, pinfo.port_id);
-	qla_nvme_fc_transport.dma_boundary = vha->host->dma_boundary;
-
-	ret = nvme_fc_register_localport(&pinfo, tmpl,
-	    get_device(&ha->pdev->dev), &vha->nvme_local_port);
+	mutex_lock(&ha->vport_lock);
+	/*
+	 * Check again for nvme_local_port to see if any other thread raced
+	 * with this one and finished registration.
+	 */
+	if (!vha->nvme_local_port) {
+		ql_log(ql_log_info, vha, 0xffff,
+		    "register_localport: host-traddr=nn-0x%llx:pn-0x%llx on portID:%x\n",
+		    pinfo.node_name, pinfo.port_name, pinfo.port_id);
+		qla_nvme_fc_transport.dma_boundary = vha->host->dma_boundary;
+
+		ret = nvme_fc_register_localport(&pinfo, tmpl,
+		    get_device(&ha->pdev->dev),
+		    &vha->nvme_local_port);
+		mutex_unlock(&ha->vport_lock);
+	} else {
+		mutex_unlock(&ha->vport_lock);
+		return 0;
+	}
 	if (ret) {
 		ql_log(ql_log_warn, vha, 0xffff,
 		    "register_localport failed: ret=%x\n", ret);
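
The race the commit message describes, and the mutex_lock()/re-check fix in the
last hunk above, are the usual "check again under the lock" pattern. Below is a
minimal user-space sketch of that pattern only; it uses a pthread mutex and
made-up names (register_localport, local_port, register_hba), not the qla2xxx
driver's actual code or the kernel API.

/*
 * Sketch: two threads race to register a local port; the re-check under
 * the mutex guarantees the registration happens exactly once.
 * All names here are illustrative, not the driver's symbols.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_mutex_t vport_lock = PTHREAD_MUTEX_INITIALIZER;
static void *local_port;	/* NULL until a registration succeeds */

/* Stand-in for the transport's register call: allocate a dummy port. */
static int register_localport(void **port)
{
	*port = malloc(1);
	return *port ? 0 : -1;
}

static int register_hba(void)
{
	int ret = 0;

	pthread_mutex_lock(&vport_lock);
	/*
	 * Check again under the lock: another thread may have raced with
	 * this one and already finished the registration.
	 */
	if (!local_port)
		ret = register_localport(&local_port);
	pthread_mutex_unlock(&vport_lock);

	return ret;
}

static void *worker(void *arg)
{
	(void)arg;
	if (register_hba())
		fprintf(stderr, "registration failed\n");
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	/* Both threads call register_hba(); only one registers the port. */
	pthread_create(&t1, NULL, worker, NULL);
	pthread_create(&t2, NULL, worker, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);

	printf("local_port registered once: %p\n", local_port);
	free(local_port);
	return 0;
}

Without the re-check inside the locked region (the driver's original
WARN_ON()-then-register sequence), the second caller could register a duplicate
localport and the namespaces attached to the first one would be lost on reload,
which is what the patch prevents.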