From mboxrd@z Thu Jan  1 00:00:00 1970
From: Sasha Levin
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Keith Busch, Sagi Grimberg, Christoph Hellwig, Jens Axboe,
	Sasha Levin, linux-nvme@lists.infradead.org
Subject: [PATCH AUTOSEL 5.5 58/58] nvme/pci: move cqe check after device shutdown
Date: Sat, 22 Feb 2020 21:21:19 -0500
Message-Id: <20200223022119.707-58-sashal@kernel.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200223022119.707-1-sashal@kernel.org>
References: <20200223022119.707-1-sashal@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-stable: review
X-Patchwork-Hint: Ignore
Sender: stable-owner@vger.kernel.org
Precedence: bulk
X-Mailing-List: stable@vger.kernel.org

From: Keith Busch

[ Upstream commit fa46c6fb5d61b1f17b06d7c6ef75478b576304c7 ]

Many users have reported nvme triggered irq_startup() warnings during
shutdown. The driver uses the nvme queue's irq to synchronize scanning
for completions, and enabling an interrupt affined to only offline CPUs
triggers the alarming warning.

Move the final CQE check to after disabling the device and all
registered interrupts have been torn down so that we do not have any
IRQ to synchronize.
Link: https://bugzilla.kernel.org/show_bug.cgi?id=206509
Reviewed-by: Sagi Grimberg
Reviewed-by: Christoph Hellwig
Signed-off-by: Keith Busch
Signed-off-by: Jens Axboe
Signed-off-by: Sasha Levin
---
 drivers/nvme/host/pci.c | 23 ++++++++++++++++++-----
 1 file changed, 18 insertions(+), 5 deletions(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 365a2ddbeaa76..094c5924a6835 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -1407,6 +1407,23 @@ static void nvme_disable_admin_queue(struct nvme_dev *dev, bool shutdown)
 	nvme_poll_irqdisable(nvmeq, -1);
 }
 
+/*
+ * Called only on a device that has been disabled and after all other threads
+ * that can check this device's completion queues have synced. This is the
+ * last chance for the driver to see a natural completion before
+ * nvme_cancel_request() terminates all incomplete requests.
+ */
+static void nvme_reap_pending_cqes(struct nvme_dev *dev)
+{
+	u16 start, end;
+	int i;
+
+	for (i = dev->ctrl.queue_count - 1; i > 0; i--) {
+		nvme_process_cq(&dev->queues[i], &start, &end, -1);
+		nvme_complete_cqes(&dev->queues[i], start, end);
+	}
+}
+
 static int nvme_cmb_qdepth(struct nvme_dev *dev, int nr_io_queues,
 				int entry_size)
 {
@@ -2242,11 +2259,6 @@ static bool __nvme_disable_io_queues(struct nvme_dev *dev, u8 opcode)
 		if (timeout == 0)
 			return false;
 
-		/* handle any remaining CQEs */
-		if (opcode == nvme_admin_delete_cq &&
-		    !test_bit(NVMEQ_DELETE_ERROR, &nvmeq->flags))
-			nvme_poll_irqdisable(nvmeq, -1);
-
 		sent--;
 		if (nr_queues)
 			goto retry;
@@ -2435,6 +2447,7 @@ static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown)
 	nvme_suspend_io_queues(dev);
 	nvme_suspend_queue(&dev->queues[0]);
 	nvme_pci_disable(dev);
+	nvme_reap_pending_cqes(dev);
 
 	blk_mq_tagset_busy_iter(&dev->tagset, nvme_cancel_request, &dev->ctrl);
 	blk_mq_tagset_busy_iter(&dev->admin_tagset, nvme_cancel_request, &dev->ctrl);
-- 
2.20.1