From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Christoph Hellwig, Keith Busch
Subject: [PATCH 4.13 006/110] nvme-pci: propagate (some) errors from host memory buffer setup
Date: Tue, 3 Oct 2017 14:28:28 +0200
Message-Id: <20171003114241.659026699@linuxfoundation.org>
In-Reply-To: <20171003114241.408583531@linuxfoundation.org>
References: <20171003114241.408583531@linuxfoundation.org>

4.13-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Christoph Hellwig

commit 9620cfba97a8b88ae91f0e275e8ff110b578bb6e upstream.

We want to catch command execution errors when resetting the device, so
propagate errors from the Set Features command when setting up the host
memory buffer.  We keep ignoring memory allocation failures, as the spec
clearly says that the controller must work without a host memory buffer.
Signed-off-by: Christoph Hellwig
Reviewed-by: Keith Busch
Signed-off-by: Greg Kroah-Hartman

---
 drivers/nvme/host/pci.c |   18 ++++++++++++------
 1 file changed, 12 insertions(+), 6 deletions(-)

--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -1690,12 +1690,13 @@ static int nvme_alloc_host_mem(struct nv
 	return -ENOMEM;
 }
 
-static void nvme_setup_host_mem(struct nvme_dev *dev)
+static int nvme_setup_host_mem(struct nvme_dev *dev)
 {
 	u64 max = (u64)max_host_mem_size_mb * SZ_1M;
 	u64 preferred = (u64)dev->ctrl.hmpre * 4096;
 	u64 min = (u64)dev->ctrl.hmmin * 4096;
 	u32 enable_bits = NVME_HOST_MEM_ENABLE;
+	int ret = 0;
 
 	preferred = min(preferred, max);
 	if (min > max) {
@@ -1703,7 +1704,7 @@ static void nvme_setup_host_mem(struct n
 			"min host memory (%lld MiB) above limit (%d MiB).\n",
 			min >> ilog2(SZ_1M), max_host_mem_size_mb);
 		nvme_free_host_mem(dev);
-		return;
+		return 0;
 	}
 
 	/*
@@ -1720,7 +1721,7 @@ static void nvme_setup_host_mem(struct n
 		if (nvme_alloc_host_mem(dev, min, preferred)) {
 			dev_warn(dev->ctrl.device,
 				"failed to allocate host memory buffer.\n");
-			return;
+			return 0; /* controller must work without HMB */
 		}
 
 		dev_info(dev->ctrl.device,
@@ -1728,8 +1729,10 @@ static void nvme_setup_host_mem(struct n
 			dev->host_mem_size >> ilog2(SZ_1M));
 	}
 
-	if (nvme_set_host_mem(dev, enable_bits))
+	ret = nvme_set_host_mem(dev, enable_bits);
+	if (ret)
 		nvme_free_host_mem(dev);
+	return ret;
 }
 
 static int nvme_setup_io_queues(struct nvme_dev *dev)
@@ -2173,8 +2176,11 @@ static void nvme_reset_work(struct work_
 			"unable to allocate dma for dbbuf\n");
 	}
 
-	if (dev->ctrl.hmpre)
-		nvme_setup_host_mem(dev);
+	if (dev->ctrl.hmpre) {
+		result = nvme_setup_host_mem(dev);
+		if (result < 0)
+			goto out;
+	}
 
 	result = nvme_setup_io_queues(dev);
 	if (result)