From: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
To: stable@vger.kernel.org
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	patches@lists.linux.dev,
	Ming Lei <ming.lei@redhat.com>,
	Chaitanya Kulkarni <kch@nvidia.com>,
	Christoph Hellwig <hch@lst.de>,
	Sasha Levin <sashal@kernel.org>
Subject: [PATCH 5.15 038/115] nvme: fix handling single range discard request
Date: Mon, 20 Mar 2023 15:54:10 +0100
Message-Id: <20230320145451.062526339@linuxfoundation.org>
X-Mailer: git-send-email 2.40.0
In-Reply-To: <20230320145449.336983711@linuxfoundation.org>
References: <20230320145449.336983711@linuxfoundation.org>
User-Agent: quilt/0.67
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Ming Lei <ming.lei@redhat.com>

[ Upstream commit 37f0dc2ec78af0c3f35dd05578763de059f6fe77 ]

While investigating a customer report about a warning in
nvme_setup_discard, we observed that the controller (nvme/tcp) actually
exposes queue_max_discard_segments(req->q) == 1.

The current code can't handle this situation: contiguous discard bios
are merged into one request, as they are for normal read/write
requests, so the per-bio loop can build more ranges than the single
segment the device advertises.

Fix the issue by building the range directly from the request's
sector/nr_sectors.

Fixes: b35ba01ea697 ("nvme: support ranged discard requests")
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/nvme/host/core.c | 28 +++++++++++++++++++---------
 1 file changed, 19 insertions(+), 9 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 06750f3d52745..ef9d7a795b007 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -853,16 +853,26 @@ static blk_status_t nvme_setup_discard(struct nvme_ns *ns, struct request *req,
 		range = page_address(ns->ctrl->discard_page);
 	}
 
-	__rq_for_each_bio(bio, req) {
-		u64 slba = nvme_sect_to_lba(ns, bio->bi_iter.bi_sector);
-		u32 nlb = bio->bi_iter.bi_size >> ns->lba_shift;
-
-		if (n < segments) {
-			range[n].cattr = cpu_to_le32(0);
-			range[n].nlb = cpu_to_le32(nlb);
-			range[n].slba = cpu_to_le64(slba);
+	if (queue_max_discard_segments(req->q) == 1) {
+		u64 slba = nvme_sect_to_lba(ns, blk_rq_pos(req));
+		u32 nlb = blk_rq_sectors(req) >> (ns->lba_shift - 9);
+
+		range[0].cattr = cpu_to_le32(0);
+		range[0].nlb = cpu_to_le32(nlb);
+		range[0].slba = cpu_to_le64(slba);
+		n = 1;
+	} else {
+		__rq_for_each_bio(bio, req) {
+			u64 slba = nvme_sect_to_lba(ns, bio->bi_iter.bi_sector);
+			u32 nlb = bio->bi_iter.bi_size >> ns->lba_shift;
+
+			if (n < segments) {
+				range[n].cattr = cpu_to_le32(0);
+				range[n].nlb = cpu_to_le32(nlb);
+				range[n].slba = cpu_to_le64(slba);
+			}
+			n++;
 		}
-		n++;
 	}
 
 	if (WARN_ON_ONCE(n != segments)) {
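
For reference, below is a minimal userspace sketch of the sector-to-LBA
conversion performed by the new single-segment branch. blk_rq_pos() and
blk_rq_sectors() both count 512-byte block-layer sectors, so shifting
right by (ns->lba_shift - 9) converts them into logical blocks. The
starting sector, request size, and 4 KiB LBA format below are assumed
example values, and sect_to_lba() is only a stand-in for the kernel's
nvme_sect_to_lba(); none of this is part of the patch itself.

/*
 * Userspace sketch, not kernel code: check the slba/nlb arithmetic of
 * the single-segment discard path with assumed example values.
 *
 * Build: cc -o discard_math discard_math.c && ./discard_math
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define SECTOR_SHIFT 9	/* block-layer sectors are always 512 bytes */

/* Stand-in for nvme_sect_to_lba(): 512-byte sectors -> logical blocks. */
static uint64_t sect_to_lba(uint64_t sector, unsigned int lba_shift)
{
	return sector >> (lba_shift - SECTOR_SHIFT);
}

int main(void)
{
	unsigned int lba_shift = 12;	/* assumed: 4 KiB logical blocks */
	uint64_t rq_pos = 2048;		/* like blk_rq_pos(): 512-byte units */
	uint32_t rq_sectors = 16384;	/* like blk_rq_sectors(): 8 MiB / 512 */

	/* What the single-segment branch of the fix computes: */
	uint64_t slba = sect_to_lba(rq_pos, lba_shift);
	uint32_t nlb = rq_sectors >> (lba_shift - SECTOR_SHIFT);

	/* Prints "slba=256 nlb=2048": 8 MiB is 2048 4-KiB blocks. */
	printf("slba=%" PRIu64 " nlb=%u\n", slba, nlb);
	return 0;
}

With such a merged request, the old per-bio loop would have emitted one
range per merged bio while segments was capped at 1, tripping the
WARN_ON_ONCE(n != segments) check at the end of the hunk above;
building the single range from blk_rq_pos()/blk_rq_sectors() avoids
iterating the merged bios at all.

-- 
2.39.2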