From: Caleb Sander Mateos <csander@purestorage.com>
To: Keith Busch, Jens Axboe, Christoph Hellwig, Sagi Grimberg, Chaitanya Kulkarni
Cc: linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org, Caleb Sander Mateos
Subject: [PATCH v2 5/7] nvme: set discard_granularity from NPDG/NPDA
Date: Fri, 20 Feb 2026 20:33:00 -0700
Message-ID: <20260221033302.1451669-6-csander@purestorage.com>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <20260221033302.1451669-1-csander@purestorage.com>
References: <20260221033302.1451669-1-csander@purestorage.com>

Currently, nvme_update_disk_info() always sets the discard_granularity
queue limit to the logical block size. However, NVMe namespaces can
advertise a larger preferred discard granularity in the NPDG and NPDA
fields of the Identify Namespace data structure, or in the NPDGL and
NPDAL fields of the I/O Command Set specific Identify Namespace data
structure. Use these fields to compute the discard_granularity limit.

The logic is somewhat involved. First, the fields are optional: NPDG is
only reported if the low bit of OPTPERF is set in NSFEAT, NPDA is
reported if any bit of OPTPERF is set, and NPDGL and NPDAL are only
reported if the high bit of OPTPERF is set. NPDGL and NPDAL can also
each be set to 0 to opt out of reporting a limit, and older NVMe
controllers may not support the I/O Command Set specific Identify
Namespace data structure at all.

Another complication is that multiple values may be reported among
NPDG, NPDGL, NPDA, and NPDAL. The spec says to prefer the values
reported in the L variants. It also says NPDG should be a multiple of
NPDA and NPDGL should be a multiple of NPDAL, but it doesn't specify a
relationship between NPDG and NPDAL or between NPDGL and NPDA. So use
the maximum of the reported NPDG(L) and NPDA(L) values as the
discard_granularity.
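To make the selection concrete, here is a small standalone userspace
sketch of the same decision tree. This is illustrative only: the struct,
its fields, and the example values are hypothetical stand-ins for the
Identify fields (assumed already converted to host endianness), not the
kernel implementation.

/*
 * Sketch of the discard_granularity selection described above.
 * Not kernel code; all names and values here are illustrative.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct ns_fields {
	uint8_t optperf;   /* bit 0: NPDG/NPDA valid, bit 1: NPDGL/NPDAL valid */
	uint16_t npdg;     /* 0's based, in logical blocks */
	uint16_t npda;     /* 0's based, in logical blocks */
	bool has_nvm_id;   /* NVM Command Set Identify structure returned */
	uint32_t npdgl;    /* in logical blocks, 0 = not reported */
	uint32_t npdal;    /* in logical blocks, 0 = not reported */
};

static uint32_t discard_granularity(const struct ns_fields *f, uint32_t lbs)
{
	uint32_t npdg = 1, npda = 1;

	/* Prefer the L variants when reported; fall back to NPDG/NPDA. */
	if ((f->optperf & 0x2) && f->has_nvm_id && f->npdgl)
		npdg = f->npdgl;
	else if (f->optperf & 0x1)
		npdg = (uint32_t)f->npdg + 1;	/* convert from 0's based */
	if ((f->optperf & 0x2) && f->has_nvm_id && f->npdal)
		npda = f->npdal;
	else if (f->optperf)
		npda = (uint32_t)f->npda + 1;

	/* No ordering is guaranteed between the two, so take the max. */
	return (npdg > npda ? npdg : npda) * lbs;
}

int main(void)
{
	/*
	 * Example: NPDGL is 0 (not reported), so NPDG is used
	 * (0's based 7 -> 8 blocks); NPDAL reports 16 blocks. With
	 * 4096-byte blocks: max(8, 16) * 4096 = 65536 bytes.
	 */
	struct ns_fields f = {
		.optperf = 0x3, .npdg = 7, .npda = 3,
		.has_nvm_id = true, .npdgl = 0, .npdal = 16,
	};

	printf("discard_granularity = %u bytes\n",
	       (unsigned)discard_granularity(&f, 4096));
	return 0;
}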
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
---
 drivers/nvme/host/core.c | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 2b433478f328..35309dec1334 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -2055,16 +2055,17 @@ static void nvme_set_ctrl_limits(struct nvme_ctrl *ctrl,
 	lim->max_segment_size = UINT_MAX;
 	lim->dma_alignment = 3;
 }
 
 static bool nvme_update_disk_info(struct nvme_ns *ns, struct nvme_id_ns *id,
-		struct queue_limits *lim)
+		struct nvme_id_ns_nvm *nvm, struct queue_limits *lim)
 {
 	struct nvme_ns_head *head = ns->head;
 	struct nvme_ctrl *ctrl = ns->ctrl;
 	u32 bs = 1U << head->lba_shift;
 	u32 atomic_bs, phys_bs, io_opt = 0;
+	u32 npdg = 1, npda = 1;
 	bool valid = true;
 	u8 optperf;
 
 	/*
 	 * The block layer can't support LBA sizes larger than the page size
@@ -2113,11 +2114,19 @@ static bool nvme_update_disk_info(struct nvme_ns *ns, struct nvme_id_ns *id,
 	else if (ctrl->oncs & NVME_CTRL_ONCS_DSM)
 		lim->max_hw_discard_sectors = UINT_MAX;
 	else
 		lim->max_hw_discard_sectors = 0;
 
-	lim->discard_granularity = lim->logical_block_size;
+	if (optperf & 0x2 && nvm && nvm->npdgl)
+		npdg = le32_to_cpu(nvm->npdgl);
+	else if (optperf & 0x1)
+		npdg = (u32)le16_to_cpu(id->npdg) + 1;
+	if (optperf & 0x2 && nvm && nvm->npdal)
+		npda = le32_to_cpu(nvm->npdal);
+	else if (optperf)
+		npda = (u32)le16_to_cpu(id->npda) + 1;
+	lim->discard_granularity = max(npdg, npda) * lim->logical_block_size;
 
 	if (ctrl->dmrl)
 		lim->max_discard_segments = ctrl->dmrl;
 	else
 		lim->max_discard_segments = NVME_DSM_MAX_RANGES;
@@ -2380,11 +2389,11 @@ static int nvme_update_ns_info_block(struct nvme_ns *ns,
 	ns->head->nuse = le64_to_cpu(id->nuse);
 	capacity = nvme_lba_to_sect(ns->head, le64_to_cpu(id->nsze));
 	nvme_set_ctrl_limits(ns->ctrl, &lim, false);
 	nvme_configure_metadata(ns->ctrl, ns->head, id, nvm, info);
 	nvme_set_chunk_sectors(ns, id, &lim);
-	if (!nvme_update_disk_info(ns, id, &lim))
+	if (!nvme_update_disk_info(ns, id, nvm, &lim))
 		capacity = 0;
 
 	if (IS_ENABLED(CONFIG_BLK_DEV_ZONED) &&
 	    ns->head->ids.csi == NVME_CSI_ZNS)
 		nvme_update_zone_info(ns, &lim, &zi);
-- 
2.45.2
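For testing, the computed limit is visible to userspace through the
block layer's queue sysfs attributes, e.g. (device name and value here
are hypothetical, depending on the namespace's reported fields):

  $ cat /sys/block/nvme0n1/queue/discard_granularity
  65536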