From: Max Gurtovoy <mgurtovoy@nvidia.com>
To: Christoph Hellwig <hch@lst.de>, Hector Martin <marcan@marcan.st>,
Sven Peter <sven@svenpeter.dev>, Keith Busch <kbusch@kernel.org>,
Sagi Grimberg <sagi@grimberg.me>,
James Smart <james.smart@broadcom.com>,
Chaitanya Kulkarni <kch@nvidia.com>
Cc: Alyssa Rosenzweig <alyssa@rosenzweig.io>,
asahi@lists.linux.dev, linux-nvme@lists.infradead.org
Subject: Re: [PATCH 03/21] nvme: set max_hw_sectors unconditionally
Date: Thu, 29 Feb 2024 12:46:31 +0200
Message-ID: <ccd5efc1-c06a-4971-8efa-e7d59b859966@nvidia.com>
In-Reply-To: <20240228181215.873854-4-hch@lst.de>
On 28/02/2024 20:11, Christoph Hellwig wrote:
> All transports set a max_hw_sectors value in the nvme_ctrl, so make
> the code using it unconditional and clean it up using a little helper.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
> drivers/nvme/host/core.c | 16 ++++++++--------
> 1 file changed, 8 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index eed3e22e24d913..74cd384ca5fc73 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -1944,19 +1944,19 @@ static int nvme_configure_metadata(struct nvme_ctrl *ctrl,
> return 0;
> }
>
> +static u32 nvme_max_drv_segments(struct nvme_ctrl *ctrl)
> +{
> + return ctrl->max_hw_sectors / (NVME_CTRL_PAGE_SIZE >> SECTOR_SHIFT) + 1;
> +}
> +
> static void nvme_set_queue_limits(struct nvme_ctrl *ctrl,
> struct request_queue *q)
> {
> bool vwc = ctrl->vwc & NVME_CTRL_VWC_PRESENT;
>
> - if (ctrl->max_hw_sectors) {
> - u32 max_segments =
> - (ctrl->max_hw_sectors / (NVME_CTRL_PAGE_SIZE >> 9)) + 1;
> -
> - max_segments = min_not_zero(max_segments, ctrl->max_segments);
> - blk_queue_max_hw_sectors(q, ctrl->max_hw_sectors);
> - blk_queue_max_segments(q, min_t(u32, max_segments, USHRT_MAX));
> - }
> + blk_queue_max_hw_sectors(q, ctrl->max_hw_sectors);
> + blk_queue_max_segments(q, min_t(u32, USHRT_MAX,
> + min_not_zero(nvme_max_drv_segments(ctrl), ctrl->max_segments)));
> blk_queue_virt_boundary(q, NVME_CTRL_PAGE_SIZE - 1);
> blk_queue_dma_alignment(q, 3);
> blk_queue_write_cache(q, vwc, vwc);
Looks good,
Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>