public inbox for linux-nvme@lists.infradead.org
From: Aurelien Aptel <aaptel@nvidia.com>
To: Sagi Grimberg <sagi@grimberg.me>,
	linux-nvme@lists.infradead.org, netdev@vger.kernel.org,
	hch@lst.de, kbusch@kernel.org, axboe@fb.com,
	chaitanyak@nvidia.com, davem@davemloft.net, kuba@kernel.org
Cc: Boris Pismenny <borisp@nvidia.com>,
	aurelien.aptel@gmail.com, smalin@nvidia.com, malin1024@gmail.com,
	ogerlitz@nvidia.com, yorayz@nvidia.com, galshalom@nvidia.com,
	mgurtovoy@nvidia.com, brauner@kernel.org
Subject: Re: [PATCH v23 05/20] nvme-tcp: Add DDP offload control path
Date: Thu, 07 Mar 2024 17:43:24 +0200	[thread overview]
Message-ID: <253plw6ujxf.fsf@nvidia.com> (raw)
In-Reply-To: <7a2c3491-bd2a-4104-8371-f5b98bbd7355@grimberg.me>

Sagi Grimberg <sagi@grimberg.me> writes:
>> +
>> +static int nvme_tcp_offload_socket(struct nvme_tcp_queue *queue)
>> +{
>> +     struct ulp_ddp_config config = {.type = ULP_DDP_NVME};
>> +     int ret;
>> +
>> +     config.nvmeotcp.pfv = NVME_TCP_PFV_1_0;
>> +     config.nvmeotcp.cpda = 0;
>> +     config.nvmeotcp.dgst =
>> +             queue->hdr_digest ? NVME_TCP_HDR_DIGEST_ENABLE : 0;
>> +     config.nvmeotcp.dgst |=
>> +             queue->data_digest ? NVME_TCP_DATA_DIGEST_ENABLE : 0;
>> +     config.nvmeotcp.queue_size = queue->ctrl->ctrl.sqsize + 1;
>> +     config.nvmeotcp.queue_id = nvme_tcp_queue_id(queue);
>
> I forget, why is the queue_id needed? it does not travel the wire outside
> of the connect cmd.

You're right, it is not needed; we will remove it.

>> +static void nvme_tcp_ddp_apply_limits(struct nvme_tcp_ctrl *ctrl)
>> +{
>> +     ctrl->ctrl.max_segments = ctrl->ddp_limits.max_ddp_sgl_len;
>> +     ctrl->ctrl.max_hw_sectors =
>> +             ctrl->ddp_limits.max_ddp_sgl_len << (ilog2(SZ_4K) - SECTOR_SHIFT);
>
> I think you can use NVME_CTRL_PAGE_SHIFT instead of ilog2(SZ_4K)?

Yes, both seem to be 12. We will use NVME_CTRL_PAGE_SHIFT.

Thanks



Thread overview: 30+ messages
2024-02-28 12:57 [PATCH v23 00/20] nvme-tcp receive offloads Aurelien Aptel
2024-02-28 12:57 ` [PATCH v23 01/20] net: Introduce direct data placement tcp offload Aurelien Aptel
2024-02-28 12:57 ` [PATCH v23 02/20] netlink: add new family to manage ULP_DDP enablement and stats Aurelien Aptel
2024-02-28 12:57 ` [PATCH v23 03/20] iov_iter: skip copy if src == dst for direct data placement Aurelien Aptel
2024-02-28 12:57 ` [PATCH v23 04/20] net/tls,core: export get_netdev_for_sock Aurelien Aptel
2024-02-28 12:57 ` [PATCH v23 05/20] nvme-tcp: Add DDP offload control path Aurelien Aptel
2024-03-07  9:06   ` Sagi Grimberg
2024-03-07 15:43     ` Aurelien Aptel [this message]
2024-02-28 12:57 ` [PATCH v23 06/20] nvme-tcp: Add DDP data-path Aurelien Aptel
2024-03-07  9:11   ` Sagi Grimberg
2024-03-07 15:44     ` Aurelien Aptel
2024-02-28 12:57 ` [PATCH v23 07/20] nvme-tcp: RX DDGST offload Aurelien Aptel
2024-03-07  9:18   ` Sagi Grimberg
2024-03-07 15:44     ` Aurelien Aptel
2024-02-28 12:57 ` [PATCH v23 08/20] nvme-tcp: Deal with netdevice DOWN events Aurelien Aptel
2024-02-28 12:57 ` [PATCH v23 09/20] Documentation: add ULP DDP offload documentation Aurelien Aptel
2024-02-28 12:57 ` [PATCH v23 10/20] net/mlx5e: Rename from tls to transport static params Aurelien Aptel
2024-02-28 12:57 ` [PATCH v23 11/20] net/mlx5e: Refactor ico sq polling to get budget Aurelien Aptel
2024-02-28 12:57 ` [PATCH v23 12/20] net/mlx5: Add NVMEoTCP caps, HW bits, 128B CQE and enumerations Aurelien Aptel
2024-02-28 12:57 ` [PATCH v23 13/20] net/mlx5e: NVMEoTCP, offload initialization Aurelien Aptel
2024-02-28 12:57 ` [PATCH v23 14/20] net/mlx5e: TCP flow steering for nvme-tcp acceleration Aurelien Aptel
2024-02-28 12:57 ` [PATCH v23 15/20] net/mlx5e: NVMEoTCP, use KLM UMRs for buffer registration Aurelien Aptel
2024-02-28 12:57 ` [PATCH v23 16/20] net/mlx5e: NVMEoTCP, queue init/teardown Aurelien Aptel
2024-02-28 12:57 ` [PATCH v23 17/20] net/mlx5e: NVMEoTCP, ddp setup and resync Aurelien Aptel
2024-02-28 12:57 ` [PATCH v23 18/20] net/mlx5e: NVMEoTCP, async ddp invalidation Aurelien Aptel
2024-02-28 12:57 ` [PATCH v23 19/20] net/mlx5e: NVMEoTCP, data-path for DDP+DDGST offload Aurelien Aptel
2024-02-28 12:57 ` [PATCH v23 20/20] net/mlx5e: NVMEoTCP, statistics Aurelien Aptel
2024-02-29 17:32 ` [PATCH v23 00/20] nvme-tcp receive offloads Jakub Kicinski
2024-03-01 12:09   ` Aurelien Aptel
2024-03-01 17:03     ` Jakub Kicinski
