From: Aurelien Aptel <aaptel@nvidia.com>
To: Sagi Grimberg <sagi@grimberg.me>,
linux-nvme@lists.infradead.org, netdev@vger.kernel.org,
hch@lst.de, kbusch@kernel.org, axboe@fb.com,
chaitanyak@nvidia.com, davem@davemloft.net, kuba@kernel.org
Cc: Boris Pismenny <borisp@nvidia.com>,
aurelien.aptel@gmail.com, smalin@nvidia.com, malin1024@gmail.com,
ogerlitz@nvidia.com, yorayz@nvidia.com, galshalom@nvidia.com,
mgurtovoy@nvidia.com
Subject: Re: [PATCH v20 06/20] nvme-tcp: Add DDP data-path
Date: Wed, 29 Nov 2023 15:55:29 +0200
Message-ID: <253msuwirzi.fsf@nvidia.com>
In-Reply-To: <84efdc69-364f-43fc-9c7a-0fbcab47571b@grimberg.me>

Sagi Grimberg <sagi@grimberg.me> writes:
>> +static void nvme_tcp_complete_request(struct request *rq,
>> + __le16 status,
>> + union nvme_result result,
>> + __u16 command_id)
>> +{
>> +#ifdef CONFIG_ULP_DDP
>> + struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq);
>> +
>> + if (req->offloaded) {
>> + req->ddp_status = status;
>
> unless this is really a ddp_status, don't name it as such. afaict
> it is the nvme status, so let's stay consistent with the naming.
>
> btw, for making the code simpler we can promote the request
> status/result capture out of CONFIG_ULP_DDP to the general logic
> and then I think the code will look slightly simpler.
>
> This will be consistent with what we do in nvme-rdma and PI.
Ok, we will rename status to nvme_status and move it and the result out
of the ifdef.
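For illustration, the reshaped completion path could look roughly like the
sketch below. This is a minimal userspace mock-up, not the real kernel code:
the struct names, fields, and the nvme_tcp_end_request() stub are simplified
stand-ins, and only the shape of the logic (unconditional status/result
capture, completion deferred only for offloaded requests) reflects the
change being discussed:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-ins for the kernel types; not the real definitions. */
typedef uint16_t __le16;
union nvme_result { uint64_t u64; };

struct nvme_tcp_request {
	bool offloaded;
	__le16 nvme_status;		/* renamed from ddp_status */
	union nvme_result result;
	uint16_t command_id;
};

static bool completed;

/* Stub standing in for the real request-completion call. */
static void nvme_tcp_end_request(struct nvme_tcp_request *req)
{
	completed = true;
}

static void nvme_tcp_complete_request(struct nvme_tcp_request *req,
				      __le16 status,
				      union nvme_result result,
				      uint16_t command_id)
{
	/* Capture happens unconditionally, outside any CONFIG_ULP_DDP ifdef. */
	req->nvme_status = status;
	req->result = result;
	req->command_id = command_id;

	if (req->offloaded)
		return;	/* completion deferred until DDP teardown finishes */

	nvme_tcp_end_request(req);
}
```

With the capture hoisted out, the ifdef'd part shrinks to just the
"is this request offloaded" decision.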
>> @@ -1283,6 +1378,9 @@ static int nvme_tcp_try_send_cmd_pdu(struct nvme_tcp_request *req)
>> else
>> msg.msg_flags |= MSG_EOR;
>>
>> + if (test_bit(NVME_TCP_Q_OFF_DDP, &queue->flags))
>> + nvme_tcp_setup_ddp(queue, blk_mq_rq_from_pdu(req));
>> +
>
> We keep coming back to this. Why isn't setup done at setup time?
Sorry, this is a leftover from previous tests; we will move it as we
agreed last time [1].
1: https://lore.kernel.org/all/ef66595c-95cd-94c4-7f51-d3d7683a188a@grimberg.me/
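For illustration, the agreed placement could look roughly like the sketch
below. Again a simplified userspace mock-up with invented struct names and
a hypothetical nvme_tcp_setup_request() helper, not the real kernel code;
the point is only that DDP setup runs once at request-setup time rather
than in the send path, which can be entered multiple times per request:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical, simplified stand-ins for the queue/request types. */
struct mock_queue { bool ddp_offload_enabled; };
struct mock_request { bool ddp_mapped; struct mock_queue *q; };

/* Stub for the driver hook that maps the request buffers for DDP. */
static void nvme_tcp_setup_ddp(struct mock_queue *q, struct mock_request *rq)
{
	rq->ddp_mapped = true;
}

/* Request setup: DDP mapping happens exactly once, here. */
static void nvme_tcp_setup_request(struct mock_request *rq)
{
	if (rq->q->ddp_offload_enabled)
		nvme_tcp_setup_ddp(rq->q, rq);
}

/* Send path: no DDP setup here anymore, so re-entry is harmless. */
static void nvme_tcp_try_send_cmd_pdu(struct mock_request *rq)
{
	/* ... build and send the command PDU ... */
}
```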