From: Aurelien Aptel <aaptel@nvidia.com>
To: Sagi Grimberg <sagi@grimberg.me>,
	linux-nvme@lists.infradead.org, netdev@vger.kernel.org,
	hch@lst.de, kbusch@kernel.org, axboe@fb.com,
	chaitanyak@nvidia.com, davem@davemloft.net, kuba@kernel.org
Cc: Boris Pismenny <borisp@nvidia.com>,
	aurelien.aptel@gmail.com, smalin@nvidia.com, malin1024@gmail.com,
	ogerlitz@nvidia.com, yorayz@nvidia.com, galshalom@nvidia.com,
	mgurtovoy@nvidia.com
Subject: Re: [PATCH v23 06/20] nvme-tcp: Add DDP data-path
Date: Thu, 07 Mar 2024 17:44:13 +0200
Message-ID: <253msraujw2.fsf@nvidia.com>
In-Reply-To: <40a01a90-b91f-4526-a404-462de3ffa38a@grimberg.me>

Sagi Grimberg <sagi@grimberg.me> writes:
>> +static void nvme_tcp_complete_request(struct request *rq,
>> +                                   __le16 status,
>> +                                   union nvme_result result,
>> +                                   __u16 command_id)
>> +{
>> +     struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq);
>> +
>> +     if (nvme_tcp_is_ddp_offloaded(req)) {
>> +             req->nvme_status = status;
>
> this can just be called req->status I think.

Since req->status already exists, we checked whether it can safely be
reused instead of adding nvme_status, and it appears to be fine.

We will remove nvme_status.
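
For reference, the relevant request fields would then look roughly like
this (a sketch, not the exact layout: status is the existing field of
struct nvme_tcp_request upstream, and result is the field this series
adds to carry the completion across the teardown):

struct nvme_tcp_request {
        /* ... existing fields ... */
        u16                     status; /* existing field, reused instead of nvme_status */
        union nvme_result       result; /* added by this series for deferred completion */
        /* ... */
};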

>> +             req->result = result;
> I think it will be cleaner to always capture req->result and req->status
> regardless of ddp offload.

Sure, we will set status and result in the function before the offload
check:

static void nvme_tcp_complete_request(struct request *rq,
                                      __le16 status,
                                      union nvme_result result,
                                      __u16 command_id)
{
        struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq);

        /* always capture status and result: the DDP teardown-done
         * callback completes the request with them later
         */
        req->status = status;
        req->result = result;

        if (nvme_tcp_is_ddp_offloaded(req)) {
                /* complete when teardown is confirmed to be done */
                nvme_tcp_teardown_ddp(req->queue, rq);
                return;
        }

        if (!nvme_try_complete_req(rq, status, result))
                nvme_complete_rq(rq);
}
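
Capturing req->status and req->result unconditionally is what makes the
early return safe: on the offload path the completion is deferred until
the DDP teardown finishes, and the teardown-done callback picks the
saved values back up. Roughly, with nvme_status replaced by the reused
status field (a sketch based on the teardown-done callback in this
patch):

static void nvme_tcp_ddp_teardown_done(void *ddp_ctx)
{
        struct request *rq = ddp_ctx;
        struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq);

        /* teardown is done; complete with the saved status/result */
        if (!nvme_try_complete_req(rq, req->status, req->result))
                nvme_complete_rq(rq);
}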
