From: Aurelien Aptel <aaptel@nvidia.com>
To: Sagi Grimberg <sagi@grimberg.me>,
linux-nvme@lists.infradead.org, netdev@vger.kernel.org,
hch@lst.de, kbusch@kernel.org, axboe@fb.com,
chaitanyak@nvidia.com, davem@davemloft.net, kuba@kernel.org
Cc: Boris Pismenny <borisp@nvidia.com>,
aurelien.aptel@gmail.com, smalin@nvidia.com, malin1024@gmail.com,
ogerlitz@nvidia.com, yorayz@nvidia.com, galshalom@nvidia.com,
mgurtovoy@nvidia.com
Subject: Re: [PATCH v15 06/20] nvme-tcp: Add DDP data-path
Date: Wed, 20 Sep 2023 11:39:24 +0300 [thread overview]
Message-ID: <253v8c5fdc3.fsf@nvidia.com>
In-Reply-To: <5b0fcc27-04aa-3ebd-e82a-8df39ed3ef5d@grimberg.me>
Sagi Grimberg <sagi@grimberg.me> writes:
> Can you please explain why? sk_incoming_cpu is updated from the network
> recv path while you are arguing that the timing matters before you even
> send the pdu. I don't understand why should that matter.
Sorry, the original answer was misleading.
The problem is not about timing but about which CPU the code is
running on. If we move setup_ddp() earlier as you suggested, it can
result in it running on the wrong CPU.
Calling setup_ddp() in nvme_tcp_setup_cmd_pdu() does not guarantee
that we are running on queue->io_cpu. It is only during
nvme_tcp_queue_request() that we either know we are running on
queue->io_cpu, or dispatch the request to run on queue->io_cpu.
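To illustrate, here is a rough sketch based on the upstream
nvme_tcp_queue_request(); the nvme_tcp_setup_ddp() call and its
signature are placeholders for the helper this series adds, not the
exact patch code:

    static inline void nvme_tcp_queue_request(struct nvme_tcp_request *req,
                    bool sync, bool last)
    {
            struct nvme_tcp_queue *queue = req->queue;
            bool empty;

            /* Sketch: mapping the request for DDP here means we are
             * either already on queue->io_cpu (direct-send path below)
             * or about to queue io_work on queue->io_cpu.
             */
            nvme_tcp_setup_ddp(queue, req);

            empty = llist_add(&req->lentry, &queue->req_list) &&
                    list_empty(&queue->send_list) && !queue->request;

            /* Send directly only when already on queue->io_cpu. */
            if (queue->io_cpu == raw_smp_processor_id() &&
                sync && empty && mutex_trylock(&queue->send_mutex)) {
                    nvme_tcp_send_all(queue);
                    mutex_unlock(&queue->send_mutex);
            }

            /* Otherwise the work is punted to queue->io_cpu. */
            if (last && nvme_tcp_queue_more(queue))
                    queue_work_on(queue->io_cpu, nvme_tcp_wq,
                                  &queue->io_work);
    }

By contrast, nvme_tcp_setup_cmd_pdu() runs on whatever CPU the block
layer submitted the request from, which may be any CPU.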
Since this is only a performance optimization for the unlikely case,
we can move it to nvme_tcp_setup_cmd_pdu() as you suggested and
reconsider it in the future if needed.
Thanks
Thread overview: 37+ messages
2023-09-12 9:59 [PATCH v15 00/20] nvme-tcp receive offloads Aurelien Aptel
2023-09-12 9:59 ` [PATCH v15 01/20] net: Introduce direct data placement tcp offload Aurelien Aptel
2023-09-12 16:17 ` David Ahern
2023-09-21 7:43 ` Aurelien Aptel
2023-09-21 12:33 ` David Ahern
2023-09-21 13:02 ` Aurelien Aptel
2023-09-12 9:59 ` [PATCH v15 02/20] netlink: add new family to manage ULP_DDP enablement and stats Aurelien Aptel
2023-09-12 9:59 ` [PATCH v15 03/20] iov_iter: skip copy if src == dst for direct data placement Aurelien Aptel
2023-09-12 9:59 ` [PATCH v15 04/20] net/tls,core: export get_netdev_for_sock Aurelien Aptel
2023-09-12 9:59 ` [PATCH v15 05/20] nvme-tcp: Add DDP offload control path Aurelien Aptel
2023-09-12 13:24 ` Sagi Grimberg
2023-09-13 9:10 ` Aurelien Aptel
2023-09-13 10:46 ` Sagi Grimberg
2023-09-18 12:53 ` Aurelien Aptel
2023-09-13 10:49 ` Sagi Grimberg
2023-09-18 18:30 ` Aurelien Aptel
2023-09-12 9:59 ` [PATCH v15 06/20] nvme-tcp: Add DDP data-path Aurelien Aptel
2023-09-13 10:51 ` Sagi Grimberg
2023-09-18 18:26 ` Aurelien Aptel
2023-09-19 7:04 ` Sagi Grimberg
2023-09-20 8:39 ` Aurelien Aptel [this message]
2023-09-20 10:11 ` Sagi Grimberg
2023-09-20 16:04 ` Aurelien Aptel
2023-09-12 9:59 ` [PATCH v15 07/20] nvme-tcp: RX DDGST offload Aurelien Aptel
2023-09-12 9:59 ` [PATCH v15 08/20] nvme-tcp: Deal with netdevice DOWN events Aurelien Aptel
2023-09-12 9:59 ` [PATCH v15 09/20] Documentation: add ULP DDP offload documentation Aurelien Aptel
2023-09-12 9:59 ` [PATCH v15 10/20] net/mlx5e: Rename from tls to transport static params Aurelien Aptel
2023-09-12 9:59 ` [PATCH v15 11/20] net/mlx5e: Refactor ico sq polling to get budget Aurelien Aptel
2023-09-12 9:59 ` [PATCH v15 12/20] net/mlx5: Add NVMEoTCP caps, HW bits, 128B CQE and enumerations Aurelien Aptel
2023-09-12 9:59 ` [PATCH v15 13/20] net/mlx5e: NVMEoTCP, offload initialization Aurelien Aptel
2023-09-12 9:59 ` [PATCH v15 14/20] net/mlx5e: TCP flow steering for nvme-tcp acceleration Aurelien Aptel
2023-09-12 9:59 ` [PATCH v15 15/20] net/mlx5e: NVMEoTCP, use KLM UMRs for buffer registration Aurelien Aptel
2023-09-12 9:59 ` [PATCH v15 16/20] net/mlx5e: NVMEoTCP, queue init/teardown Aurelien Aptel
2023-09-12 9:59 ` [PATCH v15 17/20] net/mlx5e: NVMEoTCP, ddp setup and resync Aurelien Aptel
2023-09-12 9:59 ` [PATCH v15 18/20] net/mlx5e: NVMEoTCP, async ddp invalidation Aurelien Aptel
2023-09-12 9:59 ` [PATCH v15 19/20] net/mlx5e: NVMEoTCP, data-path for DDP+DDGST offload Aurelien Aptel
2023-09-12 9:59 ` [PATCH v15 20/20] net/mlx5e: NVMEoTCP, statistics Aurelien Aptel