From: Sagi Grimberg <sagi@grimberg.me>
To: Christoph Hellwig <hch@lst.de>
Cc: Jakub Kicinski <kuba@kernel.org>,
Aurelien Aptel <aaptel@nvidia.com>,
linux-nvme@lists.infradead.org, netdev@vger.kernel.org,
kbusch@kernel.org, axboe@fb.com, chaitanyak@nvidia.com,
davem@davemloft.net
Subject: Re: [PATCH v25 00/20] nvme-tcp receive offloads
Date: Tue, 11 Jun 2024 14:01:32 +0300 [thread overview]
Message-ID: <6d53cf9e-a731-402c-8fc1-6dfe476bc35c@grimberg.me> (raw)
In-Reply-To: <20240611064132.GA6727@lst.de>
On 11/06/2024 9:41, Christoph Hellwig wrote:
> On Mon, Jun 10, 2024 at 05:30:34PM +0300, Sagi Grimberg wrote:
>>> efficient header splitting in the NIC, either hard coded or even
>>> better downloadable using something like eBPF.
>> From what I understand, this is what this offload is trying to do. It uses
>> the nvme command_id similar to how the read_stag is used in iwarp,
>> it tracks the NVMe/TCP pdus to split pdus from data transfers, and maps
>> the command_id to an internal MR for dma purposes.
>>
>> What I think you don't like about this is the interface that the offload
>> exposes
>> to the TCP ulp driver (nvme-tcp in our case)?
> I don't see why a memory registration is needed at all.
I don't see how you can do it without memory registration.
>
> The by far biggest painpoint when doing storage protocols (including
> file systems) over IP based storage is the data copy on the receive
> path because the payload is not aligned to a page boundary.
>
> So we need to figure out a way that is as stateless as possible that
> allows aligning the actual data payload on a page boundary in an
> otherwise normal IP receive path.
But the device gets payload from the network, and needs a buffer
to dma to. In order to dma to the "correct" buffer it needs some
sort of pre-registration expressed with a tag, that the device can
infer by some sort of stream inspection. The socket recv call from
the ulp happens at a later stage.
I am not sure I understand the alignment assurance help the NIC
to dma payload from the network to the "correct" buffer
(i.e. userspace doing O_DIRECT read).
next prev parent reply other threads:[~2024-06-11 11:01 UTC|newest]
Thread overview: 37+ messages / expand[flat|nested] mbox.gz Atom feed top
2024-05-29 16:00 [PATCH v25 00/20] nvme-tcp receive offloads Aurelien Aptel
2024-05-29 16:00 ` [PATCH v25 01/20] net: Introduce direct data placement tcp offload Aurelien Aptel
2024-05-29 16:00 ` [PATCH v25 02/20] netlink: add new family to manage ULP_DDP enablement and stats Aurelien Aptel
2024-05-29 16:00 ` [PATCH v25 03/20] iov_iter: skip copy if src == dst for direct data placement Aurelien Aptel
2024-05-29 16:00 ` [PATCH v25 04/20] net/tls,core: export get_netdev_for_sock Aurelien Aptel
2024-05-29 16:00 ` [PATCH v25 05/20] nvme-tcp: Add DDP offload control path Aurelien Aptel
2024-05-29 16:00 ` [PATCH v25 06/20] nvme-tcp: Add DDP data-path Aurelien Aptel
2024-05-29 16:00 ` [PATCH v25 07/20] nvme-tcp: RX DDGST offload Aurelien Aptel
2024-05-29 16:00 ` [PATCH v25 08/20] nvme-tcp: Deal with netdevice DOWN events Aurelien Aptel
2024-05-29 16:00 ` [PATCH v25 09/20] Documentation: add ULP DDP offload documentation Aurelien Aptel
2024-05-29 16:00 ` [PATCH v25 10/20] net/mlx5e: Rename from tls to transport static params Aurelien Aptel
2024-05-29 16:00 ` [PATCH v25 11/20] net/mlx5e: Refactor ico sq polling to get budget Aurelien Aptel
2024-05-29 16:00 ` [PATCH v25 12/20] net/mlx5: Add NVMEoTCP caps, HW bits, 128B CQE and enumerations Aurelien Aptel
2024-05-29 16:00 ` [PATCH v25 13/20] net/mlx5e: NVMEoTCP, offload initialization Aurelien Aptel
2024-05-29 16:00 ` [PATCH v25 14/20] net/mlx5e: TCP flow steering for nvme-tcp acceleration Aurelien Aptel
2024-05-29 16:00 ` [PATCH v25 15/20] net/mlx5e: NVMEoTCP, use KLM UMRs for buffer registration Aurelien Aptel
2024-05-29 16:00 ` [PATCH v25 16/20] net/mlx5e: NVMEoTCP, queue init/teardown Aurelien Aptel
2024-05-29 16:00 ` [PATCH v25 17/20] net/mlx5e: NVMEoTCP, ddp setup and resync Aurelien Aptel
2024-05-29 16:00 ` [PATCH v25 18/20] net/mlx5e: NVMEoTCP, async ddp invalidation Aurelien Aptel
2024-05-29 16:00 ` [PATCH v25 19/20] net/mlx5e: NVMEoTCP, data-path for DDP+DDGST offload Aurelien Aptel
2024-05-29 16:00 ` [PATCH v25 20/20] net/mlx5e: NVMEoTCP, statistics Aurelien Aptel
2024-05-31 1:39 ` [PATCH v25 00/20] nvme-tcp receive offloads Jakub Kicinski
2024-05-31 6:11 ` Christoph Hellwig
2024-06-03 7:09 ` Sagi Grimberg
2024-06-10 12:29 ` Christoph Hellwig
2024-06-10 14:30 ` Sagi Grimberg
2024-06-11 6:41 ` Christoph Hellwig
2024-06-11 11:01 ` Sagi Grimberg [this message]
2024-06-15 21:34 ` David Laight
2024-06-10 10:07 ` Sagi Grimberg
2024-06-10 13:23 ` Aurelien Aptel
2024-06-26 15:21 ` Aurelien Aptel
2024-06-26 15:50 ` Jakub Kicinski
2024-06-26 19:34 ` Chaitanya Kulkarni
2024-06-26 19:43 ` Jakub Kicinski
2024-06-26 20:10 ` Chaitanya Kulkarni
2024-06-26 20:14 ` Jakub Kicinski
Reply instructions:
You may reply publicly to this message via plain-text email
using any one of the following methods:
* Save the following mbox file, import it into your mail client,
and reply-to-all from there: mbox
Avoid top-posting and favor interleaved quoting:
https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
* Reply using the --to, --cc, and --in-reply-to
switches of git-send-email(1):
git send-email \
--in-reply-to=6d53cf9e-a731-402c-8fc1-6dfe476bc35c@grimberg.me \
--to=sagi@grimberg.me \
--cc=aaptel@nvidia.com \
--cc=axboe@fb.com \
--cc=chaitanyak@nvidia.com \
--cc=davem@davemloft.net \
--cc=hch@lst.de \
--cc=kbusch@kernel.org \
--cc=kuba@kernel.org \
--cc=linux-nvme@lists.infradead.org \
--cc=netdev@vger.kernel.org \
/path/to/YOUR_REPLY
https://kernel.org/pub/software/scm/git/docs/git-send-email.html
* If your mail client supports setting the In-Reply-To header
via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line
before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).