From: Sagi Grimberg <sagi@grimberg.me>
To: Christoph Hellwig <hch@lst.de>
Cc: Jakub Kicinski <kuba@kernel.org>,
	Aurelien Aptel <aaptel@nvidia.com>,
	linux-nvme@lists.infradead.org, netdev@vger.kernel.org,
	kbusch@kernel.org, axboe@fb.com, chaitanyak@nvidia.com,
	davem@davemloft.net
Subject: Re: [PATCH v25 00/20] nvme-tcp receive offloads
Date: Mon, 10 Jun 2024 17:30:34 +0300	[thread overview]
Message-ID: <9a03d3bf-c48f-4758-9d7f-a5e7920ec68f@grimberg.me> (raw)
In-Reply-To: <20240610122939.GA21899@lst.de>



On 10/06/2024 15:29, Christoph Hellwig wrote:
> On Mon, Jun 03, 2024 at 10:09:26AM +0300, Sagi Grimberg wrote:
>>> IETF has standardized a generic data placement protocol, which is
>>> part of iWarp.  Even if folks don't like RDMA it exists to solve
>>> exactly these kinds of problems of data placement.
>> iWARP changes the wire protocol.
> Compared to plain NVMe over TCP that's a bit of an understatement :)

Yes :) the point was that people want to use NVMe/TCP, and adding
DDP awareness inspired by iWARP would change the existing NVMe/TCP wire
protocol.

This offload does not.

>
>> Is your comment to just go make people
>> use iWARP instead of TCP? or extending NVMe/TCP to natively support DDP?
> I don't know to be honest.  In many ways just using RDMA instead of
> NVMe/TCP would solve all the problems this is trying to solve, but
> there are enough big customers that have religious concerns about
> the use of RDMA.
>
> So if people want to use something that looks non-RDMA but have the
> same benefits we have to reinvent it quite similarly under a different
> name.  Looking at DDP and what we can learn from it without bringing
> the Verbs API along might be one way to do that.
>
> Another would be to figure out what amount of similarity and what
> amount of state we need in an on the wire protocol to have an
> efficient header splitting in the NIC, either hard coded or even
> better downloadable using something like eBPF.

From what I understand, that is what this offload is trying to do: it uses
the NVMe command_id much like the read_stag is used in iWARP, it tracks
NVMe/TCP PDUs so it can split PDU headers from data transfers, and it maps
the command_id to an internal MR for DMA purposes.
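
To make the analogy concrete, a rough sketch (the names and layout here are
made up for illustration, not the driver's actual code) of the per-queue
state such an offload has to keep: a table keyed by the NVMe/TCP command_id,
resolved to a pre-registered buffer when the NIC parses a C2HData PDU, much
like an iWARP read_stag lookup:

#include <linux/types.h>
#include <linux/scatterlist.h>

#define DDP_QUEUE_DEPTH 128     /* illustrative queue depth */

/* one pre-registered destination per outstanding read command */
struct ddp_buf_entry {
        u16 command_id;                 /* tag carried in the PDU header */
        struct scatterlist *sgl;        /* destination pages for the DMA */
        unsigned int nents;
        bool valid;                     /* cleared when invalidated */
};

struct ddp_queue {
        struct ddp_buf_entry entries[DDP_QUEUE_DEPTH];
};

/* resolve a command_id seen on the wire to its DMA destination */
static struct ddp_buf_entry *ddp_lookup(struct ddp_queue *q, u16 command_id)
{
        struct ddp_buf_entry *e = &q->entries[command_id % DDP_QUEUE_DEPTH];

        return (e->valid && e->command_id == command_id) ? e : NULL;
}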

What I think you don't like about this is the interface that the offload
exposes to the TCP ULP driver (nvme-tcp in our case)?
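
For concreteness, that interface is essentially an ops table the ULP calls
into; something along these lines (the names are illustrative, not the exact
ones from the series):

#include <linux/netdevice.h>
#include <linux/scatterlist.h>
#include <net/sock.h>

/* illustrative callbacks a netdev would expose to a TCP ULP like nvme-tcp */
struct ddp_dev_ops {
        /* declare/undeclare a TCP socket as DDP capable on this netdev */
        int (*sk_add)(struct net_device *dev, struct sock *sk);
        void (*sk_del)(struct net_device *dev, struct sock *sk);

        /* program a destination buffer keyed by the NVMe command_id */
        int (*setup)(struct net_device *dev, struct sock *sk,
                     u16 command_id, struct scatterlist *sgl,
                     unsigned int nents);

        /* invalidate the mapping once the command completes */
        void (*teardown)(struct net_device *dev, struct sock *sk,
                         u16 command_id);
};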

>
>> That would be great, but what does a "vendor independent without hooks"
>> look like from
>> your perspective? I'd love having this translate to standard (and some new)
>> socket operations,
>> but I could not find a way that this can be done given the current
>> architecture.
> Any amount of calls into NIC/offload drivers from NVMe is a nogo.
>

Not following you here...
*something* needs to program a buffer for DDP, *something* needs to
invalidate this buffer, and *something* needs to declare a TCP stream as
DDP capable.

Or should I interpret what you're saying as meaning that the interface
needs to be generalized to extend the standard socket operations
(i.e. [s|g]etsockopt/recvmsg/cmsghdr, etc.)?
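
If that is the direction, a purely hypothetical sketch of the shape it could
take (none of these option names or structures exist today; shown in
userspace terms only to illustrate the idea):

#include <stdint.h>
#include <sys/socket.h>
#include <netinet/in.h>

/* made-up option numbers -- nothing like this exists today */
#define TCP_ULP_DDP             200
#define TCP_ULP_DDP_SETUP       201
#define TCP_ULP_DDP_TEARDOWN    202

struct ddp_setup {
        uint16_t command_id;    /* tag the NIC matches on the wire */
        uint64_t addr;          /* destination buffer address */
        uint32_t len;           /* expected data length */
};

/* declare the stream as DDP capable */
static int ddp_enable(int sk)
{
        int one = 1;

        return setsockopt(sk, IPPROTO_TCP, TCP_ULP_DDP, &one, sizeof(one));
}

/* program a receive buffer for a given command_id */
static int ddp_program(int sk, uint16_t cid, void *buf, uint32_t len)
{
        struct ddp_setup s = {
                .command_id = cid,
                .addr = (uint64_t)(uintptr_t)buf,
                .len = len,
        };

        return setsockopt(sk, IPPROTO_TCP, TCP_ULP_DDP_SETUP, &s, sizeof(s));
}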


Thread overview: 37+ messages
2024-05-29 16:00 [PATCH v25 00/20] nvme-tcp receive offloads Aurelien Aptel
2024-05-29 16:00 ` [PATCH v25 01/20] net: Introduce direct data placement tcp offload Aurelien Aptel
2024-05-29 16:00 ` [PATCH v25 02/20] netlink: add new family to manage ULP_DDP enablement and stats Aurelien Aptel
2024-05-29 16:00 ` [PATCH v25 03/20] iov_iter: skip copy if src == dst for direct data placement Aurelien Aptel
2024-05-29 16:00 ` [PATCH v25 04/20] net/tls,core: export get_netdev_for_sock Aurelien Aptel
2024-05-29 16:00 ` [PATCH v25 05/20] nvme-tcp: Add DDP offload control path Aurelien Aptel
2024-05-29 16:00 ` [PATCH v25 06/20] nvme-tcp: Add DDP data-path Aurelien Aptel
2024-05-29 16:00 ` [PATCH v25 07/20] nvme-tcp: RX DDGST offload Aurelien Aptel
2024-05-29 16:00 ` [PATCH v25 08/20] nvme-tcp: Deal with netdevice DOWN events Aurelien Aptel
2024-05-29 16:00 ` [PATCH v25 09/20] Documentation: add ULP DDP offload documentation Aurelien Aptel
2024-05-29 16:00 ` [PATCH v25 10/20] net/mlx5e: Rename from tls to transport static params Aurelien Aptel
2024-05-29 16:00 ` [PATCH v25 11/20] net/mlx5e: Refactor ico sq polling to get budget Aurelien Aptel
2024-05-29 16:00 ` [PATCH v25 12/20] net/mlx5: Add NVMEoTCP caps, HW bits, 128B CQE and enumerations Aurelien Aptel
2024-05-29 16:00 ` [PATCH v25 13/20] net/mlx5e: NVMEoTCP, offload initialization Aurelien Aptel
2024-05-29 16:00 ` [PATCH v25 14/20] net/mlx5e: TCP flow steering for nvme-tcp acceleration Aurelien Aptel
2024-05-29 16:00 ` [PATCH v25 15/20] net/mlx5e: NVMEoTCP, use KLM UMRs for buffer registration Aurelien Aptel
2024-05-29 16:00 ` [PATCH v25 16/20] net/mlx5e: NVMEoTCP, queue init/teardown Aurelien Aptel
2024-05-29 16:00 ` [PATCH v25 17/20] net/mlx5e: NVMEoTCP, ddp setup and resync Aurelien Aptel
2024-05-29 16:00 ` [PATCH v25 18/20] net/mlx5e: NVMEoTCP, async ddp invalidation Aurelien Aptel
2024-05-29 16:00 ` [PATCH v25 19/20] net/mlx5e: NVMEoTCP, data-path for DDP+DDGST offload Aurelien Aptel
2024-05-29 16:00 ` [PATCH v25 20/20] net/mlx5e: NVMEoTCP, statistics Aurelien Aptel
2024-05-31  1:39 ` [PATCH v25 00/20] nvme-tcp receive offloads Jakub Kicinski
2024-05-31  6:11   ` Christoph Hellwig
2024-06-03  7:09     ` Sagi Grimberg
2024-06-10 12:29       ` Christoph Hellwig
2024-06-10 14:30         ` Sagi Grimberg [this message]
2024-06-11  6:41           ` Christoph Hellwig
2024-06-11 11:01             ` Sagi Grimberg
2024-06-15 21:34             ` David Laight
2024-06-10 10:07   ` Sagi Grimberg
2024-06-10 13:23     ` Aurelien Aptel
2024-06-26 15:21       ` Aurelien Aptel
2024-06-26 15:50         ` Jakub Kicinski
2024-06-26 19:34           ` Chaitanya Kulkarni
2024-06-26 19:43             ` Jakub Kicinski
2024-06-26 20:10               ` Chaitanya Kulkarni
2024-06-26 20:14                 ` Jakub Kicinski
