From: Jakub Kicinski <kuba@kernel.org>
To: Aurelien Aptel <aaptel@nvidia.com>
Cc: linux-nvme@lists.infradead.org, netdev@vger.kernel.org,
sagi@grimberg.me, hch@lst.de, kbusch@kernel.org, axboe@fb.com,
chaitanyak@nvidia.com, davem@davemloft.net,
aurelien.aptel@gmail.com, smalin@nvidia.com, malin1024@gmail.com,
ogerlitz@nvidia.com, yorayz@nvidia.com, borisp@nvidia.com,
galshalom@nvidia.com, mgurtovoy@nvidia.com, edumazet@google.com
Subject: Re: [PATCH v24 00/20] nvme-tcp receive offloads
Date: Fri, 5 Apr 2024 22:45:04 -0700
Message-ID: <20240405224504.4cb620de@kernel.org>
In-Reply-To: <20240404123717.11857-1-aaptel@nvidia.com>
The series doesn't apply, again, unfortunately.
On Thu, 4 Apr 2024 12:36:57 +0000 Aurelien Aptel wrote:
> Testing
> =======
> This series was tested on ConnectX-7 HW using various configurations
> of IO sizes, queue depths, MTUs, and with both the SPDK and kernel NVMe-TCP
> targets.
Regarding testing, what do you have in terms of a test setup?
As you said, this is similar to the TLS offload:
> Note:
> These offloads are similar in nature to the packet-based NIC TLS offloads,
> which are already upstream (see net/tls/tls_device.c).
> You can read more about TLS offload here:
> https://www.kernel.org/doc/html/latest/networking/tls-offload.html
and our experience maintaining and extending the heavily used SW
kernel TLS implementation in the presence of the device offload has been
mixed :( We can't test the offload ourselves, we break it, and we get
CVEs for it :( In all fairness, the inline offload is probably not as bad
as the crypto-accelerator path, but it still breaks. So, assuming this
gets all the necessary acks, can we expect you to plug some testing into
the netdev CI so that we can see whether any incoming change breaks this
offload?
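As a rough illustration of what such a hook could look like, here is a
hypothetical, minimal selftest-style sketch in Python: it connects to an
NVMe-TCP target, sweeps a few IO sizes and queue depths with fio along
the lines of the cover letter's test matrix, and disconnects. The target
address, NQN, namespace device, and the final step of checking the new
ULP DDP statistics are assumptions made for illustration, not something
defined by the series.

#!/usr/bin/env python3
# Hypothetical sketch of a CI regression test for the nvme-tcp receive
# offload.  Target address, NQN and namespace device are assumptions.
import itertools
import subprocess
import sys

TARGET_IP = "192.168.1.1"               # assumed NVMe-TCP target (SPDK or kernel)
NQN = "nqn.2024-04.io.example:subsys1"  # assumed subsystem NQN
NAMESPACE = "/dev/nvme1n1"              # assumed namespace block device

def run(cmd):
    # Print and run a command, failing the test on a non-zero exit code.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def main():
    # Connect over TCP; the offload is assumed to already be enabled on
    # the netdevice (e.g. via the new ULP_DDP netlink family).
    run(["nvme", "connect", "-t", "tcp", "-a", TARGET_IP, "-s", "4420", "-n", NQN])
    try:
        # Sweep a few IO sizes and queue depths, similar to the matrix
        # described in the cover letter.
        for bs, qd in itertools.product(["4k", "64k", "512k"], [1, 8, 32]):
            run(["fio", "--name=ddp-sweep", f"--filename={NAMESPACE}",
                 "--rw=randread", "--ioengine=libaio", "--direct=1",
                 f"--bs={bs}", f"--iodepth={qd}",
                 "--runtime=10", "--time_based"])
    finally:
        run(["nvme", "disconnect", "-n", NQN])
    # A real test would also read the ULP DDP statistics afterwards and
    # fail if nothing was actually offloaded.
    return 0

if __name__ == "__main__":
    sys.exit(main())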
Thread overview: 44+ messages
2024-04-04 12:36 [PATCH v24 00/20] nvme-tcp receive offloads Aurelien Aptel
2024-04-04 12:36 ` [PATCH v24 01/20] net: Introduce direct data placement tcp offload Aurelien Aptel
2024-04-21 11:47 ` Sagi Grimberg
2024-04-26 7:21 ` Aurelien Aptel
2024-04-28 8:15 ` Sagi Grimberg
2024-04-29 11:35 ` Aurelien Aptel
2024-04-30 11:54 ` Sagi Grimberg
2024-05-02 7:04 ` Aurelien Aptel
2024-05-03 7:31 ` Sagi Grimberg
2024-05-06 12:28 ` Aurelien Aptel
2024-04-04 12:36 ` [PATCH v24 02/20] netlink: add new family to manage ULP_DDP enablement and stats Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 03/20] iov_iter: skip copy if src == dst for direct data placement Aurelien Aptel
2024-04-15 14:28 ` Max Gurtovoy
2024-04-16 20:30 ` David Laight
2024-04-18 8:22 ` Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 04/20] net/tls,core: export get_netdev_for_sock Aurelien Aptel
2024-04-21 11:45 ` Sagi Grimberg
2024-04-04 12:37 ` [PATCH v24 05/20] nvme-tcp: Add DDP offload control path Aurelien Aptel
2024-04-07 22:08 ` Sagi Grimberg
2024-04-10 6:31 ` Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 06/20] nvme-tcp: Add DDP data-path Aurelien Aptel
2024-04-07 22:08 ` Sagi Grimberg
2024-04-10 6:31 ` Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 07/20] nvme-tcp: RX DDGST offload Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 08/20] nvme-tcp: Deal with netdevice DOWN events Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 09/20] Documentation: add ULP DDP offload documentation Aurelien Aptel
2024-04-09 8:49 ` Bagas Sanjaya
2024-04-04 12:37 ` [PATCH v24 10/20] net/mlx5e: Rename from tls to transport static params Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 11/20] net/mlx5e: Refactor ico sq polling to get budget Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 12/20] net/mlx5: Add NVMEoTCP caps, HW bits, 128B CQE and enumerations Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 13/20] net/mlx5e: NVMEoTCP, offload initialization Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 14/20] net/mlx5e: TCP flow steering for nvme-tcp acceleration Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 15/20] net/mlx5e: NVMEoTCP, use KLM UMRs for buffer registration Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 16/20] net/mlx5e: NVMEoTCP, queue init/teardown Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 17/20] net/mlx5e: NVMEoTCP, ddp setup and resync Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 18/20] net/mlx5e: NVMEoTCP, async ddp invalidation Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 19/20] net/mlx5e: NVMEoTCP, data-path for DDP+DDGST offload Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 20/20] net/mlx5e: NVMEoTCP, statistics Aurelien Aptel
2024-04-06 5:45 ` Jakub Kicinski [this message]
2024-04-07 22:21 ` [PATCH v24 00/20] nvme-tcp receive offloads Sagi Grimberg
2024-04-09 22:35 ` Chaitanya Kulkarni
2024-04-09 22:59 ` Jakub Kicinski
2024-04-18 8:29 ` Chaitanya Kulkarni
2024-04-18 15:28 ` Jakub Kicinski