From: Aurelien Aptel <aaptel@nvidia.com>
To: Jiri Pirko <jiri@resnulli.us>
Cc: linux-nvme@lists.infradead.org, netdev@vger.kernel.org,
	sagi@grimberg.me, hch@lst.de, kbusch@kernel.org, axboe@fb.com,
	chaitanyak@nvidia.com, davem@davemloft.net, kuba@kernel.org,
	Boris Pismenny <borisp@nvidia.com>,
	aurelien.aptel@gmail.com, smalin@nvidia.com, malin1024@gmail.com,
	ogerlitz@nvidia.com, yorayz@nvidia.com, galshalom@nvidia.com,
	mgurtovoy@nvidia.com, edumazet@google.com, pabeni@redhat.com,
	dsahern@kernel.org, imagedong@tencent.com, ast@kernel.org,
	jacob.e.keller@intel.com
Subject: Re: [PATCH v17 01/20] net: Introduce direct data placement tcp offload
Date: Fri, 27 Oct 2023 12:07:58 +0300
Message-ID: <253jzr8juvl.fsf@nvidia.com>
In-Reply-To: <ZTfSOv0F7licIO6Y@nanopsycho>

Jiri Pirko <jiri@resnulli.us> writes:
>>@@ -2134,6 +2146,9 @@ struct net_device {
>>       netdev_features_t       mpls_features;
>>       netdev_features_t       gso_partial_features;
>>
>>+#ifdef CONFIG_ULP_DDP
>>+      struct ulp_ddp_netdev_caps ulp_ddp_caps;
>
> Why can't you have this inside the driver? You have set_caps/get_stats
> ops. Try to avoid netdev struct pollution.

Ok, we will move ulp_ddp_caps to the driver and add a get_caps() operation.
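
To sketch the intent (the get_caps member and the mlx5e field name below
are only for illustration, exact names still to be decided):

  /* ulp_ddp.h: new op next to the existing set_caps/get_stats ops */
  struct ulp_ddp_dev_ops {
          /* ... existing ops ... */
          void (*get_caps)(struct net_device *netdev,
                           struct ulp_ddp_netdev_caps *caps);
  };

  /* mlx5e side (illustrative): report the driver-private copy */
  static void mlx5e_ulp_ddp_get_caps(struct net_device *netdev,
                                     struct ulp_ddp_netdev_caps *caps)
  {
          struct mlx5e_priv *priv = netdev_priv(netdev);

          *caps = priv->nvmeotcp_caps;
  }

so struct net_device no longer carries the field and the core queries the
driver whenever it needs the capabilities.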

>>+struct netlink_ulp_ddp_stats {
> There is nothing "netlink" about this. Just stats. Exposed over netlink,
> yes, but that does not need the prefix.

Ok, we will remove the netlink prefix.
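
i.e. something like this (sketch only, the get_stats prototype is written
from memory of this series and may differ slightly):

  /* same counters as before, only the prefix goes away */
  struct ulp_ddp_stats {
          /* ... rx/tx DDP and DDGST counters unchanged ... */
  };

  struct ulp_ddp_dev_ops {
          /* ... */
          int (*get_stats)(struct net_device *netdev,
                           struct ulp_ddp_stats *stats);
  };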

>>+enum {
>>+      ULP_DDP_C_NVME_TCP_BIT,
>>+      ULP_DDP_C_NVME_TCP_DDGST_RX_BIT,
>>+
>>+      /*
>>+       * add capabilities above and keep in sync with
>>+       * Documentation/netlink/specs/ulp_ddp.yaml
>
> Wait what? Why do you need this at all? Just use the uapi enum.

The generated enum does not define a "count" value (ULP_DDP_C_COUNT),
which we need in order to know how big the capability bitfield should
be. Maybe the code generator could be patched to emit a #define with
the number of values in the enum?
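
For context, the count is what sizes the bitmaps (bitmap field names here
are for illustration):

  /* hand-maintained today, kept in sync with the YAML spec */
  enum {
          ULP_DDP_C_NVME_TCP_BIT,
          ULP_DDP_C_NVME_TCP_DDGST_RX_BIT,

          ULP_DDP_C_COUNT,
  };

  struct ulp_ddp_netdev_caps {
          DECLARE_BITMAP(active, ULP_DDP_C_COUNT);
          DECLARE_BITMAP(hw, ULP_DDP_C_COUNT);
  };

If the generator could also emit a "number of values" define for the uapi
enum (say, ULP_DDP_CAP_COUNT), we could drop this local copy and size the
bitmaps from the generated header instead.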


Thread overview: 28+ messages
2023-10-24 12:54 [PATCH v17 00/20] nvme-tcp receive offloads Aurelien Aptel
2023-10-24 12:54 ` [PATCH v17 01/20] net: Introduce direct data placement tcp offload Aurelien Aptel
2023-10-24 14:18   ` Jiri Pirko
2023-10-27  9:07     ` Aurelien Aptel [this message]
2023-10-27  9:30       ` Jiri Pirko
2023-10-24 12:54 ` [PATCH v17 02/20] netlink: add new family to manage ULP_DDP enablement and stats Aurelien Aptel
2023-10-24 13:58   ` Jiri Pirko
2023-10-24 14:59     ` Jakub Kicinski
2023-10-27  9:11     ` Aurelien Aptel
2023-10-27  9:28       ` Jiri Pirko
2023-10-24 12:54 ` [PATCH v17 03/20] iov_iter: skip copy if src == dst for direct data placement Aurelien Aptel
2023-10-24 12:54 ` [PATCH v17 04/20] net/tls,core: export get_netdev_for_sock Aurelien Aptel
2023-10-24 12:54 ` [PATCH v17 05/20] nvme-tcp: Add DDP offload control path Aurelien Aptel
2023-10-24 12:54 ` [PATCH v17 06/20] nvme-tcp: Add DDP data-path Aurelien Aptel
2023-10-24 12:54 ` [PATCH v17 07/20] nvme-tcp: RX DDGST offload Aurelien Aptel
2023-10-24 12:54 ` [PATCH v17 08/20] nvme-tcp: Deal with netdevice DOWN events Aurelien Aptel
2023-10-24 12:54 ` [PATCH v17 09/20] Documentation: add ULP DDP offload documentation Aurelien Aptel
2023-10-24 12:54 ` [PATCH v17 10/20] net/mlx5e: Rename from tls to transport static params Aurelien Aptel
2023-10-24 12:54 ` [PATCH v17 11/20] net/mlx5e: Refactor ico sq polling to get budget Aurelien Aptel
2023-10-24 12:54 ` [PATCH v17 12/20] net/mlx5: Add NVMEoTCP caps, HW bits, 128B CQE and enumerations Aurelien Aptel
2023-10-24 12:54 ` [PATCH v17 13/20] net/mlx5e: NVMEoTCP, offload initialization Aurelien Aptel
2023-10-24 12:54 ` [PATCH v17 14/20] net/mlx5e: TCP flow steering for nvme-tcp acceleration Aurelien Aptel
2023-10-24 12:54 ` [PATCH v17 15/20] net/mlx5e: NVMEoTCP, use KLM UMRs for buffer registration Aurelien Aptel
2023-10-24 12:54 ` [PATCH v17 16/20] net/mlx5e: NVMEoTCP, queue init/teardown Aurelien Aptel
2023-10-24 12:54 ` [PATCH v17 17/20] net/mlx5e: NVMEoTCP, ddp setup and resync Aurelien Aptel
2023-10-24 12:54 ` [PATCH v17 18/20] net/mlx5e: NVMEoTCP, async ddp invalidation Aurelien Aptel
2023-10-24 12:54 ` [PATCH v17 19/20] net/mlx5e: NVMEoTCP, data-path for DDP+DDGST offload Aurelien Aptel
2023-10-24 12:54 ` [PATCH v17 20/20] net/mlx5e: NVMEoTCP, statistics Aurelien Aptel
