From: Simon Horman <horms@kernel.org>
To: Aurelien Aptel <aaptel@nvidia.com>
Cc: linux-nvme@lists.infradead.org, netdev@vger.kernel.org,
sagi@grimberg.me, hch@lst.de, kbusch@kernel.org, axboe@fb.com,
chaitanyak@nvidia.com, davem@davemloft.net, kuba@kernel.org,
aurelien.aptel@gmail.com, smalin@nvidia.com, malin1024@gmail.com,
ogerlitz@nvidia.com, yorayz@nvidia.com, borisp@nvidia.com,
galshalom@nvidia.com, mgurtovoy@nvidia.com
Subject: Re: [PATCH v22 16/20] net/mlx5e: NVMEoTCP, queue init/teardown
Date: Sat, 23 Dec 2023 17:48:45 +0000 [thread overview]
Message-ID: <20231223174845.GJ201037@kernel.org> (raw)
In-Reply-To: <20231221213358.105704-17-aaptel@nvidia.com>
On Thu, Dec 21, 2023 at 09:33:54PM +0000, Aurelien Aptel wrote:
> From: Ben Ben-Ishay <benishay@nvidia.com>
>
> Add the DDP ops sk_add and sk_del, and the offload limits op.
>
> When nvme-tcp establishes a new queue/connection, the sk_add op is called.
> We allocate a hardware context to offload operations for this queue:
> - use a steering rule based on the connection 5-tuple to mark packets
> of this queue/connection with a flow-tag in their completion (cqe)
> - use a dedicated TIR to identify the queue and maintain the HW context
> - use a dedicated ICOSQ to maintain the HW context by UMR postings
> - use a dedicated tag buffer for buffer registration
> - maintain static and progress HW contexts by posting the proper WQEs.
>
> When nvme-tcp tears down a queue/connection, the sk_del op is called.
> We tear down the queue and free the corresponding contexts.
>
> The offload limits we advertise cover the maximum SGL size supported.
>
> [Re-enabled calling open/close icosq out of en_main.c]
>
> Signed-off-by: Ben Ben-Ishay <benishay@nvidia.com>
> Signed-off-by: Boris Pismenny <borisp@nvidia.com>
> Signed-off-by: Or Gerlitz <ogerlitz@nvidia.com>
> Signed-off-by: Yoray Zack <yorayz@nvidia.com>
> Signed-off-by: Aurelien Aptel <aaptel@nvidia.com>
> Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
...
> +static int
> +mlx5e_nvmeotcp_build_icosq(struct mlx5e_nvmeotcp_queue *queue, struct mlx5e_priv *priv, int io_cpu)
> +{
> + u16 max_sgl, max_klm_per_wqe, max_umr_per_ccid, sgl_rest, wqebbs_rest;
> + struct mlx5e_channel *c = priv->channels.c[queue->channel_ix];
> + struct mlx5e_sq_param icosq_param = {};
> + struct mlx5e_create_cq_param ccp = {};
> + struct dim_cq_moder icocq_moder = {};
> + struct mlx5e_icosq *icosq;
> + int err = -ENOMEM;
> + u16 log_icosq_sz;
> + u32 max_wqebbs;
> +
> + icosq = &queue->sq;
> + max_sgl = mlx5e_get_max_sgl(priv->mdev);
> + max_klm_per_wqe = queue->max_klms_per_wqe;
> + max_umr_per_ccid = max_sgl / max_klm_per_wqe;
> + sgl_rest = max_sgl % max_klm_per_wqe;
> + wqebbs_rest = sgl_rest ? MLX5E_KLM_UMR_WQEBBS(sgl_rest) : 0;
> + max_wqebbs = (MLX5E_KLM_UMR_WQEBBS(max_klm_per_wqe) *
> + max_umr_per_ccid + wqebbs_rest) * queue->size;
> + log_icosq_sz = order_base_2(max_wqebbs);
> +
> + mlx5e_build_icosq_param(priv->mdev, log_icosq_sz, &icosq_param);
> + ccp.napi = &queue->qh.napi;
> + ccp.ch_stats = &priv->channel_stats[queue->channel_ix]->ch;
> + ccp.node = cpu_to_node(io_cpu);
> + ccp.ix = queue->channel_ix;
> +
> + err = mlx5e_open_cq(priv, icocq_moder, &icosq_param.cqp, &ccp, &icosq->cq);
Hi Aurelien and Ben,
This doesn't seem to compile with gcc-13 with allmodconfig on x86_64:
.../nvmeotcp.c: In function 'mlx5e_nvmeotcp_build_icosq':
.../nvmeotcp.c:472:29: error: passing argument 1 of 'mlx5e_open_cq' from incompatible pointer type [-Werror=incompatible-pointer-types]
472 | err = mlx5e_open_cq(priv, icocq_moder, &icosq_param.cqp, &ccp, &icosq->cq);
| ^~~~
| |
| struct mlx5e_priv *
In file included from .../nvmeotcp.h:9,
from .../nvmeotcp.c:7:
....h:1065:41: note: expected 'struct mlx5_core_dev *' but argument is of type 'struct mlx5e_priv *'
1065 | int mlx5e_open_cq(struct mlx5_core_dev *mdev, struct dim_cq_moder moder,
| ~~~~~~~~~~~~~~~~~~~~~~^~~~
cc1: all warnings being treated as errors
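Going only by the prototype quoted above, it looks like mlx5e_open_cq() now takes the core device rather than the priv pointer, so presumably something along these lines is needed (untested):

```diff
-	err = mlx5e_open_cq(priv, icocq_moder, &icosq_param.cqp, &ccp, &icosq->cq);
+	err = mlx5e_open_cq(priv->mdev, icocq_moder, &icosq_param.cqp, &ccp, &icosq->cq);
```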
> + if (err)
> + goto err_nvmeotcp_sq;
> + err = mlx5e_open_icosq(c, &priv->channels.params, &icosq_param, icosq,
> + mlx5e_nvmeotcp_icosq_err_cqe_work);
> + if (err)
> + goto close_cq;
> +
> + spin_lock_init(&queue->sq_lock);
> + return 0;
> +
> +close_cq:
> + mlx5e_close_cq(&icosq->cq);
> +err_nvmeotcp_sq:
> + return err;
> +}
...
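For anyone following along, the ICOSQ sizing arithmetic in the hunk above can be sketched as a standalone program. Note that klm_umr_wqebbs() below is a hypothetical stand-in for the real MLX5E_KLM_UMR_WQEBBS() macro (assumed here to be one control WQEBB plus four KLM entries per WQEBB); the splitting logic and order_base_2() rounding are what the sketch is meant to show, not the exact WQEBB-per-KLM ratio.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for MLX5E_KLM_UMR_WQEBBS(): assume one WQEBB of
 * UMR control segments plus four KLM entries per additional WQEBB. */
static uint16_t klm_umr_wqebbs(uint16_t nklms)
{
	return 1 + (nklms + 3) / 4;
}

/* order_base_2(): smallest n such that (1 << n) >= x, as in the kernel's
 * include/linux/log2.h. */
static uint16_t order_base_2(uint32_t x)
{
	uint16_t n = 0;

	while ((1u << n) < x)
		n++;
	return n;
}

/* WQEBB budget for the ICOSQ: each of the queue's CCIDs may need
 * max_umr_per_ccid full-sized UMR WQEs plus one partial UMR covering the
 * SGL remainder, mirroring mlx5e_nvmeotcp_build_icosq(). */
static uint32_t icosq_max_wqebbs(uint16_t max_sgl, uint16_t max_klm_per_wqe,
				 uint32_t queue_size)
{
	uint16_t max_umr_per_ccid = max_sgl / max_klm_per_wqe;
	uint16_t sgl_rest = max_sgl % max_klm_per_wqe;
	uint16_t wqebbs_rest = sgl_rest ? klm_umr_wqebbs(sgl_rest) : 0;

	return (klm_umr_wqebbs(max_klm_per_wqe) * max_umr_per_ccid +
		wqebbs_rest) * queue_size;
}
```

With e.g. max_sgl = 300, max_klm_per_wqe = 128 and a queue size of 8, the SGL splits into two full UMRs plus a 44-entry remainder per CCID, and the resulting WQEBB total is rounded up to a power of two for the log queue size.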