From: Aurelien Aptel <aaptel@nvidia.com>
To: Sagi Grimberg <sagi@grimberg.me>,
linux-nvme@lists.infradead.org, netdev@vger.kernel.org,
hch@lst.de, kbusch@kernel.org, axboe@fb.com,
chaitanyak@nvidia.com, davem@davemloft.net, kuba@kernel.org
Cc: Yoray Zack <yorayz@nvidia.com>,
aurelien.aptel@gmail.com, smalin@nvidia.com, malin1024@gmail.com,
ogerlitz@nvidia.com, borisp@nvidia.com, galshalom@nvidia.com,
mgurtovoy@nvidia.com
Subject: Re: [PATCH v12 09/26] nvme-tcp: RX DDGST offload
Date: Thu, 10 Aug 2023 17:48:58 +0300 [thread overview]
Message-ID: <253msyzvtph.fsf@nvidia.com> (raw)
In-Reply-To: <2a75b296-edff-3151-7c6e-22209f09a100@grimberg.me>
Sagi Grimberg <sagi@grimberg.me> writes:
> grr.. wondering if this is something we want to support (crc without
> ddp).
We agree; we don't want to support it. We will remove it and add a check
in is_netdev_offload_active() so that it cannot happen.
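For reference, the intent is roughly the following (a hypothetical
sketch; the helper name and parameters are ours for illustration, not
the exact code that will end up in is_netdev_offload_active()):

	/*
	 * Hypothetical sketch: RX DDGST offload is only considered
	 * usable when DDP offload itself is active, so the
	 * crc-without-ddp combination is rejected up front.
	 */
	static bool nvme_tcp_ddgst_rx_usable(bool ddp_active, bool ddgst_rx_cap)
	{
		return ddp_active && ddgst_rx_cap;
	}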
>> + req->ddp.sg_table.sgl = req->ddp.first_sgl;
> Why is this assignment needed? why not pass req->ddp.first_sgl ?
Correct, this assignment is not needed; we will remove it and pass
req->ddp.first_sgl directly.
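That is, the sg_alloc_table_chained() call would take first_sgl as its
first_chunk argument (a sketch assuming the call site from the DDP
data-path patch; the surrounding arguments may differ):

	/* Pass first_sgl directly instead of assigning it to
	 * sg_table.sgl beforehand.
	 */
	ret = sg_alloc_table_chained(&req->ddp.sg_table,
				     blk_rq_nr_phys_segments(rq),
				     req->ddp.first_sgl, SG_CHUNK_SIZE);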
>> static void nvme_tcp_error_recovery(struct nvme_ctrl *ctrl)
>> @@ -1047,7 +1126,8 @@ static int nvme_tcp_recv_pdu(struct nvme_tcp_queue *queue, struct sk_buff *skb,
>> size_t rcv_len = min_t(size_t, *len, queue->pdu_remaining);
>> int ret;
>>
>> - if (test_bit(NVME_TCP_Q_OFF_DDP, &queue->flags))
>> + if (test_bit(NVME_TCP_Q_OFF_DDP, &queue->flags) ||
>> + test_bit(NVME_TCP_Q_OFF_DDGST_RX, &queue->flags))
>
> This now becomes two atomic bitops to check for each capability, where
> its more likely that neighther are on...
>
> Is this really racing with anything? maybe just check with bitwise AND?
> or a local variable (or struct member)
> I don't think that we should add any more overhead for the normal path
> than we already have.
Are you sure test_bit() is atomic? The underlying definitions seem
non-atomic (constant_test_bit or const_test_bit); are we missing
anything?
We were also following the existing NVME_TCP_Q_POLLING pattern, which
uses test_bit(). Should we move to a regular bool flag like
queue->data_digest?
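If the single-read approach is preferred, we could do something like
this (a minimal sketch of the local-variable idea, reusing the existing
flag bits; the helper name is made up):

	/*
	 * Sketch: read queue->flags once and test both offload bits
	 * with plain bitwise ops instead of separate test_bit() calls
	 * on the receive path.
	 */
	static inline bool nvme_tcp_recv_offloaded(struct nvme_tcp_queue *queue)
	{
		unsigned long flags = READ_ONCE(queue->flags);

		return flags & (BIT(NVME_TCP_Q_OFF_DDP) |
				BIT(NVME_TCP_Q_OFF_DDGST_RX));
	}

Otherwise we can switch to a plain bool member next to
queue->data_digest as suggested.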
>> + if (queue->data_digest &&
>> + test_bit(NVME_TCP_Q_OFF_DDGST_RX, &queue->flags))
>
> And a third atomic bitop..
See above
>> + !test_bit(NVME_TCP_Q_OFF_DDGST_RX, &queue->flags))
> and a 4'th atomic bitop...
See above
>> + if (test_bit(NVME_TCP_Q_OFF_DDGST_RX, &queue->flags))
>> + nvme_tcp_ddp_ddgst_update(queue, skb);
>
> and a 5'th atomic bitop...
See above
>> + if (test_bit(NVME_TCP_Q_OFF_DDGST_RX, &queue->flags)) {
> and a 6'th... ok this is just spraying atomic bitops on the data
> path. Please find a better solution to this.
See above
Thanks
Thread overview: 62+ messages
2023-07-12 16:14 [PATCH v12 00/26] nvme-tcp receive offloads Aurelien Aptel
2023-07-12 16:14 ` [PATCH v12 01/26] net: Introduce direct data placement tcp offload Aurelien Aptel
2023-08-09 7:15 ` Sagi Grimberg
2023-08-10 14:46 ` Aurelien Aptel
2023-07-12 16:14 ` [PATCH v12 02/26] net/ethtool: add new stringset ETH_SS_ULP_DDP_{CAPS,STATS} Aurelien Aptel
2023-07-12 16:14 ` [PATCH v12 03/26] net/ethtool: add ULP_DDP_{GET,SET} operations for caps and stats Aurelien Aptel
2023-07-15 10:14 ` Simon Horman
2023-07-17 9:45 ` Aurelien Aptel
2023-07-12 16:14 ` [PATCH v12 04/26] Documentation: document netlink ULP_DDP_GET/SET messages Aurelien Aptel
2023-07-15 10:17 ` Simon Horman
2023-07-17 9:47 ` Aurelien Aptel
2023-07-12 16:14 ` [PATCH v12 05/26] iov_iter: skip copy if src == dst for direct data placement Aurelien Aptel
2023-08-16 0:24 ` Max Gurtovoy
2023-07-12 16:14 ` [PATCH v12 06/26] net/tls,core: export get_netdev_for_sock Aurelien Aptel
2023-07-12 16:14 ` [PATCH v12 07/26] nvme-tcp: Add DDP offload control path Aurelien Aptel
2023-08-01 2:25 ` Chaitanya Kulkarni
2023-08-09 7:39 ` Sagi Grimberg
2023-08-11 5:28 ` Chaitanya Kulkarni
2023-08-16 0:50 ` Max Gurtovoy
2023-08-09 7:13 ` Sagi Grimberg
2023-08-14 16:11 ` Aurelien Aptel
2023-08-14 18:54 ` Sagi Grimberg
2023-08-16 12:30 ` Aurelien Aptel
2023-07-12 16:14 ` [PATCH v12 08/26] nvme-tcp: Add DDP data-path Aurelien Aptel
2023-08-09 7:35 ` Sagi Grimberg
2023-08-14 16:12 ` Aurelien Aptel
2023-08-14 19:01 ` Sagi Grimberg
2023-08-17 13:28 ` Aurelien Aptel
2023-07-12 16:14 ` [PATCH v12 09/26] nvme-tcp: RX DDGST offload Aurelien Aptel
2023-08-09 7:59 ` Sagi Grimberg
2023-08-10 14:48 ` Aurelien Aptel [this message]
2023-08-13 13:49 ` Sagi Grimberg
2023-07-12 16:14 ` [PATCH v12 10/26] nvme-tcp: Deal with netdevice DOWN events Aurelien Aptel
2023-08-09 8:00 ` Sagi Grimberg
2023-08-16 13:03 ` Aurelien Aptel
2023-08-16 14:10 ` Sagi Grimberg
2023-08-17 14:09 ` Aurelien Aptel
2023-08-20 10:50 ` Sagi Grimberg
2023-08-21 12:33 ` Aurelien Aptel
2023-07-12 16:14 ` [PATCH v12 11/26] nvme-tcp: Add modparam to control the ULP offload enablement Aurelien Aptel
2023-08-09 8:03 ` Sagi Grimberg
2023-08-10 14:50 ` Aurelien Aptel
2023-08-16 1:05 ` Max Gurtovoy
2023-07-12 16:14 ` [PATCH v12 12/26] nvme-tcp: Only enable offload with TLS if the driver supports it Aurelien Aptel
2023-08-09 8:05 ` Sagi Grimberg
2023-08-10 14:52 ` Aurelien Aptel
2023-07-12 16:15 ` [PATCH v12 13/26] Documentation: add ULP DDP offload documentation Aurelien Aptel
2023-07-15 10:32 ` Simon Horman
2023-07-17 9:48 ` Aurelien Aptel
2023-07-12 16:15 ` [PATCH v12 14/26] net/mlx5e: Rename from tls to transport static params Aurelien Aptel
2023-07-12 16:15 ` [PATCH v12 15/26] net/mlx5e: Refactor ico sq polling to get budget Aurelien Aptel
2023-07-12 16:15 ` [PATCH v12 16/26] net/mlx5e: Have mdev pointer directly on the icosq structure Aurelien Aptel
2023-07-12 16:15 ` [PATCH v12 17/26] net/mlx5e: Refactor doorbell function to allow avoiding a completion Aurelien Aptel
2023-07-12 16:15 ` [PATCH v12 18/26] net/mlx5: Add NVMEoTCP caps, HW bits, 128B CQE and enumerations Aurelien Aptel
2023-07-12 16:15 ` [PATCH v12 19/26] net/mlx5e: NVMEoTCP, offload initialization Aurelien Aptel
2023-07-12 16:15 ` [PATCH v12 20/26] net/mlx5e: TCP flow steering for nvme-tcp acceleration Aurelien Aptel
2023-07-12 16:15 ` [PATCH v12 21/26] net/mlx5e: NVMEoTCP, use KLM UMRs for buffer registration Aurelien Aptel
2023-07-12 16:15 ` [PATCH v12 22/26] net/mlx5e: NVMEoTCP, queue init/teardown Aurelien Aptel
2023-07-12 16:15 ` [PATCH v12 23/26] net/mlx5e: NVMEoTCP, ddp setup and resync Aurelien Aptel
2023-07-12 16:15 ` [PATCH v12 24/26] net/mlx5e: NVMEoTCP, async ddp invalidation Aurelien Aptel
2023-07-12 16:15 ` [PATCH v12 25/26] net/mlx5e: NVMEoTCP, data-path for DDP+DDGST offload Aurelien Aptel
2023-07-12 16:15 ` [PATCH v12 26/26] net/mlx5e: NVMEoTCP, statistics Aurelien Aptel