From: Aurelien Aptel <aaptel@nvidia.com>
To: Sagi Grimberg <sagi@grimberg.me>,
linux-nvme@lists.infradead.org, netdev@vger.kernel.org,
hch@lst.de, kbusch@kernel.org, axboe@fb.com,
chaitanyak@nvidia.com, davem@davemloft.net, kuba@kernel.org
Cc: Boris Pismenny <borisp@nvidia.com>,
aurelien.aptel@gmail.com, smalin@nvidia.com, malin1024@gmail.com,
ogerlitz@nvidia.com, yorayz@nvidia.com, galshalom@nvidia.com,
mgurtovoy@nvidia.com, edumazet@google.com, pabeni@redhat.com,
dsahern@kernel.org, ast@kernel.org, jacob.e.keller@intel.com
Subject: Re: [PATCH v24 01/20] net: Introduce direct data placement tcp offload
Date: Thu, 02 May 2024 10:04:11 +0300
Message-ID: <253frv0r8yc.fsf@nvidia.com>
In-Reply-To: <2d4f4468-343a-4706-8469-56990c287dba@grimberg.me>
Sagi Grimberg <sagi@grimberg.me> writes:
> Well, you cannot rely on the fact that the application will be pinned to a
> specific cpu core. That may be the case by accident, but you must not and
> cannot assume it.
Just to be clear, any CPU can read from the socket and benefit from the
offload, but there will be an extra cost if the queue CPU differs from
the offload CPU. We use cfg->io_cpu as a hint.
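For illustration, this is all the hint amounts to at offload setup time
(a sketch; the config struct is the one this series adds, other fields
elided):

	/* cfg->io_cpu is a best-effort placement hint filled in at
	 * offload setup; it never affects correctness.  Reads from any
	 * CPU still get direct data placement, a matching CPU is just
	 * cheaper.
	 */
	cfg->io_cpu = queue->io_cpu;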
> Even today, nvme-tcp has an option to run from an unbound wq context,
> where queue->io_cpu is set to WORK_CPU_UNBOUND. What are you going to
> do there?
When the queue is not bound to a specific core, we will most likely
always have CPU misalignment and pay the extra cost that goes with it.
But when it is bound, which is still the default and most common case,
we will benefit from the alignment. To keep that benefit in the default
case, we would like to keep cfg->io_cpu.
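To make the bound vs. unbound distinction concrete, here is roughly
what queue->io_cpu ends up being in each mode (a simplified sketch;
the real nvme_tcp_set_queue_io_cpu() spreads queues across the online
CPUs more carefully):

	/* Hypothetical helper, simplified from the driver's placement
	 * policy, to show the two modes side by side.
	 */
	static void sketch_set_io_cpu(struct nvme_tcp_queue *queue, int qid)
	{
		if (wq_unbound)
			/* rx work may run on any CPU: alignment is lost */
			queue->io_cpu = WORK_CPU_UNBOUND;
		else
			/* bound (default): a stable per-queue CPU that
			 * cfg->io_cpu can meaningfully point at
			 */
			queue->io_cpu = qid % num_online_cpus();
	}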
Could you clarify what the advantages are of running unbound queues,
or of handling RX on a different CPU than the current io_cpu?
> nvme-tcp may handle rx side directly from .data_ready() in the future, what
> will the offload do in that case?
It is not clear to us what benefit handling RX in .data_ready() would
bring. From our experiments, ->sk_data_ready() is called either from
queue->io_cpu or from sk->sk_incoming_cpu. Unless you enable aRFS,
sk_incoming_cpu will be constant for the whole connection. Can you
clarify what handling RX from .data_ready() would provide?
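For reference, the observation above is easy to reproduce with a
trivial debug hook along these lines (hypothetical instrumentation,
not part of the series):

	/* Log when ->sk_data_ready() runs on a different CPU than the
	 * one the stack steered the socket to.
	 */
	static void ddp_debug_data_ready(struct sock *sk)
	{
		int cpu = raw_smp_processor_id();

		if (cpu != READ_ONCE(sk->sk_incoming_cpu))
			pr_debug("rx on cpu %d, sk_incoming_cpu %d\n",
				 cpu, READ_ONCE(sk->sk_incoming_cpu));
	}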
> io_cpu may or may not mean anything. You cannot rely on it, nor dictate it.
We are just interested in optimizing the bound case, where io_cpu is
meaningful.
> > - or we remove cfg->io_cpu, and we offload the socket from
> > nvme_tcp_io_work() where the io_cpu is implicitly going to be
> > the current CPU.
> What do you mean offload the socket from nvme_tcp_io_work? I do not
> understand what this means.
We meant setting up the offload from the io thread instead, by calling
nvme_tcp_offload_socket() from nvme_tcp_io_work(), and making sure it's
only called once. Something like this:
+	/* Offload once, from the io thread, so that the offload CPU
+	 * matches the CPU actually doing the socket I/O.
+	 */
+	if (queue->ctrl->ddp_netdev && !nvme_tcp_admin_queue(queue) &&
+	    !test_bit(NVME_TCP_Q_OFF_DDP, &queue->flags)) {
+		int ret;
+
+		ret = nvme_tcp_offload_socket(queue);
+		if (ret)
+			dev_warn(queue->ctrl->ctrl.device,
+				 "failed to set up ddp offload: %d\n", ret);
+	}
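The nice property of this variant is that nvme_tcp_io_work() runs on
the CPU actually doing the socket I/O, so the offload would implicitly
latch onto the right CPU without needing an explicit cfg->io_cpu hint.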