From: Boris Pismenny <borispismenny@gmail.com>
To: David Ahern <dsahern@gmail.com>,
Boris Pismenny <borisp@mellanox.com>,
kuba@kernel.org, davem@davemloft.net, saeedm@nvidia.com,
hch@lst.de, sagi@grimberg.me, axboe@fb.com, kbusch@kernel.org,
viro@zeniv.linux.org.uk, edumazet@google.com, smalin@marvell.com
Cc: boris.pismenny@gmail.com, linux-nvme@lists.infradead.org,
netdev@vger.kernel.org, benishay@nvidia.com, ogerlitz@nvidia.com,
yorayz@nvidia.com, Ben Ben-Ishay <benishay@mellanox.com>,
Or Gerlitz <ogerlitz@mellanox.com>,
Yoray Zack <yorayz@mellanox.com>
Subject: Re: [PATCH v2 net-next 06/21] nvme-tcp: Add DDP offload control path
Date: Sun, 31 Jan 2021 09:51:12 +0200
Message-ID: <c9d06a90-4e7d-b5aa-eabb-63b557b8b5d0@gmail.com>
In-Reply-To: <37861060-9651-49c8-e583-2b070914361c@gmail.com>
On 19/01/2021 5:47, David Ahern wrote:
> On 1/14/21 8:10 AM, Boris Pismenny wrote:
>> +static
>> +int nvme_tcp_offload_socket(struct nvme_tcp_queue *queue)
>> +{
>> +	struct net_device *netdev = get_netdev_for_sock(queue->sock->sk, true);
>> +	struct nvme_tcp_ddp_config config = {};
>> +	int ret;
>> +
>> +	if (!netdev) {
>> +		dev_info_ratelimited(queue->ctrl->ctrl.device, "netdev not found\n");
>> +		return -ENODEV;
>> +	}
>> +
>> +	if (!(netdev->features & NETIF_F_HW_TCP_DDP)) {
>> +		dev_put(netdev);
>> +		return -EOPNOTSUPP;
>> +	}
>> +
>> +	config.cfg.type = TCP_DDP_NVME;
>> +	config.pfv = NVME_TCP_PFV_1_0;
>> +	config.cpda = 0;
>> +	config.dgst = queue->hdr_digest ?
>> +		NVME_TCP_HDR_DIGEST_ENABLE : 0;
>> +	config.dgst |= queue->data_digest ?
>> +		NVME_TCP_DATA_DIGEST_ENABLE : 0;
>> +	config.queue_size = queue->queue_size;
>> +	config.queue_id = nvme_tcp_queue_id(queue);
>> +	config.io_cpu = queue->io_cpu;
>> +
>> +	ret = netdev->tcp_ddp_ops->tcp_ddp_sk_add(netdev,
>> +						  queue->sock->sk,
>> +						  (struct tcp_ddp_config *)&config);
>
> typecast is not needed; tcp_ddp_config is an element of nvme_tcp_ddp_config
>
True, will fix, thanks!
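
For reference, a minimal sketch of the fix I have in mind (assuming
the layout from patch 02/21, where struct tcp_ddp_config is the
embedded 'cfg' member of struct nvme_tcp_ddp_config); passing the
embedded member avoids the cast entirely:

	/* Sketch only: pass the embedded generic config, no cast needed. */
	struct nvme_tcp_ddp_config config = {};

	config.cfg.type = TCP_DDP_NVME;
	/* ... fill in pfv, cpda, dgst, queue_size, queue_id, io_cpu as above ... */

	ret = netdev->tcp_ddp_ops->tcp_ddp_sk_add(netdev, queue->sock->sk,
						  &config.cfg);
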
>> +	if (ret) {
>> +		dev_put(netdev);
>> +		return ret;
>> +	}
>> +
>> +	inet_csk(queue->sock->sk)->icsk_ulp_ddp_ops = &nvme_tcp_ddp_ulp_ops;
>> +	if (netdev->features & NETIF_F_HW_TCP_DDP)
>> +		set_bit(NVME_TCP_Q_OFF_DDP, &queue->flags);
>> +
>> +	return ret;
>> +}
>> +
>> +static
>> +void nvme_tcp_unoffload_socket(struct nvme_tcp_queue *queue)
>> +{
>> +	struct net_device *netdev = queue->ctrl->offloading_netdev;
>> +
>> +	if (!netdev) {
>> +		dev_info_ratelimited(queue->ctrl->ctrl.device, "netdev not found\n");
>> +		return;
>> +	}
>> +
>> +	netdev->tcp_ddp_ops->tcp_ddp_sk_del(netdev, queue->sock->sk);
>> +
>> +	inet_csk(queue->sock->sk)->icsk_ulp_ddp_ops = NULL;
>> +	dev_put(netdev); /* put the queue_init get_netdev_for_sock() */
>
> have you validated the netdev reference counts? You have a put here, and ...
>
Yes, the reference counts do balance for the cases we've tested:
interface up/down, connect/disconnect, and up/down during traffic.
It is unfortunate that the flow is not trivial to follow; we'll add
a comment to make it clearer. Also see below.
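
Roughly, the comment we have in mind would spell out the pairing,
something along these lines (wording not final):

	/*
	 * netdev reference lifetime (sketch):
	 *
	 *  nvme_tcp_offload_limits():   get_netdev_for_sock() ... dev_put()
	 *      query-only; the reference is dropped before returning.
	 *
	 *  nvme_tcp_offload_socket():   get_netdev_for_sock()
	 *      held for as long as the queue is offloaded.
	 *
	 *  nvme_tcp_unoffload_socket(): dev_put()
	 *      drops the reference taken by nvme_tcp_offload_socket().
	 */
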
>> +}
>> +
>> +static
>> +int nvme_tcp_offload_limits(struct nvme_tcp_queue *queue)
>> +{
>> +	struct net_device *netdev = get_netdev_for_sock(queue->sock->sk, true);
>
> ... a get here ....
>
>> +	struct tcp_ddp_limits limits;
>> +	int ret = 0;
>> +
>> +	if (!netdev) {
>> +		dev_info_ratelimited(queue->ctrl->ctrl.device, "netdev not found\n");
>> +		return -ENODEV;
>> +	}
>> +
>> +	if (netdev->features & NETIF_F_HW_TCP_DDP &&
>> +	    netdev->tcp_ddp_ops &&
>> +	    netdev->tcp_ddp_ops->tcp_ddp_limits)
>> +		ret = netdev->tcp_ddp_ops->tcp_ddp_limits(netdev, &limits);
>> +	else
>> +		ret = -EOPNOTSUPP;
>> +
>> +	if (!ret) {
>> +		queue->ctrl->offloading_netdev = netdev;
>
>
> ... you have the device here, but then ...
>
>> +		dev_dbg_ratelimited(queue->ctrl->ctrl.device,
>> +				    "netdev %s offload limits: max_ddp_sgl_len %d\n",
>> +				    netdev->name, limits.max_ddp_sgl_len);
>> +		queue->ctrl->ctrl.max_segments = limits.max_ddp_sgl_len;
>> +		queue->ctrl->ctrl.max_hw_sectors =
>> +			limits.max_ddp_sgl_len << (ilog2(SZ_4K) - 9);
>> +	} else {
>> +		queue->ctrl->offloading_netdev = NULL;
>> +	}
>> +
>> +	dev_put(netdev);
>
> ... put here. And this is the limit checking function which seems like
> an odd place to set offloading_netdev vs nvme_tcp_offload_socket which
> sets no queue variable but yet hangs on to a netdev reference count.
>
> netdev reference count leaks are an absolute PITA to find. Code that
> takes and puts the counts should be clear and obvious as to when and
> why. The symmetry of offload and unoffload are clear when the offload
> saves the address in offloading_netdev. What you have now is dubious.
>
The idea is to rely on offload and unoffload to hold the netdev
reference for the duration of the offload. The limits function does
not offload anything; it only queries device limits, which the caller
then applies to the queue. We hold the device here only to ensure
that the callback is still present when we invoke it, and we release
it as soon as we are done, since no context is established on the NIC
and no offload takes place.
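
To illustrate the intended pattern with a simplified sketch (reusing
the names from this patch, error handling trimmed), the get and put
are strictly local to the query:

	struct net_device *netdev = get_netdev_for_sock(queue->sock->sk, true);
	struct tcp_ddp_limits limits;
	int ret;

	if (!netdev)
		return -ENODEV;

	/* Hold the device only across the limits query ... */
	if (netdev->tcp_ddp_ops && netdev->tcp_ddp_ops->tcp_ddp_limits)
		ret = netdev->tcp_ddp_ops->tcp_ddp_limits(netdev, &limits);
	else
		ret = -EOPNOTSUPP;

	/* ... and drop it right away: no per-socket offload state exists yet. */
	dev_put(netdev);
	return ret;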