netdev.vger.kernel.org archive mirror
From: David Ahern <dsahern@kernel.org>
To: Eric Dumazet <edumazet@google.com>
Cc: Eric Dumazet <eric.dumazet@gmail.com>,
	"David S . Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>, netdev <netdev@vger.kernel.org>,
	Coco Li <lixiaoyan@google.com>,
	Alexander Duyck <alexanderduyck@fb.com>,
	Saeed Mahameed <saeedm@nvidia.com>,
	Leon Romanovsky <leon@kernel.org>
Subject: Re: [PATCH v2 net-next 14/14] mlx5: support BIG TCP packets
Date: Sat, 5 Mar 2022 09:36:51 -0700	[thread overview]
Message-ID: <66ea9048-3287-c0d5-6edc-bd4b7ec4bd70@kernel.org> (raw)
In-Reply-To: <CANn89iJKEV6Y+2mY1Gs_zJTrnm+TTXOHoW_D3AWYE0ELijrm+w@mail.gmail.com>

On 3/4/22 10:14 AM, Eric Dumazet wrote:
> On Thu, Mar 3, 2022 at 8:43 PM David Ahern <dsahern@kernel.org> wrote:
>>
>> On 3/3/22 11:16 AM, Eric Dumazet wrote:
>>> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
>>> index b2ed2f6d4a9208aebfd17fd0c503cd1e37c39ee1..1e51ce1d74486392a26568852c5068fe9047296d 100644
>>> --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
>>> +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
>>> @@ -4910,6 +4910,7 @@ static void mlx5e_build_nic_netdev(struct net_device *netdev)
>>>
>>>       netdev->priv_flags       |= IFF_UNICAST_FLT;
>>>
>>> +     netif_set_tso_ipv6_max_size(netdev, 512 * 1024);
>>
>>
>> How does the ConnectX hardware handle fairness for such large packet
>> sizes? For 1500 MTU this means a single large TSO can cause the H/W to
>> generate 349 MTU sized packets. Even a 4k MTU means 128 packets. This
>> has an effect on the rate of packets hitting the next hop switch for
>> example.
> 
> I think ConnectX cards interleave packets from all TX queues, at least
> old CX3 have a parameter to control that.
> 
> Given that we already can send at line rate, from a single TX queue, I
> do not see why presenting larger TSO packets
> would change anything on the wire ?
> 
> Do you think ConnectX adds an extra gap on the wire at the end of a TSO train ?

It's not about one queue; my question was along several lines, e.g.:
1. The inter-packet gap for TSO-generated packets. With 512kB packets
the burst is 8x what it is today (512kB vs the current 64kB limit).

2. Fairness within the hardware: one queue can hold potentially many
512kB packets, with a possible impact on other queues (e.g., higher
latency?), since it will take longer to split the larger packets into
MTU-sized packets.

It is really about understanding the impact this new default size is
going to have on users.
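The burst arithmetic above can be sanity-checked with a short sketch.
Note this is an approximation (one wire packet per MTU of TSO payload);
real segmentation uses the MSS, which is slightly smaller once the
IPv6 and TCP headers are subtracted:

```python
# Approximate how many wire packets a single TSO super-packet fans
# out into, comparing the proposed 512kB tso_ipv6_max_size against
# the traditional 64kB limit.

TSO_BIG = 512 * 1024   # value set in the mlx5 patch above
TSO_OLD = 64 * 1024    # historical 64kB GSO/TSO ceiling

def segments(tso_size, mtu):
    """Approximate packets per TSO, ignoring per-segment headers."""
    return tso_size // mtu

for mtu in (1500, 4096):
    print(f"MTU {mtu}: {segments(TSO_BIG, mtu)} packets per 512kB TSO "
          f"(vs {segments(TSO_OLD, mtu)} per 64kB TSO)")
# MTU 1500: 349 packets per 512kB TSO (vs 43 per 64kB TSO)
# MTU 4096: 128 packets per 512kB TSO (vs 16 per 64kB TSO)
```

This reproduces the 349-packet figure for a 1500 MTU and 128 for 4k,
and shows the 8x burst growth relative to today's 64kB limit.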


Thread overview: 36+ messages
2022-03-03 18:15 [PATCH v2 net-next 00/14] tcp: BIG TCP implementation Eric Dumazet
2022-03-03 18:15 ` [PATCH v2 net-next 01/14] net: add netdev->tso_ipv6_max_size attribute Eric Dumazet
2022-03-03 18:15 ` [PATCH v2 net-next 02/14] ipv6: add dev->gso_ipv6_max_size Eric Dumazet
2022-03-03 18:15 ` [PATCH v2 net-next 03/14] tcp_cubic: make hystart_ack_delay() aware of BIG TCP Eric Dumazet
2022-03-03 18:15 ` [PATCH v2 net-next 04/14] ipv6: add struct hop_jumbo_hdr definition Eric Dumazet
2022-03-04 19:26   ` Alexander H Duyck
2022-03-04 19:28     ` Eric Dumazet
2022-03-03 18:15 ` [PATCH v2 net-next 05/14] ipv6/gso: remove temporary HBH/jumbo header Eric Dumazet
2022-03-03 18:15 ` [PATCH v2 net-next 06/14] ipv6/gro: insert " Eric Dumazet
2022-03-03 18:16 ` [PATCH v2 net-next 07/14] ipv6: add GRO_IPV6_MAX_SIZE Eric Dumazet
2022-03-04  4:37   ` David Ahern
2022-03-04 17:16     ` Eric Dumazet
2022-03-03 18:16 ` [PATCH v2 net-next 08/14] ipv6: Add hop-by-hop header to jumbograms in ip6_output Eric Dumazet
2022-03-04  4:33   ` David Ahern
2022-03-04 15:48     ` Alexander H Duyck
2022-03-04 17:09       ` Eric Dumazet
2022-03-04 19:00         ` Alexander H Duyck
2022-03-04 19:13           ` Eric Dumazet
2022-03-05 16:53             ` David Ahern
2022-03-04 17:47     ` Eric Dumazet
2022-03-05 16:46       ` David Ahern
2022-03-05 18:08         ` Eric Dumazet
2022-03-05 19:06           ` David Ahern
2022-03-05 16:55   ` David Ahern
2022-03-03 18:16 ` [PATCH v2 net-next 09/14] net: loopback: enable BIG TCP packets Eric Dumazet
2022-03-03 18:16 ` [PATCH v2 net-next 10/14] bonding: update dev->tso_ipv6_max_size Eric Dumazet
2022-03-03 18:16 ` [PATCH v2 net-next 11/14] macvlan: enable BIG TCP Packets Eric Dumazet
2022-03-03 18:16 ` [PATCH v2 net-next 12/14] ipvlan: " Eric Dumazet
2022-03-03 18:16 ` [PATCH v2 net-next 13/14] mlx4: support BIG TCP packets Eric Dumazet
2022-03-08 16:03   ` Tariq Toukan
2022-03-03 18:16 ` [PATCH v2 net-next 14/14] mlx5: " Eric Dumazet
2022-03-04  4:42   ` David Ahern
2022-03-04 17:14     ` Eric Dumazet
2022-03-05 16:36       ` David Ahern [this message]
2022-03-05 17:57         ` Eric Dumazet
2022-03-08 16:02   ` Tariq Toukan
