From: Saeed Mahameed <saeed@kernel.org>
To: Coco Li <lixiaoyan@google.com>
Cc: "David S. Miller" <davem@davemloft.net>,
Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>,
David Ahern <dsahern@kernel.org>,
Eric Dumazet <edumazet@google.com>,
Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
Michael Chan <michael.chan@broadcom.com>,
netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [RFC net-next v5 2/2] bnxt: Use generic HBH removal helper in tx path
Date: Wed, 7 Dec 2022 16:32:58 -0800
Message-ID: <Y5EwunX89Nq59vf0@x130>
In-Reply-To: <20221207225435.1273226-2-lixiaoyan@google.com>

On 07 Dec 14:54, Coco Li wrote:
>Eric Dumazet implemented Big TCP that allowed bigger TSO/GRO packet sizes
>for IPv6 traffic. See patch series:
>'commit 89527be8d8d6 ("net: add IFLA_TSO_{MAX_SIZE|SEGS} attributes")'
>
>This reduces the number of packets traversing the networking stack and
>should usually improve performance. However, it also inserts a
>temporary Hop-by-hop IPv6 extension header.
>
>Using the HBH header removal method in the previous path, the extra header
^ patch
>can be removed in the bnxt driver to allow it to send big TCP packets (bigger
>TSO packets) as well.
>
I think Eric didn't expose this function because it isn't efficient for
drivers that are already processing the headers separately from the
payload for LSO packets. The trick is to have an optimized copy method
that depends on your driver's xmit function: usually you would just
memcpy the TCP header over the HBH header exactly at the point where you
copy/process those headers into the HW descriptor.
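
As a rough sketch of what I mean (hypothetical names, not actual bnxt
code -- this assumes a driver that stages its LSO headers into a local
buffer before writing the TX descriptor):

#include <linux/skbuff.h>
#include <linux/string.h>
#include <linux/tcp.h>
#include <net/ipv6.h>

/* Header layout on entry for a Big TCP packet:
 *   [Ethernet header][IPv6 header][HBH jumbo option][TCP header]
 * drv_copy_lso_headers() and hdr_buf are made-up names.
 */
static unsigned int drv_copy_lso_headers(const struct sk_buff *skb,
					 u8 *hdr_buf)
{
	unsigned int l3_end = skb_network_offset(skb) +
			      sizeof(struct ipv6hdr);
	unsigned int tcp_off = skb_transport_offset(skb);
	unsigned int tcp_len = tcp_hdrlen(skb);
	struct ipv6hdr *ip6h;

	if (!ipv6_has_hopopt_jumbo(skb)) {
		/* No jumbo HBH option: copy the headers as-is. */
		memcpy(hdr_buf, skb->data, tcp_off + tcp_len);
		return tcp_off + tcp_len;
	}

	/* Copy L2 + IPv6 headers, then place the TCP header directly
	 * after the IPv6 header, skipping the 8-byte jumbo HBH option.
	 * No extra pass over skb->data as ipv6_hopopt_jumbo_remove()
	 * would do with its memmove() + __skb_pull().
	 */
	memcpy(hdr_buf, skb->data, l3_end);
	memcpy(hdr_buf + l3_end, skb->data + tcp_off, tcp_len);

	/* Fix up nexthdr in the staged copy; like the generic helper,
	 * leave payload_len alone -- the LSO engine recomputes it per
	 * segment anyway.
	 */
	ip6h = (struct ipv6hdr *)(hdr_buf + skb_network_offset(skb));
	ip6h->nexthdr = IPPROTO_TCP;

	return l3_end + tcp_len;
}

The HW then segments from this staged copy, so the temporary HBH option
never reaches the wire and the skb data itself doesn't have to be
rewritten.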
>Tested:
>Compiled locally
>
>To further test functional correctness, update the GSO/GRO limit on the
>physical NIC:
>
>ip link set eth0 gso_max_size 181000
>ip link set eth0 gro_max_size 181000
>
>Note that if there are bonding or ipvlan devices on top of the physical
>NIC, their GSO sizes need to be updated as well.
>
>Then, IPv6/TCP packets with sizes larger than 64k can be observed.
>
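
(Side note for anyone reproducing the test above: with e.g. a
hypothetical bond0 enslaving eth0, that would be the same two commands
again:

ip link set bond0 gso_max_size 181000
ip link set bond0 gro_max_size 181000

and likewise for any ipvlan device in the stack.)
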
>Big TCP functionality is tested by Michael; the feature checks are not
>tested yet.
>
>Tested by Michael:
>I've confirmed with our hardware team that this is supported by our
>chips, and I've tested it up to gso_max_size of 524280. Thanks.
>
>Tested-by: Michael Chan <michael.chan@broadcom.com>
>Reviewed-by: Michael Chan <michael.chan@broadcom.com>
>Signed-off-by: Coco Li <lixiaoyan@google.com>
>---
> drivers/net/ethernet/broadcom/bnxt/bnxt.c | 26 ++++++++++++++++++++++-
> 1 file changed, 25 insertions(+), 1 deletion(-)
>
>diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
>index 0fe164b42c5d..6ba1cd342a80 100644
>--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
>+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
>@@ -389,6 +389,9 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
> return NETDEV_TX_BUSY;
> }
>
>+ if (unlikely(ipv6_hopopt_jumbo_remove(skb)))
>+ goto tx_free;
>+
> length = skb->len;
> len = skb_headlen(skb);
> last_frag = skb_shinfo(skb)->nr_frags;
>@@ -11315,6 +11318,7 @@ static bool bnxt_exthdr_check(struct bnxt *bp, struct sk_buff *skb, int nw_off,
> u8 **nextp)
> {
> struct ipv6hdr *ip6h = (struct ipv6hdr *)(skb->data + nw_off);
>+ struct hop_jumbo_hdr *jhdr;
> int hdr_count = 0;
> u8 *nexthdr;
> int start;
>@@ -11342,9 +11346,27 @@ static bool bnxt_exthdr_check(struct bnxt *bp, struct sk_buff *skb, int nw_off,
>
> if (hdrlen > 64)
> return false;
>+
>+ /* The ext header may be a hop-by-hop header inserted for
>+ * big TCP purposes. This will be removed before sending
>+ * from NIC, so do not count it.
>+ */
>+ if (*nexthdr == NEXTHDR_HOP) {
>+ if (likely(skb->len <= GRO_LEGACY_MAX_SIZE))
>+ goto increment_hdr;
>+
>+ jhdr = (struct hop_jumbo_hdr *)nexthdr;
>+ if (jhdr->tlv_type != IPV6_TLV_JUMBO || jhdr->hdrlen != 0 ||
>+ jhdr->nexthdr != IPPROTO_TCP)
>+ goto increment_hdr;
>+
>+ goto next_hdr;
>+ }
>+increment_hdr:
>+ hdr_count++;
>+next_hdr:
> nexthdr = &hp->nexthdr;
> start += hdrlen;
>- hdr_count++;
> }
> if (nextp) {
> /* Caller will check inner protocol */
>@@ -13657,6 +13679,8 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
> dev->features &= ~NETIF_F_LRO;
> dev->priv_flags |= IFF_UNICAST_FLT;
>
>+ netif_set_tso_max_size(dev, GSO_MAX_SIZE);
>+
> #ifdef CONFIG_BNXT_SRIOV
> init_waitqueue_head(&bp->sriov_cfg_wait);
> #endif
>--
>2.39.0.rc0.267.gcb52ba06e7-goog
>
Thread overview: 6+ messages
2022-12-07 22:54 [PATCH net-next v5 1/2] IPv6/GRO: generic helper to remove temporary HBH/jumbo header in driver Coco Li
2022-12-07 22:54 ` [RFC net-next v5 2/2] bnxt: Use generic HBH removal helper in tx path Coco Li
2022-12-08 0:32 ` Saeed Mahameed [this message]
2022-12-10 3:53 ` Coco Li
2022-12-08 19:54 ` Michael Chan
2022-12-08 2:21 ` [PATCH net-next v5 1/2] IPv6/GRO: generic helper to remove temporary HBH/jumbo header in driver Eric Dumazet