From: Richard Gobert <richardbgobert@gmail.com>
To: Willem de Bruijn <willemdebruijn.kernel@gmail.com>,
davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
pabeni@redhat.com, shuah@kernel.org, dsahern@kernel.org,
aduyck@mirantis.com, netdev@vger.kernel.org,
linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org
Subject: Re: [PATCH net-next v6 5/6] net: gro: move L3 flush checks to tcp_gro_receive and udp_gro_receive_segment
Date: Thu, 11 Apr 2024 18:07:13 +0200
Message-ID: <24daf0f8-1e81-4bdb-81f3-0f95bf4392f4@gmail.com>
In-Reply-To: <66174ec5bbd29_2d6bc629481@willemb.c.googlers.com.notmuch>
Willem de Bruijn wrote:
> Richard Gobert wrote:
>> {inet,ipv6}_gro_receive functions perform flush checks (ttl, flags,
>> iph->id, ...) against all packets in a loop. These flush checks are used
>> currently only in tcp flows in GRO.
>>
>> These checks need to be done only once in tcp_gro_receive and only against
>> the found p skb, since they only affect flush and not same_flow.
>
> I don't quite understand where the performance improvements arise.
> As inet_gro_receive will skip any p that does not match:
>
> if (!NAPI_GRO_CB(p)->same_flow)
> continue;
>
> iph2 = (struct iphdr *)(p->data + off);
> /* The above works because, with the exception of the top
> * (inner most) layer, we only aggregate pkts with the same
> * hdr length so all the hdrs we'll need to verify will start
> * at the same offset.
> */
> if ((iph->protocol ^ iph2->protocol) |
> ((__force u32)iph->saddr ^ (__force u32)iph2->saddr) |
> ((__force u32)iph->daddr ^ (__force u32)iph2->daddr)) {
> NAPI_GRO_CB(p)->same_flow = 0;
> continue;
> }
>
> So these checks are already only performed against a p that matches.
>
Thanks for the review!
flush/flush_id is currently calculated for every p in the bucket that has
same_flow = 1 (which is not always cleared to 0 before inet_gro_receive
runs) and matching src/dst addresses. Moving the checks to
udp_gro_receive_segment/tcp_gro_receive makes them run only once, against
the single matching p.
In addition, for UDP flows where skb_gro_receive_list is called,
flush/flush_id is not relevant and does not need to be calculated at all,
so total CPU time in GRO should drop in these cases. I could post perf
numbers for this flow as well.
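To illustrate the shape of the moved check (a simplified userspace sketch,
not the actual kernel code - struct and function names here are stand-ins
for struct iphdr and the inet_gro_flush() helper from the patch, and only
a few of the real flush conditions are modeled):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-in for struct iphdr; fields kept in host order
 * for clarity. This is an illustrative model, not kernel code. */
struct fake_iphdr {
	uint8_t  ttl;
	uint8_t  tos;
	uint16_t id;	/* IP identification field */
};

/* Model of the per-flow L3 flush test: with the patch, it is called
 * once from the transport-layer callback, against the single matched
 * p - not inside the loop over every p in the bucket. */
static int sketch_inet_gro_flush(const struct fake_iphdr *iph,
				 const struct fake_iphdr *iph2,
				 int segs_held)
{
	int flush = 0;

	/* TTL and TOS must match across aggregated segments. */
	flush |= (iph->ttl != iph2->ttl);
	flush |= (iph->tos != iph2->tos);

	/* IP ID must advance by the number of segments already held
	 * (the "flush_id" logic that used to live in napi_gro_cb). */
	flush |= ((uint16_t)(iph->id - iph2->id) != (uint16_t)segs_held);

	return flush;
}
```

For skb_gro_receive_list flows this function would simply never be called,
which is where the UDP CPU-time saving comes from.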
>> Leveraging the previous commit in the series, in which correct network
>> header offsets are saved for both outer and inner network headers -
>> allowing these checks to be done only once, in tcp_gro_receive. As a
>
> Comments should be updated to reflect both TCP and L4 UDP. Can
> generalize to transport callbacks.
>
>> result, NAPI_GRO_CB(p)->flush is not used at all. In addition, flush_id
>> checks are more declarative and contained in inet_gro_flush, thus removing
>> the need for flush_id in napi_gro_cb.
>>
>> This results in less parsing code for UDP flows and non-loop flush tests
>> for TCP flows.
>
> This moves network layer tests out of the network layer callbacks into
> helpers called from the transport layer callback. And then the helper
> has to look up the network layer header and demultiplex the protocol
> again:
>
> + if (((struct iphdr *)nh)->version == 6)
> + flush |= ipv6_gro_flush(nh, nh2);
> + else
> + flush |= inet_gro_flush(nh, nh2, p, i != encap_mark);
>
> That just seems a bit roundabout.
IMO this commit could be part of a larger change, in which all loops in
gro_list_prepare, inet_gro_receive and ipv6_gro_receive are removed and
the logic for finding a matching p moves to L4. Once p is found, the rest
of the gro_list would not need to be traversed, and so its cache lines
would not even be dirtied. I can provide a code snippet which would
explain it better.
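A rough userspace sketch of that idea (all names here are hypothetical and
only model the proposal, not any existing kernel code): the transport-layer
callback walks the bucket itself and stops at the first full 5-tuple match,
so entries after the match are never read at all.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical model of moving flow matching to L4. */
struct flow_key {
	uint32_t saddr, daddr;
	uint16_t sport, dport;
};

struct seg {
	struct flow_key key;
	struct seg *next;
	int touched;	/* instrumentation: was this entry read? */
};

/* Walk the bucket and return the first full 5-tuple match. Entries
 * after the match are never visited, so their cache lines stay clean -
 * unlike today's gro_list_prepare loop, which marks same_flow on every
 * entry in the bucket. */
static struct seg *l4_find_match(struct seg *bucket,
				 const struct flow_key *k)
{
	struct seg *p;

	for (p = bucket; p; p = p->next) {
		p->touched = 1;
		if (p->key.saddr == k->saddr && p->key.daddr == k->daddr &&
		    p->key.sport == k->sport && p->key.dport == k->dport)
			return p;	/* stop: rest of the list untouched */
	}
	return NULL;
}
```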
Thread overview: 15+ messages
2024-04-10 15:34 [PATCH net-next v6 0/6] net: gro: encapsulation bug fix and flush checks improvements Richard Gobert
2024-04-10 15:34 ` [PATCH net-next v6 1/6] net: gro: add flush check in udp_gro_receive_segment Richard Gobert
2024-04-10 15:34 ` [PATCH net-next v6 2/6] net: gro: add p_off param in *_gro_complete Richard Gobert
2024-04-11 2:21 ` Willem de Bruijn
2024-04-11 3:44 ` Willem de Bruijn
2024-04-11 16:00 ` Richard Gobert
2024-04-11 16:02 ` Willem de Bruijn
2024-04-10 15:34 ` [PATCH net-next v6 3/6] selftests/net: add local address bind in vxlan selftest Richard Gobert
2024-04-10 15:34 ` [PATCH net-next v6 4/6] net: gro: add {inner_}network_offset to napi_gro_cb Richard Gobert
2024-04-10 15:34 ` [PATCH net-next v6 5/6] net: gro: move L3 flush checks to tcp_gro_receive and udp_gro_receive_segment Richard Gobert
2024-04-11 2:45 ` Willem de Bruijn
2024-04-11 16:07 ` Richard Gobert [this message]
2024-04-11 21:35 ` Willem de Bruijn
2024-04-12 15:37 ` Richard Gobert
2024-04-10 15:34 ` [PATCH net-next v6 6/6] selftests/net: add flush id selftests Richard Gobert