From: Jesper Dangaard Brouer <hawk@kernel.org>
To: Daniel Borkmann <daniel@iogearbox.net>, bpf@vger.kernel.org
Cc: Alexei Starovoitov <ast@kernel.org>,
Daniel Borkmann <borkmann@iogearbox.net>,
martin.lau@kernel.org, netdev@vger.kernel.org,
kernel-team@cloudflare.com
Subject: Re: [PATCH bpf-next] bpf/lpm_trie: inline longest_prefix_match for fastpath
Date: Mon, 18 Mar 2024 13:12:55 +0100 [thread overview]
Message-ID: <447caa0d-10d6-42f9-958b-5d6d7c472792@kernel.org> (raw)
In-Reply-To: <860f02ae-a8a8-4bcd-86d8-7a6da3f2056d@kernel.org>
On 15/03/2024 18.08, Jesper Dangaard Brouer wrote:
>
>
> On 15/03/2024 16.03, Daniel Borkmann wrote:
>> On 3/12/24 4:17 PM, Jesper Dangaard Brouer wrote:
>>> The BPF map type LPM (Longest Prefix Match) is used heavily
>>> in production by multiple products that have BPF components.
>>> Perf data shows trie_lookup_elem() and longest_prefix_match()
>>> appearing in the kernel's perf top.
>>
>> You mention these are heavy hitters in prod ...
>>
>>> For every level in the LPM trie, trie_lookup_elem() calls out
>>> to longest_prefix_match(). The compiler is free to inline this
>>> call, but chooses not to, because other slowpath callers
>>> (which can be invoked via syscall) exist, such as trie_update_elem(),
>>> trie_delete_elem() and trie_get_next_key().
>>>
>>> bcc/tools/funccount -Ti 1
>>> 'trie_lookup_elem|longest_prefix_match.isra.0'
>>> FUNC COUNT
>>> trie_lookup_elem 664945
>>> longest_prefix_match.isra.0 8101507
>>>
>>> An observation on a single random metal server shows a factor of 12
>>> between the two functions, consistent with an average of 12 levels
>>> in the trie being searched per lookup.
>>>
>>> This patch force-inlines longest_prefix_match(), but only for
>>> the lookup fastpath, to balance object instruction size.
>>>
>>> $ bloat-o-meter kernel/bpf/lpm_trie.o.orig-noinline kernel/bpf/lpm_trie.o
>>> add/remove: 1/1 grow/shrink: 1/0 up/down: 335/-4 (331)
>>> Function old new delta
>>> trie_lookup_elem 179 510 +331
>>> __BTF_ID__struct__lpm_trie__706741 - 4 +4
>>> __BTF_ID__struct__lpm_trie__706733 4 - -4
>>> Total: Before=3056, After=3387, chg +10.83%
>>
>> ... and here you quote bloat-o-meter instead. But do you also see an
>> observable perf gain in prod after this change? (No objection from my
>> side but might be good to mention here.. given if not then why do the
>> change?)
>>
>
> I'm still waiting for more production servers to reboot into patched
> kernels. I do have some "low-level" numbers from previous generation
> AMD servers, running kernel 6.1, which should be less affected by the
> SRSO (than our 6.6 kernels). Waiting for newer generation to get kernel
> updates, and especially 6.6 will be interesting.
There was no larger performance benefit on 6.6; it is basically the same.
On newer generation (11G) hardware, the latency overhead of trie_lookup_elem:
- avg 1181 nsecs for patched kernel
- avg 1269 nsecs for non-patched kernel
- around 7% improvement, or 88 ns
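For reference, per-function latency averages like these can be collected
with bcc's funclatency tool (a sketch of one way to measure, not
necessarily the exact method used here; needs root and kernel symbols):

```shell
# Per-call latency of trie_lookup_elem, printed every second
# (-T: timestamps, -i 1: 1-second interval), analogous to the
# funccount invocation quoted above.
sudo bcc/tools/funclatency -Ti 1 trie_lookup_elem
```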
>
> From production measurements the latency overhead of trie_lookup_elem:
> - avg 1220 nsecs for patched kernel
> - avg 1329 nsecs for non-patched kernel
> - around 8% improvement, or 109 nanoseconds
> - given approx 12 calls "saved", this is ~9 ns per function call
> - for reference, on Intel I measured a function call to cost 1.3 ns
> - the extra overhead is caused by __x86_return_thunk()
>
> I also see slight improvement in the graphs, but given how much
> production varies I don't want to draw conclusions yet.
>
Still inconclusive due to variations in traffic distribution, because the
load-balancing isn't perfect.
>
>>> Details: Due to AMD mitigation for SRSO (Speculative Return Stack Overflow)
>>> these function calls have additional overhead. On newer kernels this shows
>>> up under srso_safe_ret() + srso_return_thunk(), and on older kernels (6.1)
>>> under __x86_return_thunk(). Thus, for production workloads the biggest gain
>>> comes from avoiding this mitigation overhead.
>>>
>>> Signed-off-by: Jesper Dangaard Brouer <hawk@kernel.org>
I'm going to send a V2, because kernel-doc processing found an issue:
- https://netdev.bots.linux.dev/static/nipa/834681/13590144/kdoc/stderr
I'll try to incorporate the production improvements we are seeing, but it
feels wrong to write about kernel 6.1 and 6.6 improvements when I
(currently) cannot deploy a bpf-next kernel in production (though I do
test the latest kernels in my local testlab).
--Jesper
Thread overview: 4+ messages
2024-03-12 15:17 [PATCH bpf-next] bpf/lpm_trie: inline longest_prefix_match for fastpath Jesper Dangaard Brouer
2024-03-15 15:03 ` Daniel Borkmann
2024-03-15 17:08 ` Jesper Dangaard Brouer
2024-03-18 12:12 ` Jesper Dangaard Brouer [this message]