From: Yonghong Song <yonghong.song@linux.dev>
To: Alexei Starovoitov <alexei.starovoitov@gmail.com>
Cc: bpf <bpf@vger.kernel.org>, Alexei Starovoitov <ast@kernel.org>,
Andrii Nakryiko <andrii@kernel.org>,
Daniel Borkmann <daniel@iogearbox.net>,
Kernel Team <kernel-team@fb.com>,
Martin KaFai Lau <martin.lau@kernel.org>
Subject: Re: [PATCH bpf-next v1 2/2] [no_merge] selftests/bpf: Benchmark runtime performance with private stack
Date: Tue, 16 Jul 2024 10:45:51 -0700 [thread overview]
Message-ID: <ec461a41-6bd4-4d8b-9cce-950aeae704c7@linux.dev> (raw)
In-Reply-To: <CAADnVQ+t0zEXwtrw9oCZN0bxOLTbNVkgz5u8yU+kqaTB3TL6bA@mail.gmail.com>
On 7/15/24 6:35 PM, Alexei Starovoitov wrote:
> On Mon, Jul 15, 2024 at 6:17 PM Yonghong Song <yonghong.song@linux.dev> wrote:
>> With 4096 loop iterations per program run, I got
>> $ perf record -- ./bench -w3 -d10 -a --nr-batch-iters=4096 no-private-stack
>> 27.89% bench [kernel.vmlinux] [k] htab_map_hash
>> 21.55% bench [kernel.vmlinux] [k] _raw_spin_lock
>> 11.51% bench [kernel.vmlinux] [k] htab_map_delete_elem
>> 10.26% bench [kernel.vmlinux] [k] htab_map_update_elem
>> 4.85% bench [kernel.vmlinux] [k] __pcpu_freelist_push
>> 4.34% bench [kernel.vmlinux] [k] alloc_htab_elem
>> 3.50% bench [kernel.vmlinux] [k] memcpy_orig
>> 3.22% bench [kernel.vmlinux] [k] __pcpu_freelist_pop
>> 2.68% bench [kernel.vmlinux] [k] bcmp
>> 2.52% bench [kernel.vmlinux] [k] __htab_map_lookup_elem
>
> so the prog itself is not even in the top 10 which means
> that the test doesn't measure anything meaningful about the private
> stack itself.
> It just benchmarks hash map and overhead of extra push/pop is invisible.
>
>> +SEC("tp/syscalls/sys_enter_getpgid")
>> +int stack0(void *ctx)
>> +{
>> + struct data_t key = {}, value = {};
>> + struct data_t *pvalue;
>> + int i;
>> +
>> + hits++;
>> + key.d[10] = 5;
>> + value.d[8] = 10;
>> +
>> + for (i = 0; i < batch_iters; i++) {
>> + pvalue = bpf_map_lookup_elem(&htab, &key);
>> + if (!pvalue)
>> + bpf_map_update_elem(&htab, &key, &value, 0);
>> + bpf_map_delete_elem(&htab, &key);
>> + }
> Instead of calling helpers that do a lot of work the test should
> call global subprograms or noinline static functions that are nops.
> Only then we might see the overhead of push/pop r9.
>
> Once you do that you'll see that
> +SEC("tp/syscalls/sys_enter_getpgid")
> approach has too much overhead.
> (you don't see right now since hashmap dominates).
> Pls use an approach I mentioned earlier by fentry-ing into
> a helper and another prog calling that helper in for() loop.
Thanks for the suggestion. I will use an fentry program with empty
functions to measure the worst-case overhead.
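For reference, a minimal sketch of what such an fentry-based benchmark could look like (the attach target `bpf_get_numa_node_id` and the `hits`/`batch_iters` names are assumptions borrowed from existing trigger benchmarks, not from the final patch):

```c
// SPDX-License-Identifier: GPL-2.0
/* Hypothetical sketch of the suggested benchmark: an empty fentry
 * program attached to a cheap helper, plus a trigger program that
 * calls that helper in a tight loop. Since the fentry body does no
 * work, any extra prologue/epilogue cost from the private stack
 * (e.g. push/pop of an extra register) dominates each call.
 * Names here are illustrative assumptions, not the merged code.
 */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

int hits;
const volatile int batch_iters = 4096;

/* Empty program: runs once per helper call below. */
SEC("fentry/bpf_get_numa_node_id")
int BPF_PROG(empty_fentry)
{
	return 0;
}

/* Trigger: each loop iteration enters the fentry program above,
 * so per-entry overhead is measured without hash-map noise. */
SEC("tp/syscalls/sys_enter_getpgid")
int trigger(void *ctx)
{
	int i;

	hits++;
	for (i = 0; i < batch_iters; i++)
		bpf_get_numa_node_id();
	return 0;
}

char _license[] SEC("license") = "GPL";
```

This avoids the problem shown in the perf profile above, where hash-map internals consumed the top ten samples and hid the private-stack cost entirely.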
>
> pw-bot: cr
Thread overview: 4+ messages
2024-07-16 1:16 [PATCH bpf-next v1 1/2] bpf: Support private stack for bpf progs Yonghong Song
2024-07-16 1:16 ` [PATCH bpf-next v1 2/2] [no_merge] selftests/bpf: Benchmark runtime performance with private stack Yonghong Song
2024-07-16 1:35 ` Alexei Starovoitov
2024-07-16 17:45 ` Yonghong Song [this message]