From: Leon Hwang <leon.hwang@linux.dev>
To: Daniel Xu <dxu@dxuuu.xyz>
Cc: bpf@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net,
andrii@kernel.org, yonghong.song@linux.dev, song@kernel.org,
eddyz87@gmail.com, kernel-patches-bot@fb.com
Subject: Re: [RFC PATCH bpf-next 0/2] bpf: Introduce global percpu data
Date: Tue, 14 Jan 2025 14:35:22 +0800
Message-ID: <91d1adea-e8a0-48dd-b6dd-50402db7911d@linux.dev>
In-Reply-To: <jfo4cgmk76zibxylkclgw4u7j47phg2ic4ogilhgz52ddilsui@gc3hiffnezkc>
Hi Daniel Xu,
On 14/1/25 00:58, Daniel Xu wrote:
> Hi Leon,
>
> On Mon, Jan 13, 2025 at 11:24:35PM +0800, Leon Hwang wrote:
>> This patch set introduces global per-CPU data, similar to commit
>> 6316f78306c1 ("Merge branch 'support-global-data'"), to reduce the
>> restrictions on C usage in BPF programs.
>>
>> With this enhancement, it becomes possible to define and use global per-CPU
>> variables, much like the DEFINE_PER_CPU() macro in the kernel[0].
>>
>> The idea stems from the bpflbr project[1], which itself was inspired by
>> retsnoop[2]. While testing bpflbr on the v6.6 kernel, two LBR
>> (Last Branch Record) entries related to the bpf_get_smp_processor_id()
>> helper were observed.
>>
>> Since commit 1ae6921009e5 ("bpf: inline bpf_get_smp_processor_id() helper"),
>> the bpf_get_smp_processor_id() helper has been inlined on x86_64, reducing
>> its overhead and consequently minimizing those two LBR records.
>>
>> However, the introduction of global per-CPU data offers a more robust
>> solution. By leveraging the percpu_array map and per-CPU instructions,
>> global per-CPU data can be supported natively.
>>
>> This feature also facilitates sharing per-CPU information between tail
>> callers and callees or between freplace callers and callees through a
>> shared global per-CPU variable. Previously, this was achieved using a
>> 1-entry percpu map, which this patch set aims to improve upon.
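[The 1-entry percpu map workaround mentioned above can be sketched in
BPF-side C roughly as follows. The map and helper names are
illustrative, and the fragment assumes libbpf's BTF-defined map
conventions plus <bpf/bpf_helpers.h>; it is kernel-side BPF code, not
compilable standalone:]

```c
/* Workaround: a 1-entry BPF_MAP_TYPE_PERCPU_ARRAY shared between a
 * tail caller and its callee (or an freplace caller and callee).
 * Each CPU sees its own copy of slot 0. */
struct {
	__uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
	__uint(max_entries, 1);
	__type(key, __u32);
	__type(value, __u64);
} shared_scratch SEC(".maps");

static __always_inline __u64 *get_scratch(void)
{
	__u32 key = 0;

	/* Returns this CPU's copy of slot 0, or NULL on failure. */
	return bpf_map_lookup_elem(&shared_scratch, &key);
}
```

[Every access pays a map lookup plus a NULL check; a true global
per-CPU variable would let the programmer dereference it directly,
which is the improvement this series proposes.]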
>
> I think this would be great to have. bpftrace would've liked to use this
> for its recent big string support, but instead had to simulate a percpu
> global through regular globals.
>
Thank you for the feedback! I'm glad this feature could help simplify
bpftrace, especially for use cases like the recent big string support.
It's great to know it has the potential to address more real-world
problems.

Thanks,
Leon
Thread overview: 9+ messages
2025-01-13 15:24 [RFC PATCH bpf-next 0/2] bpf: Introduce global percpu data Leon Hwang
2025-01-13 15:24 ` [RFC PATCH bpf-next 1/2] " Leon Hwang
2025-01-14 23:10 ` Andrii Nakryiko
2025-01-16 7:22 ` Leon Hwang
2025-01-16 23:37 ` Andrii Nakryiko
2025-01-17 6:24 ` Leon Hwang
2025-01-13 15:24 ` [RFC PATCH bpf-next 2/2] selftests/bpf: Add a case to test " Leon Hwang
2025-01-13 16:58 ` [RFC PATCH bpf-next 0/2] bpf: Introduce " Daniel Xu
2025-01-14 6:35 ` Leon Hwang [this message]