From: Leon Hwang <leon.hwang@linux.dev>
To: Andrii Nakryiko <andrii.nakryiko@gmail.com>
Cc: bpf@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net,
andrii@kernel.org, yonghong.song@linux.dev, song@kernel.org,
eddyz87@gmail.com, kernel-patches-bot@fb.com
Subject: Re: [RFC PATCH bpf-next 1/2] bpf: Introduce global percpu data
Date: Fri, 17 Jan 2025 14:24:26 +0800 [thread overview]
Message-ID: <9fc744b6-9139-4cbd-bcd7-23945cf94a92@linux.dev> (raw)
In-Reply-To: <CAEf4BzaBXuztqhvAxPGi6nzebMVifx2cU1iFQqEo_GwF3z-ADg@mail.gmail.com>
On 17/1/25 07:37, Andrii Nakryiko wrote:
> On Wed, Jan 15, 2025 at 11:22 PM Leon Hwang <leon.hwang@linux.dev> wrote:
>>
>> Hi,
>>
>> On 15/1/25 07:10, Andrii Nakryiko wrote:
>>> On Mon, Jan 13, 2025 at 7:25 AM Leon Hwang <leon.hwang@linux.dev> wrote:
>>>>
[...]
>>>
>>> So I think the feature overall makes sense, but we need to think
>>> through at least libbpf's side of things some more. Unlike .data,
>>> per-cpu .data section is not mmapable, and so that has implication on
>>> BPF skeleton and we should make sure all that makes sense on BPF
>>> skeleton side. In that sense, per-cpu global data is more akin to
>>> struct_ops initialization image, which can be accessed by user before
>>> skeleton is loaded to initialize the image.
>>>
>>> There are a few things to consider. What's the BPF skeleton interface?
>>> Do we expose it as single struct and use that struct as initial image
>>> for each CPU (which means user won't be able to initialize different
>>> CPU data differently, at least not through BPF skeleton facilities)?
>>> Or do we expose this as an array of structs and let user set each CPU
>>> data independently?
>>>
>>> I feel like keeping it simple and having one image for all CPUs would
>>> cover most cases. And users can still access the underlying
>>> PERCPU_ARRAY map if they need more control.
>>
>> Agreed. It is important to keep it simple.
>>
>>>
>>> But either way, we need tests for skeleton, making sure we NULL-out
>>> this per-cpu global data, but take it into account before the load.
>>>
>>> Also, this huge calloc for possible CPUs, I'd like to avoid it
>>> altogether for the (probably very common) zero-initialized case.
>>
>> Ack.
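One way to avoid the big calloc in the common zero-initialized case is to
detect an all-zero section image up front and skip materializing the
per-CPU buffer entirely. A hypothetical helper (not libbpf API; the
sliding-window memcmp trick avoids allocating a zero buffer to compare
against) could look like:

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Returns true iff every byte of data[0..len-1] is zero.
 * data[0] == 0 together with data[0..len-2] == data[1..len-1]
 * implies all bytes are zero. */
static bool data_is_all_zero(const void *data, size_t len)
{
	const char *p = data;

	return len == 0 ||
	       (p[0] == 0 && !memcmp(p, p + 1, len - 1));
}
```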
>>
>>>
>>> So in short, needs a bit of iteration to figure out all the
>>> interfacing issues, but makes sense overall. See some more low-level
>>> remarks below.
>>>
>>
>> It is challenging to figure them out, but I'll do my best.
>>
>>> pw-bot: cr
>>>
>>>
>
> [...]
>
>>>
>>>> @@ -516,6 +516,7 @@ struct bpf_struct_ops {
>>>> };
>>>>
>>>> #define DATA_SEC ".data"
>>>> +#define PERCPU_DATA_SEC ".data..percpu"
>>>
>>> I don't like this prefix, even if that's what we have in the kernel.
>>> Something like just ".percpu" or ".percpu_data" or ".data_percpu" is
>>> better, IMO.
>>
>> I tested ".percpu". It is OK to use it. But we have to update "bpftool
>> gen" too, which relies on these section names.
>>
>> Would it be better to keep the ".data" prefix, e.g. ".data.percpu" or
>> ".data_percpu"? Could keeping the ".data" prefix reduce the work needed
>> in bpftool, go-ebpf and similar BPF loaders?
>
> It's literally two lines of code in gen.c, and that should actually be
> a common array of known prefixes. Even if someone uses this new
> .percpu section with old bpftool nothing will break, they just won't
> have structure representing the initial per-CPU image. They will still
> have the generic map pointer in the skeleton. So I think this is
> acceptable.
>
> I'd definitely go with a simple and less error-prone ".percpu" prefix.
>
Being simple and less error-prone is indeed important. Let's proceed
with the ".percpu" prefix as suggested.
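For gen.c, the "common array of known prefixes" could be as simple as the
sketch below (the list contents and helper name are illustrative, not the
actual bpftool code):

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Section-name prefixes the skeleton generator knows how to handle;
 * matching against this table replaces per-prefix comparisons. */
static const char * const known_data_prefixes[] = {
	".data", ".rodata", ".bss", ".percpu",
};

static bool is_known_data_sec(const char *sec_name)
{
	size_t i, n = sizeof(known_data_prefixes) / sizeof(known_data_prefixes[0]);

	for (i = 0; i < n; i++) {
		const char *pfx = known_data_prefixes[i];

		if (!strncmp(sec_name, pfx, strlen(pfx)))
			return true;
	}
	return false;
}
```

An old bpftool without ".percpu" in its table would simply skip generating
the typed initial-image struct, which matches the graceful-degradation
behavior described above.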
Thanks,
Leon
Thread overview: 9+ messages
2025-01-13 15:24 [RFC PATCH bpf-next 0/2] bpf: Introduce global percpu data Leon Hwang
2025-01-13 15:24 ` [RFC PATCH bpf-next 1/2] " Leon Hwang
2025-01-14 23:10 ` Andrii Nakryiko
2025-01-16 7:22 ` Leon Hwang
2025-01-16 23:37 ` Andrii Nakryiko
2025-01-17 6:24 ` Leon Hwang [this message]
2025-01-13 15:24 ` [RFC PATCH bpf-next 2/2] selftests/bpf: Add a case to test " Leon Hwang
2025-01-13 16:58 ` [RFC PATCH bpf-next 0/2] bpf: Introduce " Daniel Xu
2025-01-14 6:35 ` Leon Hwang