From: Leon Hwang <leon.hwang@linux.dev>
To: Andrii Nakryiko <andrii.nakryiko@gmail.com>
Cc: bpf@vger.kernel.org, ast@kernel.org, andrii@kernel.org,
daniel@iogearbox.net, jolsa@kernel.org, yonghong.song@linux.dev,
song@kernel.org, eddyz87@gmail.com, dxu@dxuuu.xyz,
deso@posteo.net, kernel-patches-bot@fb.com
Subject: Re: [PATCH bpf-next v9 5/7] bpf: Add BPF_F_CPU and BPF_F_ALL_CPUS flags support for percpu_cgroup_storage maps
Date: Wed, 8 Oct 2025 13:31:00 +0800 [thread overview]
Message-ID: <4b64447b-bacd-4f96-bd33-999802d824f2@linux.dev> (raw)
In-Reply-To: <CAEf4BzaVmJ83q5DxKkeJEhNeQ87HDQ7yZjg_PNFWpNEUvAFOnw@mail.gmail.com>
On 7/10/25 06:33, Andrii Nakryiko wrote:
> On Tue, Sep 30, 2025 at 8:40 AM Leon Hwang <leon.hwang@linux.dev> wrote:
>>
>> Introduce BPF_F_ALL_CPUS flag support for percpu_cgroup_storage maps to
>> allow updating values for all CPUs with a single value for update_elem
>> API.
>>
>> Introduce BPF_F_CPU flag support for percpu_cgroup_storage maps to
>> allow:
>>
>> * update value for specified CPU for update_elem API.
>> * lookup value for specified CPU for lookup_elem API.
>>
>> The BPF_F_CPU flag is passed via map_flags along with embedded cpu info.
>>
>> Signed-off-by: Leon Hwang <leon.hwang@linux.dev>
>> ---
[...]
>> int bpf_percpu_cgroup_storage_copy(struct bpf_map *_map, void *key,
>> - void *value)
>> + void *value, u64 map_flags)
>> {
>> struct bpf_cgroup_storage_map *map = map_to_storage(_map);
>> struct bpf_cgroup_storage *storage;
>> @@ -198,12 +198,18 @@ int bpf_percpu_cgroup_storage_copy(struct bpf_map *_map, void *key,
>> * access 'value_size' of them, so copying rounded areas
>> * will not leak any kernel data
>> */
>> + if (map_flags & BPF_F_CPU) {
>> + cpu = map_flags >> 32;
>> + memcpy(value, per_cpu_ptr(storage->percpu_buf, cpu), _map->value_size);
>
> this is so far ok, because we don't seem to allow special fields for
> PERCPU_CGROUP_STORAGE, but it's best to switch this one to
> copy_map_value()
>
Agreed. I'll switch to copy_map_value() here.
>> + goto unlock;
>> + }
>> size = round_up(_map->value_size, 8);
>> for_each_possible_cpu(cpu) {
>> bpf_long_memcpy(value + off,
>> per_cpu_ptr(storage->percpu_buf, cpu), size);
>
> and let's switch this to copy_map_value_long() to future-proof this:
> copy_map_value[_long]() should work correctly with any type of map and
> will take care of all existing and future special fields
>
> (but maybe have it as a separate patch with just that change to make it obvious)
>
Ack.
>> off += size;
>> }
>> +unlock:
>> rcu_read_unlock();
>> return 0;
>> }
>> @@ -213,10 +219,11 @@ int bpf_percpu_cgroup_storage_update(struct bpf_map *_map, void *key,
>> {
>> struct bpf_cgroup_storage_map *map = map_to_storage(_map);
>> struct bpf_cgroup_storage *storage;
>> - int cpu, off = 0;
>> + void *ptr;
>> u32 size;
>> + int cpu;
>>
>> - if (map_flags != BPF_ANY && map_flags != BPF_EXIST)
>> + if ((u32)map_flags & ~(BPF_ANY | BPF_EXIST | BPF_F_CPU | BPF_F_ALL_CPUS))
>> return -EINVAL;
>>
>> rcu_read_lock();
>> @@ -232,12 +239,18 @@ int bpf_percpu_cgroup_storage_update(struct bpf_map *_map, void *key,
>> * returned or zeros which were zero-filled by percpu_alloc,
>> * so no kernel data leaks possible
>> */
>> - size = round_up(_map->value_size, 8);
>> + size = (map_flags & (BPF_F_CPU | BPF_F_ALL_CPUS)) ? _map->value_size :
>> + round_up(_map->value_size, 8);
>> + if (map_flags & BPF_F_CPU) {
>> + cpu = map_flags >> 32;
>> + memcpy(per_cpu_ptr(storage->percpu_buf, cpu), value, size);
>> + goto unlock;
>> + }
>> for_each_possible_cpu(cpu) {
>> - bpf_long_memcpy(per_cpu_ptr(storage->percpu_buf, cpu),
>> - value + off, size);
>> - off += size;
>> + ptr = (map_flags & BPF_F_ALL_CPUS) ? value : value + size * cpu;
>> + memcpy(per_cpu_ptr(storage->percpu_buf, cpu), ptr, size);
Will switch these memcpy() calls to copy_map_value[_long](), too.
Thanks,
Leon
[...]