public inbox for bpf@vger.kernel.org
From: "Alexei Starovoitov" <alexei.starovoitov@gmail.com>
To: "Leon Hwang" <leon.hwang@linux.dev>
Cc: <bot+bpf-ci@kernel.org>, "bpf" <bpf@vger.kernel.org>,
	"Alexei Starovoitov" <ast@kernel.org>,
	"Andrii Nakryiko" <andrii@kernel.org>,
	"Daniel Borkmann" <daniel@iogearbox.net>,
	"Yonghong Song" <yonghong.song@linux.dev>,
	"Song Liu" <song@kernel.org>, "Eduard" <eddyz87@gmail.com>,
	"Quentin Monnet" <qmo@kernel.org>, "Daniel Xu" <dxu@dxuuu.xyz>,
	<kernel-patches-bot@fb.com>,
	"Martin KaFai Lau" <martin.lau@kernel.org>,
	"Chris Mason" <clm@meta.com>,
	"Ihor Solodrai" <ihor.solodrai@linux.dev>
Subject: Re: [PATCH bpf-next v4 2/8] bpf: Introduce global percpu data
Date: Mon, 20 Apr 2026 07:58:06 -0700	[thread overview]
Message-ID: <DHY2JXM90466.13PVYC6ITVFKS@gmail.com> (raw)
In-Reply-To: <20260420052459.85772-1-leon.hwang@linux.dev>

On Sun Apr 19, 2026 at 10:24 PM PDT, Leon Hwang wrote:
>
> int xdp_prog(struct xdp_md * ctx):
> ; cnt++;
>    0: (18) r6 = map[id:28][0]+0
>    2: (bf) r6 = &(void __percpu *)(r6)

Well, that insn was inserted by the verifier, so it shows up in xlated.
That was expected.
The point about 'bogus xlated' was about offset translation:
map_direct_value_meta() should recover the proper insns[i + 1].imm = off.
In your example the offset is zero, so it's not an interesting test.


>    3: (61) r1 = *(u32 *)(r6 +0)
> ; cnt++;
>    4: (07) r1 += 1
> ; cnt++;
>    5: (63) *(u32 *)(r6 +0) = r1
> ; __u32 cpu = bpf_get_smp_processor_id();
>    6: (b7) r0 = -1280774092
>    7: (bf) r0 = &(void __percpu *)(r0)
>    8: (61) r0 = *(u32 *)(r0 +0)
> ; bpf_printk("cpu: %u, cnt: %u\n", cpu, cnt);
>    9: (61) r4 = *(u32 *)(r6 +0)
>   10: (18) r1 = map[id:30][0]+0
>   12: (b7) r2 = 18
>   13: (bf) r3 = r0
>   14: (85) call bpf_trace_printk#-129408
> ; return XDP_PASS;
>   15: (b7) r0 = 2
>   16: (95) exit
>
> The difference between these xlated insns is "r6 = &(void __percpu *)(r6)".
> This insn is for ".percpu", not for ".data".
>
>>>>
>>>> Ah, let me dive deeper.
>>>>
>>>
>>> As for the above changes, let me explain them using diff snippet.
>>>
>>> @@ -5808,6 +5808,8 @@ int bpf_map_direct_read(struct bpf_map *map, int
>>> off, int size, u64 *val,
>>>         u64 addr;
>>>         int err;
>>>
>>> +       if (map->map_type == BPF_MAP_TYPE_PERCPU_ARRAY)
>>> +               return -EINVAL;
>>>         err = map->ops->map_direct_value_addr(map, &addr, off);
>>>         if (err)
>>>                 return err;
>>>
>>> It is to guard the percpu_array map against const_reg_xfer(). Instead of
>>> updating const_reg_xfer(), it's better to update bpf_map_direct_read(). WDYT?
>>
>> yeah and move map_type != BPF_MAP_TYPE_INSN_ARRAY check
>> into bpf_map_direct_read() as well.
>> To cleanup const_reg_xfer() a bit.
>
> Before sending the next revision, let me just confirm the changes:
>
> 1. Move "map->map_type == BPF_MAP_TYPE_INSN_ARRAY" from const_reg_xfer()
>    to bpf_map_direct_read().
> 2. Keep "map->map_type != BPF_MAP_TYPE_INSN_ARRAY" in
>    check_mem_access(), because we should not propagate the error from
>    bpf_map_direct_read() for insn_array and percpu_array.
>
> Thanks,
> Leon
>
> ---
>
> --- a/kernel/bpf/const_fold.c
> +++ b/kernel/bpf/const_fold.c
> @@ -174,7 +181,6 @@ static void const_reg_xfer(struct bpf_verifier_env *env, struct const_arg_info *
>                 u64 val = 0;
>
>                 if (!bpf_map_is_rdonly(map) || !map->ops->map_direct_value_addr ||
> -                   map->map_type == BPF_MAP_TYPE_INSN_ARRAY ||

yes

>                     off < 0 || off + size > map->value_size ||
>                     bpf_map_direct_read(map, off, size, &val, is_ldsx)) {
>                         *dst = unknown;
>
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -5816,6 +5816,8 @@ int bpf_map_direct_read(struct bpf_map *map, int off, int size, u64 *val,
>         u64 addr;
>         int err;
>
> +       if (map->map_type == BPF_MAP_TYPE_INSN_ARRAY || map->map_type == BPF_MAP_TYPE_PERCPU_ARRAY)
> +               return -EINVAL;

yes

>         err = map->ops->map_direct_value_addr(map, &addr, off);
>         if (err)
>                 return err;
> @@ -6370,7 +6372,8 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
>                         if (tnum_is_const(reg->var_off) &&
>                             bpf_map_is_rdonly(map) &&
>                             map->ops->map_direct_value_addr &&
> -                           map->map_type != BPF_MAP_TYPE_INSN_ARRAY) {
> +                           map->map_type != BPF_MAP_TYPE_INSN_ARRAY &&
> +                           map->map_type != BPF_MAP_TYPE_PERCPU_ARRAY) {

why add BPF_MAP_TYPE_PERCPU_ARRAY here?

>                                 int map_off = off + reg->var_off.value;
>                                 u64 val = 0;


Thread overview: 32+ messages
2026-04-14 13:24 [PATCH bpf-next v4 0/8] bpf: Introduce global percpu data Leon Hwang
2026-04-14 13:24 ` [PATCH bpf-next v4 1/8] bpf: Drop duplicate blank lines in verifier Leon Hwang
2026-04-14 13:24 ` [PATCH bpf-next v4 2/8] bpf: Introduce global percpu data Leon Hwang
2026-04-14 14:10   ` bot+bpf-ci
2026-04-14 14:19     ` Leon Hwang
2026-04-15  2:19       ` Alexei Starovoitov
2026-04-17  1:30         ` Leon Hwang
2026-04-17 15:48           ` Leon Hwang
2026-04-17 17:03             ` Alexei Starovoitov
2026-04-20  5:24               ` Leon Hwang
2026-04-20 14:58                 ` Alexei Starovoitov [this message]
2026-04-21  1:42                   ` Leon Hwang
2026-04-21  1:59                     ` Alexei Starovoitov
2026-04-21 14:13                       ` Leon Hwang
2026-04-21 14:35                         ` Alexei Starovoitov
2026-04-14 13:24 ` [PATCH bpf-next v4 3/8] libbpf: Probe percpu data feature Leon Hwang
2026-04-14 13:24 ` [PATCH bpf-next v4 4/8] libbpf: Add support for global percpu data Leon Hwang
2026-04-14 13:24 ` [PATCH bpf-next v4 5/8] bpf: Update per-CPU maps using BPF_F_ALL_CPUS flag Leon Hwang
2026-04-14 21:02   ` sashiko-bot
2026-04-17  1:54     ` Leon Hwang
2026-04-15  2:21   ` Alexei Starovoitov
2026-04-17  1:33     ` Leon Hwang
2026-04-17 16:07       ` Leon Hwang
2026-04-14 13:24 ` [PATCH bpf-next v4 6/8] bpftool: Generate skeleton for global percpu data Leon Hwang
2026-04-14 21:26   ` sashiko-bot
2026-04-17  2:01     ` Leon Hwang
2026-04-14 13:24 ` [PATCH bpf-next v4 7/8] selftests/bpf: Add tests to verify " Leon Hwang
2026-04-14 21:45   ` sashiko-bot
2026-04-17  2:06     ` Leon Hwang
2026-04-14 13:24 ` [PATCH bpf-next v4 8/8] selftests/bpf: Add a test to verify bpf_iter for " Leon Hwang
2026-04-14 22:08   ` sashiko-bot
2026-04-17  2:17     ` Leon Hwang