On Fri, 17 Apr 2026 10:03:19 -0700, Alexei Starovoitov wrote:
> On Fri, Apr 17, 2026 at 8:48 AM Leon Hwang wrote:
>>
>> On 2026/4/17 09:30, Leon Hwang wrote:
>>> On 15/4/26 10:19, Alexei Starovoitov wrote:
>>>> On Tue, Apr 14, 2026 at 10:19:22PM +0800, Leon Hwang wrote:
>>>>> On 2026/4/14 22:10, bot+bpf-ci@kernel.org wrote:
>>> [...]
>>>>>> In the v3 series, bpf_map_direct_read() itself had a guard
>>>>>> (map->map_type != BPF_MAP_TYPE_ARRAY), which protected all callers.
>>>>>> The v4 moved this to caller-side checks but appears to have missed
>>>>>> const_reg_xfer().
>>>>>>
>>>>> Correct.
>>>>>
>>>>> Will add a guard in bpf_map_direct_read() in the next revision:
>>>>>
>>>>> 	if (map->map_type == BPF_MAP_TYPE_PERCPU_ARRAY)
>>>>> 		return -EINVAL;
>>>>
>>>> hold on.
>>>>
>>>> 	    map->ops->map_direct_value_addr &&
>>>> -	    map->map_type != BPF_MAP_TYPE_INSN_ARRAY) {
>>>> +	    map->map_type != BPF_MAP_TYPE_INSN_ARRAY &&
>>>> +	    map->map_type != BPF_MAP_TYPE_PERCPU_ARRAY) {
>>>>
>>>> map_direct_value_addr() is set, but then immediately disallowed?
>>>> Where else is it used?
>>>>
>>>> Even if value_addr is working, then map_direct_value_meta() looks broken.
>>>>
>>
>> IIUC, map_direct_value_meta() is only used for dumping xlated insns. If
>> map_direct_value_addr() is not available, map_direct_value_meta() won't
>> be called.
>
> yes, but then xlated insns are bogus, no?
>

No. They look fine:

./test_progs -t global_percpu_data -v

XLATED:
=============
0: r1 = 0x900000007
2: r1 = &(void __percpu *)(r1)
3: r1 = *(u8 *)(r1 +0)
4: w0 = 1
5: if w1 == 0x64 goto pc+1
6: w0 = 0
7: exit
=============
#152/5 global_percpu_data/verifier_percpu_read:OK

I also verified the xlated insns via bpftool.
static SEC(".data") __u32 cnt = 0; /* .data vs .percpu */

SEC("xdp")
int xdp_prog(struct xdp_md *ctx)
{
	cnt++;
	__u32 cpu = bpf_get_smp_processor_id();
	bpf_printk("cpu: %u, cnt: %u\n", cpu, cnt);
	return XDP_PASS;
}

When the section of cnt is ".data", bpftool d x n xdp_prog:

int xdp_prog(struct xdp_md * ctx):
; cnt++;
   0: (18) r6 = map[id:23][0]+0
   2: (61) r1 = *(u32 *)(r6 +0)
; cnt++;
   3: (07) r1 += 1
; cnt++;
   4: (63) *(u32 *)(r6 +0) = r1
; __u32 cpu = bpf_get_smp_processor_id();
   5: (b7) r0 = -1280774092
   6: (bf) r0 = &(void __percpu *)(r0)
   7: (61) r0 = *(u32 *)(r0 +0)
; bpf_printk("cpu: %u, cnt: %u\n", cpu, cnt);
   8: (61) r4 = *(u32 *)(r6 +0)
   9: (18) r1 = map[id:24][0]+0
  11: (b7) r2 = 18
  12: (bf) r3 = r0
  13: (85) call bpf_trace_printk#-129408
; return XDP_PASS;
  14: (b7) r0 = 2
  15: (95) exit

When the section of cnt is ".percpu", bpftool d x n xdp_prog:

int xdp_prog(struct xdp_md * ctx):
; cnt++;
   0: (18) r6 = map[id:28][0]+0
   2: (bf) r6 = &(void __percpu *)(r6)
   3: (61) r1 = *(u32 *)(r6 +0)
; cnt++;
   4: (07) r1 += 1
; cnt++;
   5: (63) *(u32 *)(r6 +0) = r1
; __u32 cpu = bpf_get_smp_processor_id();
   6: (b7) r0 = -1280774092
   7: (bf) r0 = &(void __percpu *)(r0)
   8: (61) r0 = *(u32 *)(r0 +0)
; bpf_printk("cpu: %u, cnt: %u\n", cpu, cnt);
   9: (61) r4 = *(u32 *)(r6 +0)
  10: (18) r1 = map[id:30][0]+0
  12: (b7) r2 = 18
  13: (bf) r3 = r0
  14: (85) call bpf_trace_printk#-129408
; return XDP_PASS;
  15: (b7) r0 = 2
  16: (95) exit

The difference between these xlated insns is "r6 = &(void __percpu *)(r6)".
This insn is emitted only for ".percpu", not for ".data".

>>>
>>> Ah, let me dive deeper.
>>>
>>
>> As for the above changes, let me explain them using a diff snippet.
>>
>> @@ -5808,6 +5808,8 @@ int bpf_map_direct_read(struct bpf_map *map, int off, int size, u64 *val,
>>  	u64 addr;
>>  	int err;
>>
>> +	if (map->map_type == BPF_MAP_TYPE_PERCPU_ARRAY)
>> +		return -EINVAL;
>>  	err = map->ops->map_direct_value_addr(map, &addr, off);
>>  	if (err)
>>  		return err;
>>
>> It is to guard the percpu_array map against const_reg_xfer(). Instead of
>> updating const_reg_xfer(), it is better to update bpf_map_direct_read().
>> WDYT?
>
> yeah and move the map_type != BPF_MAP_TYPE_INSN_ARRAY check
> into bpf_map_direct_read() as well.
> To clean up const_reg_xfer() a bit.

Before sending the next revision, let me confirm the change:

1. Move the "map->map_type == BPF_MAP_TYPE_INSN_ARRAY" check from
   const_reg_xfer() to bpf_map_direct_read().
2. Keep the "map->map_type != BPF_MAP_TYPE_INSN_ARRAY" check in
   check_mem_access(), because we should not propagate the error from
   bpf_map_direct_read() for insn_array and percpu_array.

Thanks,
Leon

---

--- a/kernel/bpf/const_fold.c
+++ b/kernel/bpf/const_fold.c
@@ -174,7 +181,6 @@ static void const_reg_xfer(struct bpf_verifier_env *env, struct const_arg_info *
 	u64 val = 0;
 
 	if (!bpf_map_is_rdonly(map) || !map->ops->map_direct_value_addr ||
-	    map->map_type == BPF_MAP_TYPE_INSN_ARRAY ||
 	    off < 0 || off + size > map->value_size ||
 	    bpf_map_direct_read(map, off, size, &val, is_ldsx)) {
 		*dst = unknown;

--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -5816,6 +5816,8 @@ int bpf_map_direct_read(struct bpf_map *map, int off, int size, u64 *val,
 	u64 addr;
 	int err;
 
+	if (map->map_type == BPF_MAP_TYPE_INSN_ARRAY || map->map_type == BPF_MAP_TYPE_PERCPU_ARRAY)
+		return -EINVAL;
 	err = map->ops->map_direct_value_addr(map, &addr, off);
 	if (err)
 		return err;
@@ -6370,7 +6372,8 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
 	if (tnum_is_const(reg->var_off) &&
 	    bpf_map_is_rdonly(map) && map->ops->map_direct_value_addr &&
-	    map->map_type != BPF_MAP_TYPE_INSN_ARRAY) {
+	    map->map_type != BPF_MAP_TYPE_INSN_ARRAY &&
+	    map->map_type != BPF_MAP_TYPE_PERCPU_ARRAY) {
 		int map_off = off + reg->var_off.value;
 		u64 val = 0;