public inbox for bpf@vger.kernel.org
From: Leon Hwang <leon.hwang@linux.dev>
To: Alexei Starovoitov <alexei.starovoitov@gmail.com>
Cc: bpf@vger.kernel.org, ast@kernel.org, andrii@kernel.org,
	daniel@iogearbox.net, yonghong.song@linux.dev, song@kernel.org,
	eddyz87@gmail.com, qmo@kernel.org, dxu@dxuuu.xyz,
	kernel-patches-bot@fb.com
Subject: Re: [PATCH bpf-next v4 5/8] bpf: Update per-CPU maps using BPF_F_ALL_CPUS flag
Date: Sat, 18 Apr 2026 00:07:24 +0800
Message-ID: <b332207a-abd8-402a-9ce1-c1f502ed1fca@linux.dev>
In-Reply-To: <3578a97a-bb70-4644-ab9c-4cf95be533e2@linux.dev>

On 2026/4/17 09:33, Leon Hwang wrote:
> On 15/4/26 10:21, Alexei Starovoitov wrote:
>> On Tue, Apr 14, 2026 at 09:24:17PM +0800, Leon Hwang wrote:
>>> When updating per-CPU maps via the lightweight skeleton loader, use
>>> a single value slot across all CPUs. This avoids two potential issues
>>> when updating on an M-CPU kernel with N cached slots (N < M), especially
>>> when N is much smaller than M:
>>>
>>> 1) The update may trigger a page fault when copying data from the last
>>>    slot, as the read may go beyond the allocated buffer.
>>> 2) The update may copy unexpected data from slots [N, M-1].
>>>
>>> Signed-off-by: Leon Hwang <leon.hwang@linux.dev>
>>> ---
>>>  kernel/bpf/syscall.c | 15 +++++++++++++++
>>>  1 file changed, 15 insertions(+)
>>>
>>> diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
>>> index b73b25c63073..f0f3785ef57d 100644
>>> --- a/kernel/bpf/syscall.c
>>> +++ b/kernel/bpf/syscall.c
>>> @@ -1785,6 +1785,21 @@ static int map_update_elem(union bpf_attr *attr, bpfptr_t uattr)
>>>  		goto err_put;
>>>  	}
>>>  
>>> +	/*
>>> +	 * When updating per-CPU maps via the lightweight skeleton
>>> +	 * loader, use a single value slot across all CPUs. This avoids
>>> +	 * two potential issues when updating on an M-CPU kernel with
>>> +	 * N cached slots (N < M), especially when N is much smaller
>>> +	 * than M:
>>> +	 * 1) The update may trigger a page fault when copying data from
>>> +	 *    the last slot, as the read may go beyond the allocated
>>> +	 *    buffer.
>>> +	 * 2) The update may copy unexpected data from slots [N, M-1].
>>> +	 */
>>> +	if (bpfptr_is_kernel(uattr) && bpf_map_supports_cpu_flags(map->map_type) &&
>>> +	    !(attr->flags & (BPF_F_CPU | BPF_F_ALL_CPUS)))
>>> +		attr->flags |= BPF_F_ALL_CPUS;
>>
>> This looks like a hack. It's not addressing the actual bug.
>> If there is a bug submit it separately with fixes tag.
> 
> Sure, will verify whether it is a bug. If it is, will fix it with
> separate patch.
> 

After implementing two selftests [1] against syscall progs and the
lightweight skeleton, I confirmed it is not a real issue.

The suspected OOB read might occur when updating a percpu_array map
with a small value buffer from syscall progs. However, it cannot panic
the kernel, because the kernel memory is copied with
copy_from_kernel_nofault(). As for lskel, the OOB read does not happen
at all, even with the value size set to 32000 and the percpu_array map
updated using an int.
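
For reference, here is a minimal userspace sketch of the flag-less
update path the suspected OOB read concerns (this is not the selftest
in [1]; map_fd is assumed to refer to a BPF_MAP_TYPE_PERCPU_ARRAY with
an 8-byte value):

#include <errno.h>
#include <stdlib.h>
#include <bpf/bpf.h>
#include <bpf/libbpf.h>

int update_one_slot_per_cpu(int map_fd)
{
	int ncpus = libbpf_num_possible_cpus();
	__u32 key = 0;
	__u64 *values;
	int err;

	if (ncpus < 0)
		return ncpus;

	/* Without BPF_F_CPU/BPF_F_ALL_CPUS, the kernel copies value_size
	 * bytes for every possible CPU, so the buffer must hold ncpus
	 * slots; an undersized buffer is the N < M hazard described in
	 * the commit message. From userspace the copy goes through
	 * copy_from_user() and fails cleanly with -EFAULT.
	 */
	values = calloc(ncpus, sizeof(*values));
	if (!values)
		return -ENOMEM;

	for (int i = 0; i < ncpus; i++)
		values[i] = 42;

	err = bpf_map_update_elem(map_fd, &key, values, 0);
	free(values);
	return err;
}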

The suspected issue #2, copying unexpected data, is real for both
syscall progs and lskel. However, I think it is the user's
responsibility to update a percpu_array map with a value buffer that is
large enough.
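
By contrast, with the BPF_F_ALL_CPUS flag this series introduces, a
single value slot is enough, since the kernel replicates it to every
CPU (a sketch against the same assumed map as above):

#include <bpf/bpf.h>

int update_all_cpus(int map_fd)
{
	__u32 key = 0;
	__u64 value = 42;	/* one slot, replicated to all CPUs */

	return bpf_map_update_elem(map_fd, &key, &value, BPF_F_ALL_CPUS);
}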

In conclusion, I'll drop this patch from this series and won't send a
separate patch, because it is not a bug.

[1] https://github.com/Asphaltt/bpf/commits/bpf/lskel-oob/v1/

Thanks,
Leon




Thread overview: 26+ messages
2026-04-14 13:24 [PATCH bpf-next v4 0/8] bpf: Introduce global percpu data Leon Hwang
2026-04-14 13:24 ` [PATCH bpf-next v4 1/8] bpf: Drop duplicate blank lines in verifier Leon Hwang
2026-04-14 13:24 ` [PATCH bpf-next v4 2/8] bpf: Introduce global percpu data Leon Hwang
2026-04-14 14:10   ` bot+bpf-ci
2026-04-14 14:19     ` Leon Hwang
2026-04-15  2:19       ` Alexei Starovoitov
2026-04-17  1:30         ` Leon Hwang
2026-04-17 15:48           ` Leon Hwang
2026-04-17 17:03             ` Alexei Starovoitov
2026-04-14 13:24 ` [PATCH bpf-next v4 3/8] libbpf: Probe percpu data feature Leon Hwang
2026-04-14 13:24 ` [PATCH bpf-next v4 4/8] libbpf: Add support for global percpu data Leon Hwang
2026-04-14 13:24 ` [PATCH bpf-next v4 5/8] bpf: Update per-CPU maps using BPF_F_ALL_CPUS flag Leon Hwang
2026-04-14 21:02   ` sashiko-bot
2026-04-17  1:54     ` Leon Hwang
2026-04-15  2:21   ` Alexei Starovoitov
2026-04-17  1:33     ` Leon Hwang
2026-04-17 16:07       ` Leon Hwang [this message]
2026-04-14 13:24 ` [PATCH bpf-next v4 6/8] bpftool: Generate skeleton for global percpu data Leon Hwang
2026-04-14 21:26   ` sashiko-bot
2026-04-17  2:01     ` Leon Hwang
2026-04-14 13:24 ` [PATCH bpf-next v4 7/8] selftests/bpf: Add tests to verify " Leon Hwang
2026-04-14 21:45   ` sashiko-bot
2026-04-17  2:06     ` Leon Hwang
2026-04-14 13:24 ` [PATCH bpf-next v4 8/8] selftests/bpf: Add a test to verify bpf_iter for " Leon Hwang
2026-04-14 22:08   ` sashiko-bot
2026-04-17  2:17     ` Leon Hwang
