public inbox for bpf@vger.kernel.org
From: Leon Hwang <leon.hwang@linux.dev>
To: sashiko@lists.linux.dev
Cc: bpf@vger.kernel.org
Subject: Re: [PATCH bpf-next v4 7/8] selftests/bpf: Add tests to verify global percpu data
Date: Fri, 17 Apr 2026 10:06:43 +0800	[thread overview]
Message-ID: <d4460398-2f11-42e3-80d1-588931ea2f17@linux.dev> (raw)
In-Reply-To: <20260414214525.589E8C2BCB6@smtp.kernel.org>

On 15/4/26 05:45, sashiko-bot@kernel.org wrote:
> Sashiko AI review found 1 potential issue(s):
> - [Medium] The tests use the raw CPU ID to index into the densely packed `percpu_data` array, which will cause an out-of-bounds read and test failures on systems with sparse CPU masks.
> --
> 
> commit a7c2685b539ebc816cdb05decab1d7a8a4e1328e
> Author: Leon Hwang <leon.hwang@linux.dev>
>     
> selftests/bpf: Add tests to verify global percpu data
>     
> This commit adds new selftests to verify the behavior of global percpu data
> by checking map initial values, struct pointers, and looking up elements from
> global percpu data maps.
> 
>> diff --git a/tools/testing/selftests/bpf/prog_tests/global_data_init.c b/tools/testing/selftests/bpf/prog_tests/global_data_init.c
>> --- a/tools/testing/selftests/bpf/prog_tests/global_data_init.c
>> +++ b/tools/testing/selftests/bpf/prog_tests/global_data_init.c
> 
> [ ... ]
> 
>> +static void test_global_percpu_data_init(void)
>> +{
> 
> [ ... ]
> 
>> +	err = bpf_map__lookup_elem(map, &key, sizeof(key), percpu_data,
>> +				   elem_sz * num_cpus, 0);
>> +	if (!ASSERT_OK(err, "bpf_map__lookup_elem"))
>> +		goto out;
>> +
>> +	for (i = 0; i < num_online; i++) {
>> +		if (!online[i])
>> +			continue;
>> +
>> +		data = (void *)percpu_data + elem_sz * i;
> 
> Will this logic cause an out-of-bounds read on systems with sparse CPU masks?
> 
> When reading a per-CPU map using bpf_map__lookup_elem(), the kernel packs the
> per-CPU values densely into the output buffer by iterating over the possible
> CPU mask.
> 
> The test allocates num_cpus (the total count of possible CPUs) elements for
> the output buffer, but then iterates over the raw CPU IDs using i and uses i
> as an index into the densely packed array.
> 
> On systems with sparse CPU masks (for example, where CPUs 0 and 100 are
> possible/online), num_cpus will be 2. When the loop reaches i = 100, it
> appears it will attempt to access percpu_data + elem_sz * 100, reading past
> the bounds of the allocated memory.
> 
> Could we maintain a separate index counter that increments only for possible
> CPUs to match the kernel's dense packing behavior?
> 
> [ ... ]
> 
>> +static void test_global_percpu_data_lskel(void)
>> +{
> 
> [ ... ]
> 
>> +	err = bpf_map_lookup_elem(map_fd, &key, percpu_data);
>> +	if (!ASSERT_OK(err, "bpf_map_lookup_elem"))
>> +		goto out;
>> +
>> +	for (i = 0; i < num_online; i++) {
>> +		if (!online[i])
>> +			continue;
>> +
>> +		data = (void *)percpu_data + elem_sz * i;
> 
> Does this loop have the same bounds checking issue as mentioned above?
> 
> If i is a raw CPU ID on a system with a sparse CPU mask, this calculation
> could exceed the bounds of the percpu_data buffer.
> 

Yep, agree with the review.

Instead of looking up values across all possible CPUs, the test should
look up the value on a specified CPU using BPF_F_CPU, because it only
iterates over online CPUs.

Thanks,
Leon



Thread overview: 26+ messages
2026-04-14 13:24 [PATCH bpf-next v4 0/8] bpf: Introduce global percpu data Leon Hwang
2026-04-14 13:24 ` [PATCH bpf-next v4 1/8] bpf: Drop duplicate blank lines in verifier Leon Hwang
2026-04-14 13:24 ` [PATCH bpf-next v4 2/8] bpf: Introduce global percpu data Leon Hwang
2026-04-14 14:10   ` bot+bpf-ci
2026-04-14 14:19     ` Leon Hwang
2026-04-15  2:19       ` Alexei Starovoitov
2026-04-17  1:30         ` Leon Hwang
2026-04-17 15:48           ` Leon Hwang
2026-04-17 17:03             ` Alexei Starovoitov
2026-04-14 13:24 ` [PATCH bpf-next v4 3/8] libbpf: Probe percpu data feature Leon Hwang
2026-04-14 13:24 ` [PATCH bpf-next v4 4/8] libbpf: Add support for global percpu data Leon Hwang
2026-04-14 13:24 ` [PATCH bpf-next v4 5/8] bpf: Update per-CPU maps using BPF_F_ALL_CPUS flag Leon Hwang
2026-04-14 21:02   ` sashiko-bot
2026-04-17  1:54     ` Leon Hwang
2026-04-15  2:21   ` Alexei Starovoitov
2026-04-17  1:33     ` Leon Hwang
2026-04-17 16:07       ` Leon Hwang
2026-04-14 13:24 ` [PATCH bpf-next v4 6/8] bpftool: Generate skeleton for global percpu data Leon Hwang
2026-04-14 21:26   ` sashiko-bot
2026-04-17  2:01     ` Leon Hwang
2026-04-14 13:24 ` [PATCH bpf-next v4 7/8] selftests/bpf: Add tests to verify " Leon Hwang
2026-04-14 21:45   ` sashiko-bot
2026-04-17  2:06     ` Leon Hwang [this message]
2026-04-14 13:24 ` [PATCH bpf-next v4 8/8] selftests/bpf: Add a test to verify bpf_iter for " Leon Hwang
2026-04-14 22:08   ` sashiko-bot
2026-04-17  2:17     ` Leon Hwang
