public inbox for bpf@vger.kernel.org
* [PATCH bpf-next] selftests/bpf: Harden cpu flags test for lru_percpu_hash map
@ 2026-01-19 13:34 Leon Hwang
  2026-01-26 20:39 ` Martin KaFai Lau
  2026-01-26 20:40 ` patchwork-bot+netdevbpf
  0 siblings, 2 replies; 3+ messages in thread
From: Leon Hwang @ 2026-01-19 13:34 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
	Shuah Khan, Leon Hwang, linux-kselftest, linux-kernel,
	kernel-patches-bot

CI occasionally reports failures in the
percpu_alloc/cpu_flag_lru_percpu_hash selftest, for example:

 First test_progs failure (test_progs_no_alu32-x86_64-llvm-21):
 #264/15 percpu_alloc/cpu_flag_lru_percpu_hash
 ...
 test_percpu_map_op_cpu_flag:FAIL:bpf_map_lookup_batch value on specified cpu unexpected bpf_map_lookup_batch value on specified cpu: actual 0 != expected 3735929054

The unexpected value indicates that an element was removed from the map.
However, the test never calls delete_elem(), so the only possible cause
is LRU eviction.

This can happen when the current task migrates to another CPU: an
update_elem() there triggers eviction because no LRU node is available
on either the local or the global free list.

Harden the test against this behavior by provisioning sufficient spare
elements. Set max_entries to 'nr_cpus * 2' and restrict the test to using
the first nr_cpus entries, ensuring that updates do not spuriously trigger
LRU eviction.

Signed-off-by: Leon Hwang <leon.hwang@linux.dev>
---
 .../testing/selftests/bpf/prog_tests/percpu_alloc.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/percpu_alloc.c b/tools/testing/selftests/bpf/prog_tests/percpu_alloc.c
index c1d0949f093f..a72ae0b29f6e 100644
--- a/tools/testing/selftests/bpf/prog_tests/percpu_alloc.c
+++ b/tools/testing/selftests/bpf/prog_tests/percpu_alloc.c
@@ -236,6 +236,8 @@ static void test_percpu_map_op_cpu_flag(struct bpf_map *map, void *keys, size_t
 		err = bpf_map_update_batch(map_fd, keys, values, &count, &batch_opts);
 		if (!ASSERT_OK(err, "bpf_map_update_batch all_cpus"))
 			goto out;
+		if (!ASSERT_EQ(count, entries, "bpf_map_update_batch count"))
+			goto out;
 
 		/* update values on specified CPU */
 		for (i = 0; i < entries; i++)
@@ -246,6 +248,8 @@ static void test_percpu_map_op_cpu_flag(struct bpf_map *map, void *keys, size_t
 		err = bpf_map_update_batch(map_fd, keys, values, &count, &batch_opts);
 		if (!ASSERT_OK(err, "bpf_map_update_batch specified cpu"))
 			goto out;
+		if (!ASSERT_EQ(count, entries, "bpf_map_update_batch count"))
+			goto out;
 
 		/* lookup values on specified CPU */
 		batch = 0;
@@ -254,6 +258,8 @@ static void test_percpu_map_op_cpu_flag(struct bpf_map *map, void *keys, size_t
 		err = bpf_map_lookup_batch(map_fd, NULL, &batch, keys, values, &count, &batch_opts);
 		if (!ASSERT_TRUE(!err || err == -ENOENT, "bpf_map_lookup_batch specified cpu"))
 			goto out;
+		if (!ASSERT_EQ(count, entries, "bpf_map_lookup_batch count"))
+			goto out;
 
 		for (i = 0; i < entries; i++)
 			if (!ASSERT_EQ(values[i], value,
@@ -269,6 +275,8 @@ static void test_percpu_map_op_cpu_flag(struct bpf_map *map, void *keys, size_t
 					   &batch_opts);
 		if (!ASSERT_TRUE(!err || err == -ENOENT, "bpf_map_lookup_batch all_cpus"))
 			goto out;
+		if (!ASSERT_EQ(count, entries, "bpf_map_lookup_batch count"))
+			goto out;
 
 		for (i = 0; i < entries; i++) {
 			values_row = (void *) values_percpu +
@@ -287,7 +295,6 @@ static void test_percpu_map_op_cpu_flag(struct bpf_map *map, void *keys, size_t
 	free(values);
 }
 
-
 static void test_percpu_map_cpu_flag(enum bpf_map_type map_type)
 {
 	struct percpu_alloc_array *skel;
@@ -300,7 +307,7 @@ static void test_percpu_map_cpu_flag(enum bpf_map_type map_type)
 	if (!ASSERT_GT(nr_cpus, 0, "libbpf_num_possible_cpus"))
 		return;
 
-	max_entries = nr_cpus + 1;
+	max_entries = nr_cpus * 2;
 	keys = calloc(max_entries, key_sz);
 	if (!ASSERT_OK_PTR(keys, "calloc keys"))
 		return;
@@ -322,7 +329,7 @@ static void test_percpu_map_cpu_flag(enum bpf_map_type map_type)
 	if (!ASSERT_OK(err, "test_percpu_alloc__load"))
 		goto out;
 
-	test_percpu_map_op_cpu_flag(map, keys, key_sz, max_entries - 1, nr_cpus, true);
+	test_percpu_map_op_cpu_flag(map, keys, key_sz, nr_cpus, nr_cpus, true);
 out:
 	percpu_alloc_array__destroy(skel);
 	free(keys);
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 3+ messages in thread

* Re: [PATCH bpf-next] selftests/bpf: Harden cpu flags test for lru_percpu_hash map
  2026-01-19 13:34 [PATCH bpf-next] selftests/bpf: Harden cpu flags test for lru_percpu_hash map Leon Hwang
@ 2026-01-26 20:39 ` Martin KaFai Lau
  2026-01-26 20:40 ` patchwork-bot+netdevbpf
  1 sibling, 0 replies; 3+ messages in thread
From: Martin KaFai Lau @ 2026-01-26 20:39 UTC (permalink / raw)
  To: Leon Hwang
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Eduard Zingerman, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Shuah Khan,
	linux-kselftest, linux-kernel, kernel-patches-bot, bpf

On 1/19/26 5:34 AM, Leon Hwang wrote:
> Harden the test against this behavior by provisioning sufficient spare
> elements. Set max_entries to 'nr_cpus * 2' and restrict the test to using
> the first nr_cpus entries, ensuring that updates do not spuriously trigger
> LRU eviction.

[ ... ]

> @@ -300,7 +307,7 @@ static void test_percpu_map_cpu_flag(enum bpf_map_type map_type)
>   	if (!ASSERT_GT(nr_cpus, 0, "libbpf_num_possible_cpus"))
>   		return;
>   
> -	max_entries = nr_cpus + 1;
> +	max_entries = nr_cpus * 2;
>   	keys = calloc(max_entries, key_sz);

Does it need to allocate "nr_cpus * 2" keys while only the first
nr_cpus entries are used? This can be a follow-up if it's needed.
Applied to start getting signal from CI.


* Re: [PATCH bpf-next] selftests/bpf: Harden cpu flags test for lru_percpu_hash map
  2026-01-19 13:34 [PATCH bpf-next] selftests/bpf: Harden cpu flags test for lru_percpu_hash map Leon Hwang
  2026-01-26 20:39 ` Martin KaFai Lau
@ 2026-01-26 20:40 ` patchwork-bot+netdevbpf
  1 sibling, 0 replies; 3+ messages in thread
From: patchwork-bot+netdevbpf @ 2026-01-26 20:40 UTC (permalink / raw)
  To: Leon Hwang
  Cc: bpf, ast, daniel, andrii, martin.lau, eddyz87, song,
	yonghong.song, john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah,
	linux-kselftest, linux-kernel, kernel-patches-bot

Hello:

This patch was applied to bpf/bpf-next.git (master)
by Martin KaFai Lau <martin.lau@kernel.org>:

On Mon, 19 Jan 2026 21:34:17 +0800 you wrote:
> CI occasionally reports failures in the
> percpu_alloc/cpu_flag_lru_percpu_hash selftest, for example:
> 
>  First test_progs failure (test_progs_no_alu32-x86_64-llvm-21):
>  #264/15 percpu_alloc/cpu_flag_lru_percpu_hash
>  ...
>  test_percpu_map_op_cpu_flag:FAIL:bpf_map_lookup_batch value on specified cpu unexpected bpf_map_lookup_batch value on specified cpu: actual 0 != expected 3735929054
> 
> [...]

Here is the summary with links:
  - [bpf-next] selftests/bpf: Harden cpu flags test for lru_percpu_hash map
    https://git.kernel.org/bpf/bpf-next/c/78980b4c7fcb

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html




end of thread, other threads:[~2026-01-26 20:40 UTC | newest]

