From: Leon Hwang <leon.hwang@linux.dev>
To: sashiko-reviews@lists.linux.dev
Cc: bpf@vger.kernel.org
Subject: Re: [PATCH bpf-next v14 8/8] selftests/bpf: Add tests to verify map create failure log
Date: Thu, 14 May 2026 22:26:14 +0800
Message-ID: <57b7430f-babc-49d2-9434-98d0fa6b45e1@linux.dev>
In-Reply-To: <20260514012532.E0375C2BCB3@smtp.kernel.org>
On 2026/5/14 09:25, sashiko-bot@kernel.org wrote:
[...]
>> +static void test_invalid_numa_node(void)
>> +{
>> +	const char *msg = "Invalid numa_node.\n";
>> +	LIBBPF_OPTS(bpf_map_create_opts, opts,
>> +		.map_flags = BPF_F_NUMA_NODE,
>> +		.numa_node = 0xFF,
>
> Would this test be flaky on a massive system? Using 0xFF (255) for the
> NUMA node could accidentally match an online node on systems with large NUMA
> topologies (e.g., up to 1024 nodes).
>
> Would it be safer to use ~0U or -1 to represent a guaranteed invalid node?
>
>> +	);
>> +
>> +	test_map_create_array(&opts, msg);
>> +}
> [ ... ]
>> +static void test_invalid_token_fd(void)
>> +{
>> +	const char *msg = "Invalid map_token_fd.\n";
>> +	LIBBPF_OPTS(bpf_map_create_opts, opts,
>> +		.map_flags = BPF_F_TOKEN_FD,
>> +		.token_fd = 0xFF,
>
> Could the use of 0xFF (255) as a file descriptor lead to spurious test failures?
>
> While an open FD 255 is unlikely to be a BPF token, using -1 or ~0U is the
> standard way to provide an invalid FD, which completely eliminates the risk of
> accidentally hitting an open file descriptor.
>
>> +	);
>> +
>> +	test_map_create_array(&opts, msg);
>> +}
>
I replied to the same point in v13:
https://lore.kernel.org/bpf/b22790b4-1e33-4cab-bc2f-92cf7fd60bfd@linux.dev/.
Thanks,
Leon
[...]
Thread overview: 21+ messages
2026-05-12 15:31 [PATCH bpf-next v14 0/8] bpf: Extend BPF syscall with common attributes support Leon Hwang
2026-05-12 15:31 ` [PATCH bpf-next v14 1/8] " Leon Hwang
2026-05-13 22:48   ` sashiko-bot
2026-05-14 14:24     ` Leon Hwang
2026-05-12 15:31 ` [PATCH bpf-next v14 2/8] libbpf: Add support for extended BPF syscall Leon Hwang
2026-05-12 16:23   ` bot+bpf-ci
2026-05-13  2:10     ` Leon Hwang
2026-05-12 15:31 ` [PATCH bpf-next v14 3/8] bpf: Refactor reporting log_true_size for prog_load Leon Hwang
2026-05-12 15:31 ` [PATCH bpf-next v14 4/8] bpf: Add syscall common attributes support " Leon Hwang
2026-05-13 23:56   ` sashiko-bot
2026-05-12 15:31 ` [PATCH bpf-next v14 5/8] bpf: Add syscall common attributes support for btf_load Leon Hwang
2026-05-12 15:31 ` [PATCH bpf-next v14 6/8] bpf: Add syscall common attributes support for map_create Leon Hwang
2026-05-14  0:46   ` sashiko-bot
2026-05-14 14:25     ` Leon Hwang
2026-05-12 15:31 ` [PATCH bpf-next v14 7/8] libbpf: " Leon Hwang
2026-05-14  1:08   ` sashiko-bot
2026-05-14 14:25     ` Leon Hwang
2026-05-12 15:31 ` [PATCH bpf-next v14 8/8] selftests/bpf: Add tests to verify map create failure log Leon Hwang
2026-05-14  1:25   ` sashiko-bot
2026-05-14 14:26     ` Leon Hwang [this message]
2026-05-12 19:50 ` [PATCH bpf-next v14 0/8] bpf: Extend BPF syscall with common attributes support patchwork-bot+netdevbpf