From: Martin KaFai Lau <martin.lau@linux.dev>
To: Amery Hung <ameryhung@gmail.com>
Cc: bpf@vger.kernel.org, netdev@vger.kernel.org,
alexei.starovoitov@gmail.com, andrii@kernel.org,
daniel@iogearbox.net, paul.chaignon@gmail.com, kuba@kernel.org,
stfomichev@gmail.com, martin.lau@kernel.org,
mohsin.bashr@gmail.com, noren@nvidia.com, dtatulea@nvidia.com,
saeedm@nvidia.com, tariqt@nvidia.com, mbloch@nvidia.com,
maciej.fijalkowski@intel.com, kernel-team@meta.com
Subject: Re: [PATCH bpf-next v6 5/7] bpf: Support specifying linear xdp packet data size for BPF_PROG_TEST_RUN
Date: Mon, 22 Sep 2025 12:20:52 -0700
Message-ID: <10e5dd51-701d-498b-b1eb-68b23df191d9@linux.dev>
In-Reply-To: <20250919230952.3628709-6-ameryhung@gmail.com>
On 9/19/25 4:09 PM, Amery Hung wrote:
> To test bpf_xdp_pull_data(), an xdp packet containing fragments as well
> as a free linear data area after xdp->data_end needs to be created.
> However, bpf_prog_test_run_xdp() always fills the linear area with
> data_in before creating fragments, leaving no space to pull data. This
> patch will allow users to specify the linear data size through
> ctx->data_end.
>
> Currently, ctx_in->data_end must match data_size_in and will not be the
> final ctx->data_end seen by xdp programs. This is because ctx->data_end
> is populated according to the xdp_buff passed to test_run. The linear
> data area available in an xdp_buff, max_data_sz, is always filled up
> before copying data_in into fragments.
>
> This patch will allow users to specify the size of data that goes into
> the linear area. When ctx_in->data_end is different from data_size_in,
> only ctx_in->data_end bytes of data will be put into the linear area when
> creating the xdp_buff.
>
> While ctx_in->data_end will be allowed to be different from data_size_in,
> it cannot be larger than data_size_in, as there will be no data to
> copy from user space. If it is larger than the maximum linear data area
> size, the layout suggested by the user will not be honored. Data beyond
> max_data_sz bytes will still be copied into fragments.
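To make the intended usage concrete, here is roughly how it could be
driven from user space with libbpf (untested sketch; the helper name and
the sizes below are made up for illustration): data_size_in still carries
the whole packet, while ctx_in.data_end caps how many bytes land in the
linear area, the rest being copied into fragments.

#include <linux/bpf.h>
#include <bpf/bpf.h>

/* Hypothetical helper: hand a 9000-byte packet to test_run but keep only
 * the first 512 bytes in the linear area, so bpf_xdp_pull_data() has
 * fragments to pull from.
 */
static int run_xdp_with_frags(int prog_fd)
{
	static __u8 data[9000];		/* whole packet passed as data_in */
	struct xdp_md ctx_in = {};	/* data and data_meta stay 0 */
	LIBBPF_OPTS(bpf_test_run_opts, opts,
		.data_in = data,
		.data_size_in = sizeof(data),	/* total packet size */
		.ctx_in = &ctx_in,
		.ctx_size_in = sizeof(ctx_in),
	);

	/* Only the first 512 bytes go into the linear area; the rest is
	 * copied into fragments.
	 */
	ctx_in.data_end = 512;

	return bpf_prog_test_run_opts(prog_fd, &opts);
}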
>
> Finally, since it is possible for a NIC to produce an xdp_buff with an
> empty linear data area, allow it when calling bpf_test_init() from
> bpf_prog_test_run_xdp() so that we can test XDP kfuncs with such an
> xdp_buff. This is done by moving the lower-bound check to the callers;
> most of them already perform it, except bpf_prog_test_run_skb(), which
> now gets the check added.
>
> Signed-off-by: Amery Hung <ameryhung@gmail.com>
> ---
> net/bpf/test_run.c | 9 +++++++--
> .../selftests/bpf/prog_tests/xdp_context_test_run.c | 4 +---
> 2 files changed, 8 insertions(+), 5 deletions(-)
>
> diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
> index 4a862d605386..0cbd3b898c45 100644
> --- a/net/bpf/test_run.c
> +++ b/net/bpf/test_run.c
> @@ -665,7 +665,7 @@ static void *bpf_test_init(const union bpf_attr *kattr, u32 user_size,
>  	void __user *data_in = u64_to_user_ptr(kattr->test.data_in);
>  	void *data;
> 
> -	if (user_size < ETH_HLEN || user_size > PAGE_SIZE - headroom - tailroom)
> +	if (user_size > PAGE_SIZE - headroom - tailroom)
>  		return ERR_PTR(-EINVAL);
> 
>  	size = SKB_DATA_ALIGN(size);
> @@ -1001,6 +1001,9 @@ int bpf_prog_test_run_skb(struct bpf_prog *prog, const union bpf_attr *kattr,
>  	    kattr->test.cpu || kattr->test.batch_size)
>  		return -EINVAL;
> 
> +	if (size < ETH_HLEN)
> +		return -EINVAL;
> +
>  	data = bpf_test_init(kattr, kattr->test.data_size_in,
>  			     size, NET_SKB_PAD + NET_IP_ALIGN,
>  			     SKB_DATA_ALIGN(sizeof(struct skb_shared_info)));
> @@ -1246,13 +1249,15 @@ int bpf_prog_test_run_xdp(struct bpf_prog *prog, const union bpf_attr *kattr,
I just noticed it. It still needs a "size < ETH_HLEN" test at the beginning of
test_run_xdp. At least the do_live mode still needs to have ETH_HLEN bytes.
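Something like this near the top of bpf_prog_test_run_xdp(), mirroring the
new check added to bpf_prog_test_run_skb() (untested sketch; whether it
should be restricted to the do_live mode is a separate question):

	/* An xdp frame needs at least an Ethernet header */
	if (size < ETH_HLEN)
		return -EINVAL;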
>
>  	if (ctx) {
>  		/* There can't be user provided data before the meta data */
> -		if (ctx->data_meta || ctx->data_end != size ||
> +		if (ctx->data_meta || ctx->data_end > size ||
>  		    ctx->data > ctx->data_end ||
>  		    unlikely(xdp_metalen_invalid(ctx->data)) ||
>  		    (do_live && (kattr->test.data_out || kattr->test.ctx_out)))
>  			goto free_ctx;
>  		/* Meta data is allocated from the headroom */
>  		headroom -= ctx->data;
> +
> +		size = ctx->data_end;
>  	}
>
>  	max_data_sz = PAGE_SIZE - headroom - tailroom;
It still needs to avoid multi-frags/bufs in do_live mode, and the "if (size >
max_data_sz)" check needs some adjustment. I think it is cleaner to test
"kattr->test.data_size_in" specifically. Something like this (untested)?
-	if (size > max_data_sz) {
-		/* disallow live data mode for jumbo frames */
-		if (do_live)
-			goto free_ctx;
-		size = max_data_sz;
-	}
+	size = min_t(u32, size, max_data_sz);
+
+	if (kattr->test.data_size_in > size && do_live)
+		goto free_ctx;
> diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_context_test_run.c b/tools/testing/selftests/bpf/prog_tests/xdp_context_test_run.c
> index 46e0730174ed..178292d1251a 100644
> --- a/tools/testing/selftests/bpf/prog_tests/xdp_context_test_run.c
> +++ b/tools/testing/selftests/bpf/prog_tests/xdp_context_test_run.c
> @@ -97,9 +97,7 @@ void test_xdp_context_test_run(void)
>  	/* Meta data must be 255 bytes or smaller */
>  	test_xdp_context_error(prog_fd, opts, 0, 256, sizeof(data), 0, 0, 0);
> 
> -	/* Total size of data must match data_end - data_meta */
> -	test_xdp_context_error(prog_fd, opts, 0, sizeof(__u32),
> -			       sizeof(data) - 1, 0, 0, 0);
> +	/* Total size of data must be data_end - data_meta or larger */
>  	test_xdp_context_error(prog_fd, opts, 0, sizeof(__u32),
>  			       sizeof(data) + 1, 0, 0, 0);
>