From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
To: Amery Hung <ameryhung@gmail.com>
Cc: <bpf@vger.kernel.org>, <netdev@vger.kernel.org>,
	<alexei.starovoitov@gmail.com>, <andrii@kernel.org>,
	<daniel@iogearbox.net>, <paul.chaignon@gmail.com>,
	<kuba@kernel.org>, <stfomichev@gmail.com>,
	<martin.lau@kernel.org>, <mohsin.bashr@gmail.com>,
	<noren@nvidia.com>, <dtatulea@nvidia.com>, <saeedm@nvidia.com>,
	<tariqt@nvidia.com>, <mbloch@nvidia.com>, <kernel-team@meta.com>
Subject: Re: [PATCH bpf-next v4 2/6] bpf: Support pulling non-linear xdp data
Date: Thu, 18 Sep 2025 11:11:34 +0200
Message-ID: <aMvMxrPsNXbTuF3c@boxer>
In-Reply-To: <20250917225513.3388199-3-ameryhung@gmail.com>

On Wed, Sep 17, 2025 at 03:55:09PM -0700, Amery Hung wrote:
> Add a kfunc, bpf_xdp_pull_data(), to support pulling data from xdp
> fragments. Similar to bpf_skb_pull_data(), bpf_xdp_pull_data() makes
> the first len bytes of data directly readable and writable in bpf
> programs. If the "len" argument is larger than the linear data size,
> data in fragments will be copied to the linear data area if there is
> enough room. Specifically, the kfunc tries to use the tailroom first;
> when the tailroom is not enough, metadata and data are shifted down to
> make room for the pulled data.
> 
> One use case of the kfunc is decapsulating headers that reside in xdp
> fragments, since it is possible for a NIC driver to place headers
> there. To keep using direct packet access for parsing and
> decapsulating headers, users can pull the headers into the linear data
> area with bpf_xdp_pull_data() and then pop them with
> bpf_xdp_adjust_head().
> 
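
For readers wanting to try this from a program, a rough sketch of the
decap flow described above (program skeleton, section name and the
16-byte header length are made up for illustration, not taken from this
series):

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  /* kfunc added by this patch, declared as a ksym for the sketch */
  extern int bpf_xdp_pull_data(struct xdp_md *xdp, __u32 len) __ksym;

  SEC("xdp.frags")
  int xdp_decap(struct xdp_md *ctx)
  {
          const __u32 hdr_len = 16;       /* hypothetical encap header */
          void *data, *data_end;

          /* Make the first hdr_len bytes directly accessible. */
          if (bpf_xdp_pull_data(ctx, hdr_len))
                  return XDP_DROP;

          /* The pull may have changed the buffer geometry; re-derive
           * and re-check packet pointers before direct packet access.
           */
          data = (void *)(long)ctx->data;
          data_end = (void *)(long)ctx->data_end;
          if (data + hdr_len > data_end)
                  return XDP_DROP;

          /* ... parse the outer header through 'data' here ... */

          /* Pop the parsed header. */
          if (bpf_xdp_adjust_head(ctx, hdr_len))
                  return XDP_DROP;

          return XDP_PASS;
  }

  char _license[] SEC("license") = "GPL";
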
> Reviewed-by: Jakub Kicinski <kuba@kernel.org>
> Signed-off-by: Amery Hung <ameryhung@gmail.com>
> ---
>  net/core/filter.c | 91 +++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 91 insertions(+)
> 
> diff --git a/net/core/filter.c b/net/core/filter.c
> index 0b82cb348ce0..0e8d63bf1d30 100644
> --- a/net/core/filter.c
> +++ b/net/core/filter.c
> @@ -12212,6 +12212,96 @@ __bpf_kfunc int bpf_sock_ops_enable_tx_tstamp(struct bpf_sock_ops_kern *skops,
>  	return 0;
>  }
>  
> +/**
> + * bpf_xdp_pull_data() - Pull in non-linear xdp data.
> + * @x: &xdp_md associated with the XDP buffer
> + * @len: length of data to be made directly accessible in the linear part
> + *
> + * Pull in data in case the XDP buffer associated with @x is non-linear and
> + * not all of the first @len bytes are in the linear data area.
> + *
> + * Direct packet access allows reading and writing linear XDP data through
> + * packet pointers (i.e., &xdp_md->data + offsets). The amount of data which
> + * ends up in the linear part of the xdp_buff depends on the NIC and its
> + * configuration. When a frag-capable XDP program wants to directly access
> + * headers that may be in the non-linear area, call this kfunc to make sure
> + * the data is available in the linear area. Alternatively, use dynptr or
> + * bpf_xdp_{load,store}_bytes() to access data without pulling.
> + *
> + * This kfunc can also be used with bpf_xdp_adjust_head() to decapsulate
> + * headers in the non-linear data area.
> + *
> + * A call to this kfunc may reduce headroom: if there is not enough
> + * tailroom in the linear data area, metadata and data are shifted down
> + * to make room for the pulled data.
> + *
> + * A call to this kfunc may change the buffer geometry. Therefore, all
> + * checks on packet pointers previously done by the verifier are
> + * invalidated and must be performed again if the kfunc is used in
> + * combination with direct packet access.
> + *
> + * Return:
> + * * %0         - success
> + * * %-EINVAL   - invalid len
> + */
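
Small aside on the alternative the comment mentions: when the header
only needs to be read, copying it out with bpf_xdp_load_bytes() leaves
the buffer geometry (and hence the verified packet pointers) untouched.
A snippet that could replace the pull in the sketch above, header
length again made up:

          __u8 hdr[16];   /* hypothetical header length */

          /* Copies across the linear/frag boundary without pulling;
           * no re-check of packet pointers is needed afterwards.
           */
          if (bpf_xdp_load_bytes(ctx, 0, hdr, sizeof(hdr)))
                  return XDP_DROP;
          /* ... parse 'hdr' ... */
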
> +__bpf_kfunc int bpf_xdp_pull_data(struct xdp_md *x, u32 len)
> +{
> +	struct xdp_buff *xdp = (struct xdp_buff *)x;
> +	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
> +	int i, delta, shift, headroom, tailroom, n_frags_free = 0;
> +	void *data_hard_end = xdp_data_hard_end(xdp);
> +	int data_len = xdp->data_end - xdp->data;
> +	void *start;
> +
> +	if (len <= data_len)
> +		return 0;
> +
> +	if (unlikely(len > xdp_get_buff_len(xdp)))
> +		return -EINVAL;
> +
> +	start = xdp_data_meta_unsupported(xdp) ? xdp->data : xdp->data_meta;
> +
> +	headroom = start - xdp->data_hard_start - sizeof(struct xdp_frame);
> +	tailroom = data_hard_end - xdp->data_end;
> +
> +	delta = len - data_len;
> +	if (unlikely(delta > tailroom + headroom))
> +		return -EINVAL;
> +
> +	shift = delta - tailroom;
> +	if (shift > 0) {
> +		memmove(start - shift, start, xdp->data_end - start);
> +
> +		xdp->data_meta -= shift;
> +		xdp->data -= shift;
> +		xdp->data_end -= shift;
> +	}
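
If I read the shift logic right, with made-up numbers: say data_len is
128 and len is 512, so delta is 384; with a tailroom of 256 and a
headroom of 192 the request fits (384 <= 448), and shift = 384 - 256 =
128, so metadata and data move down by 128 bytes. After the memmove the
room between the new data_end and data_hard_end is 256 + 128 = 384
bytes, exactly the delta that the loop below pulls out of the frags.
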
> +
> +	for (i = 0; i < sinfo->nr_frags && delta; i++) {
> +		skb_frag_t *frag = &sinfo->frags[i];
> +		u32 shrink = min_t(u32, delta, skb_frag_size(frag));
> +
> +		memcpy(xdp->data_end, skb_frag_address(frag), shrink);
> +
> +		xdp->data_end += shrink;
> +		sinfo->xdp_frags_size -= shrink;
> +		delta -= shrink;
> +		if (bpf_xdp_shrink_data(xdp, frag, shrink, false))
> +			n_frags_free++;
> +	}
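
Continuing the made-up numbers: with two frags of 256 and 512 bytes,
the first iteration copies all 256 bytes and frees that frag
(n_frags_free becomes 1), the second copies the remaining 128 bytes
from the head of the next frag, leaving 384 bytes in it. xdp_frags_size
drops by 384 in total and delta reaches zero, ending the loop.
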
> +
> +	if (unlikely(n_frags_free)) {
> +		memmove(sinfo->frags, sinfo->frags + n_frags_free,
> +			(sinfo->nr_frags - n_frags_free) * sizeof(skb_frag_t));
> +
> +		sinfo->nr_frags -= n_frags_free;
> +
> +		if (!sinfo->nr_frags)
> +			xdp_buff_clear_frags_flag(xdp);

Nit: should we take care of the pfmemalloc flag here as well?

> +	}
> +
> +	return 0;
> +}
> +
>  __bpf_kfunc_end_defs();
>  
>  int bpf_dynptr_from_skb_rdonly(struct __sk_buff *skb, u64 flags,
> @@ -12239,6 +12329,7 @@ BTF_KFUNCS_END(bpf_kfunc_check_set_skb_meta)
>  
>  BTF_KFUNCS_START(bpf_kfunc_check_set_xdp)
>  BTF_ID_FLAGS(func, bpf_dynptr_from_xdp)
> +BTF_ID_FLAGS(func, bpf_xdp_pull_data)
>  BTF_KFUNCS_END(bpf_kfunc_check_set_xdp)
>  
>  BTF_KFUNCS_START(bpf_kfunc_check_set_sock_addr)
> -- 
> 2.47.3
> 


Thread overview: 15+ messages
2025-09-17 22:55 [PATCH bpf-next v4 0/6] Add kfunc bpf_xdp_pull_data Amery Hung
2025-09-17 22:55 ` [PATCH bpf-next v4 1/6] bpf: Allow bpf_xdp_shrink_data to shrink a frag from head and tail Amery Hung
2025-09-18  8:52   ` Maciej Fijalkowski
2025-09-18 17:50     ` Amery Hung
2025-09-17 22:55 ` [PATCH bpf-next v4 2/6] bpf: Support pulling non-linear xdp data Amery Hung
2025-09-18  9:11   ` Maciej Fijalkowski [this message]
2025-09-18 17:56     ` Amery Hung
2025-09-18 20:19       ` Amery Hung
2025-09-17 22:55 ` [PATCH bpf-next v4 3/6] bpf: Clear packet pointers after changing packet data in kfuncs Amery Hung
2025-09-17 22:55 ` [PATCH bpf-next v4 4/6] bpf: Support specifying linear xdp packet data size for BPF_PROG_TEST_RUN Amery Hung
2025-09-17 22:55 ` [PATCH bpf-next v4 5/6] selftests/bpf: Test bpf_xdp_pull_data Amery Hung
2025-09-18 11:33   ` Maciej Fijalkowski
2025-09-18 19:43     ` Amery Hung
2025-09-17 22:55 ` [PATCH bpf-next v4 6/6] selftests: drv-net: Pull data before parsing headers Amery Hung
