From: Martin KaFai Lau <martin.lau@linux.dev>
To: Amery Hung <ameryhung@gmail.com>
Cc: bpf@vger.kernel.org, netdev@vger.kernel.org,
	alexei.starovoitov@gmail.com, andrii@kernel.org,
	daniel@iogearbox.net, kuba@kernel.org, stfomichev@gmail.com,
	martin.lau@kernel.org, mohsin.bashr@gmail.com, noren@nvidia.com,
	dtatulea@nvidia.com, saeedm@nvidia.com, tariqt@nvidia.com,
	mbloch@nvidia.com, maciej.fijalkowski@intel.com,
	kernel-team@meta.com
Subject: Re: [PATCH bpf-next v2 3/7] bpf: Support pulling non-linear xdp data
Date: Mon, 8 Sep 2025 12:27:49 -0700
Message-ID: <54cddbbd-1c0d-467a-af49-bb6484a62f26@linux.dev>
In-Reply-To: <20250905173352.3759457-4-ameryhung@gmail.com>

On 9/5/25 10:33 AM, Amery Hung wrote:
> An unused argument, flags, is reserved for future extension (e.g.,
> tossing the data instead of copying it to the linear data area).

> +__bpf_kfunc int bpf_xdp_pull_data(struct xdp_md *x, u32 len, u64 flags)

I was thinking the flags argument may be needed to avoid the copy. However, I
think we recently concluded that bpf_xdp_adjust_head can also support
shrinking on multi-buf. If that is the case, it is probably better to keep an
API similar to bpf_skb_pull_data, which does not have a flags argument.
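
For illustration, a minimal sketch of how an XDP program could call the kfunc
if the flags argument were dropped to mirror bpf_skb_pull_data. The
two-argument signature, the __ksym declaration, and the pull length of 128 are
assumptions for this sketch, not the final API:

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Hypothetical flags-less signature, pending the outcome of this thread. */
extern int bpf_xdp_pull_data(struct xdp_md *xdp, __u32 len) __ksym;

SEC("xdp")
int pull_then_parse(struct xdp_md *ctx)
{
	void *data, *data_end;

	/* Linearize the first 128 bytes so direct packet access can
	 * parse the headers; a no-op when the data is already linear.
	 */
	if (bpf_xdp_pull_data(ctx, 128))
		return XDP_DROP;

	/* Packet pointers must be reloaded after any kfunc that can
	 * change packet data.
	 */
	data = (void *)(long)ctx->data;
	data_end = (void *)(long)ctx->data_end;
	if (data + 128 > data_end)
		return XDP_DROP;

	/* ... parse Ethernet/IP headers from the linear area ... */
	return XDP_PASS;
}

char LICENSE[] SEC("license") = "GPL";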



Thread overview: 16+ messages
2025-09-05 17:33 [PATCH bpf-next v2 0/7] Add kfunc bpf_xdp_pull_data Amery Hung
2025-09-05 17:33 ` [PATCH bpf-next v2 1/7] net/mlx5e: Fix generating skb from nonlinear xdp_buff Amery Hung
2025-09-08 14:41   ` Dragos Tatulea
2025-09-08 17:23     ` Amery Hung
2025-09-05 17:33 ` [PATCH bpf-next v2 2/7] bpf: Allow bpf_xdp_shrink_data to shrink a frag from head and tail Amery Hung
2025-09-05 17:33 ` [PATCH bpf-next v2 3/7] bpf: Support pulling non-linear xdp data Amery Hung
2025-09-08 19:27   ` Martin KaFai Lau [this message]
2025-09-08 22:28     ` Amery Hung
2025-09-09  1:54   ` Jakub Kicinski
2025-09-10 15:17     ` Amery Hung
2025-09-10 18:04       ` Jakub Kicinski
2025-09-10 19:11         ` Amery Hung
2025-09-05 17:33 ` [PATCH bpf-next v2 4/7] bpf: Clear packet pointers after changing packet data in kfuncs Amery Hung
2025-09-05 17:33 ` [PATCH bpf-next v2 5/7] bpf: Support specifying linear xdp packet data size in test_run Amery Hung
2025-09-05 17:33 ` [PATCH bpf-next v2 6/7] selftests/bpf: Test bpf_xdp_pull_data Amery Hung
2025-09-05 17:33 ` [PATCH bpf-next v2 7/7] selftests: drv-net: Pull data before parsing headers Amery Hung
