From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
To: Jakub Kicinski <kuba@kernel.org>
Cc: <bpf@vger.kernel.org>, <ast@kernel.org>, <daniel@iogearbox.net>,
<hawk@kernel.org>, <ilias.apalodimas@linaro.org>,
<toke@redhat.com>, <lorenzo@kernel.org>, <netdev@vger.kernel.org>,
<magnus.karlsson@intel.com>, <andrii@kernel.org>,
<stfomichev@gmail.com>, <aleksander.lobakin@intel.com>
Subject: Re: [PATCH bpf 2/2] veth: update mem type in xdp_buff
Date: Tue, 7 Oct 2025 16:59:21 +0200
Message-ID: <aOUqyXZvmxjhJnEe@boxer>
In-Reply-To: <20251003161026.5190fcd2@kernel.org>
On Fri, Oct 03, 2025 at 04:10:26PM -0700, Jakub Kicinski wrote:
> On Fri, 3 Oct 2025 16:02:43 +0200 Maciej Fijalkowski wrote:
> > + xdp_update_mem_type(xdp);
> > +
> > act = bpf_prog_run_xdp(xdp_prog, xdp);
>
> The new helper doesn't really express what's going on. Developers
> won't know what we're updating mem_type to, and why. Right?
Hey, sorry for the delay.

Agreed that it lacks a sufficient comment explaining the purpose behind
it.
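Something along these lines maybe (rough sketch, untested; the
PAGE_POOL part follows your note further down about frags having to
come from a page_pool):

	/* veth gets traffic as skbs from the peer's ndo_start_xmit().
	 * Before the XDP program runs, record in the rxq how the
	 * underlying memory was allocated, so that e.g. XDP_REDIRECT
	 * returns the buffer through the right path instead of
	 * leaking it.
	 */
	static void xdp_update_mem_type(struct xdp_buff *xdp)
	{
		xdp->rxq->mem.type = MEM_TYPE_PAGE_POOL;
	}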
>
> My thinking was that we should try to bake the rxq into "conversion"
> APIs, draft diff below, very much unfinished and I'm probably missing
> some cases but hopefully gets the point across:
That is not related IMHO. The bugs being fixed have existing rxqs; it's
just the mem type that needs to be correctly set per packet.

Plus, we do *not* convert a frame to a buff here, which was your initial
(on point) comment WRT on-stack rxqs. Traffic arrives as skbs from the
peer's ndo_start_xmit(). What you're referring to is the case where the
source is an xdp_frame (in veth that is the ndo_xdp_xmit or XDP_TX
path).

However, the problem pointed out by AI (!) is something we should fix:
for XDP_{TX,REDIRECT} the xdp_rxq_info is overwritten and the mem type
update is lost.
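To be concrete, I mean these spots in veth_xdp_rcv_skb() (quoting from
memory, so modulo the exact surroundings):

	case XDP_TX:
		veth_xdp_get(xdp);
		consume_skb(skb);
		xdp->rxq->mem = rq->xdp_mem;
		...
	case XDP_REDIRECT:
		veth_xdp_get(xdp);
		consume_skb(skb);
		xdp->rxq->mem = rq->xdp_mem;

The rq->xdp_mem assignment wipes whatever mem type was set just before
bpf_prog_run_xdp(), so on completion the buffer is freed the wrong way.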
>
> diff --git a/include/net/xdp.h b/include/net/xdp.h
> index aa742f413c35..e7f75d551d8f 100644
> --- a/include/net/xdp.h
> +++ b/include/net/xdp.h
> @@ -384,9 +384,21 @@ struct sk_buff *xdp_build_skb_from_frame(struct xdp_frame *xdpf,
> struct net_device *dev);
> struct xdp_frame *xdpf_clone(struct xdp_frame *xdpf);
>
> +/* Initialize rxq struct on the stack for processing @frame.
> + * Not necessary when processing in context of a driver which has a real rxq,
> + * and passes it to xdp_convert_frame_to_buff().
> + */
> +static inline
> +void xdp_rxq_prep_on_stack(const struct xdp_frame *frame,
> + struct xdp_rxq_info *rxq)
> +{
> +	rxq->dev = frame->dev_rx;
> + /* TODO: report queue_index to xdp_rxq_info */
> +}
> +
> static inline
> void xdp_convert_frame_to_buff(const struct xdp_frame *frame,
> - struct xdp_buff *xdp)
> + struct xdp_buff *xdp, struct xdp_rxq_info *rxq)
> {
> xdp->data_hard_start = frame->data - frame->headroom - sizeof(*frame);
> xdp->data = frame->data;
> @@ -394,6 +406,22 @@ void xdp_convert_frame_to_buff(const struct xdp_frame *frame,
> xdp->data_meta = frame->data - frame->metasize;
> xdp->frame_sz = frame->frame_sz;
> xdp->flags = frame->flags;
> +
> +	rxq->mem.type = frame->mem_type;
> +}
> +
> +/* Initialize an xdp_buff from an skb.
> + *
> + * Note: if skb has frags skb_cow_data_for_xdp() must be called first,
> + * or caller must otherwise guarantee that the frags come from a page pool
> + */
> +static inline
> +void xdp_convert_skb_to_buff(const struct xdp_frame *frame,
> + struct xdp_buff *xdp, struct xdp_rxq_info *rxq)
I would expect to get skb as an input here
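e.g. something like (hypothetical signature, keeping your naming):

	static inline
	void xdp_convert_skb_to_buff(const struct sk_buff *skb,
				     struct xdp_buff *xdp,
				     struct xdp_rxq_info *rxq);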
> +{
> +	/* copy the init_buff / prep_buff here */
> +
> + rxq->mem.type = MEM_TYPE_PAGE_POOL; /* see note above the function */
> }
>
> static inline
> diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
> index 703e5df1f4ef..60ba15bbec59 100644
> --- a/kernel/bpf/cpumap.c
> +++ b/kernel/bpf/cpumap.c
> @@ -193,11 +193,8 @@ static int cpu_map_bpf_prog_run_xdp(struct bpf_cpu_map_entry *rcpu,
> u32 act;
> int err;
>
> - rxq.dev = xdpf->dev_rx;
> - rxq.mem.type = xdpf->mem_type;
> - /* TODO: report queue_index to xdp_rxq_info */
> -
> - xdp_convert_frame_to_buff(xdpf, &xdp);
> + xdp_rxq_prep_on_stack(xdpf, &rxq);
> + xdp_convert_frame_to_buff(xdpf, &xdp, &rxq);
>
> act = bpf_prog_run_xdp(rcpu->prog, &xdp);
> switch (act) {
>
>