From: "Jose A. Perez de Azpillaga" <azpijr@gmail.com>
To: Alexei Starovoitov <alexei.starovoitov@gmail.com>
Cc: bpf <bpf@vger.kernel.org>, Madalin Bucur <madalin.bucur@nxp.com>,
	 Andrew Lunn <andrew+netdev@lunn.ch>,
	"David S. Miller" <davem@davemloft.net>,
	 Eric Dumazet <edumazet@google.com>,
	Jakub Kicinski <kuba@kernel.org>,
	 Paolo Abeni <pabeni@redhat.com>,
	Alexei Starovoitov <ast@kernel.org>,
	 Daniel Borkmann <daniel@iogearbox.net>,
	Jesper Dangaard Brouer <hawk@kernel.org>,
	 John Fastabend <john.fastabend@gmail.com>,
	Stanislav Fomichev <sdf@fomichev.me>,
	 Simon Horman <horms@kernel.org>,
	Andrii Nakryiko <andrii@kernel.org>,
	 Martin KaFai Lau <martin.lau@linux.dev>,
	Eduard Zingerman <eddyz87@gmail.com>,
	 Kumar Kartikeya Dwivedi <memxor@gmail.com>,
	Song Liu <song@kernel.org>,
	 Yonghong Song <yonghong.song@linux.dev>,
	Jiri Olsa <jolsa@kernel.org>,
	 Network Development <netdev@vger.kernel.org>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [RFC PATCH] bpf: cpumap: report queue_index to xdp_rxq_info
Date: Sat, 11 Apr 2026 21:10:40 +0200
Message-ID: <adqbrqZgYlmI37Ib@gmail.com>
In-Reply-To: <CAADnVQJOKc0zP4=-STEX0szgzUeS7RaxQTtre=P92R0UStug8A@mail.gmail.com>

On Sat, Apr 11, 2026 at 11:09:56AM -0700, Alexei Starovoitov wrote:
> On Sat, Apr 11, 2026 at 10:51 AM Jose A. Perez de Azpillaga
> <azpijr@gmail.com> wrote:
> >
> > When a packet is redirected to a CPU map entry,
> > cpu_map_bpf_prog_run_xdp() reconstructs a minimal xdp_rxq_info from
> > xdp_frame fields (dev_rx and mem_type) before re-running the BPF program
> > on the target CPU. However, queue_index was never preserved across the
> > CPU boundary, so BPF programs running in cpumap context always observe
> > ctx->rx_queue_index == 0, regardless of which hardware queue originally
> > received the packet.
> >
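[ To make the symptom concrete: a program attached to the cpumap entry
  always logs 0 here, regardless of the hardware queue the frame arrived
  on. A minimal sketch, assuming libbpf's SEC("xdp/cpumap") attach type;
  the program name is mine and it is not part of the patch:

	#include <linux/bpf.h>
	#include <bpf/bpf_helpers.h>

	SEC("xdp/cpumap")
	int log_rx_queue(struct xdp_md *ctx)
	{
		/* Reads the rxq_info reconstructed by cpu_map_bpf_prog_run_xdp().
		 * Without this patch it never carries the original queue_index,
		 * so this always prints 0.
		 */
		bpf_printk("rx_queue_index=%u", ctx->rx_queue_index);
		return XDP_PASS;
	}

	char _license[] SEC("license") = "GPL";
  ]
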
> > Fix this by storing the originating queue_index in struct xdp_frame,
> > following the same pattern already established for dev_rx and mem_type.
> > The field is populated from rxq->queue_index in
> > xdp_convert_buff_to_frame() during NAPI context, when the rxq_info is
> > still valid, and restored into the reconstructed rxq_info in
> > cpu_map_bpf_prog_run_xdp().
> >
> > Also use xdpf->queue_index in __xdp_build_skb_from_frame() to call
> > skb_record_rx_queue(), which was previously listed as missing
> > information in that function's comment.
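
[ The corresponding net/core/xdp.c hunk was trimmed from the quote
  above; per the diffstat it is a one-line replacement, presumably
  swapping the "missing info" comment line for something like the
  following (my reconstruction, not the literal hunk):

	skb_record_rx_queue(skb, xdpf->queue_index); /* queue preserved at RX */
  ]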
> >
> > Also propagate queue_index in dpaa_a050385_wa_xdpf(), which manually
> > constructs a new xdp_frame from an uninitialized page. Without this,
> > queue_index would contain stale data from the page allocator.
> >
> > Signed-off-by: Jose A. Perez de Azpillaga <azpijr@gmail.com>
> > ---
> > Note: this patch was only compiled, not tested.
> >
> >  drivers/net/ethernet/freescale/dpaa/dpaa_eth.c | 1 +
> >  include/net/xdp.h                              | 4 +++-
> >  kernel/bpf/cpumap.c                            | 2 +-
> >  net/core/xdp.c                                 | 2 +-
> >  4 files changed, 6 insertions(+), 3 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
> > index 3edc8d142dd5..00e36b0ac74d 100644
> > --- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
> > +++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
> > @@ -2281,6 +2281,7 @@ static int dpaa_a050385_wa_xdpf(struct dpaa_priv *priv,
> >         new_xdpf->headroom = priv->tx_headroom;
> >         new_xdpf->frame_sz = DPAA_BP_RAW_SIZE;
> >         new_xdpf->mem_type = MEM_TYPE_PAGE_ORDER0;
> > +       new_xdpf->queue_index = xdpf->queue_index;
> >
> >         /* Release the initial buffer */
> >         xdp_return_frame_rx_napi(xdpf);
> > diff --git a/include/net/xdp.h b/include/net/xdp.h
> > index aa742f413c35..6db10e6a8864 100644
> > --- a/include/net/xdp.h
> > +++ b/include/net/xdp.h
> > @@ -297,10 +297,11 @@ struct xdp_frame {
> >         u32 headroom;
> >         u32 metasize; /* uses lower 8-bits */
> >         /* Lifetime of xdp_rxq_info is limited to NAPI/enqueue time,
> > -        * while mem_type is valid on remote CPU.
> > +        * while mem_type and queue_index are valid on remote CPU.
> >          */
> >         enum xdp_mem_type mem_type:32;
> >         struct net_device *dev_rx; /* used by cpumap */
> > +       u32 queue_index; /* used by cpumap */
> >         u32 frame_sz;
> >         u32 flags; /* supported values defined in xdp_buff_flags */
> >  };
> > @@ -441,6 +442,7 @@ struct xdp_frame *xdp_convert_buff_to_frame(struct xdp_buff *xdp)
> >
> >         /* rxq only valid until napi_schedule ends, convert to xdp_mem_type */
> >         xdp_frame->mem_type = xdp->rxq->mem.type;
> > +       xdp_frame->queue_index = xdp->rxq->queue_index;
> >
> >         return xdp_frame;
> >  }
> > diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
> > index 5e59ab896f05..448da572de9a 100644
> > --- a/kernel/bpf/cpumap.c
> > +++ b/kernel/bpf/cpumap.c
> > @@ -197,7 +197,7 @@ static int cpu_map_bpf_prog_run_xdp(struct bpf_cpu_map_entry *rcpu,
> >
> >                 rxq.dev = xdpf->dev_rx;
> >                 rxq.mem.type = xdpf->mem_type;
> > -               /* TODO: report queue_index to xdp_rxq_info */
> > +               rxq.queue_index = xdpf->queue_index;
>
> This is like the 5th time people have attempted to address this TODO.
>
> Just remove that comment. Don't send broken patches.

oh... okay. But one question before I respin: the CI bot spotted
something I had missed, namely that queue_index should also be
propagated in xdp_convert_zc_to_xdp_frame(), unless leaving it out of
the zero-copy path is intentional.
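
Something like this, I assume (untested, only meant to show what the
bot flagged; the context line is from memory, not an actual hunk):

	--- a/net/core/xdp.c
	+++ b/net/core/xdp.c
	@@ ... @@ struct xdp_frame *xdp_convert_zc_to_xdp_frame(struct xdp_buff *xdp)
	 	xdpf->mem_type = MEM_TYPE_PAGE_ORDER0;
	+	xdpf->queue_index = xdp->rxq->queue_index;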

Is it better to do as you said and just remove the comment, or to
follow the bot's suggestion and resend once it has been properly
tested?

--
regards,
jose a. p-a

Thread overview: 4+ messages
2026-04-11 17:51 [RFC PATCH] bpf: cpumap: report queue_index to xdp_rxq_info Jose A. Perez de Azpillaga
2026-04-11 18:09 ` Alexei Starovoitov
2026-04-11 19:10   ` Jose A. Perez de Azpillaga [this message]
2026-04-11 18:30 ` bot+bpf-ci
