From: Adith-Joshua <adithalex29@gmail.com>
To: bpf@vger.kernel.org
Cc: netdev@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net,
	andrii@kernel.org, kuba@kernel.org, hawk@kernel.org,
	john.fastabend@gmail.com
Subject: [RFC bpf] Ingress RX queue provenance across cpumap redirect
Date: Tue,  7 Apr 2026 11:32:03 +0530
Message-ID: <20260407060203.3391-1-adithalex29@gmail.com>

Hi,

While working with XDP programs that use cpumap redirection, I observed that
ctx->rx_queue_index reads as 0 in the XDP program that cpumap re-invokes on
the remote CPU, after the xdp_buff -> xdp_frame conversion.

This is expected since xdp_frame does not carry xdp_rxq_info and cpumap
re-invokes XDP in a new execution context.
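
For reference, a minimal sketch of the setup in which I observed this
(untested, purely illustrative; section names follow libbpf conventions,
program and map names are mine):

```c
/* Stage 1 runs at the NIC RX hook; stage 2 is re-invoked by cpumap on
 * the remote CPU. Names and sizes are illustrative. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

#define NR_CPUS 8

struct {
	__uint(type, BPF_MAP_TYPE_CPUMAP);
	__uint(max_entries, NR_CPUS);
	__type(key, __u32);
	__type(value, struct bpf_cpumap_val);
} cpu_map SEC(".maps");

SEC("xdp")
int xdp_stage1(struct xdp_md *ctx)
{
	__u32 qid = ctx->rx_queue_index;	/* valid at the NIC hook */

	return bpf_redirect_map(&cpu_map, qid % NR_CPUS, 0);
}

SEC("xdp/cpumap")
int xdp_stage2(struct xdp_md *ctx)
{
	__u32 qid = ctx->rx_queue_index;	/* observed: always 0 here */

	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";
```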

---

## Motivation

Some XDP deployments use cpumap as part of a multi-stage packet processing
pipeline, where XDP is effectively used as a distributed processing model
across CPUs rather than a single RX invocation.

In such setups, the ingress RX queue identity (rx_queue_index) is sometimes
needed beyond the initial hook: not just for debugging, but to maintain
consistent observability and pipeline semantics across stages. This includes:

  - maintaining consistent per-RX-queue accounting across cpumap and later
    XDP stages
  - preserving RSS-based classification identity for traffic analysis and
    validation of NIC steering behavior across a distributed pipeline
  - enabling end-to-end telemetry correlation from hardware queue origin
    through CPU-side processing
  - supporting reproducibility of packet processing paths in asynchronous
    cpumap-driven execution where scheduling and CPU assignment may vary

In these cases, rx_queue_index acts as a stable ingress classification anchor.
Losing this information at the cpumap boundary breaks the assumption that
XDP programs operate on a logically continuous packet context across stages.
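
As a concrete instance of the accounting point: a per-RX-queue counter of
the form below only works at the NIC hook today, since rx_queue_index
collapses to 0 in later cpumap stages (sketch only; map name and size are
mine):

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Per-RX-queue packet counter, keyed by ctx->rx_queue_index. */
struct {
	__uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
	__uint(max_entries, 64);	/* >= NIC RX queue count */
	__type(key, __u32);
	__type(value, __u64);
} rxq_pkts SEC(".maps");

SEC("xdp")
int count_per_queue(struct xdp_md *ctx)
{
	__u32 qid = ctx->rx_queue_index;
	__u64 *cnt = bpf_map_lookup_elem(&rxq_pkts, &qid);

	if (cnt)
		(*cnt)++;	/* per-CPU slot, no atomics needed */
	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";
```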

---

## Design observation

Current behavior appears intentional:

  - xdp_frame does not carry xdp_rxq_info
  - cpumap executes XDP in a new RX context
  - RX metadata is not considered part of redirected packet state

This suggests that RX provenance is currently scoped strictly to the NIC RX
invocation context, and is not carried across execution boundaries such as
cpumap or devmap.
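
For context, the shape of struct xdp_frame as I read it (paraphrased from
include/net/xdp.h; the exact layout varies across kernel versions): the
ingress net_device is kept, but nothing identifies the RX queue:

```c
/* Paraphrased from include/net/xdp.h -- not the exact layout. */
struct xdp_frame {
	void *data;
	u16 len;
	u16 headroom;
	u32 metasize;			/* XDP metadata survives conversion */
	struct xdp_mem_info mem;
	struct net_device *dev_rx;	/* ingress device is preserved ... */
	/* ... but there is no xdp_rxq_info and no queue_index here */
};
```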

---

## Question

Is RX queue provenance expected to be part of the XDP execution model across
redirect boundaries, or is it explicitly considered out of scope once a packet
is passed through cpumap/devmap?

---

## Alternative direction (for clarification only)

One possible model could be to treat ingress RX metadata as optional,
non-authoritative context and expose it via a helper-based mechanism
(e.g. ingress queue accessor), rather than embedding it in xdp_frame or
xdp_rxq_info.
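
To make that concrete, such an accessor might look like the following from
the program side. To be explicit, bpf_xdp_ingress_rx_queue() is entirely
hypothetical; nothing like it exists today:

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* HYPOTHETICAL kfunc, sketched only to illustrate the model. */
extern int bpf_xdp_ingress_rx_queue(struct xdp_md *ctx,
				    __u32 *queue_index) __ksym;

SEC("xdp/cpumap")
int xdp_stage2(struct xdp_md *ctx)
{
	__u32 qid;

	/* 0 on success; an error (e.g. -ENODATA) when provenance was
	 * not carried across the redirect boundary. */
	if (!bpf_xdp_ingress_rx_queue(ctx, &qid))
		bpf_printk("origin rx queue %u", qid);

	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";
```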

However, I am not assuming this is aligned with existing design principles,
and would appreciate clarification on whether such a model is desirable at all.

To be explicit, I am not proposing that rx_queue_index itself be preserved
in xdp_frame or across cpumap, nor any change to existing struct layouts.

The question is whether ingress RX queue identity is intended to be
representable beyond the initial NIC RX invocation, particularly in
redirect-based XDP pipelines.

This question is motivated by cases where cpumap is used as part of a
multi-stage XDP processing pipeline, in which the loss of ingress queue
identity removes a stable classification signal otherwise useful for:

  - correlating packets across distributed XDP execution stages
  - validating RSS / hardware steering behavior in software datapaths
  - maintaining consistent per-queue observability across cpumap boundaries
  - reconstructing ingress-to-processing paths in asynchronous CPU offload

The intent is to understand whether these requirements are intentionally
out of scope in the current XDP execution model, or whether a helper-based
or metadata-based abstraction is expected for such use cases.
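
For what it's worth, the metadata-based direction can be approximated today:
the area prepended with bpf_xdp_adjust_meta() is carried in xdp_frame (as
metasize), so it should remain readable in the cpumap program. A sketch
(untested; struct and map names are mine):

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct rxq_meta {
	__u32 rx_queue_index;
};

struct {
	__uint(type, BPF_MAP_TYPE_CPUMAP);
	__uint(max_entries, 8);
	__type(key, __u32);
	__type(value, struct bpf_cpumap_val);
} cpu_map SEC(".maps");

SEC("xdp")
int xdp_stage1(struct xdp_md *ctx)
{
	struct rxq_meta *meta;

	/* Grow the metadata area in front of the packet. */
	if (bpf_xdp_adjust_meta(ctx, -(int)sizeof(*meta)))
		return XDP_ABORTED;

	meta = (void *)(long)ctx->data_meta;
	if ((void *)(meta + 1) > (void *)(long)ctx->data)
		return XDP_ABORTED;	/* verifier bounds check */

	meta->rx_queue_index = ctx->rx_queue_index;
	return bpf_redirect_map(&cpu_map, 0, 0);
}

SEC("xdp/cpumap")
int xdp_stage2(struct xdp_md *ctx)
{
	struct rxq_meta *meta = (void *)(long)ctx->data_meta;

	if ((void *)(meta + 1) > (void *)(long)ctx->data)
		return XDP_PASS;	/* no metadata present */

	/* meta->rx_queue_index is the stage-1 ingress queue. */
	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";
```

The drawback is that this consumes headroom and requires all stages to agree
on the layout, which is part of why a first-class accessor seemed worth
asking about.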

---

## Follow-up

If this aligns with intended design, I would be happy to explore a concrete
proposal or implementation.

Thanks,
Adith

Thread overview:
  2026-04-07  6:02 Adith-Joshua [this message]
  2026-04-07 14:47 ` Alexei Starovoitov
