public inbox for dev@dpdk.org
From: Stephen Hemminger <stephen@networkplumber.org>
To: Sriram Yagnaraman <sriram.yagnaraman@ericsson.com>
Cc: <dev@dpdk.org>, <xavier.guillaume@ovhcloud.com>
Subject: Re: [PATCH v2] net/af_packet: add multi-segment mbuf support for jumbo frames
Date: Tue, 10 Mar 2026 07:02:51 -0700	[thread overview]
Message-ID: <20260310070251.17c934ea@phoenix.local> (raw)
In-Reply-To: <20260309210210.1665888-1-sriram.yagnaraman@ericsson.com>

On Mon, 9 Mar 2026 22:02:10 +0100
Sriram Yagnaraman <sriram.yagnaraman@ericsson.com> wrote:

> Enable jumbo frame reception with default mbuf data room size by
> chaining multiple mbufs when packet exceeds single mbuf tailroom.
> 
> The RX path now:
> - Copies first segment up to mbuf tailroom
> - Allocates and chains additional segments for remaining data
> - Properly maintains nb_segs, next pointers, and pkt_len
> - Silently drops packets on mbuf allocation failure
> 
> This allows receiving 9KB jumbo frames using standard 2KB mbufs,
> chaining ~5 segments per jumbo packet. No need for custom mbuf
> pools with large data room.
> 
> Advertises RTE_ETH_RX_OFFLOAD_SCATTER capability.
> 
> Depends-on: series-34567 ("net/af_packet: fix MTU handling and add
> jumbo frame support")
> 
> Note: Implementation generated with AI assistance.
> 
> Signed-off-by: Sriram Yagnaraman <sriram.yagnaraman@ericsson.com>

Running this patch through AI-assisted review turns up several issues.

Here's my review of the v2 patch:

---

**Patch: net/af_packet: add multi-segment mbuf support for jumbo frames**

### Error

**1. Scatter capability unconditionally advertised but RX path does not check if the application enabled it.**

The patch adds `RTE_ETH_RX_OFFLOAD_SCATTER` to `rx_offload_capa` in `eth_dev_info()`, which correctly advertises the *capability*. However, the multi-segment chaining code in `eth_af_packet_rx()` executes unconditionally — it will scatter into multiple mbufs even if the application did not request `RTE_ETH_RX_OFFLOAD_SCATTER` in its `rxmode.offloads`. When an application has not enabled scatter, it expects single-segment mbufs only and may not walk `mbuf->next` chains. The driver should check whether scatter was actually requested (via the queue or port config) and fall back to the old behavior (drop packets that exceed single-mbuf tailroom) when scatter is not enabled.

Other PMDs (e.g., ixgbe, i40e, mlx5) guard their scatter path behind a check of the configured offloads. Without this, applications that haven't opted into scatter will silently receive chained mbufs they don't expect, leading to data truncation or corruption at the application layer.
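
A minimal sketch of the guard, using stand-in types and a made-up flag value rather than the real af_packet/ethdev definitions (the actual driver would latch `dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER` at queue setup):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-in for RTE_ETH_RX_OFFLOAD_SCATTER; the real
 * driver would use the flag from rte_ethdev.h. */
#define RX_OFFLOAD_SCATTER (1ULL << 13)

struct rxq_sim {
	bool scatter_enabled; /* latched from configured offloads */
};

/* At queue setup: record whether the application requested scatter. */
static void rxq_setup(struct rxq_sim *q, uint64_t configured_offloads)
{
	q->scatter_enabled = (configured_offloads & RX_OFFLOAD_SCATTER) != 0;
}

/* In the RX path: chain segments only when scatter was requested;
 * otherwise oversized frames must be dropped, not chained. */
static bool should_chain(const struct rxq_sim *q, uint32_t pkt_len,
			 uint32_t tailroom)
{
	if (pkt_len <= tailroom)
		return false;          /* fits in one mbuf */
	return q->scatter_enabled;     /* chain only if the app opted in */
}
```

Latching the decision at setup keeps the per-packet fast path to a single flag test.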

**2. The drop path changes burst accounting: `i` counts ring-frame inspections including drops, so allocation failures shorten the returned burst.**

The outer loop is `for (i = 0; i < nb_pkts; i++)`. When the `goto next_pkt` path is taken (mbuf alloc failure in the scatter loop), the code correctly frees the head mbuf, releases the ring frame, and advances `framenum`; on the next iteration it re-reads `ppd` from the new `framenum`, so ring handling itself is sound. Changing the output index from `bufs[i]` to `bufs[num_rx]` is also correct, since a drop now leaves `num_rx` behind `i`.

The remaining wrinkle: because the loop condition is `i < nb_pkts` and `i` counts iterations including failures, the function stops after `nb_pkts` ring-frame inspections regardless of how many were dropped. If several frames fail allocation, the function returns fewer packets than requested even when the ring still has ready frames; those are delayed to the next call. Not a bug, but a behavioral change from the original code (which indexed `bufs[i]` and never dropped mid-loop) worth noting in the commit message.
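
To make the accounting concrete, here is a toy model of the loop (all names hypothetical, not driver code), where `i` counts ring-frame inspections and `num_rx` counts deliveries:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy model of the RX burst accounting: alloc_ok[] stands in for
 * per-frame mbuf allocation success, nb_frames_ready for how many
 * ring frames are available, nb_pkts for the requested burst size. */
static uint16_t rx_burst_sim(const bool *alloc_ok, uint16_t nb_frames_ready,
			     uint16_t nb_pkts)
{
	uint16_t num_rx = 0;

	for (uint16_t i = 0; i < nb_pkts && i < nb_frames_ready; i++) {
		if (!alloc_ok[i])
			continue; /* frame consumed from the ring but dropped */
		num_rx++;         /* bufs[num_rx] indexing stays dense */
	}
	return num_rx;
}
```

With a requested burst of 4 and alternating allocation failures, this returns 2 even though the ring had more ready frames: the burst shrinks by one for every drop.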

### Warning

**3. No `err_pkts` counter increment on allocation failure.**

When `rte_pktmbuf_alloc()` fails for a scatter segment, the head mbuf is freed and the ring frame released, but `pkt_q->err_pkts` is not incremented. The original code (before this patch) also doesn't increment `err_pkts` on the initial alloc failure `break`, but that's an existing limitation. For the new scatter path, a silent drop with no counter update makes diagnosis difficult. Consider:

```c
if (unlikely(seg == NULL)) {
    rte_pktmbuf_free(mbuf);
    pkt_q->err_pkts++;
    ...
```

**4. No guard against zero tailroom causing an infinite loop.**

If `rte_pktmbuf_tailroom(seg)` returns 0 for a freshly allocated mbuf (theoretically possible if headroom equals the entire buffer), the `while (remaining > 0)` loop would allocate mbufs endlessly with `data_len = 0`, never decrementing `remaining`. In practice this shouldn't happen with normal mempool configurations, but a defensive check would be prudent:

```c
data_len = RTE_MIN(remaining, rte_pktmbuf_tailroom(seg));
if (unlikely(data_len == 0)) {
    rte_pktmbuf_free(seg);
    rte_pktmbuf_free(mbuf);
    /* handle error */
    goto next_pkt;
}
```

**5. Missing `Cc: stable@dpdk.org` if this targets an LTS branch, and missing `Fixes:` tag if it addresses a known limitation.**

The commit message doesn't indicate this is a bug fix (it's a new feature), so the omission may be intentional. But since the `Depends-on:` references a fix series that may target stable, the commit message should state explicitly that this patch itself does not.

### Info

**6. The `bufs[num_rx]` fix is a good correctness improvement.** The original code used `bufs[i]`, which only worked because the original loop had no `continue`/`goto` path that could skip a slot. Indexing with `num_rx` keeps the output array dense across the new drop path.

**7. Regarding `RTE_ETH_RX_OFFLOAD_SCATTER` and AGENTS.md** — there's no specific guidance in AGENTS.md about scatter offload. This is a gap worth considering: a pattern section could note that drivers advertising scatter capability should conditionally enable the scatter code path based on whether the application actually requested it, since applications that don't request scatter expect only single-segment mbufs.

Thread overview: 27+ messages
2026-03-06  9:20 [PATCH v1 0/3] net/af_packet: fix MTU handling and add jumbo frame support Xavier Guillaume
2026-03-06  9:20 ` [PATCH v1 1/3] net/af_packet: fix MTU set data size calculation Xavier Guillaume
2026-03-06  9:20 ` [PATCH v1 2/3] net/af_packet: fix receive buffer overflow Xavier Guillaume
2026-03-06  9:20 ` [PATCH v1 3/3] net/af_packet: support jumbo frames Xavier Guillaume
2026-03-09 16:03   ` Stephen Hemminger
2026-03-09 16:10 ` [PATCH v2 0/3] net/af_packet: fix MTU handling and add jumbo frame support Xavier Guillaume
2026-03-09 16:10   ` [PATCH v2 1/3] net/af_packet: fix MTU set data size calculation Xavier Guillaume
2026-03-09 16:10   ` [PATCH v2 2/3] net/af_packet: fix receive buffer overflow Xavier Guillaume
2026-03-09 16:10   ` [PATCH v2 3/3] net/af_packet: support jumbo frames Xavier Guillaume
2026-03-10 23:31     ` Stephen Hemminger
2026-03-12 13:32       ` Xavier Guillaume
2026-03-12 16:20         ` Stephen Hemminger
2026-03-09 20:16   ` [PATCH v2 0/3] net/af_packet: fix MTU handling and add jumbo frame support Stephen Hemminger
2026-03-09 20:49   ` [PATCH] net/af_packet: add multi-segment mbuf support for jumbo frames Sriram Yagnaraman
2026-03-09 21:02     ` [PATCH v2] " Sriram Yagnaraman
2026-03-10 14:02       ` Stephen Hemminger [this message]
2026-03-10 20:02         ` Sriram Yagnaraman
2026-03-16 16:02           ` Stephen Hemminger
2026-03-19  9:25             ` Sriram Yagnaraman
2026-03-10  1:55   ` [PATCH v2 0/3] net/af_packet: fix MTU handling and add jumbo frame support Stephen Hemminger
2026-03-10 11:21   ` [PATCH v3 " Xavier Guillaume
2026-03-10 11:21     ` [PATCH v3 1/3] net/af_packet: fix MTU set data size calculation Xavier Guillaume
2026-03-10 11:21     ` [PATCH v3 2/3] net/af_packet: fix receive buffer overflow Xavier Guillaume
2026-03-10 11:21     ` [PATCH v3 3/3] net/af_packet: support jumbo frames Xavier Guillaume
2026-03-11 16:03     ` [PATCH v3 0/3] net/af_packet: fix MTU handling and add jumbo frame support Stephen Hemminger
2026-03-12 18:46     ` Stephen Hemminger
2026-03-16 15:59     ` Stephen Hemminger
