* [PATCH v1 0/3] net/af_packet: fix MTU handling and add jumbo frame support
@ 2026-03-06 9:20 Xavier Guillaume
2026-03-06 9:20 ` [PATCH v1 1/3] net/af_packet: fix MTU set data size calculation Xavier Guillaume
` (3 more replies)
0 siblings, 4 replies; 27+ messages in thread
From: Xavier Guillaume @ 2026-03-06 9:20 UTC (permalink / raw)
To: dev; +Cc: linville, Xavier Guillaume
This series fixes two bugs in the af_packet PMD related to frame
size calculations and buffer safety, then enables jumbo frame
support by deriving the advertised capabilities from the actual
TPACKET ring configuration.
Patch 1 fixes the data size calculation in eth_dev_mtu_set() which
is too restrictive due to TPACKET2_HDRLEN including sizeof(struct
sockaddr_ll) even though the sockaddr_ll does not consume frame
data space. The formula is now consistent with the RX and TX paths.
Patch 2 adds a bounds check in the RX path to prevent a heap buffer
overflow when the kernel delivers a packet larger than the mbuf data
room (e.g. if the kernel interface MTU is raised externally).
Patch 3 replaces the static max_rx_pktlen (RTE_ETHER_MAX_LEN) and
adds max_mtu, both derived from the configured TPACKET frame size.
This enables jumbo frame support when the user specifies a larger
framesz devarg at vdev creation time.
Xavier Guillaume (3):
net/af_packet: fix MTU set data size calculation
net/af_packet: fix receive buffer overflow
net/af_packet: support jumbo frames
drivers/net/af_packet/rte_eth_af_packet.c | 17 +++++++++++++++--
1 file changed, 15 insertions(+), 2 deletions(-)
--
2.34.1
^ permalink raw reply [flat|nested] 27+ messages in thread
* [PATCH v1 1/3] net/af_packet: fix MTU set data size calculation
2026-03-06 9:20 [PATCH v1 0/3] net/af_packet: fix MTU handling and add jumbo frame support Xavier Guillaume
@ 2026-03-06 9:20 ` Xavier Guillaume
2026-03-06 9:20 ` [PATCH v1 2/3] net/af_packet: fix receive buffer overflow Xavier Guillaume
` (2 subsequent siblings)
3 siblings, 0 replies; 27+ messages in thread
From: Xavier Guillaume @ 2026-03-06 9:20 UTC (permalink / raw)
To: dev; +Cc: linville, Xavier Guillaume, stable
The data_size in eth_dev_mtu_set() is calculated as:
tp_frame_size - TPACKET2_HDRLEN
Since TPACKET2_HDRLEN is defined as:
TPACKET_ALIGN(sizeof(struct tpacket2_hdr)) + sizeof(struct sockaddr_ll)
it includes sizeof(struct sockaddr_ll), but the actual packet data
in the TPACKET ring starts at TPACKET_ALIGN(sizeof(struct tpacket2_hdr)).
The sockaddr_ll does not consume frame data space. This makes the
current formula 20 bytes too restrictive, matching neither the
RX queue setup nor the TX queue init path, which both correctly use:
tp_frame_size - TPACKET2_HDRLEN + sizeof(struct sockaddr_ll)
Fix the calculation to be consistent and accept all valid MTU values.
Fixes: 8020573894a8 ("net/af_packet: fix setting MTU decrements sockaddr twice")
Cc: stable@dpdk.org
Signed-off-by: Xavier Guillaume <xavier.guillaume@ovhcloud.com>
---
drivers/net/af_packet/rte_eth_af_packet.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/af_packet/rte_eth_af_packet.c b/drivers/net/af_packet/rte_eth_af_packet.c
index c0ba3381ea..bfa68297a6 100644
--- a/drivers/net/af_packet/rte_eth_af_packet.c
+++ b/drivers/net/af_packet/rte_eth_af_packet.c
@@ -612,7 +612,7 @@ eth_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
int ret;
int s;
unsigned int data_size = internals->req.tp_frame_size -
- TPACKET2_HDRLEN;
+ TPACKET2_HDRLEN + sizeof(struct sockaddr_ll);
if (mtu > data_size)
return -EINVAL;
--
2.34.1
* [PATCH v1 2/3] net/af_packet: fix receive buffer overflow
2026-03-06 9:20 [PATCH v1 0/3] net/af_packet: fix MTU handling and add jumbo frame support Xavier Guillaume
2026-03-06 9:20 ` [PATCH v1 1/3] net/af_packet: fix MTU set data size calculation Xavier Guillaume
@ 2026-03-06 9:20 ` Xavier Guillaume
2026-03-06 9:20 ` [PATCH v1 3/3] net/af_packet: support jumbo frames Xavier Guillaume
2026-03-09 16:10 ` [PATCH v2 0/3] net/af_packet: fix MTU handling and add jumbo frame support Xavier Guillaume
3 siblings, 0 replies; 27+ messages in thread
From: Xavier Guillaume @ 2026-03-06 9:20 UTC (permalink / raw)
To: dev; +Cc: linville, Xavier Guillaume, stable
The receive path copies the entire incoming packet into a single
mbuf without verifying the packet fits. If the kernel interface
MTU is raised externally beyond the mbuf data room size, the
memcpy overflows the mbuf buffer.
Add a bounds check against the mbuf tailroom before copying.
Oversized packets are dropped and accounted for in the
rx_dropped_pkts counter, consistent with how the TX path
already drops oversized packets.
Fixes: 364e08f2bb ("af_packet: add PMD for AF_PACKET-based virtual devices")
Cc: stable@dpdk.org
Signed-off-by: Xavier Guillaume <xavier.guillaume@ovhcloud.com>
---
drivers/net/af_packet/rte_eth_af_packet.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/drivers/net/af_packet/rte_eth_af_packet.c b/drivers/net/af_packet/rte_eth_af_packet.c
index bfa68297a6..b04987aaf7 100644
--- a/drivers/net/af_packet/rte_eth_af_packet.c
+++ b/drivers/net/af_packet/rte_eth_af_packet.c
@@ -154,6 +154,16 @@ eth_af_packet_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
break;
}
+ /* drop packets that won't fit in the mbuf */
+ if (ppd->tp_snaplen > rte_pktmbuf_tailroom(mbuf)) {
+ rte_pktmbuf_free(mbuf);
+ ppd->tp_status = TP_STATUS_KERNEL;
+ if (++framenum >= framecount)
+ framenum = 0;
+ pkt_q->rx_dropped_pkts++;
+ continue;
+ }
+
/* packet will fit in the mbuf, go ahead and receive it */
rte_pktmbuf_pkt_len(mbuf) = rte_pktmbuf_data_len(mbuf) = ppd->tp_snaplen;
pbuf = (uint8_t *) ppd + ppd->tp_mac;
--
2.34.1
* [PATCH v1 3/3] net/af_packet: support jumbo frames
2026-03-06 9:20 [PATCH v1 0/3] net/af_packet: fix MTU handling and add jumbo frame support Xavier Guillaume
2026-03-06 9:20 ` [PATCH v1 1/3] net/af_packet: fix MTU set data size calculation Xavier Guillaume
2026-03-06 9:20 ` [PATCH v1 2/3] net/af_packet: fix receive buffer overflow Xavier Guillaume
@ 2026-03-06 9:20 ` Xavier Guillaume
2026-03-09 16:03 ` Stephen Hemminger
2026-03-09 16:10 ` [PATCH v2 0/3] net/af_packet: fix MTU handling and add jumbo frame support Xavier Guillaume
3 siblings, 1 reply; 27+ messages in thread
From: Xavier Guillaume @ 2026-03-06 9:20 UTC (permalink / raw)
To: dev; +Cc: linville, Xavier Guillaume
Derive max_rx_pktlen and max_mtu from the actual TPACKET ring
frame size rather than the static RTE_ETHER_MAX_LEN (1518).
This allows jumbo frame support when the user specifies a
larger framesz devarg at vdev creation time, for example:
--vdev=net_af_packet0,iface=eth0,framesz=9216
The advertised capabilities now accurately reflect what the
configured ring can handle, avoiding both false limitations
with large frames and false promises with small ones.
Signed-off-by: Xavier Guillaume <xavier.guillaume@ovhcloud.com>
---
drivers/net/af_packet/rte_eth_af_packet.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/net/af_packet/rte_eth_af_packet.c b/drivers/net/af_packet/rte_eth_af_packet.c
index b04987aaf7..bfd2693a69 100644
--- a/drivers/net/af_packet/rte_eth_af_packet.c
+++ b/drivers/net/af_packet/rte_eth_af_packet.c
@@ -402,7 +402,10 @@ eth_dev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->if_index = internals->if_index;
dev_info->max_mac_addrs = 1;
- dev_info->max_rx_pktlen = RTE_ETHER_MAX_LEN;
+ dev_info->max_rx_pktlen = internals->req.tp_frame_size -
+ TPACKET2_HDRLEN + sizeof(struct sockaddr_ll);
+ dev_info->max_mtu = dev_info->max_rx_pktlen -
+ RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN;
dev_info->max_rx_queues = (uint16_t)internals->nb_queues;
dev_info->max_tx_queues = (uint16_t)internals->nb_queues;
dev_info->min_rx_bufsize = 0;
--
2.34.1
* Re: [PATCH v1 3/3] net/af_packet: support jumbo frames
2026-03-06 9:20 ` [PATCH v1 3/3] net/af_packet: support jumbo frames Xavier Guillaume
@ 2026-03-09 16:03 ` Stephen Hemminger
0 siblings, 0 replies; 27+ messages in thread
From: Stephen Hemminger @ 2026-03-09 16:03 UTC (permalink / raw)
To: Xavier Guillaume; +Cc: dev, linville
On Fri, 6 Mar 2026 10:20:13 +0100
Xavier Guillaume <xavier.guillaume@ovhcloud.com> wrote:
> Derive max_rx_pktlen and max_mtu from the actual TPACKET ring
> frame size rather than the static RTE_ETHER_MAX_LEN (1518).
> This allows jumbo frame support when the user specifies a
> larger framesz devarg at vdev creation time, for example:
>
> --vdev=net_af_packet0,iface=eth0,framesz=9216
>
> The advertised capabilities now accurately reflect what the
> configured ring can handle, avoiding both false limitations
> with large frames and false promises with small ones.
>
> Signed-off-by: Xavier Guillaume <xavier.guillaume@ovhcloud.com>
> ---
AI had this observation.
Patch 3: max_mtu subtracts RTE_ETHER_CRC_LEN, but the kernel
strips CRC before placing data in the TPACKET ring, so MTU is
understated by 4 bytes. This is conservative and safe, and
matches what af_xdp does, so probably fine as-is.
* [PATCH v2 0/3] net/af_packet: fix MTU handling and add jumbo frame support
2026-03-06 9:20 [PATCH v1 0/3] net/af_packet: fix MTU handling and add jumbo frame support Xavier Guillaume
` (2 preceding siblings ...)
2026-03-06 9:20 ` [PATCH v1 3/3] net/af_packet: support jumbo frames Xavier Guillaume
@ 2026-03-09 16:10 ` Xavier Guillaume
2026-03-09 16:10 ` [PATCH v2 1/3] net/af_packet: fix MTU set data size calculation Xavier Guillaume
` (6 more replies)
3 siblings, 7 replies; 27+ messages in thread
From: Xavier Guillaume @ 2026-03-09 16:10 UTC (permalink / raw)
To: dev; +Cc: stable, Xavier Guillaume
This series fixes two bugs in the af_packet PMD related to frame
size calculations and buffer safety, then enables jumbo frame
support by deriving the advertised capabilities from the actual
TPACKET ring configuration.
Patch 1 fixes the data size calculation in eth_dev_mtu_set() which
is too restrictive due to TPACKET2_HDRLEN including sizeof(struct
sockaddr_ll) even though the sockaddr_ll does not consume frame
data space. The formula is now consistent with the RX and TX paths.
Patch 2 adds a bounds check in the RX path to prevent a heap buffer
overflow when the kernel delivers a packet larger than the mbuf data
room (e.g. if the kernel interface MTU is raised externally).
Patch 3 replaces the static max_rx_pktlen (RTE_ETHER_MAX_LEN) and
adds max_mtu, both derived from the configured TPACKET frame size.
This enables jumbo frame support when the user specifies a larger
framesz devarg at vdev creation time.
v2:
- patch 2/3: fix Fixes tag to use 12-char SHA (checkpatch warning)
Xavier Guillaume (3):
net/af_packet: fix MTU set data size calculation
net/af_packet: fix receive buffer overflow
net/af_packet: support jumbo frames
drivers/net/af_packet/rte_eth_af_packet.c | 17 +++++++++++++++--
1 file changed, 15 insertions(+), 2 deletions(-)
--
2.34.1
* [PATCH v2 1/3] net/af_packet: fix MTU set data size calculation
2026-03-09 16:10 ` [PATCH v2 0/3] net/af_packet: fix MTU handling and add jumbo frame support Xavier Guillaume
@ 2026-03-09 16:10 ` Xavier Guillaume
2026-03-09 16:10 ` [PATCH v2 2/3] net/af_packet: fix receive buffer overflow Xavier Guillaume
` (5 subsequent siblings)
6 siblings, 0 replies; 27+ messages in thread
From: Xavier Guillaume @ 2026-03-09 16:10 UTC (permalink / raw)
To: dev; +Cc: stable, Xavier Guillaume
The data_size in eth_dev_mtu_set() is calculated as:
tp_frame_size - TPACKET2_HDRLEN
Since TPACKET2_HDRLEN is defined as:
TPACKET_ALIGN(sizeof(struct tpacket2_hdr)) + sizeof(struct sockaddr_ll)
it includes sizeof(struct sockaddr_ll), but the actual packet data
in the TPACKET ring starts at TPACKET_ALIGN(sizeof(struct tpacket2_hdr)).
The sockaddr_ll does not consume frame data space. This makes the
current formula 20 bytes too restrictive, matching neither the
RX queue setup nor the TX queue init path, which both correctly use:
tp_frame_size - TPACKET2_HDRLEN + sizeof(struct sockaddr_ll)
Fix the calculation to be consistent and accept all valid MTU values.
Fixes: 8020573894a8 ("net/af_packet: fix setting MTU decrements sockaddr twice")
Cc: stable@dpdk.org
Signed-off-by: Xavier Guillaume <xavier.guillaume@ovhcloud.com>
---
drivers/net/af_packet/rte_eth_af_packet.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/af_packet/rte_eth_af_packet.c b/drivers/net/af_packet/rte_eth_af_packet.c
index c0ba3381ea..bfa68297a6 100644
--- a/drivers/net/af_packet/rte_eth_af_packet.c
+++ b/drivers/net/af_packet/rte_eth_af_packet.c
@@ -612,7 +612,7 @@ eth_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
int ret;
int s;
unsigned int data_size = internals->req.tp_frame_size -
- TPACKET2_HDRLEN;
+ TPACKET2_HDRLEN + sizeof(struct sockaddr_ll);
if (mtu > data_size)
return -EINVAL;
--
2.34.1
* [PATCH v2 2/3] net/af_packet: fix receive buffer overflow
2026-03-09 16:10 ` [PATCH v2 0/3] net/af_packet: fix MTU handling and add jumbo frame support Xavier Guillaume
2026-03-09 16:10 ` [PATCH v2 1/3] net/af_packet: fix MTU set data size calculation Xavier Guillaume
@ 2026-03-09 16:10 ` Xavier Guillaume
2026-03-09 16:10 ` [PATCH v2 3/3] net/af_packet: support jumbo frames Xavier Guillaume
` (4 subsequent siblings)
6 siblings, 0 replies; 27+ messages in thread
From: Xavier Guillaume @ 2026-03-09 16:10 UTC (permalink / raw)
To: dev; +Cc: stable, Xavier Guillaume
The receive path copies the entire incoming packet into a single
mbuf without verifying the packet fits. If the kernel interface
MTU is raised externally beyond the mbuf data room size, the
memcpy overflows the mbuf buffer.
Add a bounds check against the mbuf tailroom before copying.
Oversized packets are dropped and accounted for in the
rx_dropped_pkts counter, consistent with how the TX path
already drops oversized packets.
Fixes: 364e08f2bbc0 ("af_packet: add PMD for AF_PACKET-based virtual devices")
Cc: stable@dpdk.org
Signed-off-by: Xavier Guillaume <xavier.guillaume@ovhcloud.com>
---
drivers/net/af_packet/rte_eth_af_packet.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/drivers/net/af_packet/rte_eth_af_packet.c b/drivers/net/af_packet/rte_eth_af_packet.c
index bfa68297a6..b04987aaf7 100644
--- a/drivers/net/af_packet/rte_eth_af_packet.c
+++ b/drivers/net/af_packet/rte_eth_af_packet.c
@@ -154,6 +154,16 @@ eth_af_packet_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
break;
}
+ /* drop packets that won't fit in the mbuf */
+ if (ppd->tp_snaplen > rte_pktmbuf_tailroom(mbuf)) {
+ rte_pktmbuf_free(mbuf);
+ ppd->tp_status = TP_STATUS_KERNEL;
+ if (++framenum >= framecount)
+ framenum = 0;
+ pkt_q->rx_dropped_pkts++;
+ continue;
+ }
+
/* packet will fit in the mbuf, go ahead and receive it */
rte_pktmbuf_pkt_len(mbuf) = rte_pktmbuf_data_len(mbuf) = ppd->tp_snaplen;
pbuf = (uint8_t *) ppd + ppd->tp_mac;
--
2.34.1
* [PATCH v2 3/3] net/af_packet: support jumbo frames
2026-03-09 16:10 ` [PATCH v2 0/3] net/af_packet: fix MTU handling and add jumbo frame support Xavier Guillaume
2026-03-09 16:10 ` [PATCH v2 1/3] net/af_packet: fix MTU set data size calculation Xavier Guillaume
2026-03-09 16:10 ` [PATCH v2 2/3] net/af_packet: fix receive buffer overflow Xavier Guillaume
@ 2026-03-09 16:10 ` Xavier Guillaume
2026-03-10 23:31 ` Stephen Hemminger
2026-03-09 20:16 ` [PATCH v2 0/3] net/af_packet: fix MTU handling and add jumbo frame support Stephen Hemminger
` (3 subsequent siblings)
6 siblings, 1 reply; 27+ messages in thread
From: Xavier Guillaume @ 2026-03-09 16:10 UTC (permalink / raw)
To: dev; +Cc: stable, Xavier Guillaume
Derive max_rx_pktlen and max_mtu from the actual TPACKET ring
frame size rather than the static RTE_ETHER_MAX_LEN (1518).
This allows jumbo frame support when the user specifies a
larger framesz devarg at vdev creation time, for example:
--vdev=net_af_packet0,iface=eth0,framesz=9216
The advertised capabilities now accurately reflect what the
configured ring can handle, avoiding both false limitations
with large frames and false promises with small ones.
Signed-off-by: Xavier Guillaume <xavier.guillaume@ovhcloud.com>
---
drivers/net/af_packet/rte_eth_af_packet.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/net/af_packet/rte_eth_af_packet.c b/drivers/net/af_packet/rte_eth_af_packet.c
index b04987aaf7..bfd2693a69 100644
--- a/drivers/net/af_packet/rte_eth_af_packet.c
+++ b/drivers/net/af_packet/rte_eth_af_packet.c
@@ -402,7 +402,10 @@ eth_dev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->if_index = internals->if_index;
dev_info->max_mac_addrs = 1;
- dev_info->max_rx_pktlen = RTE_ETHER_MAX_LEN;
+ dev_info->max_rx_pktlen = internals->req.tp_frame_size -
+ TPACKET2_HDRLEN + sizeof(struct sockaddr_ll);
+ dev_info->max_mtu = dev_info->max_rx_pktlen -
+ RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN;
dev_info->max_rx_queues = (uint16_t)internals->nb_queues;
dev_info->max_tx_queues = (uint16_t)internals->nb_queues;
dev_info->min_rx_bufsize = 0;
--
2.34.1
* Re: [PATCH v2 0/3] net/af_packet: fix MTU handling and add jumbo frame support
2026-03-09 16:10 ` [PATCH v2 0/3] net/af_packet: fix MTU handling and add jumbo frame support Xavier Guillaume
` (2 preceding siblings ...)
2026-03-09 16:10 ` [PATCH v2 3/3] net/af_packet: support jumbo frames Xavier Guillaume
@ 2026-03-09 20:16 ` Stephen Hemminger
2026-03-09 20:49 ` [PATCH] net/af_packet: add multi-segment mbuf support for jumbo frames Sriram Yagnaraman
` (2 subsequent siblings)
6 siblings, 0 replies; 27+ messages in thread
From: Stephen Hemminger @ 2026-03-09 20:16 UTC (permalink / raw)
To: Xavier Guillaume; +Cc: dev, stable
On Mon, 9 Mar 2026 17:10:18 +0100
Xavier Guillaume <xavier.guillaume@ovhcloud.com> wrote:
> This series fixes two bugs in the af_packet PMD related to frame
> size calculations and buffer safety, then enables jumbo frame
> support by deriving the advertised capabilities from the actual
> TPACKET ring configuration.
>
> Patch 1 fixes the data size calculation in eth_dev_mtu_set() which
> is too restrictive due to TPACKET2_HDRLEN including sizeof(struct
> sockaddr_ll) even though the sockaddr_ll does not consume frame
> data space. The formula is now consistent with the RX and TX paths.
>
> Patch 2 adds a bounds check in the RX path to prevent a heap buffer
> overflow when the kernel delivers a packet larger than the mbuf data
> room (e.g. if the kernel interface MTU is raised externally).
>
> Patch 3 replaces the static max_rx_pktlen (RTE_ETHER_MAX_LEN) and
> adds max_mtu, both derived from the configured TPACKET frame size.
> This enables jumbo frame support when the user specifies a larger
> framesz devarg at vdev creation time.
>
> v2:
> - patch 2/3: fix Fixes tag to use 12-char SHA (checkpatch warning)
FYI - I have another series that addresses different things in af_packet.
It adds standalone tests and it would be good if the test there also check jumbo.
* [PATCH] net/af_packet: add multi-segment mbuf support for jumbo frames
2026-03-09 16:10 ` [PATCH v2 0/3] net/af_packet: fix MTU handling and add jumbo frame support Xavier Guillaume
` (3 preceding siblings ...)
2026-03-09 20:16 ` [PATCH v2 0/3] net/af_packet: fix MTU handling and add jumbo frame support Stephen Hemminger
@ 2026-03-09 20:49 ` Sriram Yagnaraman
2026-03-09 21:02 ` [PATCH v2] " Sriram Yagnaraman
2026-03-10 1:55 ` [PATCH v2 0/3] net/af_packet: fix MTU handling and add jumbo frame support Stephen Hemminger
2026-03-10 11:21 ` [PATCH v3 " Xavier Guillaume
6 siblings, 1 reply; 27+ messages in thread
From: Sriram Yagnaraman @ 2026-03-09 20:49 UTC (permalink / raw)
To: dev; +Cc: xavier.guillaume, stephen, Sriram Yagnaraman
Enable jumbo frame reception with default mbuf data room size by
chaining multiple mbufs when packet exceeds single mbuf tailroom.
The RX path now:
- Copies first segment up to mbuf tailroom
- Allocates and chains additional segments for remaining data
- Properly maintains nb_segs, next pointers, and pkt_len
- Drops packets only on mbuf allocation failure
This allows receiving 9KB jumbo frames using standard 2KB mbufs,
chaining ~5 segments per jumbo packet. No need for custom mbuf
pools with large data room.
Advertises RTE_ETH_RX_OFFLOAD_SCATTER capability.
Depends-on: series-34567 ("net/af_packet: fix MTU handling and add
jumbo frame support")
Note: Implementation generated with AI assistance.
Signed-off-by: Sriram Yagnaraman <sriram.yagnaraman@ericsson.com>
---
drivers/net/af_packet/rte_eth_af_packet.c | 53 ++++++++++++++++++-----
1 file changed, 43 insertions(+), 10 deletions(-)
diff --git a/drivers/net/af_packet/rte_eth_af_packet.c b/drivers/net/af_packet/rte_eth_af_packet.c
index d9fafd4..a18bc41 100644
--- a/drivers/net/af_packet/rte_eth_af_packet.c
+++ b/drivers/net/af_packet/rte_eth_af_packet.c
@@ -121,12 +121,13 @@ eth_af_packet_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
{
unsigned i;
struct tpacket2_hdr *ppd;
- struct rte_mbuf *mbuf;
+ struct rte_mbuf *mbuf, *seg, *prev;
uint8_t *pbuf;
struct pkt_rx_queue *pkt_q = queue;
uint16_t num_rx = 0;
unsigned long num_rx_bytes = 0;
unsigned int framecount, framenum;
+ uint32_t pkt_len, data_len, remaining;
if (unlikely(nb_pkts == 0))
return 0;
@@ -148,10 +149,36 @@ eth_af_packet_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
if (unlikely(mbuf == NULL))
break;
- /* packet will fit in the mbuf, go ahead and receive it */
- rte_pktmbuf_pkt_len(mbuf) = rte_pktmbuf_data_len(mbuf) = ppd->tp_snaplen;
+ pkt_len = ppd->tp_snaplen;
pbuf = (uint8_t *) ppd + ppd->tp_mac;
- memcpy(rte_pktmbuf_mtod(mbuf, void *), pbuf, rte_pktmbuf_data_len(mbuf));
+
+ /* copy data to mbuf chain */
+ data_len = RTE_MIN(pkt_len, rte_pktmbuf_tailroom(mbuf));
+ memcpy(rte_pktmbuf_mtod(mbuf, void *), pbuf, data_len);
+ rte_pktmbuf_data_len(mbuf) = data_len;
+ remaining = pkt_len - data_len;
+ pbuf += data_len;
+ prev = mbuf;
+
+ while (remaining > 0) {
+ seg = rte_pktmbuf_alloc(pkt_q->mb_pool);
+ if (unlikely(seg == NULL)) {
+ rte_pktmbuf_free(mbuf);
+ goto drop;
+ }
+
+ data_len = RTE_MIN(remaining, rte_pktmbuf_tailroom(seg));
+ memcpy(rte_pktmbuf_mtod(seg, void *), pbuf, data_len);
+ rte_pktmbuf_data_len(seg) = data_len;
+ pbuf += data_len;
+ remaining -= data_len;
+
+ prev->next = seg;
+ prev = seg;
+ mbuf->nb_segs++;
+ }
+
+ rte_pktmbuf_pkt_len(mbuf) = pkt_len;
/* check for vlan info */
if (ppd->tp_status & TP_STATUS_VLAN_VALID) {
@@ -172,16 +199,21 @@ eth_af_packet_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
mbuf->ol_flags |= timestamp_dynflag;
}
+drop:
/* release incoming frame and advance ring buffer */
ppd->tp_status = TP_STATUS_KERNEL;
if (++framenum >= framecount)
framenum = 0;
- mbuf->port = pkt_q->in_port;
- /* account for the receive frame */
- bufs[i] = mbuf;
- num_rx++;
- num_rx_bytes += mbuf->pkt_len;
+ if (mbuf != NULL) {
+ mbuf->port = pkt_q->in_port;
+ /* account for the receive frame */
+ bufs[num_rx] = mbuf;
+ num_rx++;
+ num_rx_bytes += mbuf->pkt_len;
+ } else {
+ pkt_q->rx_dropped_pkts++;
+ }
}
pkt_q->framenum = framenum;
pkt_q->rx_pkts += num_rx;
@@ -411,7 +443,8 @@ eth_dev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
- RTE_ETH_RX_OFFLOAD_TIMESTAMP;
+ RTE_ETH_RX_OFFLOAD_TIMESTAMP |
+ RTE_ETH_RX_OFFLOAD_SCATTER;
return 0;
}
--
2.43.7
* [PATCH v2] net/af_packet: add multi-segment mbuf support for jumbo frames
2026-03-09 20:49 ` [PATCH] net/af_packet: add multi-segment mbuf support for jumbo frames Sriram Yagnaraman
@ 2026-03-09 21:02 ` Sriram Yagnaraman
2026-03-10 14:02 ` Stephen Hemminger
0 siblings, 1 reply; 27+ messages in thread
From: Sriram Yagnaraman @ 2026-03-09 21:02 UTC (permalink / raw)
To: dev; +Cc: xavier.guillaume, stephen, Sriram Yagnaraman
Enable jumbo frame reception with default mbuf data room size by
chaining multiple mbufs when packet exceeds single mbuf tailroom.
The RX path now:
- Copies first segment up to mbuf tailroom
- Allocates and chains additional segments for remaining data
- Properly maintains nb_segs, next pointers, and pkt_len
- Silently drops packets on mbuf allocation failure
This allows receiving 9KB jumbo frames using standard 2KB mbufs,
chaining ~5 segments per jumbo packet. No need for custom mbuf
pools with large data room.
Advertises RTE_ETH_RX_OFFLOAD_SCATTER capability.
Depends-on: series-34567 ("net/af_packet: fix MTU handling and add
jumbo frame support")
Note: Implementation generated with AI assistance.
Signed-off-by: Sriram Yagnaraman <sriram.yagnaraman@ericsson.com>
---
drivers/net/af_packet/rte_eth_af_packet.c | 55 ++++++++++++++++++-----
1 file changed, 45 insertions(+), 10 deletions(-)
diff --git a/drivers/net/af_packet/rte_eth_af_packet.c b/drivers/net/af_packet/rte_eth_af_packet.c
index d9fafd4..184d2c1 100644
--- a/drivers/net/af_packet/rte_eth_af_packet.c
+++ b/drivers/net/af_packet/rte_eth_af_packet.c
@@ -121,12 +121,13 @@ eth_af_packet_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
{
unsigned i;
struct tpacket2_hdr *ppd;
- struct rte_mbuf *mbuf;
+ struct rte_mbuf *mbuf, *seg, *prev;
uint8_t *pbuf;
struct pkt_rx_queue *pkt_q = queue;
uint16_t num_rx = 0;
unsigned long num_rx_bytes = 0;
unsigned int framecount, framenum;
+ uint32_t pkt_len, data_len, remaining;
if (unlikely(nb_pkts == 0))
return 0;
@@ -148,10 +149,45 @@ eth_af_packet_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
if (unlikely(mbuf == NULL))
break;
- /* packet will fit in the mbuf, go ahead and receive it */
- rte_pktmbuf_pkt_len(mbuf) = rte_pktmbuf_data_len(mbuf) = ppd->tp_snaplen;
+ pkt_len = ppd->tp_snaplen;
pbuf = (uint8_t *) ppd + ppd->tp_mac;
- memcpy(rte_pktmbuf_mtod(mbuf, void *), pbuf, rte_pktmbuf_data_len(mbuf));
+
+ /* copy data to mbuf chain */
+ data_len = RTE_MIN(pkt_len, rte_pktmbuf_tailroom(mbuf));
+ memcpy(rte_pktmbuf_mtod(mbuf, void *), pbuf, data_len);
+ rte_pktmbuf_data_len(mbuf) = data_len;
+ remaining = pkt_len - data_len;
+ pbuf += data_len;
+ prev = mbuf;
+
+ while (remaining > 0) {
+ seg = rte_pktmbuf_alloc(pkt_q->mb_pool);
+ if (unlikely(seg == NULL)) {
+ rte_pktmbuf_free(mbuf);
+ /* release frame and break from outer loop */
+ ppd->tp_status = TP_STATUS_KERNEL;
+ if (++framenum >= framecount)
+ framenum = 0;
+ goto next_pkt;
+ }
+
+ data_len = RTE_MIN(remaining, rte_pktmbuf_tailroom(seg));
+ memcpy(rte_pktmbuf_mtod(seg, void *), pbuf, data_len);
+ rte_pktmbuf_data_len(seg) = data_len;
+ pbuf += data_len;
+ remaining -= data_len;
+
+ prev->next = seg;
+ prev = seg;
+ mbuf->nb_segs++;
+ }
+
+ /* release incoming frame and advance ring buffer */
+ ppd->tp_status = TP_STATUS_KERNEL;
+ if (++framenum >= framecount)
+ framenum = 0;
+
+ rte_pktmbuf_pkt_len(mbuf) = pkt_len;
/* check for vlan info */
if (ppd->tp_status & TP_STATUS_VLAN_VALID) {
@@ -172,16 +208,14 @@ eth_af_packet_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
mbuf->ol_flags |= timestamp_dynflag;
}
- /* release incoming frame and advance ring buffer */
- ppd->tp_status = TP_STATUS_KERNEL;
- if (++framenum >= framecount)
- framenum = 0;
mbuf->port = pkt_q->in_port;
/* account for the receive frame */
- bufs[i] = mbuf;
+ bufs[num_rx] = mbuf;
num_rx++;
num_rx_bytes += mbuf->pkt_len;
+next_pkt:
+ ;
}
pkt_q->framenum = framenum;
pkt_q->rx_pkts += num_rx;
@@ -411,7 +445,8 @@ eth_dev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
- RTE_ETH_RX_OFFLOAD_TIMESTAMP;
+ RTE_ETH_RX_OFFLOAD_TIMESTAMP |
+ RTE_ETH_RX_OFFLOAD_SCATTER;
return 0;
}
--
2.43.7
* Re: [PATCH v2 0/3] net/af_packet: fix MTU handling and add jumbo frame support
2026-03-09 16:10 ` [PATCH v2 0/3] net/af_packet: fix MTU handling and add jumbo frame support Xavier Guillaume
` (4 preceding siblings ...)
2026-03-09 20:49 ` [PATCH] net/af_packet: add multi-segment mbuf support for jumbo frames Sriram Yagnaraman
@ 2026-03-10 1:55 ` Stephen Hemminger
2026-03-10 11:21 ` [PATCH v3 " Xavier Guillaume
6 siblings, 0 replies; 27+ messages in thread
From: Stephen Hemminger @ 2026-03-10 1:55 UTC (permalink / raw)
To: Xavier Guillaume; +Cc: dev, stable
On Mon, 9 Mar 2026 17:10:18 +0100
Xavier Guillaume <xavier.guillaume@ovhcloud.com> wrote:
> This series fixes two bugs in the af_packet PMD related to frame
> size calculations and buffer safety, then enables jumbo frame
> support by deriving the advertised capabilities from the actual
> TPACKET ring configuration.
>
> Patch 1 fixes the data size calculation in eth_dev_mtu_set() which
> is too restrictive due to TPACKET2_HDRLEN including sizeof(struct
> sockaddr_ll) even though the sockaddr_ll does not consume frame
> data space. The formula is now consistent with the RX and TX paths.
>
> Patch 2 adds a bounds check in the RX path to prevent a heap buffer
> overflow when the kernel delivers a packet larger than the mbuf data
> room (e.g. if the kernel interface MTU is raised externally).
>
> Patch 3 replaces the static max_rx_pktlen (RTE_ETHER_MAX_LEN) and
> adds max_mtu, both derived from the configured TPACKET frame size.
> This enables jumbo frame support when the user specifies a larger
> framesz devarg at vdev creation time.
>
> v2:
> - patch 2/3: fix Fixes tag to use 12-char SHA (checkpatch warning)
>
> Xavier Guillaume (3):
> net/af_packet: fix MTU set data size calculation
> net/af_packet: fix receive buffer overflow
> net/af_packet: support jumbo frames
>
> drivers/net/af_packet/rte_eth_af_packet.c | 17 +++++++++++++++--
> 1 file changed, 15 insertions(+), 2 deletions(-)
>
Looks good to me, so I sent AI off to look at the fine details around MTU.
The one observation from me is that the new check for data_size is redundant
and should be removed or turned into RTE_ASSERT()
**Patch 1/3 - fix MTU set data size calculation**
The sockaddr_ll fix is correct. The formula now matches the Rx and Tx
queue setup paths. Good commit message explaining the TPACKET2_HDRLEN
decomposition.
Minor: now that patch 3 reports max_mtu correctly, the ethdev layer
validates mtu <= max_mtu before calling the PMD callback
(eth_dev_validate_mtu in rte_ethdev.c). The data_size check here
becomes redundant dead code. Consider removing it to keep the driver
simple and avoid confusing future readers.
**Patch 2/3 - fix receive buffer overflow**
Good catch. Real buffer overflow when kernel MTU is raised externally
past the TPACKET ring capacity. The tailroom check, frame return to
kernel, and dropped counter are all correct.
**Patch 3/3 - support jumbo frames**
The max_rx_pktlen and max_mtu derivation from the actual TPACKET ring
frame size is correct and consistent with the data_size formula in
patches 1 and 2.
* [PATCH v3 0/3] net/af_packet: fix MTU handling and add jumbo frame support
2026-03-09 16:10 ` [PATCH v2 0/3] net/af_packet: fix MTU handling and add jumbo frame support Xavier Guillaume
` (5 preceding siblings ...)
2026-03-10 1:55 ` [PATCH v2 0/3] net/af_packet: fix MTU handling and add jumbo frame support Stephen Hemminger
@ 2026-03-10 11:21 ` Xavier Guillaume
2026-03-10 11:21 ` [PATCH v3 1/3] net/af_packet: fix MTU set data size calculation Xavier Guillaume
` (5 more replies)
6 siblings, 6 replies; 27+ messages in thread
From: Xavier Guillaume @ 2026-03-10 11:21 UTC (permalink / raw)
To: dev; +Cc: Xavier Guillaume
This series fixes two bugs in the af_packet PMD related to frame
size calculations and buffer safety, then enables jumbo frame
support by deriving the advertised capabilities from the actual
TPACKET ring configuration.
Patch 1 fixes the data size calculation in eth_dev_mtu_set() which
is too restrictive due to TPACKET2_HDRLEN including sizeof(struct
sockaddr_ll) even though the sockaddr_ll does not consume frame
data space. The formula is now consistent with the RX and TX paths.
Patch 2 adds a bounds check in the RX path to prevent a heap buffer
overflow when the kernel delivers a packet larger than the mbuf data
room (e.g. if the kernel interface MTU is raised externally).
Patch 3 replaces the static max_rx_pktlen (RTE_ETHER_MAX_LEN) and
adds max_mtu, both derived from the configured TPACKET frame size.
This enables jumbo frame support when the user specifies a larger
framesz devarg at vdev creation time. Since the ethdev layer now
validates mtu <= max_mtu before calling the PMD, the data_size
bounds check in eth_dev_mtu_set() is converted to RTE_ASSERT().
v3:
- patch 3/3: convert redundant data_size bounds check in
eth_dev_mtu_set() to RTE_ASSERT() (reviewer feedback)
v2:
- patch 2/3: fix Fixes tag to use 12-char SHA (checkpatch warning)
Xavier Guillaume (3):
net/af_packet: fix MTU set data size calculation
net/af_packet: fix receive buffer overflow
net/af_packet: support jumbo frames
drivers/net/af_packet/rte_eth_af_packet.c | 21 +++++++++++++++++----
1 file changed, 17 insertions(+), 4 deletions(-)
--
2.34.1
^ permalink raw reply [flat|nested] 27+ messages in thread
* [PATCH v3 1/3] net/af_packet: fix MTU set data size calculation
2026-03-10 11:21 ` [PATCH v3 " Xavier Guillaume
@ 2026-03-10 11:21 ` Xavier Guillaume
2026-03-10 11:21 ` [PATCH v3 2/3] net/af_packet: fix receive buffer overflow Xavier Guillaume
` (4 subsequent siblings)
5 siblings, 0 replies; 27+ messages in thread
From: Xavier Guillaume @ 2026-03-10 11:21 UTC (permalink / raw)
To: dev; +Cc: Xavier Guillaume, stable
The data_size in eth_dev_mtu_set() is calculated as:
tp_frame_size - TPACKET2_HDRLEN
Since TPACKET2_HDRLEN is defined as:
TPACKET_ALIGN(sizeof(struct tpacket2_hdr)) + sizeof(struct sockaddr_ll)
it includes sizeof(struct sockaddr_ll), but the actual packet data
in the TPACKET ring starts at TPACKET_ALIGN(sizeof(struct tpacket2_hdr)).
The sockaddr_ll does not consume frame data space. This makes the
current formula 20 bytes too restrictive, matching neither the
RX queue setup nor the TX queue init path, which both correctly use:
tp_frame_size - TPACKET2_HDRLEN + sizeof(struct sockaddr_ll)
Fix the calculation to be consistent and accept all valid MTU values.
Fixes: 8020573894a8 ("net/af_packet: fix setting MTU decrements sockaddr twice")
Cc: stable@dpdk.org
Signed-off-by: Xavier Guillaume <xavier.guillaume@ovhcloud.com>
---
drivers/net/af_packet/rte_eth_af_packet.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/af_packet/rte_eth_af_packet.c b/drivers/net/af_packet/rte_eth_af_packet.c
index c0ba3381ea..bfa68297a6 100644
--- a/drivers/net/af_packet/rte_eth_af_packet.c
+++ b/drivers/net/af_packet/rte_eth_af_packet.c
@@ -612,7 +612,7 @@ eth_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
int ret;
int s;
unsigned int data_size = internals->req.tp_frame_size -
- TPACKET2_HDRLEN;
+ TPACKET2_HDRLEN + sizeof(struct sockaddr_ll);
if (mtu > data_size)
return -EINVAL;
--
2.34.1
^ permalink raw reply related [flat|nested] 27+ messages in thread
* [PATCH v3 2/3] net/af_packet: fix receive buffer overflow
2026-03-10 11:21 ` [PATCH v3 " Xavier Guillaume
2026-03-10 11:21 ` [PATCH v3 1/3] net/af_packet: fix MTU set data size calculation Xavier Guillaume
@ 2026-03-10 11:21 ` Xavier Guillaume
2026-03-10 11:21 ` [PATCH v3 3/3] net/af_packet: support jumbo frames Xavier Guillaume
` (3 subsequent siblings)
5 siblings, 0 replies; 27+ messages in thread
From: Xavier Guillaume @ 2026-03-10 11:21 UTC (permalink / raw)
To: dev; +Cc: Xavier Guillaume, stable
The receive path copies the entire incoming packet into a single
mbuf without verifying the packet fits. If the kernel interface
MTU is raised externally beyond the mbuf data room size, the
memcpy overflows the mbuf buffer.
Add a bounds check against the mbuf tailroom before copying.
Oversized packets are dropped and accounted for in the
rx_dropped_pkts counter, consistent with how the TX path
already drops oversized packets.
Fixes: 364e08f2bbc0 ("af_packet: add PMD for AF_PACKET-based virtual devices")
Cc: stable@dpdk.org
Signed-off-by: Xavier Guillaume <xavier.guillaume@ovhcloud.com>
---
drivers/net/af_packet/rte_eth_af_packet.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/drivers/net/af_packet/rte_eth_af_packet.c b/drivers/net/af_packet/rte_eth_af_packet.c
index bfa68297a6..b04987aaf7 100644
--- a/drivers/net/af_packet/rte_eth_af_packet.c
+++ b/drivers/net/af_packet/rte_eth_af_packet.c
@@ -154,6 +154,16 @@ eth_af_packet_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
break;
}
+ /* drop packets that won't fit in the mbuf */
+ if (ppd->tp_snaplen > rte_pktmbuf_tailroom(mbuf)) {
+ rte_pktmbuf_free(mbuf);
+ ppd->tp_status = TP_STATUS_KERNEL;
+ if (++framenum >= framecount)
+ framenum = 0;
+ pkt_q->rx_dropped_pkts++;
+ continue;
+ }
+
/* packet will fit in the mbuf, go ahead and receive it */
rte_pktmbuf_pkt_len(mbuf) = rte_pktmbuf_data_len(mbuf) = ppd->tp_snaplen;
pbuf = (uint8_t *) ppd + ppd->tp_mac;
--
2.34.1
^ permalink raw reply related [flat|nested] 27+ messages in thread
* [PATCH v3 3/3] net/af_packet: support jumbo frames
2026-03-10 11:21 ` [PATCH v3 " Xavier Guillaume
2026-03-10 11:21 ` [PATCH v3 1/3] net/af_packet: fix MTU set data size calculation Xavier Guillaume
2026-03-10 11:21 ` [PATCH v3 2/3] net/af_packet: fix receive buffer overflow Xavier Guillaume
@ 2026-03-10 11:21 ` Xavier Guillaume
2026-03-11 16:03 ` [PATCH v3 0/3] net/af_packet: fix MTU handling and add jumbo frame support Stephen Hemminger
` (2 subsequent siblings)
5 siblings, 0 replies; 27+ messages in thread
From: Xavier Guillaume @ 2026-03-10 11:21 UTC (permalink / raw)
To: dev; +Cc: Xavier Guillaume
Derive max_rx_pktlen and max_mtu from the actual TPACKET ring
frame size rather than the static RTE_ETHER_MAX_LEN (1518).
This allows jumbo frame support when the user specifies a
larger framesz devarg at vdev creation time, for example:
--vdev=net_af_packet0,iface=eth0,framesz=9216
The advertised capabilities now accurately reflect what the
configured ring can handle, avoiding both false limitations
with large frames and false promises with small ones.
Since the ethdev layer now validates mtu <= max_mtu before
calling the PMD callback, convert the data_size bounds check
in eth_dev_mtu_set() from a runtime error to RTE_ASSERT().
Signed-off-by: Xavier Guillaume <xavier.guillaume@ovhcloud.com>
---
drivers/net/af_packet/rte_eth_af_packet.c | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/drivers/net/af_packet/rte_eth_af_packet.c b/drivers/net/af_packet/rte_eth_af_packet.c
index b04987aaf7..e132dc387b 100644
--- a/drivers/net/af_packet/rte_eth_af_packet.c
+++ b/drivers/net/af_packet/rte_eth_af_packet.c
@@ -402,7 +402,10 @@ eth_dev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->if_index = internals->if_index;
dev_info->max_mac_addrs = 1;
- dev_info->max_rx_pktlen = RTE_ETHER_MAX_LEN;
+ dev_info->max_rx_pktlen = internals->req.tp_frame_size -
+ TPACKET2_HDRLEN + sizeof(struct sockaddr_ll);
+ dev_info->max_mtu = dev_info->max_rx_pktlen -
+ RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN;
dev_info->max_rx_queues = (uint16_t)internals->nb_queues;
dev_info->max_tx_queues = (uint16_t)internals->nb_queues;
dev_info->min_rx_bufsize = 0;
@@ -624,8 +627,8 @@ eth_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
unsigned int data_size = internals->req.tp_frame_size -
TPACKET2_HDRLEN + sizeof(struct sockaddr_ll);
- if (mtu > data_size)
- return -EINVAL;
+ /* ethdev layer validates mtu <= max_mtu before calling this */
+ RTE_ASSERT(mtu <= data_size);
s = socket(PF_INET, SOCK_DGRAM, 0);
if (s < 0)
--
2.34.1
^ permalink raw reply related [flat|nested] 27+ messages in thread
* Re: [PATCH v2] net/af_packet: add multi-segment mbuf support for jumbo frames
2026-03-09 21:02 ` [PATCH v2] " Sriram Yagnaraman
@ 2026-03-10 14:02 ` Stephen Hemminger
2026-03-10 20:02 ` Sriram Yagnaraman
0 siblings, 1 reply; 27+ messages in thread
From: Stephen Hemminger @ 2026-03-10 14:02 UTC (permalink / raw)
To: Sriram Yagnaraman; +Cc: dev, xavier.guillaume
On Mon, 9 Mar 2026 22:02:10 +0100
Sriram Yagnaraman <sriram.yagnaraman@ericsson.com> wrote:
> Enable jumbo frame reception with default mbuf data room size by
> chaining multiple mbufs when packet exceeds single mbuf tailroom.
>
> The RX path now:
> - Copies first segment up to mbuf tailroom
> - Allocates and chains additional segments for remaining data
> - Properly maintains nb_segs, next pointers, and pkt_len
> - Silently drops packets on mbuf allocation failure
>
> This allows receiving 9KB jumbo frames using standard 2KB mbufs,
> chaining ~5 segments per jumbo packet. No need for custom mbuf
> pools with large data room.
>
> Advertises RTE_ETH_RX_OFFLOAD_SCATTER capability.
>
> Depends-on: series-34567 ("net/af_packet: fix MTU handling and add
> jumbo frame support")
>
> Note: Implementation generated with AI assistance.
>
> Signed-off-by: Sriram Yagnaraman <sriram.yagnaraman@ericsson.com>
Using AI patch review finds several issues.
Here's my review of the v2 patch:
---
**Patch: net/af_packet: add multi-segment mbuf support for jumbo frames**
### Error
**1. Scatter capability unconditionally advertised but RX path does not check if the application enabled it.**
The patch adds `RTE_ETH_RX_OFFLOAD_SCATTER` to `rx_offload_capa` in `eth_dev_info()`, which correctly advertises the *capability*. However, the multi-segment chaining code in `eth_af_packet_rx()` executes unconditionally — it will scatter into multiple mbufs even if the application did not request `RTE_ETH_RX_OFFLOAD_SCATTER` in its `rxmode.offloads`. When an application has not enabled scatter, it expects single-segment mbufs only and may not walk `mbuf->next` chains. The driver should check whether scatter was actually requested (via the queue or port config) and fall back to the old behavior (drop packets that exceed single-mbuf tailroom) when scatter is not enabled.
Other PMDs (e.g., ixgbe, i40e, mlx5) guard their scatter path behind a check of the configured offloads. Without this, applications that haven't opted into scatter will silently receive chained mbufs they don't expect, leading to data truncation or corruption at the application layer.
**2. Loop accounting after `goto next_pkt` — dropped frames still count against `nb_pkts`, so the function may return fewer packets than the ring has ready.**
The outer loop is `for (i = 0; i < nb_pkts; i++)`. When the `goto next_pkt` path is taken (mbuf alloc failure in the scatter loop), the code correctly releases the ring frame and advances `framenum`, then jumps to `next_pkt`. On the next iteration, `i` increments but the loop re-reads `ppd` from the *new* `framenum` — this part is fine.
However, the original code indexed `bufs[i]` and the patch changes it to `bufs[num_rx]`, which is correct. But consider: the loop variable `i` counts iterations including failures, while the loop condition is `i < nb_pkts`. This means the function will stop after `nb_pkts` ring-frame inspections regardless of how many were dropped. If several frames fail allocation, the function returns fewer packets than it could have — the ring may have more ready frames that will be delayed until the next call. This is a behavioral change from the original code (which also had `bufs[i]` so never had drops mid-loop), but it's minor — not a bug per se, just worth noting.
### Warning
**3. No `err_pkts` counter increment on allocation failure.**
When `rte_pktmbuf_alloc()` fails for a scatter segment, the head mbuf is freed and the ring frame released, but `pkt_q->err_pkts` is not incremented. The original code (before this patch) also doesn't increment `err_pkts` on the initial alloc failure `break`, but that's an existing limitation. For the new scatter path, a silent drop with no counter update makes diagnosis difficult. Consider:
```c
if (unlikely(seg == NULL)) {
rte_pktmbuf_free(mbuf);
pkt_q->err_pkts++;
...
```
**4. No guard against zero tailroom causing an infinite loop.**
If `rte_pktmbuf_tailroom(seg)` returns 0 for a freshly allocated mbuf (theoretically possible if headroom equals the entire buffer), the `while (remaining > 0)` loop would allocate mbufs endlessly with `data_len = 0`, never decrementing `remaining`. In practice this shouldn't happen with normal mempool configurations, but a defensive check would be prudent:
```c
data_len = RTE_MIN(remaining, rte_pktmbuf_tailroom(seg));
if (unlikely(data_len == 0)) {
rte_pktmbuf_free(seg);
rte_pktmbuf_free(mbuf);
/* handle error */
goto next_pkt;
}
```
**5. Missing `Cc: stable@dpdk.org` if this targets an LTS branch, and missing `Fixes:` tag if it addresses a known limitation.**
The commit message doesn't indicate this is a bug fix (it's a new feature), so this may be intentional. But the `Depends-on:` references a fix series — if that series targets stable, this patch should clarify that it does not.
### Info
**6. The `bufs[num_rx]` fix is a good correctness improvement.** The original code used `bufs[i]` which only worked because the original loop never had a `continue`/`goto` path that could skip a slot. The new indexing with `num_rx` is correct for the new drop path.
**7. Regarding `RTE_ETH_RX_OFFLOAD_SCATTER` and AGENTS.md** — there's no specific guidance in AGENTS.md about scatter offload. This is a gap worth considering: a pattern section could note that drivers advertising scatter capability should conditionally enable the scatter code path based on whether the application actually requested it, since applications that don't request scatter expect only single-segment mbufs.
^ permalink raw reply [flat|nested] 27+ messages in thread
* RE: [PATCH v2] net/af_packet: add multi-segment mbuf support for jumbo frames
2026-03-10 14:02 ` Stephen Hemminger
@ 2026-03-10 20:02 ` Sriram Yagnaraman
2026-03-16 16:02 ` Stephen Hemminger
0 siblings, 1 reply; 27+ messages in thread
From: Sriram Yagnaraman @ 2026-03-10 20:02 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: dev@dpdk.org, xavier.guillaume@ovhcloud.com
> -----Original Message-----
> From: Stephen Hemminger <stephen@networkplumber.org>
> Sent: Tuesday, 10 March 2026 15:03
> To: Sriram Yagnaraman <sriram.yagnaraman@ericsson.com>
> Cc: dev@dpdk.org; xavier.guillaume@ovhcloud.com
> Subject: Re: [PATCH v2] net/af_packet: add multi-segment mbuf support for
> jumbo frames
>
> On Mon, 9 Mar 2026 22:02:10 +0100
> Sriram Yagnaraman <sriram.yagnaraman@ericsson.com> wrote:
>
> > Enable jumbo frame reception with default mbuf data room size by
> > chaining multiple mbufs when packet exceeds single mbuf tailroom.
> >
> > The RX path now:
> > - Copies first segment up to mbuf tailroom
> > - Allocates and chains additional segments for remaining data
> > - Properly maintains nb_segs, next pointers, and pkt_len
> > - Silently drops packets on mbuf allocation failure
> >
> > This allows receiving 9KB jumbo frames using standard 2KB mbufs,
> > chaining ~5 segments per jumbo packet. No need for custom mbuf pools
> > with large data room.
> >
> > Advertises RTE_ETH_RX_OFFLOAD_SCATTER capability.
> >
> > Depends-on: series-34567 ("net/af_packet: fix MTU handling and add
> > jumbo frame support")
> >
> > Note: Implementation generated with AI assistance.
> >
> > Signed-off-by: Sriram Yagnaraman <sriram.yagnaraman@ericsson.com>
>
> Using AI patch review finds several issues.
Thanks for the review, I will fix them in v3.
Perhaps I will wait for Xavier's changes to be in main before posting the next version.
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v2 3/3] net/af_packet: support jumbo frames
2026-03-09 16:10 ` [PATCH v2 3/3] net/af_packet: support jumbo frames Xavier Guillaume
@ 2026-03-10 23:31 ` Stephen Hemminger
2026-03-12 13:32 ` Xavier Guillaume
0 siblings, 1 reply; 27+ messages in thread
From: Stephen Hemminger @ 2026-03-10 23:31 UTC (permalink / raw)
To: Xavier Guillaume; +Cc: dev, stable
On Mon, 9 Mar 2026 17:10:21 +0100
Xavier Guillaume <xavier.guillaume@ovhcloud.com> wrote:
> Derive max_rx_pktlen and max_mtu from the actual TPACKET ring
> frame size rather than the static RTE_ETHER_MAX_LEN (1518).
> This allows jumbo frame support when the user specifies a
> larger framesz devarg at vdev creation time, for example:
>
> --vdev=net_af_packet0,iface=eth0,framesz=9216
>
> The advertised capabilities now accurately reflect what the
> configured ring can handle, avoiding both false limitations
> with large frames and false promises with small ones.
>
> Signed-off-by: Xavier Guillaume <xavier.guillaume@ovhcloud.com>
I wonder if TPACKET header could go in mbuf headroom.
And also, could the copy on receive be avoided?
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v3 0/3] net/af_packet: fix MTU handling and add jumbo frame support
2026-03-10 11:21 ` [PATCH v3 " Xavier Guillaume
` (2 preceding siblings ...)
2026-03-10 11:21 ` [PATCH v3 3/3] net/af_packet: support jumbo frames Xavier Guillaume
@ 2026-03-11 16:03 ` Stephen Hemminger
2026-03-12 18:46 ` Stephen Hemminger
2026-03-16 15:59 ` Stephen Hemminger
5 siblings, 0 replies; 27+ messages in thread
From: Stephen Hemminger @ 2026-03-11 16:03 UTC (permalink / raw)
To: Xavier Guillaume; +Cc: dev
On Tue, 10 Mar 2026 12:21:19 +0100
Xavier Guillaume <xavier.guillaume@ovhcloud.com> wrote:
> This series fixes two bugs in the af_packet PMD related to frame
> size calculations and buffer safety, then enables jumbo frame
> support by deriving the advertised capabilities from the actual
> TPACKET ring configuration.
>
> Patch 1 fixes the data size calculation in eth_dev_mtu_set() which
> is too restrictive due to TPACKET2_HDRLEN including sizeof(struct
> sockaddr_ll) even though the sockaddr_ll does not consume frame
> data space. The formula is now consistent with the RX and TX paths.
>
> Patch 2 adds a bounds check in the RX path to prevent a heap buffer
> overflow when the kernel delivers a packet larger than the mbuf data
> room (e.g. if the kernel interface MTU is raised externally).
>
> Patch 3 replaces the static max_rx_pktlen (RTE_ETHER_MAX_LEN) and
> adds max_mtu, both derived from the configured TPACKET frame size.
> This enables jumbo frame support when the user specifies a larger
> framesz devarg at vdev creation time. Since the ethdev layer now
> validates mtu <= max_mtu before calling the PMD, the data_size
> bounds check in eth_dev_mtu_set() is converted to RTE_ASSERT().
Applied to next-net
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v2 3/3] net/af_packet: support jumbo frames
2026-03-10 23:31 ` Stephen Hemminger
@ 2026-03-12 13:32 ` Xavier Guillaume
2026-03-12 16:20 ` Stephen Hemminger
0 siblings, 1 reply; 27+ messages in thread
From: Xavier Guillaume @ 2026-03-12 13:32 UTC (permalink / raw)
To: stephen; +Cc: dev, stable, xavier.guillaume
Hi Stephen,
> I wonder if TPACKET header could go in mbuf headroom.
> And also, could the copy on receive be avoided?
Thank you for your review and the interesting questions. I had not
considered these angles, so I took some time to look into it.
As far as I understand, the current RX path copies the packet data
from the ring frame into an mbuf so that the ring slot can be returned to
the kernel immediately after the copy. This keeps the ring available
for new packets regardless of how long the application holds the mbuf.
Going down the zero-copy route would introduce a strong coupling
between kernel-managed ring frames and DPDK-managed mbufs: the ring
slot could not be released until the last reference to the mbuf is
freed, which risks stalling the ring under any buffering.
Because of this copy and the resulting decoupling, the TPACKET header
does not need to be carried into the mbuf at all. It is only read
for metadata (packet length, VLAN, timestamp) before the frame is
released back to the kernel.
In this context, my feeling is that the introduced risks outweigh the
gains (the memcpy looks relatively small compared to the full kernel
networking stack af_packet goes through).
Did I miss something?
Regards,
Xavier
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v2 3/3] net/af_packet: support jumbo frames
2026-03-12 13:32 ` Xavier Guillaume
@ 2026-03-12 16:20 ` Stephen Hemminger
0 siblings, 0 replies; 27+ messages in thread
From: Stephen Hemminger @ 2026-03-12 16:20 UTC (permalink / raw)
To: Xavier Guillaume; +Cc: dev, stable
On Thu, 12 Mar 2026 14:32:48 +0100
Xavier Guillaume <xavier.guillaume@ovhcloud.com> wrote:
> Hi Stephen,
>
> > I wonder if TPACKET header could go in mbuf headroom.
> > And also, could the copy on receive be avoided?
>
> Thank you for your review and the interesting questions. I had not
> considered these angles, so I took some time to look into it.
>
> As far as I understand, the current RX path copies the packet data
> from the ring frame into an mbuf so that the ring slot can be returned to
> the kernel immediately after the copy. This keeps the ring available
> for new packets regardless of how long the application holds the mbuf.
>
> Going down the zero-copy route would introduce a strong coupling
> between kernel-managed ring frames and DPDK-managed mbufs: the ring
> slot could not be released until the last reference to the mbuf is
> freed, which risks stalling the ring under any buffering.
>
> Because of this copy and the resulting decoupling, the TPACKET header
> does not need to be carried into the mbuf at all. It is only read
> for metadata (packet length, VLAN, timestamp) before the frame is
> released back to the kernel.
>
> In this context, my feeling is that the introduced risks outweigh the
> gains (the memcpy looks relatively small compared to the full kernel
> networking stack af_packet goes through).
>
> Did I miss something?
>
> Regards,
> Xavier
Copies matter, especially for larger packets.
I noticed that later kernels support TPACKET_V3 with sendmsg and
MSG_ZEROCOPY; it was added in the 4.18 kernel so should be OK. The
downside is it goes from the ring's one syscall per burst to a
syscall per packet.
For RX, you're right that it adds complexity.
Did some brainstorming (with AI as a sanity check), and it looks like
a mixed mode might work, where Rx uses zero copy until outstanding
frames cross some high watermark. Something like:
## The design
The receive path becomes:
1. At queue setup, register the entire mmap'd region as an external memory zone that DPDK knows about (via `rte_extmem_register` if needed for IOVA).
2. On each received frame, allocate an mbuf but attach it to the ring frame via `rte_pktmbuf_attach_extbuf` instead of copying. The `shinfo` free callback atomically sets `tp_status = TP_STATUS_KERNEL` to release the frame back to the kernel.
3. Advance `framenum` as normal — the frame stays owned by userspace until the mbuf is freed.
## The hard part: ring backpressure
This is the real design question. In the copy path, frames are returned to the kernel immediately in the RX loop. With zero-copy, a frame is held until the application frees the mbuf. If the app is slow or holds references (e.g., reassembly, batching into a burst for a worker core), you burn through ring slots fast.
A few options:
- **Large ring** — bump `framecnt` significantly. Memory is cheap and the ring is already mmap'd. For a capture workload this is usually fine.
- **Fallback to copy** — track how many frames are outstanding. When it crosses a watermark (say 75% of the ring), fall back to the memcpy path for new packets so you keep returning frames to the kernel. This is what the AF_XDP PMD does conceptually with its fill ring management.
- **Just drop** — if the ring is exhausted, that's backpressure. The kernel drops packets, which shows up in `tp_drops`. For monitoring/capture workloads this is often acceptable.
The fallback approach is probably the most robust for a general-purpose patch. Something roughly like:
```c
/* threshold: if outstanding frames exceed 75% of ring, fall back to copy */
bool zero_copy = (outstanding_frames < (framecount * 3 / 4));
if (zero_copy) {
/* attach extbuf pointing into ring frame */
rte_pktmbuf_attach_extbuf(mbuf, pbuf, pbuf_iova, data_len, shinfo);
rte_pktmbuf_pkt_len(mbuf) = rte_pktmbuf_data_len(mbuf) = ppd->tp_snaplen;
/* do NOT set tp_status = TP_STATUS_KERNEL here; callback does it */
outstanding_frames++;
} else {
/* copy path as before */
rte_pktmbuf_pkt_len(mbuf) = rte_pktmbuf_data_len(mbuf) = ppd->tp_snaplen;
memcpy(rte_pktmbuf_mtod(mbuf, void *), pbuf, ppd->tp_snaplen);
ppd->tp_status = TP_STATUS_KERNEL;
}
```
The `shinfo` callback would need an atomic decrement on the outstanding counter plus the `tp_status` write. You'd pre-allocate one `rte_mbuf_ext_shared_info` per frame slot at init time, each wired to its corresponding `tpacket2_hdr`.
One subtlety: `framenum` advancement is no longer gated on the current frame being released. You're advancing past frames that are still in-flight. So you need a separate counter or bitmap to know which frames are actually available when you wrap around. The simplest approach is to just check `tp_status` as you already do — if you come back around the ring and the frame is still held by userspace (status not `TP_STATUS_USER` from the kernel), you stop, same as today.
That actually works cleanly because the existing `tp_status` check at the top of the loop already handles this — a frame you haven't returned to the kernel won't have `TP_STATUS_USER` set, so the loop naturally stops.
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v3 0/3] net/af_packet: fix MTU handling and add jumbo frame support
2026-03-10 11:21 ` [PATCH v3 " Xavier Guillaume
` (3 preceding siblings ...)
2026-03-11 16:03 ` [PATCH v3 0/3] net/af_packet: fix MTU handling and add jumbo frame support Stephen Hemminger
@ 2026-03-12 18:46 ` Stephen Hemminger
2026-03-16 15:59 ` Stephen Hemminger
5 siblings, 0 replies; 27+ messages in thread
From: Stephen Hemminger @ 2026-03-12 18:46 UTC (permalink / raw)
To: Xavier Guillaume; +Cc: dev
On Tue, 10 Mar 2026 12:21:19 +0100
Xavier Guillaume <xavier.guillaume@ovhcloud.com> wrote:
> This series fixes two bugs in the af_packet PMD related to frame
> size calculations and buffer safety, then enables jumbo frame
> support by deriving the advertised capabilities from the actual
> TPACKET ring configuration.
>
> Patch 1 fixes the data size calculation in eth_dev_mtu_set() which
> is too restrictive due to TPACKET2_HDRLEN including sizeof(struct
> sockaddr_ll) even though the sockaddr_ll does not consume frame
> data space. The formula is now consistent with the RX and TX paths.
>
> Patch 2 adds a bounds check in the RX path to prevent a heap buffer
> overflow when the kernel delivers a packet larger than the mbuf data
> room (e.g. if the kernel interface MTU is raised externally).
>
> Patch 3 replaces the static max_rx_pktlen (RTE_ETHER_MAX_LEN) and
> adds max_mtu, both derived from the configured TPACKET frame size.
> This enables jumbo frame support when the user specifies a larger
> framesz devarg at vdev creation time. Since the ethdev layer now
> validates mtu <= max_mtu before calling the PMD, the data_size
> bounds check in eth_dev_mtu_set() is converted to RTE_ASSERT().
Applied to next-net and added your email to .mailmap as needed.
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v3 0/3] net/af_packet: fix MTU handling and add jumbo frame support
2026-03-10 11:21 ` [PATCH v3 " Xavier Guillaume
` (4 preceding siblings ...)
2026-03-12 18:46 ` Stephen Hemminger
@ 2026-03-16 15:59 ` Stephen Hemminger
5 siblings, 0 replies; 27+ messages in thread
From: Stephen Hemminger @ 2026-03-16 15:59 UTC (permalink / raw)
To: Xavier Guillaume; +Cc: dev
On Tue, 10 Mar 2026 12:21:19 +0100
Xavier Guillaume <xavier.guillaume@ovhcloud.com> wrote:
> This series fixes two bugs in the af_packet PMD related to frame
> size calculations and buffer safety, then enables jumbo frame
> support by deriving the advertised capabilities from the actual
> TPACKET ring configuration.
>
> Patch 1 fixes the data size calculation in eth_dev_mtu_set() which
> is too restrictive due to TPACKET2_HDRLEN including sizeof(struct
> sockaddr_ll) even though the sockaddr_ll does not consume frame
> data space. The formula is now consistent with the RX and TX paths.
>
> Patch 2 adds a bounds check in the RX path to prevent a heap buffer
> overflow when the kernel delivers a packet larger than the mbuf data
> room (e.g. if the kernel interface MTU is raised externally).
>
> Patch 3 replaces the static max_rx_pktlen (RTE_ETHER_MAX_LEN) and
> adds max_mtu, both derived from the configured TPACKET frame size.
> This enables jumbo frame support when the user specifies a larger
> framesz devarg at vdev creation time. Since the ethdev layer now
> validates mtu <= max_mtu before calling the PMD, the data_size
> bounds check in eth_dev_mtu_set() is converted to RTE_ASSERT().
>
> v3:
> - patch 3/3: convert redundant data_size bounds check in
> eth_dev_mtu_set() to RTE_ASSERT() (reviewer feedback)
>
> v2:
> - patch 2/3: fix Fixes tag to use 12-char SHA (checkpatch warning)
>
> Xavier Guillaume (3):
> net/af_packet: fix MTU set data size calculation
> net/af_packet: fix receive buffer overflow
> net/af_packet: support jumbo frames
>
> drivers/net/af_packet/rte_eth_af_packet.c | 21 +++++++++++++++++----
> 1 file changed, 17 insertions(+), 4 deletions(-)
>
Applied these three to next-net
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v2] net/af_packet: add multi-segment mbuf support for jumbo frames
2026-03-10 20:02 ` Sriram Yagnaraman
@ 2026-03-16 16:02 ` Stephen Hemminger
2026-03-19 9:25 ` Sriram Yagnaraman
0 siblings, 1 reply; 27+ messages in thread
From: Stephen Hemminger @ 2026-03-16 16:02 UTC (permalink / raw)
To: Sriram Yagnaraman; +Cc: dev@dpdk.org, xavier.guillaume@ovhcloud.com
On Tue, 10 Mar 2026 20:02:49 +0000
Sriram Yagnaraman <sriram.yagnaraman@ericsson.com> wrote:
> > -----Original Message-----
> > From: Stephen Hemminger <stephen@networkplumber.org>
> > Sent: Tuesday, 10 March 2026 15:03
> > To: Sriram Yagnaraman <sriram.yagnaraman@ericsson.com>
> > Cc: dev@dpdk.org; xavier.guillaume@ovhcloud.com
> > Subject: Re: [PATCH v2] net/af_packet: add multi-segment mbuf support for
> > jumbo frames
> >
> > On Mon, 9 Mar 2026 22:02:10 +0100
> > Sriram Yagnaraman <sriram.yagnaraman@ericsson.com> wrote:
> >
> > > Enable jumbo frame reception with default mbuf data room size by
> > > chaining multiple mbufs when packet exceeds single mbuf tailroom.
> > >
> > > The RX path now:
> > > - Copies first segment up to mbuf tailroom
> > > - Allocates and chains additional segments for remaining data
> > > - Properly maintains nb_segs, next pointers, and pkt_len
> > > - Silently drops packets on mbuf allocation failure
> > >
> > > This allows receiving 9KB jumbo frames using standard 2KB mbufs,
> > > chaining ~5 segments per jumbo packet. No need for custom mbuf pools
> > > with large data room.
> > >
> > > Advertises RTE_ETH_RX_OFFLOAD_SCATTER capability.
> > >
> > > Depends-on: series-34567 ("net/af_packet: fix MTU handling and add
> > > jumbo frame support")
> > >
> > > Note: Implementation generated with AI assistance.
> > >
> > > Signed-off-by: Sriram Yagnaraman <sriram.yagnaraman@ericsson.com>
> >
> > Using AI patch review finds several issues.
>
> Thanks for the review; I will fix them in v3.
> Perhaps I will wait for Xavier's changes to be in main before posting the next version.
>
You can use next-net for-main as baseline. But wait until that gets merged before resubmitting.
^ permalink raw reply [flat|nested] 27+ messages in thread
* RE: [PATCH v2] net/af_packet: add multi-segment mbuf support for jumbo frames
2026-03-16 16:02 ` Stephen Hemminger
@ 2026-03-19 9:25 ` Sriram Yagnaraman
0 siblings, 0 replies; 27+ messages in thread
From: Sriram Yagnaraman @ 2026-03-19 9:25 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: dev@dpdk.org, xavier.guillaume@ovhcloud.com
> -----Original Message-----
> From: Stephen Hemminger <stephen@networkplumber.org>
> Sent: Monday, 16 March 2026 17:02
> To: Sriram Yagnaraman <sriram.yagnaraman@ericsson.com>
> Cc: dev@dpdk.org; xavier.guillaume@ovhcloud.com
> Subject: Re: [PATCH v2] net/af_packet: add multi-segment mbuf support for
> jumbo frames
>
> On Tue, 10 Mar 2026 20:02:49 +0000
> Sriram Yagnaraman <sriram.yagnaraman@ericsson.com> wrote:
>
> > > -----Original Message-----
> > > From: Stephen Hemminger <stephen@networkplumber.org>
> > > Sent: Tuesday, 10 March 2026 15:03
> > > To: Sriram Yagnaraman <sriram.yagnaraman@ericsson.com>
> > > Cc: dev@dpdk.org; xavier.guillaume@ovhcloud.com
> > > Subject: Re: [PATCH v2] net/af_packet: add multi-segment mbuf
> > > support for jumbo frames
> > >
> > > On Mon, 9 Mar 2026 22:02:10 +0100
> > > Sriram Yagnaraman <sriram.yagnaraman@ericsson.com> wrote:
> > >
> > > > Enable jumbo frame reception with default mbuf data room size by
> > > > chaining multiple mbufs when packet exceeds single mbuf tailroom.
> > > >
> > > > The RX path now:
> > > > - Copies first segment up to mbuf tailroom
> > > > - Allocates and chains additional segments for remaining data
> > > > - Properly maintains nb_segs, next pointers, and pkt_len
> > > > - Silently drops packets on mbuf allocation failure
> > > >
> > > > This allows receiving 9KB jumbo frames using standard 2KB mbufs,
> > > > chaining ~5 segments per jumbo packet. No need for custom mbuf
> > > > pools with large data room.
> > > >
> > > > Advertises RTE_ETH_RX_OFFLOAD_SCATTER capability.
> > > >
> > > > Depends-on: series-34567 ("net/af_packet: fix MTU handling and add
> > > > jumbo frame support")
> > > >
> > > > Note: Implementation generated with AI assistance.
> > > >
> > > > Signed-off-by: Sriram Yagnaraman <sriram.yagnaraman@ericsson.com>
> > >
> > > Using AI patch review finds several issues.
> >
> > Thanks for the review; I will fix them in v3.
> > Perhaps I will wait for Xavier's changes to be in main before posting the
> next version.
> >
>
> You can use next-net for-main as baseline. But wait until that gets merged
> before resubmitting.
Thanks. I had already sent v3/v4 in a hurry.
Anyhow, I have now sent v5, which is rebased on Xavier's changes in main: https://lore.kernel.org/dpdk-dev/20260318094726.1284454-1-sriram.yagnaraman@ericsson.com/T/#u
^ permalink raw reply [flat|nested] 27+ messages in thread
end of thread, other threads: [~2026-03-19 9:25 UTC | newest]
Thread overview: 27+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-03-06 9:20 [PATCH v1 0/3] net/af_packet: fix MTU handling and add jumbo frame support Xavier Guillaume
2026-03-06 9:20 ` [PATCH v1 1/3] net/af_packet: fix MTU set data size calculation Xavier Guillaume
2026-03-06 9:20 ` [PATCH v1 2/3] net/af_packet: fix receive buffer overflow Xavier Guillaume
2026-03-06 9:20 ` [PATCH v1 3/3] net/af_packet: support jumbo frames Xavier Guillaume
2026-03-09 16:03 ` Stephen Hemminger
2026-03-09 16:10 ` [PATCH v2 0/3] net/af_packet: fix MTU handling and add jumbo frame support Xavier Guillaume
2026-03-09 16:10 ` [PATCH v2 1/3] net/af_packet: fix MTU set data size calculation Xavier Guillaume
2026-03-09 16:10 ` [PATCH v2 2/3] net/af_packet: fix receive buffer overflow Xavier Guillaume
2026-03-09 16:10 ` [PATCH v2 3/3] net/af_packet: support jumbo frames Xavier Guillaume
2026-03-10 23:31 ` Stephen Hemminger
2026-03-12 13:32 ` Xavier Guillaume
2026-03-12 16:20 ` Stephen Hemminger
2026-03-09 20:16 ` [PATCH v2 0/3] net/af_packet: fix MTU handling and add jumbo frame support Stephen Hemminger
2026-03-09 20:49 ` [PATCH] net/af_packet: add multi-segment mbuf support for jumbo frames Sriram Yagnaraman
2026-03-09 21:02 ` [PATCH v2] " Sriram Yagnaraman
2026-03-10 14:02 ` Stephen Hemminger
2026-03-10 20:02 ` Sriram Yagnaraman
2026-03-16 16:02 ` Stephen Hemminger
2026-03-19 9:25 ` Sriram Yagnaraman
2026-03-10 1:55 ` [PATCH v2 0/3] net/af_packet: fix MTU handling and add jumbo frame support Stephen Hemminger
2026-03-10 11:21 ` [PATCH v3 " Xavier Guillaume
2026-03-10 11:21 ` [PATCH v3 1/3] net/af_packet: fix MTU set data size calculation Xavier Guillaume
2026-03-10 11:21 ` [PATCH v3 2/3] net/af_packet: fix receive buffer overflow Xavier Guillaume
2026-03-10 11:21 ` [PATCH v3 3/3] net/af_packet: support jumbo frames Xavier Guillaume
2026-03-11 16:03 ` [PATCH v3 0/3] net/af_packet: fix MTU handling and add jumbo frame support Stephen Hemminger
2026-03-12 18:46 ` Stephen Hemminger
2026-03-16 15:59 ` Stephen Hemminger