* [PATCH net-next v2 0/2] net/mlx5: Avoid payload in skb's linear part for better GRO-processing
@ 2025-08-16 15:39 Christoph Paasch via B4 Relay
2025-08-16 15:39 ` [PATCH net-next v2 1/2] net/mlx5: Bring back get_cqe_l3_hdr_type Christoph Paasch via B4 Relay
2025-08-16 15:39 ` [PATCH net-next v2 2/2] net/mlx5: Avoid copying payload to the skb's linear part Christoph Paasch via B4 Relay
0 siblings, 2 replies; 6+ messages in thread
From: Christoph Paasch via B4 Relay @ 2025-08-16 15:39 UTC (permalink / raw)
To: Saeed Mahameed, Leon Romanovsky, Tariq Toukan, Mark Bloch,
Andrew Lunn, David S. Miller, Eric Dumazet, Jakub Kicinski,
Paolo Abeni, Alexander Lobakin, Gal Pressman
Cc: linux-rdma, netdev, Christoph Paasch
When LRO is enabled on the mlx5 device, mlx5e_skb_from_cqe_mpwrq_nonlinear()
copies parts of the payload to the linear part of the skb.
This triggers suboptimal processing in GRO, causing reduced throughput.
This patch series addresses this by copying only a lower-bound estimate of
the protocol headers, trying to avoid copying any of the payload. This
results in a significant throughput improvement (detailed results in the
specific patch).
Signed-off-by: Christoph Paasch <cpaasch@openai.com>
---
Changes in v2:
- Refine commit-message with more info and testing data
- Make mlx5e_cqe_get_min_hdr_len() return MLX5E_RX_MAX_HEAD when l3_type
is neither IPv4 nor IPv6. Same for the l4_type. That way behavior is
unchanged for other traffic types.
- Rename mlx5e_cqe_get_min_hdr_len to mlx5e_cqe_estimate_hdr_len
- Link to v1: https://lore.kernel.org/r/20250713-cpaasch-pf-927-netmlx5-avoid-copying-the-payload-to-the-malloced-area-v1-0-ecaed8c2844e@openai.com
---
Christoph Paasch (2):
net/mlx5: Bring back get_cqe_l3_hdr_type
net/mlx5: Avoid copying payload to the skb's linear part
drivers/net/ethernet/mellanox/mlx5/core/en_rx.c | 37 ++++++++++++++++++++++++-
include/linux/mlx5/device.h | 12 +++++++-
2 files changed, 47 insertions(+), 2 deletions(-)
---
base-commit: bab3ce404553de56242d7b09ad7ea5b70441ea41
change-id: 20250712-cpaasch-pf-927-netmlx5-avoid-copying-the-payload-to-the-malloced-area-6524917455a6
Best regards,
--
Christoph Paasch <cpaasch@openai.com>
^ permalink raw reply [flat|nested] 6+ messages in thread
* [PATCH net-next v2 1/2] net/mlx5: Bring back get_cqe_l3_hdr_type
2025-08-16 15:39 [PATCH net-next v2 0/2] net/mlx5: Avoid payload in skb's linear part for better GRO-processing Christoph Paasch via B4 Relay
@ 2025-08-16 15:39 ` Christoph Paasch via B4 Relay
2025-08-16 15:39 ` [PATCH net-next v2 2/2] net/mlx5: Avoid copying payload to the skb's linear part Christoph Paasch via B4 Relay
1 sibling, 0 replies; 6+ messages in thread
From: Christoph Paasch via B4 Relay @ 2025-08-16 15:39 UTC (permalink / raw)
To: Saeed Mahameed, Leon Romanovsky, Tariq Toukan, Mark Bloch,
Andrew Lunn, David S. Miller, Eric Dumazet, Jakub Kicinski,
Paolo Abeni, Alexander Lobakin, Gal Pressman
Cc: linux-rdma, netdev, Christoph Paasch
From: Christoph Paasch <cpaasch@openai.com>
Commit 66af4fe37119 ("net/mlx5: Remove unused functions") removed
get_cqe_l3_hdr_type. Let's bring it back.
Also, define CQE_L3_HDR_TYPE_* to identify IPv6 and IPv4 packets.
Signed-off-by: Christoph Paasch <cpaasch@openai.com>
---
include/linux/mlx5/device.h | 12 +++++++++++-
1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/include/linux/mlx5/device.h b/include/linux/mlx5/device.h
index 9d2467f982ad4697f0b36f6975b820c3a41fc78a..5e4a03cff0f1d9b11c5f562c23dbf85c3302f681 100644
--- a/include/linux/mlx5/device.h
+++ b/include/linux/mlx5/device.h
@@ -927,11 +927,16 @@ static inline u8 get_cqe_lro_tcppsh(struct mlx5_cqe64 *cqe)
return (cqe->lro.tcppsh_abort_dupack >> 6) & 1;
}
-static inline u8 get_cqe_l4_hdr_type(struct mlx5_cqe64 *cqe)
+static inline u8 get_cqe_l4_hdr_type(const struct mlx5_cqe64 *cqe)
{
return (cqe->l4_l3_hdr_type >> 4) & 0x7;
}
+static inline u8 get_cqe_l3_hdr_type(const struct mlx5_cqe64 *cqe)
+{
+ return (cqe->l4_l3_hdr_type >> 2) & 0x3;
+}
+
static inline bool cqe_is_tunneled(struct mlx5_cqe64 *cqe)
{
return cqe->tls_outer_l3_tunneled & 0x1;
@@ -1012,6 +1017,11 @@ enum {
CQE_L4_HDR_TYPE_TCP_ACK_AND_DATA = 0x4,
};
+enum {
+ CQE_L3_HDR_TYPE_IPV6 = 0x1,
+ CQE_L3_HDR_TYPE_IPV4 = 0x2,
+};
+
enum {
CQE_RSS_HTYPE_IP = GENMASK(3, 2),
/* cqe->rss_hash_type[3:2] - IP destination selected for hash
--
2.50.1
* [PATCH net-next v2 2/2] net/mlx5: Avoid copying payload to the skb's linear part
2025-08-16 15:39 [PATCH net-next v2 0/2] net/mlx5: Avoid payload in skb's linear part for better GRO-processing Christoph Paasch via B4 Relay
2025-08-16 15:39 ` [PATCH net-next v2 1/2] net/mlx5: Bring back get_cqe_l3_hdr_type Christoph Paasch via B4 Relay
@ 2025-08-16 15:39 ` Christoph Paasch via B4 Relay
2025-08-19 9:58 ` Dragos Tatulea
1 sibling, 1 reply; 6+ messages in thread
From: Christoph Paasch via B4 Relay @ 2025-08-16 15:39 UTC (permalink / raw)
To: Saeed Mahameed, Leon Romanovsky, Tariq Toukan, Mark Bloch,
Andrew Lunn, David S. Miller, Eric Dumazet, Jakub Kicinski,
Paolo Abeni, Alexander Lobakin, Gal Pressman
Cc: linux-rdma, netdev, Christoph Paasch
From: Christoph Paasch <cpaasch@openai.com>
mlx5e_skb_from_cqe_mpwrq_nonlinear() copies MLX5E_RX_MAX_HEAD (256)
bytes from the page-pool to the skb's linear part. Those 256 bytes
include part of the payload.
When attempting to do GRO in skb_gro_receive, if headlen > data_offset
(and skb->head_frag is not set), we end up aggregating packets in the
frag_list.
This is of course not good when we are CPU-limited. It also causes a worse
skb->len/truesize ratio.
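The condition described above can be sketched as a tiny predicate (a hand-condensed illustration of the coalescing decision in skb_gro_receive(), not the actual kernel code; the parameter names are chosen for this sketch):

```c
#include <assert.h>

/* Simplified sketch: if the linear part extends past the protocol
 * headers (headlen > data_offset) and the skb head is not a page
 * fragment, GRO can only chain the new segment onto the frag_list
 * instead of appending it to the frags[] array. */
static int gro_uses_frag_list(unsigned int headlen, unsigned int data_offset,
			      int head_frag)
{
	return headlen > data_offset && !head_frag;
}
```

With a 256-byte linear copy and ~66 bytes of headers, the first condition is true for every LRO'd segment, which is what pushes aggregation into the frag_list.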
So, let's avoid copying parts of the payload to the linear part. The
goal here is to err on the side of caution and prefer to copy too little
instead of copying too much (because once it has been copied over, we
trigger the above described behavior in skb_gro_receive).
So, we can do a rough estimate of the header-space by looking at
cqe_l3/l4_hdr_type. This is now done in mlx5e_cqe_estimate_hdr_len().
We always assume that TCP timestamps are present, as that's the most common
use-case.
That header-len is then used in mlx5e_skb_from_cqe_mpwrq_nonlinear for
the headlen (which defines what is being copied over). We still
allocate MLX5E_RX_MAX_HEAD for the skb so that if the networking stack
needs to call pskb_may_pull() later on, we don't need to reallocate
memory.
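The lower-bound arithmetic can be sketched in plain C (an illustration of the estimate only, with the classic on-wire header sizes hardcoded; the function and enum names here are stand-ins for the driver's mlx5e_cqe_estimate_hdr_len() and CQE type values):

```c
#include <assert.h>

#define ETH_HLEN_SZ      14	/* sizeof(struct ethhdr)  */
#define VLAN_HLEN_SZ      4	/* VLAN_HLEN              */
#define IPV4_HLEN_SZ     20	/* sizeof(struct iphdr)   */
#define IPV6_HLEN_SZ     40	/* sizeof(struct ipv6hdr) */
#define UDP_HLEN_SZ       8	/* sizeof(struct udphdr)  */
#define TCP_HLEN_SZ      20	/* sizeof(struct tcphdr)  */
#define TCP_TS_ALIGNED   12	/* TCPOLEN_TSTAMP_ALIGNED */
#define RX_MAX_HEAD     256	/* MLX5E_RX_MAX_HEAD      */

enum l3_kind { L3_OTHER, L3_IPV4, L3_IPV6 };
enum l4_kind { L4_OTHER, L4_UDP, L4_TCP };

/* Lower-bound header length; unknown traffic falls back to the old
 * RX_MAX_HEAD behavior, and TCP always assumes timestamps. */
static unsigned int estimate_hdr_len(int has_vlan, enum l3_kind l3,
				     enum l4_kind l4)
{
	unsigned int len = ETH_HLEN_SZ;

	if (has_vlan)
		len += VLAN_HLEN_SZ;

	if (l3 == L3_IPV4)
		len += IPV4_HLEN_SZ;
	else if (l3 == L3_IPV6)
		len += IPV6_HLEN_SZ;
	else
		return RX_MAX_HEAD;

	if (l4 == L4_UDP)
		len += UDP_HLEN_SZ;
	else if (l4 == L4_TCP)
		len += TCP_HLEN_SZ + TCP_TS_ALIGNED;
	else
		return RX_MAX_HEAD;

	return len;
}
```

So a plain IPv4/TCP flow with timestamps copies 66 bytes instead of 256, and anything the CQE cannot classify keeps the old behavior.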
This gives a nice throughput increase (ARM Neoverse-V2 with CX-7 NIC and
LRO enabled):
BEFORE:
=======
(netserver pinned to core receiving interrupts)
$ netperf -H 10.221.81.118 -T 80,9 -P 0 -l 60 -- -m 256K -M 256K
87380 16384 262144 60.01 32547.82
(netserver pinned to adjacent core receiving interrupts)
$ netperf -H 10.221.81.118 -T 80,10 -P 0 -l 60 -- -m 256K -M 256K
87380 16384 262144 60.00 52531.67
AFTER:
======
(netserver pinned to core receiving interrupts)
$ netperf -H 10.221.81.118 -T 80,9 -P 0 -l 60 -- -m 256K -M 256K
87380 16384 262144 60.00 52896.06
(netserver pinned to adjacent core receiving interrupts)
$ netperf -H 10.221.81.118 -T 80,10 -P 0 -l 60 -- -m 256K -M 256K
87380 16384 262144 60.00 85094.90
Additional tests across a larger range of parameters w/ and w/o LRO, w/
and w/o IPv6-encapsulation, different MTUs (1500, 4096, 9000), different
TCP read/write-sizes as well as UDP benchmarks, all have shown equal or
better performance with this patch.
Signed-off-by: Christoph Paasch <cpaasch@openai.com>
---
drivers/net/ethernet/mellanox/mlx5/core/en_rx.c | 37 ++++++++++++++++++++++++-
1 file changed, 36 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index b8c609d91d11bd315e8fb67f794a91bd37cd28c0..0f18d38f89f48f95a0ddd2c7d0b2a416fa76f6b3 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -1991,13 +1991,44 @@ mlx5e_shampo_fill_skb_data(struct sk_buff *skb, struct mlx5e_rq *rq,
} while (data_bcnt);
}
+static u16
+mlx5e_cqe_estimate_hdr_len(const struct mlx5_cqe64 *cqe)
+{
+ u16 hdr_len = sizeof(struct ethhdr);
+ u8 l3_type = get_cqe_l3_hdr_type(cqe);
+ u8 l4_type = get_cqe_l4_hdr_type(cqe);
+
+ if (cqe_has_vlan(cqe))
+ hdr_len += VLAN_HLEN;
+
+ if (l3_type == CQE_L3_HDR_TYPE_IPV4)
+ hdr_len += sizeof(struct iphdr);
+ else if (l3_type == CQE_L3_HDR_TYPE_IPV6)
+ hdr_len += sizeof(struct ipv6hdr);
+ else
+ return MLX5E_RX_MAX_HEAD;
+
+ if (l4_type == CQE_L4_HDR_TYPE_UDP)
+ hdr_len += sizeof(struct udphdr);
+ else if (l4_type & (CQE_L4_HDR_TYPE_TCP_NO_ACK |
+ CQE_L4_HDR_TYPE_TCP_ACK_NO_DATA |
+ CQE_L4_HDR_TYPE_TCP_ACK_AND_DATA))
+ /* Previous condition works because we know that
+ * l4_type != 0x2 (CQE_L4_HDR_TYPE_UDP)
+ */
+ hdr_len += sizeof(struct tcphdr) + TCPOLEN_TSTAMP_ALIGNED;
+ else
+ return MLX5E_RX_MAX_HEAD;
+
+ return hdr_len;
+}
+
static struct sk_buff *
mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
struct mlx5_cqe64 *cqe, u16 cqe_bcnt, u32 head_offset,
u32 page_idx)
{
struct mlx5e_frag_page *frag_page = &wi->alloc_units.frag_pages[page_idx];
- u16 headlen = min_t(u16, MLX5E_RX_MAX_HEAD, cqe_bcnt);
struct mlx5e_frag_page *head_page = frag_page;
struct mlx5e_xdp_buff *mxbuf = &rq->mxbuf;
u32 frag_offset = head_offset;
@@ -2009,10 +2040,14 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
u32 linear_frame_sz;
u16 linear_data_len;
u16 linear_hr;
+ u16 headlen;
void *va;
prog = rcu_dereference(rq->xdp_prog);
+ headlen = min3(mlx5e_cqe_estimate_hdr_len(cqe), cqe_bcnt,
+ (u16)MLX5E_RX_MAX_HEAD);
+
if (prog) {
/* area for bpf_xdp_[store|load]_bytes */
net_prefetchw(netmem_address(frag_page->netmem) + frag_offset);
--
2.50.1
* Re: [PATCH net-next v2 2/2] net/mlx5: Avoid copying payload to the skb's linear part
2025-08-16 15:39 ` [PATCH net-next v2 2/2] net/mlx5: Avoid copying payload to the skb's linear part Christoph Paasch via B4 Relay
@ 2025-08-19 9:58 ` Dragos Tatulea
2025-08-20 0:15 ` Jakub Kicinski
0 siblings, 1 reply; 6+ messages in thread
From: Dragos Tatulea @ 2025-08-19 9:58 UTC (permalink / raw)
To: cpaasch, Saeed Mahameed, Leon Romanovsky, Tariq Toukan,
Mark Bloch, Andrew Lunn, David S. Miller, Eric Dumazet,
Jakub Kicinski, Paolo Abeni, Alexander Lobakin, Gal Pressman
Cc: linux-rdma, netdev
On Sat, Aug 16, 2025 at 08:39:04AM -0700, Christoph Paasch via B4 Relay wrote:
> From: Christoph Paasch <cpaasch@openai.com>
>
> mlx5e_skb_from_cqe_mpwrq_nonlinear() copies MLX5E_RX_MAX_HEAD (256)
> bytes from the page-pool to the skb's linear part. Those 256 bytes
> include part of the payload.
>
> When attempting to do GRO in skb_gro_receive, if headlen > data_offset
> (and skb->head_frag is not set), we end up aggregating packets in the
> frag_list.
>
> This is of course not good when we are CPU-limited. It also causes a worse
> skb->len/truesize ratio.
>
> So, let's avoid copying parts of the payload to the linear part. The
> goal here is to err on the side of caution and prefer to copy too little
> instead of copying too much (because once it has been copied over, we
> trigger the above described behavior in skb_gro_receive).
>
> So, we can do a rough estimate of the header-space by looking at
> cqe_l3/l4_hdr_type. This is now done in mlx5e_cqe_estimate_hdr_len().
> We always assume that TCP timestamps are present, as that's the most common
> use-case.
>
> That header-len is then used in mlx5e_skb_from_cqe_mpwrq_nonlinear for
> the headlen (which defines what is being copied over). We still
> allocate MLX5E_RX_MAX_HEAD for the skb so that if the networking stack
> needs to call pskb_may_pull() later on, we don't need to reallocate
> memory.
>
> This gives a nice throughput increase (ARM Neoverse-V2 with CX-7 NIC and
> LRO enabled):
>
> BEFORE:
> =======
> (netserver pinned to core receiving interrupts)
> $ netperf -H 10.221.81.118 -T 80,9 -P 0 -l 60 -- -m 256K -M 256K
> 87380 16384 262144 60.01 32547.82
>
> (netserver pinned to adjacent core receiving interrupts)
> $ netperf -H 10.221.81.118 -T 80,10 -P 0 -l 60 -- -m 256K -M 256K
> 87380 16384 262144 60.00 52531.67
>
> AFTER:
> ======
> (netserver pinned to core receiving interrupts)
> $ netperf -H 10.221.81.118 -T 80,9 -P 0 -l 60 -- -m 256K -M 256K
> 87380 16384 262144 60.00 52896.06
>
> (netserver pinned to adjacent core receiving interrupts)
> $ netperf -H 10.221.81.118 -T 80,10 -P 0 -l 60 -- -m 256K -M 256K
> 87380 16384 262144 60.00 85094.90
As Tariq and Gal said: Nice!
> Additional tests across a larger range of parameters w/ and w/o LRO, w/
> and w/o IPv6-encapsulation, different MTUs (1500, 4096, 9000), different
> TCP read/write-sizes as well as UDP benchmarks, all have shown equal or
> better performance with this patch.
>
> Signed-off-by: Christoph Paasch <cpaasch@openai.com>
> ---
> drivers/net/ethernet/mellanox/mlx5/core/en_rx.c | 37 ++++++++++++++++++++++++-
> 1 file changed, 36 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> index b8c609d91d11bd315e8fb67f794a91bd37cd28c0..0f18d38f89f48f95a0ddd2c7d0b2a416fa76f6b3 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> @@ -1991,13 +1991,44 @@ mlx5e_shampo_fill_skb_data(struct sk_buff *skb, struct mlx5e_rq *rq,
> } while (data_bcnt);
> }
>
> +static u16
> +mlx5e_cqe_estimate_hdr_len(const struct mlx5_cqe64 *cqe)
> +{
> + u16 hdr_len = sizeof(struct ethhdr);
> + u8 l3_type = get_cqe_l3_hdr_type(cqe);
> + u8 l4_type = get_cqe_l4_hdr_type(cqe);
> +
> + if (cqe_has_vlan(cqe))
> + hdr_len += VLAN_HLEN;
> +
> + if (l3_type == CQE_L3_HDR_TYPE_IPV4)
> + hdr_len += sizeof(struct iphdr);
> + else if (l3_type == CQE_L3_HDR_TYPE_IPV6)
> + hdr_len += sizeof(struct ipv6hdr);
> + else
> + return MLX5E_RX_MAX_HEAD;
> +
> + if (l4_type == CQE_L4_HDR_TYPE_UDP)
> + hdr_len += sizeof(struct udphdr);
> + else if (l4_type & (CQE_L4_HDR_TYPE_TCP_NO_ACK |
> + CQE_L4_HDR_TYPE_TCP_ACK_NO_DATA |
> + CQE_L4_HDR_TYPE_TCP_ACK_AND_DATA))
> + /* Previous condition works because we know that
> + * l4_type != 0x2 (CQE_L4_HDR_TYPE_UDP)
> + */
> + hdr_len += sizeof(struct tcphdr) + TCPOLEN_TSTAMP_ALIGNED;
> + else
> + return MLX5E_RX_MAX_HEAD;
> +
> + return hdr_len;
> +}
> +
> static struct sk_buff *
> mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
> struct mlx5_cqe64 *cqe, u16 cqe_bcnt, u32 head_offset,
> u32 page_idx)
> {
> struct mlx5e_frag_page *frag_page = &wi->alloc_units.frag_pages[page_idx];
> - u16 headlen = min_t(u16, MLX5E_RX_MAX_HEAD, cqe_bcnt);
> struct mlx5e_frag_page *head_page = frag_page;
> struct mlx5e_xdp_buff *mxbuf = &rq->mxbuf;
> u32 frag_offset = head_offset;
> @@ -2009,10 +2040,14 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
> u32 linear_frame_sz;
> u16 linear_data_len;
> u16 linear_hr;
> + u16 headlen;
> void *va;
>
> prog = rcu_dereference(rq->xdp_prog);
>
> + headlen = min3(mlx5e_cqe_estimate_hdr_len(cqe), cqe_bcnt,
> + (u16)MLX5E_RX_MAX_HEAD);
> +
How about keeping the old calculation for XDP and doing this one for
non-XDP in the following if/else block?
This way XDP perf will not be impacted by the extra call to
mlx5e_cqe_estimate_hdr_len().
Thanks,
Dragos
* Re: [PATCH net-next v2 2/2] net/mlx5: Avoid copying payload to the skb's linear part
2025-08-19 9:58 ` Dragos Tatulea
@ 2025-08-20 0:15 ` Jakub Kicinski
2025-08-21 22:59 ` Christoph Paasch
0 siblings, 1 reply; 6+ messages in thread
From: Jakub Kicinski @ 2025-08-20 0:15 UTC (permalink / raw)
To: Dragos Tatulea
Cc: cpaasch, Saeed Mahameed, Leon Romanovsky, Tariq Toukan,
Mark Bloch, Andrew Lunn, David S. Miller, Eric Dumazet,
Paolo Abeni, Alexander Lobakin, Gal Pressman, linux-rdma, netdev
On Tue, 19 Aug 2025 09:58:54 +0000 Dragos Tatulea wrote:
> > @@ -2009,10 +2040,14 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
> > u32 linear_frame_sz;
> > u16 linear_data_len;
> > u16 linear_hr;
> > + u16 headlen;
> > void *va;
> >
> > prog = rcu_dereference(rq->xdp_prog);
> >
> > + headlen = min3(mlx5e_cqe_estimate_hdr_len(cqe), cqe_bcnt,
> > + (u16)MLX5E_RX_MAX_HEAD);
> > +
> How about keeping the old calculation for XDP and do this one for
> non-xdp in the following if/else block?
>
> This way XDP perf will not be impacted by the extra call to
> mlx5e_cqe_estimate_hdr_len().
Perhaps move it further down for XDP?
Ideally attaching a program which returns XDP_PASS shouldn't impact
normal TCP perf.
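The restructuring being discussed could look roughly like this (a hypothetical sketch of the direction for a respin, not the actual driver code; the function and parameter names are invented for this illustration):

```c
#include <assert.h>

#define RX_MAX_HEAD 256	/* MLX5E_RX_MAX_HEAD */

static unsigned int min_u(unsigned int a, unsigned int b)
{
	return a < b ? a : b;
}

/* Hypothetical: defer the header estimate so the XDP path keeps the
 * old fixed-size copy, and only the non-XDP path pays for the
 * estimate call. */
static unsigned int compute_headlen(int has_xdp_prog, unsigned int cqe_bcnt,
				    unsigned int estimated_hdr_len)
{
	if (has_xdp_prog)
		return min_u(RX_MAX_HEAD, cqe_bcnt);	/* old behavior */

	return min_u(min_u(estimated_hdr_len, cqe_bcnt), RX_MAX_HEAD);
}
```

Note this sketch only avoids the estimate call for the XDP path; Jakub's point goes further, asking that an XDP_PASS program ideally end up with the same short headlen as the non-XDP path.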
* Re: [PATCH net-next v2 2/2] net/mlx5: Avoid copying payload to the skb's linear part
2025-08-20 0:15 ` Jakub Kicinski
@ 2025-08-21 22:59 ` Christoph Paasch
0 siblings, 0 replies; 6+ messages in thread
From: Christoph Paasch @ 2025-08-21 22:59 UTC (permalink / raw)
To: Jakub Kicinski
Cc: Dragos Tatulea, Saeed Mahameed, Leon Romanovsky, Tariq Toukan,
Mark Bloch, Andrew Lunn, David S. Miller, Eric Dumazet,
Paolo Abeni, Alexander Lobakin, Gal Pressman, linux-rdma, netdev
On Tue, Aug 19, 2025 at 5:15 PM Jakub Kicinski <kuba@kernel.org> wrote:
>
> On Tue, 19 Aug 2025 09:58:54 +0000 Dragos Tatulea wrote:
> > > @@ -2009,10 +2040,14 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
> > > u32 linear_frame_sz;
> > > u16 linear_data_len;
> > > u16 linear_hr;
> > > + u16 headlen;
> > > void *va;
> > >
> > > prog = rcu_dereference(rq->xdp_prog);
> > >
> > > + headlen = min3(mlx5e_cqe_estimate_hdr_len(cqe), cqe_bcnt,
> > > + (u16)MLX5E_RX_MAX_HEAD);
> > > +
> > How about keeping the old calculation for XDP and do this one for
> > non-xdp in the following if/else block?
> >
> > This way XDP perf will not be impacted by the extra call to
> > mlx5e_cqe_estimate_hdr_len().
>
> Perhaps move it further down for XDP?
> Ideally attaching a program which returns XDP_PASS shouldn't impact
> normal TCP perf.
Yes, makes sense!
Will do that and resubmit.
Thanks,
Christoph