* [Intel-wired-lan] [PATCH iwl-net] idpf: cap maximum Rx buffer size
@ 2025-11-03 21:20 Joshua Hay
2025-11-03 23:01 ` Jacob Keller
` (2 more replies)
0 siblings, 3 replies; 4+ messages in thread
From: Joshua Hay @ 2025-11-03 21:20 UTC (permalink / raw)
To: intel-wired-lan; +Cc: netdev, Joshua Hay, Alexander Lobakin, Madhu Chittim
The HW only supports a maximum Rx buffer size of 16K-128. On systems
using large pages, the libeth logic can configure the buffer size to be
larger than this. The upper bound is PAGE_SIZE while the lower bound is
MTU rounded up to the nearest power of 2. For example, ARM systems with
a 64K page size and an MTU of 9000 will set the Rx buffer size to 16K,
which will cause the config Rx queues message to fail.
Initialize the bufq/fill queue buf_len field to the maximum supported
size. This will trigger the libeth logic to cap the maximum Rx buffer
size by reducing the upper bound.
Fixes: 74d1412ac8f37 ("idpf: use libeth Rx buffer management for payload buffer")
Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
Acked-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Reviewed-by: Madhu Chittim <madhu.chittim@intel.com>
---
drivers/net/ethernet/intel/idpf/idpf_txrx.c | 8 +++++---
drivers/net/ethernet/intel/idpf/idpf_txrx.h | 1 +
2 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
index 828f7c444d30..dcdd4fef1c7a 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
@@ -695,9 +695,10 @@ static int idpf_rx_buf_alloc_singleq(struct idpf_rx_queue *rxq)
static int idpf_rx_bufs_init_singleq(struct idpf_rx_queue *rxq)
{
struct libeth_fq fq = {
- .count = rxq->desc_count,
- .type = LIBETH_FQE_MTU,
- .nid = idpf_q_vector_to_mem(rxq->q_vector),
+ .count = rxq->desc_count,
+ .type = LIBETH_FQE_MTU,
+ .buf_len = IDPF_RX_MAX_BUF_SZ,
+ .nid = idpf_q_vector_to_mem(rxq->q_vector),
};
int ret;
@@ -754,6 +755,7 @@ static int idpf_rx_bufs_init(struct idpf_buf_queue *bufq,
.truesize = bufq->truesize,
.count = bufq->desc_count,
.type = type,
+ .buf_len = IDPF_RX_MAX_BUF_SZ,
.hsplit = idpf_queue_has(HSPLIT_EN, bufq),
.xdp = idpf_xdp_enabled(bufq->q_vector->vport),
.nid = idpf_q_vector_to_mem(bufq->q_vector),
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
index 75b977094741..a1255099656f 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
@@ -101,6 +101,7 @@ do { \
idx = 0; \
} while (0)
+#define IDPF_RX_MAX_BUF_SZ (16384 - 128)
#define IDPF_RX_BUF_STRIDE 32
#define IDPF_RX_BUF_POST_STRIDE 16
#define IDPF_LOW_WATERMARK 64
--
2.39.2
* Re: [Intel-wired-lan] [PATCH iwl-net] idpf: cap maximum Rx buffer size
2025-11-03 21:20 [Intel-wired-lan] [PATCH iwl-net] idpf: cap maximum Rx buffer size Joshua Hay
@ 2025-11-03 23:01 ` Jacob Keller
2025-11-05 7:26 ` Loktionov, Aleksandr
2025-12-30 23:23 ` David Decotigny via Intel-wired-lan
From: Jacob Keller @ 2025-11-03 23:01 UTC (permalink / raw)
To: Joshua Hay, intel-wired-lan; +Cc: netdev, Alexander Lobakin, Madhu Chittim
On 11/3/2025 1:20 PM, Joshua Hay wrote:
> The HW only supports a maximum Rx buffer size of 16K-128. On systems
> using large pages, the libeth logic can configure the buffer size to be
> larger than this. The upper bound is PAGE_SIZE while the lower bound is
> MTU rounded up to the nearest power of 2. For example, ARM systems with
> a 64K page size and an mtu of 9000 will set the Rx buffer size to 16K,
> which will cause the config Rx queues message to fail.
>
> Initialize the bufq/fill queue buf_len field to the maximum supported
> size. This will trigger the libeth logic to cap the maximum Rx buffer
> size by reducing the upper bound.
>
> Fixes: 74d1412ac8f37 ("idpf: use libeth Rx buffer management for payload buffer")
> Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
> Acked-by: Alexander Lobakin <aleksander.lobakin@intel.com>
> Reviewed-by: Madhu Chittim <madhu.chittim@intel.com>
> ---
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
> drivers/net/ethernet/intel/idpf/idpf_txrx.c | 8 +++++---
> drivers/net/ethernet/intel/idpf/idpf_txrx.h | 1 +
> 2 files changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> index 828f7c444d30..dcdd4fef1c7a 100644
> --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> @@ -695,9 +695,10 @@ static int idpf_rx_buf_alloc_singleq(struct idpf_rx_queue *rxq)
> static int idpf_rx_bufs_init_singleq(struct idpf_rx_queue *rxq)
> {
> struct libeth_fq fq = {
> - .count = rxq->desc_count,
> - .type = LIBETH_FQE_MTU,
> - .nid = idpf_q_vector_to_mem(rxq->q_vector),
> + .count = rxq->desc_count,
> + .type = LIBETH_FQE_MTU,
> + .buf_len = IDPF_RX_MAX_BUF_SZ,
> + .nid = idpf_q_vector_to_mem(rxq->q_vector),
> };
> int ret;
>
> @@ -754,6 +755,7 @@ static int idpf_rx_bufs_init(struct idpf_buf_queue *bufq,
> .truesize = bufq->truesize,
> .count = bufq->desc_count,
> .type = type,
> + .buf_len = IDPF_RX_MAX_BUF_SZ,
> .hsplit = idpf_queue_has(HSPLIT_EN, bufq),
> .xdp = idpf_xdp_enabled(bufq->q_vector->vport),
> .nid = idpf_q_vector_to_mem(bufq->q_vector),
> diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
> index 75b977094741..a1255099656f 100644
> --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h
> +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
> @@ -101,6 +101,7 @@ do { \
> idx = 0; \
> } while (0)
>
> +#define IDPF_RX_MAX_BUF_SZ (16384 - 128)
> #define IDPF_RX_BUF_STRIDE 32
> #define IDPF_RX_BUF_POST_STRIDE 16
> #define IDPF_LOW_WATERMARK 64
* Re: [Intel-wired-lan] [PATCH iwl-net] idpf: cap maximum Rx buffer size
2025-11-03 21:20 [Intel-wired-lan] [PATCH iwl-net] idpf: cap maximum Rx buffer size Joshua Hay
2025-11-03 23:01 ` Jacob Keller
@ 2025-11-05 7:26 ` Loktionov, Aleksandr
2025-12-30 23:23 ` David Decotigny via Intel-wired-lan
From: Loktionov, Aleksandr @ 2025-11-05 7:26 UTC (permalink / raw)
To: Hay, Joshua A, intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, Hay, Joshua A, Lobakin, Aleksander,
Chittim, Madhu
> -----Original Message-----
> From: Intel-wired-lan <intel-wired-lan-bounces@osuosl.org> On Behalf
> Of Joshua Hay
> Sent: Monday, November 3, 2025 10:21 PM
> To: intel-wired-lan@lists.osuosl.org
> Cc: netdev@vger.kernel.org; Hay, Joshua A <joshua.a.hay@intel.com>;
> Lobakin, Aleksander <aleksander.lobakin@intel.com>; Chittim, Madhu
> <madhu.chittim@intel.com>
> Subject: [Intel-wired-lan] [PATCH iwl-net] idpf: cap maximum Rx buffer
> size
>
> The HW only supports a maximum Rx buffer size of 16K-128. On systems
> using large pages, the libeth logic can configure the buffer size to
> be larger than this. The upper bound is PAGE_SIZE while the lower
> bound is MTU rounded up to the nearest power of 2. For example, ARM
> systems with a 64K page size and an mtu of 9000 will set the Rx buffer
> size to 16K, which will cause the config Rx queues message to fail.
>
> Initialize the bufq/fill queue buf_len field to the maximum supported
> size. This will trigger the libeth logic to cap the maximum Rx buffer
> size by reducing the upper bound.
>
> Fixes: 74d1412ac8f37 ("idpf: use libeth Rx buffer management for
> payload buffer")
> Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
> Acked-by: Alexander Lobakin <aleksander.lobakin@intel.com>
> Reviewed-by: Madhu Chittim <madhu.chittim@intel.com>
> ---
> drivers/net/ethernet/intel/idpf/idpf_txrx.c | 8 +++++---
> drivers/net/ethernet/intel/idpf/idpf_txrx.h | 1 +
> 2 files changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> index 828f7c444d30..dcdd4fef1c7a 100644
> --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> @@ -695,9 +695,10 @@ static int idpf_rx_buf_alloc_singleq(struct idpf_rx_queue *rxq)
> static int idpf_rx_bufs_init_singleq(struct idpf_rx_queue *rxq)
> {
> struct libeth_fq fq = {
> - .count = rxq->desc_count,
> - .type = LIBETH_FQE_MTU,
> - .nid = idpf_q_vector_to_mem(rxq->q_vector),
> + .count = rxq->desc_count,
> + .type = LIBETH_FQE_MTU,
> + .buf_len = IDPF_RX_MAX_BUF_SZ,
> + .nid = idpf_q_vector_to_mem(rxq->q_vector),
> };
> int ret;
>
> @@ -754,6 +755,7 @@ static int idpf_rx_bufs_init(struct idpf_buf_queue *bufq,
> .truesize = bufq->truesize,
> .count = bufq->desc_count,
> .type = type,
> + .buf_len = IDPF_RX_MAX_BUF_SZ,
> .hsplit = idpf_queue_has(HSPLIT_EN, bufq),
> .xdp = idpf_xdp_enabled(bufq->q_vector->vport),
> .nid = idpf_q_vector_to_mem(bufq->q_vector),
> diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
> index 75b977094741..a1255099656f 100644
> --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h
> +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
> @@ -101,6 +101,7 @@ do { \
> idx = 0; \
> } while (0)
>
> +#define IDPF_RX_MAX_BUF_SZ (16384 - 128)
> #define IDPF_RX_BUF_STRIDE 32
> #define IDPF_RX_BUF_POST_STRIDE 16
> #define IDPF_LOW_WATERMARK 64
> --
> 2.39.2
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
* Re: [Intel-wired-lan] [PATCH iwl-net] idpf: cap maximum Rx buffer size
2025-11-03 21:20 [Intel-wired-lan] [PATCH iwl-net] idpf: cap maximum Rx buffer size Joshua Hay
2025-11-03 23:01 ` Jacob Keller
2025-11-05 7:26 ` Loktionov, Aleksandr
@ 2025-12-30 23:23 ` David Decotigny via Intel-wired-lan
From: David Decotigny via Intel-wired-lan @ 2025-12-30 23:23 UTC (permalink / raw)
To: intel-wired-lan
Cc: netdev, Joshua Hay, Alexander Lobakin, Madhu Chittim,
David Decotigny
On 11/3/2025 1:20 PM, Joshua Hay wrote:
> The HW only supports a maximum Rx buffer size of 16K-128. On systems
> using large pages, the libeth logic can configure the buffer size to be
> larger than this. The upper bound is PAGE_SIZE while the lower bound is
> MTU rounded up to the nearest power of 2. For example, ARM systems with
> a 64K page size and an mtu of 9000 will set the Rx buffer size to 16K,
> which will cause the config Rx queues message to fail.
>
> Initialize the bufq/fill queue buf_len field to the maximum supported
> size. This will trigger the libeth logic to cap the maximum Rx buffer
> size by reducing the upper bound.
>
> Fixes: 74d1412ac8f37 ("idpf: use libeth Rx buffer management for payload buffer")
> Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
> Acked-by: Alexander Lobakin <aleksander.lobakin@intel.com>
> Reviewed-by: Madhu Chittim <madhu.chittim@intel.com>
> ---
Reviewed-by: David Decotigny <ddecotig@google.com>