public inbox for netdev@vger.kernel.org
* [PATCH net-next v5 0/2] net: mana: add ethtool private flag for full-page RX buffers
@ 2026-04-05  3:42 Dipayaan Roy
  2026-04-07 13:10 ` Alexander Lobakin
  0 siblings, 1 reply; 3+ messages in thread
From: Dipayaan Roy @ 2026-04-05  3:42 UTC (permalink / raw)
  To: kys, haiyangz, wei.liu, decui, andrew+netdev, davem, edumazet,
	kuba, pabeni, leon, longli, kotaranov, horms, shradhagupta,
	ssengar, ernis, shirazsaleem, linux-hyperv, netdev, linux-kernel,
	linux-rdma, stephen, jacob.e.keller, dipayanroy, leitao, kees

On some ARM64 platforms with 4K PAGE_SIZE, using page_pool
fragments in the RX refill path (~2 KB buffer per fragment)
causes a 15-20% throughput regression under high connection counts
(>16 TCP streams at 180+ Gbps). Using full-page buffers on these
platforms shows no regression and restores line-rate performance.

This behavior is observed on a single platform; other platforms
perform better with page_pool fragments, indicating this is not a
page_pool issue but a platform-specific one.

This series adds an ethtool private flag "full-page-rx" to let the
user opt in to one RX buffer per page:

  ethtool --set-priv-flags eth0 full-page-rx on

There is no behavioral change by default. The flag can be persisted
via a udev rule on affected platforms.

Changes in v5:
  - Split prep refactor into separate patch (patch 1/2)
Changes in v4:
  - Dropped the SMBIOS string parsing; added an ethtool priv flag
    to reconfigure the queues with full-page RX buffers.
Changes in v3:
  - Changed u8* to char*.
Changes in v2:
  - Separated reading the string index from reading the string;
    removed inline.

Dipayaan Roy (2):
  net: mana: refactor mana_get_strings() and mana_get_sset_count() to
    use switch
  net: mana: force full-page RX buffers via ethtool private flag

 drivers/net/ethernet/microsoft/mana/mana_en.c |  22 ++-
 .../ethernet/microsoft/mana/mana_ethtool.c    | 164 ++++++++++++++----
 include/net/mana/mana.h                       |   8 +
 3 files changed, 163 insertions(+), 31 deletions(-)

-- 
2.43.0


^ permalink raw reply	[flat|nested] 3+ messages in thread

* Re: [PATCH net-next v5 0/2] net: mana: add ethtool private flag for full-page RX buffers
  2026-04-05  3:42 [PATCH net-next v5 0/2] net: mana: add ethtool private flag for full-page RX buffers Dipayaan Roy
@ 2026-04-07 13:10 ` Alexander Lobakin
  2026-04-07 13:54   ` Dipayaan Roy
  0 siblings, 1 reply; 3+ messages in thread
From: Alexander Lobakin @ 2026-04-07 13:10 UTC (permalink / raw)
  To: Dipayaan Roy
  Cc: kys, haiyangz, wei.liu, decui, andrew+netdev, davem, edumazet,
	kuba, pabeni, leon, longli, kotaranov, horms, shradhagupta,
	ssengar, ernis, shirazsaleem, linux-hyperv, netdev, linux-kernel,
	linux-rdma, stephen, jacob.e.keller, dipayanroy, leitao, kees

From: Dipayaan Roy <dipayanroy@linux.microsoft.com>
Date: Sat, 4 Apr 2026 20:42:15 -0700

> On some ARM64 platforms with 4K PAGE_SIZE, using page_pool
> fragments in the RX refill path (~2 KB buffer per fragment)
> causes a 15-20% throughput regression under high connection counts
> (>16 TCP streams at 180+ Gbps). Using full-page buffers on these
> platforms shows no regression and restores line-rate performance.
> 
> This behavior is observed on a single platform; other platforms
> perform better with page_pool fragments, indicating this is not a
> page_pool issue but a platform-specific one.
> 
> This series adds an ethtool private flag "full-page-rx" to let the
> user opt in to one RX buffer per page:
> 
>   ethtool --set-priv-flags eth0 full-page-rx on

Sorry I may've missed the previous threads.

Has this approach been discussed here? Private flags are generally
discouraged.

Alternatively, you can provide Ethtool ops to change the Rx buffer size,
so that you'd be able to set it to PAGE_SIZE on affected platforms and
the result would be the same.
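For reference, the generic knob being suggested here already exists in the ethtool ring-parameter API (the `rx-buf-len` ring parameter, `ETHTOOL_A_RINGS_RX_BUF_LEN` over netlink). Assuming a driver wires it up (mana does not in this series, so `eth0` here is purely hypothetical), the userspace side would look like:

```shell
# Query ring parameters; drivers that support it report "RX Buf Len".
ethtool -g eth0
# Ask the driver to use a full 4K page per RX buffer.
ethtool -G eth0 rx-buf-len 4096
```

This achieves the same effect as the private flag on affected platforms, but through a standard, driver-agnostic interface.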

> 
> There is no behavioral change by default. The flag can be persisted
> via a udev rule on affected platforms.
> 
> Changes in v5:
>   - Split prep refactor into separate patch (patch 1/2)
> Changes in v4:
>   - Dropped the SMBIOS string parsing; added an ethtool priv flag
>     to reconfigure the queues with full-page RX buffers.
> Changes in v3:
>   - Changed u8* to char*.
> Changes in v2:
>   - Separated reading the string index from reading the string;
>     removed inline.
> 
> Dipayaan Roy (2):
>   net: mana: refactor mana_get_strings() and mana_get_sset_count() to
>     use switch
>   net: mana: force full-page RX buffers via ethtool private flag
> 
>  drivers/net/ethernet/microsoft/mana/mana_en.c |  22 ++-
>  .../ethernet/microsoft/mana/mana_ethtool.c    | 164 ++++++++++++++----
>  include/net/mana/mana.h                       |   8 +
>  3 files changed, 163 insertions(+), 31 deletions(-)

Thanks,
Olek


* Re: [PATCH net-next v5 0/2] net: mana: add ethtool private flag for full-page RX buffers
  2026-04-07 13:10 ` Alexander Lobakin
@ 2026-04-07 13:54   ` Dipayaan Roy
  0 siblings, 0 replies; 3+ messages in thread
From: Dipayaan Roy @ 2026-04-07 13:54 UTC (permalink / raw)
  To: Alexander Lobakin
  Cc: kys, haiyangz, wei.liu, decui, andrew+netdev, davem, edumazet,
	kuba, pabeni, leon, longli, kotaranov, horms, shradhagupta,
	ssengar, ernis, shirazsaleem, linux-hyperv, netdev, linux-kernel,
	linux-rdma, stephen, jacob.e.keller, dipayanroy, leitao, kees

On Tue, Apr 07, 2026 at 03:10:45PM +0200, Alexander Lobakin wrote:
> From: Dipayaan Roy <dipayanroy@linux.microsoft.com>
> Date: Sat, 4 Apr 2026 20:42:15 -0700
> 
> > On some ARM64 platforms with 4K PAGE_SIZE, using page_pool
> > fragments in the RX refill path (~2 KB buffer per fragment)
> > causes a 15-20% throughput regression under high connection counts
> > (>16 TCP streams at 180+ Gbps). Using full-page buffers on these
> > platforms shows no regression and restores line-rate performance.
> > 
> > This behavior is observed on a single platform; other platforms
> > perform better with page_pool fragments, indicating this is not a
> > page_pool issue but a platform-specific one.
> > 
> > This series adds an ethtool private flag "full-page-rx" to let the
> > user opt in to one RX buffer per page:
> > 
> >   ethtool --set-priv-flags eth0 full-page-rx on
> 
> Sorry I may've missed the previous threads.
> 
> Has this approach been discussed here? Private flags are generally
> discouraged.
> 
> Alternatively, you can provide Ethtool ops to change the Rx buffer size,
> so that you'd be able to set it to PAGE_SIZE on affected platforms and
> the result would be the same.
>
Hi Alex,

This was discussed here:
https://lore.kernel.org/all/adHTm2SvjDrezEdv@linuxonhyperv3.guj3yctzbm1etfxqx2vob5hsef.xx.internal.cloudapp.net/
> > 
> > There is no behavioral change by default. The flag can be persisted
> > via a udev rule on affected platforms.
> > 
> > Changes in v5:
> >   - Split prep refactor into separate patch (patch 1/2)
> > Changes in v4:
> >   - Dropped the SMBIOS string parsing; added an ethtool priv flag
> >     to reconfigure the queues with full-page RX buffers.
> > Changes in v3:
> >   - Changed u8* to char*.
> > Changes in v2:
> >   - Separated reading the string index from reading the string;
> >     removed inline.
> > 
> > Dipayaan Roy (2):
> >   net: mana: refactor mana_get_strings() and mana_get_sset_count() to
> >     use switch
> >   net: mana: force full-page RX buffers via ethtool private flag
> > 
> >  drivers/net/ethernet/microsoft/mana/mana_en.c |  22 ++-
> >  .../ethernet/microsoft/mana/mana_ethtool.c    | 164 ++++++++++++++----
> >  include/net/mana/mana.h                       |   8 +
> >  3 files changed, 163 insertions(+), 31 deletions(-)
> 
> Thanks,
> Olek


end of thread, other threads:[~2026-04-07 13:54 UTC | newest]

Thread overview: 3+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-04-05  3:42 [PATCH net-next v5 0/2] net: mana: add ethtool private flag for full-page RX buffers Dipayaan Roy
2026-04-07 13:10 ` Alexander Lobakin
2026-04-07 13:54   ` Dipayaan Roy

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox