* [net-next v5 0/9] Add support for per-NAPI config via netlink
@ 2024-10-09  0:54 Joe Damato
  2024-10-09  0:55 ` [net-next v5 8/9] mlx5: Add support for persistent NAPI config Joe Damato
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Joe Damato @ 2024-10-09  0:54 UTC (permalink / raw)
  To: netdev
  Cc: mkarsten, skhawaja, sdf, bjorn, amritha.nambiar,
	sridhar.samudrala, willemdebruijn.kernel, Joe Damato,
	Alexander Lobakin, Breno Leitao, Daniel Jurgens, David Ahern,
	David S. Miller, Donald Hunter, Eric Dumazet, Jakub Kicinski,
	Jesper Dangaard Brouer, Jiri Pirko, Johannes Berg,
	Jonathan Corbet, Kory Maincent, Leon Romanovsky,
	open list:DOCUMENTATION, open list,
	open list:MELLANOX MLX4 core VPI driver, Lorenzo Bianconi,
	Michael Chan, Mina Almasry, Paolo Abeni, Saeed Mahameed,
	Sebastian Andrzej Siewior, Tariq Toukan, Xuan Zhuo

Greetings:

Welcome to v5, the first non-RFC version of this code! See the changelog
below for more details on changes between revisions.

A few important call-outs for reviewers:

  1. This revision seems to work (see below for a full walkthrough). I
     think this is the behavior we talked about, but please let me know
     if a use case is missing.
  
  2. Re a previous point made by Stanislav regarding "taking over a NAPI
     ID" when the channel count changes: mlx5 seems to call napi_disable
     followed by netif_napi_del for the old queues and then calls
     napi_enable for the new ones. In this series, NAPI ID generation is
     deferred to napi_enable, which means we won't end up with two
     identical NAPI IDs added to the hash at the same time.

     Can we assume all drivers will napi_disable the old queues before
     napi_enable-ing the new ones? (See the ordering sketch after this
     list.)
       - If yes: we might not need to worry about a NAPI ID takeover
         function.
       - If no: I'll need to make a change so that NAPI ID generation is
         deferred only for drivers which have opted into the config space
         via calls to netif_napi_add_config.

  3. I removed the WARN_ON_ONCE(txqs != rxqs) that (I think?) Jakub
     previously suggested for alloc_netdev_mqs, because it triggered on
     every kernel boot with my mlx5 NIC.

  4. I left the "maxqs = max(txqs, rxqs);" in alloc_netdev_mqs despite
     finding it a bit strange: we might end up short some number of NAPI
     configs. Most people seem to be in favor of this approach, though,
     so I've left it.

I'd appreciate thoughts from reviewers on the above items, if at all
possible.
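
For reference, here is the ordering that item 2 assumes, sketched as
hypothetical driver code (the channel struct, poll stub, and helper
below are made up for illustration; no real driver is quoted):

/* Illustrative only: every old NAPI is disabled (and removed from the
 * hash) before any new NAPI is enabled (and assigned or restored an
 * ID), so two identical NAPI IDs never coexist in the hash.
 */
struct example_channel {
	struct napi_struct napi;
};

static int example_poll(struct napi_struct *napi, int budget)
{
	return 0; /* stub */
}

static void example_replace_channels(struct net_device *dev,
				     struct example_channel *old, int n_old,
				     struct example_channel *new, int n_new)
{
	int i;

	for (i = 0; i < n_old; i++) {
		napi_disable(&old[i].napi);	/* removed from the hash here */
		netif_napi_del(&old[i].napi);
	}

	for (i = 0; i < n_new; i++) {
		/* the index selects the persistent config slot */
		netif_napi_add_config(dev, &new[i].napi, example_poll, i);
		napi_enable(&new[i].napi);	/* NAPI ID generated/restored here */
	}
}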

Now, on to the implementation.

Firstly, this implementation moves the following settings into
napi_struct so that they are "per-NAPI", while taking care to respect
the existing interface-wide sysfs parameters that affect all NAPIs:
  - NAPI ID
  - gro_flush_timeout
  - defer_hard_irqs

Furthermore:
  - NAPI ID generation and addition to the hash are now deferred to
    napi_enable, instead of happening during netif_napi_add.
  - NAPIs are removed from the hash during napi_disable, instead of
    netif_napi_del.
  - An array of "struct napi_config" is allocated in net_device.
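
For orientation, the config space looks roughly like this (a simplified
sketch; see patches 1, 3, and 5 for the actual definitions and field
placement):

/* Sketch only: the per-NAPI settings that persist across queue
 * destroy/recreate cycles for drivers that opt in.
 */
struct napi_config {
	u64		gro_flush_timeout;
	u32		defer_hard_irqs;
	unsigned int	napi_id;	/* persisted so the ID stays stable */
};

net_device carries a "struct napi_config *napi_config" array with
max(txqs, rxqs) entries (see item 4 above), and each opted-in
napi_struct points at the entry selected by the index passed to
netif_napi_add_config.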

IMPORTANT: The above changes affect all network drivers.

Optionally, drivers may opt-in to using their config space by calling
netif_napi_add_config instead of netif_napi_add.

If a driver does this, the NAPI being added is linked with an allocated
"struct napi_config" and the per-NAPI settings (including NAPI ID) are
persisted even as hardware queues are destroyed and recreated.
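
Opting in is a one-line change per NAPI; for example, the mlx5 patch in
this series does:

-	netif_napi_add(netdev, &c->napi, mlx5e_napi_poll);
+	netif_napi_add_config(netdev, &c->napi, mlx5e_napi_poll, ix);

where ix is the channel index used to select the NAPI's config slot.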

To help illustrate how this would end up working, I've added patches
for 3 drivers, of which I have access to only 1:
  - mlx5, which is the basis of the examples below
  - mlx4, which has TX-only NAPIs, just to highlight that case
  - bnxt

NOTE: I only tested this on mlx5; the mlx4 and bnxt patches are
compile-tested only, as I don't have that hardware. Hopefully other
folks can help test :)

Here's how it works when I test it on my mlx5 system:

# start with 2 queues

$ ethtool -l eth4 | grep Combined | tail -1
Combined:       2

First, output the current NAPI settings:

$ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \
                         --dump napi-get --json='{"ifindex": 7}'
[{'defer-hard-irqs': 0,
  'gro-flush-timeout': 0,
  'id': 345,
  'ifindex': 7,
  'irq': 527},
 {'defer-hard-irqs': 0,
  'gro-flush-timeout': 0,
  'id': 344,
  'ifindex': 7,
  'irq': 327}]

Now, set the global sysfs parameters:

$ sudo bash -c 'echo 20000 >/sys/class/net/eth4/gro_flush_timeout'
$ sudo bash -c 'echo 100 >/sys/class/net/eth4/napi_defer_hard_irqs'

Output current NAPI settings again:

$ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \
                         --dump napi-get --json='{"ifindex": 7}'
[{'defer-hard-irqs': 100,
  'gro-flush-timeout': 20000,
  'id': 345,
  'ifindex': 7,
  'irq': 527},
 {'defer-hard-irqs': 100,
  'gro-flush-timeout': 20000,
  'id': 344,
  'ifindex': 7,
  'irq': 327}]

Now, set NAPI 345 (addressed by its NAPI ID) to specific values:

$ sudo ./tools/net/ynl/cli.py \
          --spec Documentation/netlink/specs/netdev.yaml \
          --do napi-set \
          --json='{"id": 345,
                   "defer-hard-irqs": 111,
                   "gro-flush-timeout": 11111}'
None

Now output current NAPI settings again to ensure only NAPI ID 345
changed:

$ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \
                         --dump napi-get --json='{"ifindex": 7}'

[{'defer-hard-irqs': 111,
  'gro-flush-timeout': 11111,
  'id': 345,
  'ifindex': 7,
  'irq': 527},
 {'defer-hard-irqs': 100,
  'gro-flush-timeout': 20000,
  'id': 344,
  'ifindex': 7,
  'irq': 327}]

Now, increase gro-flush-timeout only:

$ sudo ./tools/net/ynl/cli.py \
       --spec Documentation/netlink/specs/netdev.yaml \
       --do napi-set --json='{"id": 345,
                              "gro-flush-timeout": 44444}'
None

Now output the current NAPI settings once more:

$ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \
                         --dump napi-get --json='{"ifindex": 7}'
[{'defer-hard-irqs': 111,
  'gro-flush-timeout': 44444,
  'id': 345,
  'ifindex': 7,
  'irq': 527},
 {'defer-hard-irqs': 100,
  'gro-flush-timeout': 20000,
  'id': 344,
  'ifindex': 7,
  'irq': 327}]

Now set NAPI ID 345 to have a gro_flush_timeout of 0:

$ sudo ./tools/net/ynl/cli.py \
       --spec Documentation/netlink/specs/netdev.yaml \
       --do napi-set --json='{"id": 345,
                              "gro-flush-timeout": 0}'
None

Check that NAPI ID 345 now has a gro-flush-timeout of 0:

$ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \
                         --dump napi-get --json='{"ifindex": 7}'

[{'defer-hard-irqs': 111,
  'gro-flush-timeout': 0,
  'id': 345,
  'ifindex': 7,
  'irq': 527},
 {'defer-hard-irqs': 100,
  'gro-flush-timeout': 20000,
  'id': 344,
  'ifindex': 7,
  'irq': 327}]

Change the queue count, ensuring that NAPI ID 345 retains its settings:

$ sudo ethtool -L eth4 combined 4

Check that the new queues have the system wide settings but that NAPI ID
345 remains unchanged:

$ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \
                         --dump napi-get --json='{"ifindex": 7}'

[{'defer-hard-irqs': 100,
  'gro-flush-timeout': 20000,
  'id': 347,
  'ifindex': 7,
  'irq': 529},
 {'defer-hard-irqs': 100,
  'gro-flush-timeout': 20000,
  'id': 346,
  'ifindex': 7,
  'irq': 528},
 {'defer-hard-irqs': 111,
  'gro-flush-timeout': 0,
  'id': 345,
  'ifindex': 7,
  'irq': 527},
 {'defer-hard-irqs': 100,
  'gro-flush-timeout': 20000,
  'id': 344,
  'ifindex': 7,
  'irq': 327}]

Now reduce the queue count below where NAPI ID 345 is indexed:

$ sudo ethtool -L eth4 combined 1

Check the output:

$ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \
                         --dump napi-get --json='{"ifindex": 7}'
[{'defer-hard-irqs': 100,
  'gro-flush-timeout': 20000,
  'id': 344,
  'ifindex': 7,
  'irq': 327}]

Re-increase the queue count to ensure NAPI ID 345 is re-assigned the same
values:

$ sudo ethtool -L eth4 combined 2

$ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \
                         --dump napi-get --json='{"ifindex": 7}'
[{'defer-hard-irqs': 111,
  'gro-flush-timeout': 0,
  'id': 345,
  'ifindex': 7,
  'irq': 527},
 {'defer-hard-irqs': 100,
  'gro-flush-timeout': 20000,
  'id': 344,
  'ifindex': 7,
  'irq': 327}]

Create new queues to ensure the sysfs globals are used for the new
NAPIs, while NAPI ID 345 remains unchanged:

$ sudo ethtool -L eth4 combined 8

$ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \
                         --dump napi-get --json='{"ifindex": 7}'
[...]
 {'defer-hard-irqs': 100,
  'gro-flush-timeout': 20000,
  'id': 346,
  'ifindex': 7,
  'irq': 528},
 {'defer-hard-irqs': 111,
  'gro-flush-timeout': 0,
  'id': 345,
  'ifindex': 7,
  'irq': 527},
 {'defer-hard-irqs': 100,
  'gro-flush-timeout': 20000,
  'id': 344,
  'ifindex': 7,
  'irq': 327}]

Last, but not least, let's write the sysfs parameters again to ensure
the settings of all NAPIs are overwritten:

$ sudo bash -c 'echo 33333 >/sys/class/net/eth4/gro_flush_timeout'
$ sudo bash -c 'echo 222 >/sys/class/net/eth4/napi_defer_hard_irqs'

Check that worked:

$ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \
                         --dump napi-get --json='{"ifindex": 7}'

[...]
 {'defer-hard-irqs': 222,
  'gro-flush-timeout': 33333,
  'id': 346,
  'ifindex': 7,
  'irq': 528},
 {'defer-hard-irqs': 222,
  'gro-flush-timeout': 33333,
  'id': 345,
  'ifindex': 7,
  'irq': 527},
 {'defer-hard-irqs': 222,
  'gro-flush-timeout': 33333,
  'id': 344,
  'ifindex': 7,
  'irq': 327}]

Thanks,
Joe

v5:
  - Converted from an RFC to a PATCH
  - Moved defer_hard_irqs above control-fields comment in napi_struct in
    patch 1
  - Moved gro_flush_timeout above control-fields comment in napi_struct in
    patch 3
  - Removed unnecessary idpf changes from patch 3
  - Removed unnecessary kdoc in patch 5 for a parameter removed in an
    earlier revision
  - Removed unnecessary NULL check in patch 5
  - Used tooling to autogenerate code for patch 6, which fixed the type and
    range of NETDEV_A_NAPI_DEFER_HARD_IRQ.

rfcv4:
  - Updated commit messages of most patches
  - Renamed netif_napi_add_storage to netif_napi_add_config in patch 5
  - Added a NULL check in netdev_set_defer_hard_irqs and
    netdev_set_gro_flush_timeout for netdev->napi_config in patch 5
  - Removed the WARN_ON_ONCE suggested in an earlier revision
    in alloc_netdev_mqs from patch 5; it triggers every time on my mlx5
    machine at boot and needlessly spams the log
  - Added a locking adjustment suggested by Stanislav (during review of
    patch 6) to protect napi_id in patch 5
  - Removed napi_hash_del from netif_napi_del in patch 5. netif_napi_del
    calls __netif_napi_del which itself calls napi_hash_del. The
    original code thus resulted in two napi_hash_del calls, which is
    incorrect.
  - Removed the napi_hash_add from netif_napi_add_weight in patch 5.
    NAPIs are added to the hash when napi_enable is called, instead.
  - Moved the napi_restore_config to the top of napi_enable in patch 5.
  - Simplified the logic in __netif_napi_del and removed napi_hash_del.
    NAPIs are removed in napi_disable.
  - Fixed merge conflicts in patch 6 so it applies cleanly

rfcv3:
  - Renamed napi_storage to napi_config
  - Reordered patches
  - Added defer_hard_irqs and gro_flush_timeout to napi_struct
  - Attempted to save and restore settings on napi_disable/napi_enable
  - Removed weight as a parameter to netif_napi_add_storage
  - Updated driver patches to no longer pass in weight

rfcv2:
  - Almost total rewrite from v1


Joe Damato (9):
  net: napi: Make napi_defer_hard_irqs per-NAPI
  netdev-genl: Dump napi_defer_hard_irqs
  net: napi: Make gro_flush_timeout per-NAPI
  netdev-genl: Dump gro_flush_timeout
  net: napi: Add napi_config
  netdev-genl: Support setting per-NAPI config values
  bnxt: Add support for persistent NAPI config
  mlx5: Add support for persistent NAPI config
  mlx4: Add support for persistent NAPI config to RX CQs

 Documentation/netlink/specs/netdev.yaml       | 25 +++++
 .../networking/net_cachelines/net_device.rst  |  5 +-
 drivers/net/ethernet/broadcom/bnxt/bnxt.c     |  3 +-
 drivers/net/ethernet/mellanox/mlx4/en_cq.c    |  3 +-
 .../net/ethernet/mellanox/mlx5/core/en_main.c |  2 +-
 include/linux/netdevice.h                     | 42 +++++++-
 include/uapi/linux/netdev.h                   |  3 +
 net/core/dev.c                                | 95 ++++++++++++++++---
 net/core/dev.h                                | 88 +++++++++++++++++
 net/core/net-sysfs.c                          |  4 +-
 net/core/netdev-genl-gen.c                    | 18 ++++
 net/core/netdev-genl-gen.h                    |  1 +
 net/core/netdev-genl.c                        | 57 +++++++++++
 tools/include/uapi/linux/netdev.h             |  3 +
 14 files changed, 322 insertions(+), 27 deletions(-)

-- 
2.34.1



* [net-next v5 8/9] mlx5: Add support for persistent NAPI config
  2024-10-09  0:54 [net-next v5 0/9] Add support for per-NAPI config via netlink Joe Damato
@ 2024-10-09  0:55 ` Joe Damato
  2024-10-10  4:26   ` Eric Dumazet
  2024-10-09  0:55 ` [net-next v5 9/9] mlx4: Add support for persistent NAPI config to RX CQs Joe Damato
  2024-10-10  3:17 ` [net-next v5 0/9] Add support for per-NAPI config via netlink Jakub Kicinski
  2 siblings, 1 reply; 7+ messages in thread
From: Joe Damato @ 2024-10-09  0:55 UTC (permalink / raw)
  To: netdev
  Cc: mkarsten, skhawaja, sdf, bjorn, amritha.nambiar,
	sridhar.samudrala, willemdebruijn.kernel, Joe Damato,
	Saeed Mahameed, Tariq Toukan, Leon Romanovsky, David S. Miller,
	Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	open list:MELLANOX MLX5 core VPI driver, open list

Use netif_napi_add_config to assign persistent per-NAPI config when
initializing NAPIs.

Signed-off-by: Joe Damato <jdamato@fastly.com>
---
 drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index a5659c0c4236..09ab7ac07c29 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -2697,7 +2697,7 @@ static int mlx5e_open_channel(struct mlx5e_priv *priv, int ix,
 	c->aff_mask = irq_get_effective_affinity_mask(irq);
 	c->lag_port = mlx5e_enumerate_lag_port(mdev, ix);
 
-	netif_napi_add(netdev, &c->napi, mlx5e_napi_poll);
+	netif_napi_add_config(netdev, &c->napi, mlx5e_napi_poll, ix);
 	netif_napi_set_irq(&c->napi, irq);
 
 	err = mlx5e_open_queues(c, params, cparam);
-- 
2.34.1



* [net-next v5 9/9] mlx4: Add support for persistent NAPI config to RX CQs
  2024-10-09  0:54 [net-next v5 0/9] Add support for per-NAPI config via netlink Joe Damato
  2024-10-09  0:55 ` [net-next v5 8/9] mlx5: Add support for persistent NAPI config Joe Damato
@ 2024-10-09  0:55 ` Joe Damato
  2024-10-10  4:28   ` Eric Dumazet
  2024-10-10  3:17 ` [net-next v5 0/9] Add support for per-NAPI config via netlink Jakub Kicinski
  2 siblings, 1 reply; 7+ messages in thread
From: Joe Damato @ 2024-10-09  0:55 UTC (permalink / raw)
  To: netdev
  Cc: mkarsten, skhawaja, sdf, bjorn, amritha.nambiar,
	sridhar.samudrala, willemdebruijn.kernel, Joe Damato,
	Tariq Toukan, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, open list:MELLANOX MLX4 core VPI driver, open list

Use netif_napi_add_config to assign persistent per-NAPI config when
initializing RX CQ NAPIs.

Presently, struct napi_config only has two fields, both used for RX,
so there is no need to support TX CQs yet.

Signed-off-by: Joe Damato <jdamato@fastly.com>
---
 drivers/net/ethernet/mellanox/mlx4/en_cq.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/mellanox/mlx4/en_cq.c b/drivers/net/ethernet/mellanox/mlx4/en_cq.c
index 461cc2c79c71..0e92956e84cf 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_cq.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_cq.c
@@ -156,7 +156,8 @@ int mlx4_en_activate_cq(struct mlx4_en_priv *priv, struct mlx4_en_cq *cq,
 		break;
 	case RX:
 		cq->mcq.comp = mlx4_en_rx_irq;
-		netif_napi_add(cq->dev, &cq->napi, mlx4_en_poll_rx_cq);
+		netif_napi_add_config(cq->dev, &cq->napi, mlx4_en_poll_rx_cq,
+				      cq_idx);
 		netif_napi_set_irq(&cq->napi, irq);
 		napi_enable(&cq->napi);
 		netif_queue_set_napi(cq->dev, cq_idx, NETDEV_QUEUE_TYPE_RX, &cq->napi);
-- 
2.34.1



* Re: [net-next v5 0/9] Add support for per-NAPI config via netlink
  2024-10-09  0:54 [net-next v5 0/9] Add support for per-NAPI config via netlink Joe Damato
  2024-10-09  0:55 ` [net-next v5 8/9] mlx5: Add support for persistent NAPI config Joe Damato
  2024-10-09  0:55 ` [net-next v5 9/9] mlx4: Add support for persistent NAPI config to RX CQs Joe Damato
@ 2024-10-10  3:17 ` Jakub Kicinski
  2 siblings, 0 replies; 7+ messages in thread
From: Jakub Kicinski @ 2024-10-10  3:17 UTC (permalink / raw)
  To: Joe Damato
  Cc: netdev, mkarsten, skhawaja, sdf, bjorn, amritha.nambiar,
	sridhar.samudrala, willemdebruijn.kernel, Alexander Lobakin,
	Breno Leitao, Daniel Jurgens, David Ahern, David S. Miller,
	Donald Hunter, Eric Dumazet, Jesper Dangaard Brouer, Jiri Pirko,
	Johannes Berg, Jonathan Corbet, Kory Maincent, Leon Romanovsky,
	open list:DOCUMENTATION, open list,
	open list:MELLANOX MLX4 core VPI driver, Lorenzo Bianconi,
	Michael Chan, Mina Almasry, Paolo Abeni, Saeed Mahameed,
	Sebastian Andrzej Siewior, Tariq Toukan, Xuan Zhuo

On Wed,  9 Oct 2024 00:54:54 +0000 Joe Damato wrote:
> Welcome to v5, the first non-RFC version of this code! See the changelog
> below for more details on changes between revisions.

I'm probably going to end up applying this but just in case:

Reviewed-by: Jakub Kicinski <kuba@kernel.org>

Very neat work!


* Re: [net-next v5 8/9] mlx5: Add support for persistent NAPI config
  2024-10-09  0:55 ` [net-next v5 8/9] mlx5: Add support for persistent NAPI config Joe Damato
@ 2024-10-10  4:26   ` Eric Dumazet
  0 siblings, 0 replies; 7+ messages in thread
From: Eric Dumazet @ 2024-10-10  4:26 UTC (permalink / raw)
  To: Joe Damato
  Cc: netdev, mkarsten, skhawaja, sdf, bjorn, amritha.nambiar,
	sridhar.samudrala, willemdebruijn.kernel, Saeed Mahameed,
	Tariq Toukan, Leon Romanovsky, David S. Miller, Jakub Kicinski,
	Paolo Abeni, open list:MELLANOX MLX5 core VPI driver, open list

On Wed, Oct 9, 2024 at 2:56 AM Joe Damato <jdamato@fastly.com> wrote:
>
> Use netif_napi_add_config to assign persistent per-NAPI config when
> initializing NAPIs.
>
> Signed-off-by: Joe Damato <jdamato@fastly.com>
> ---

Reviewed-by: Eric Dumazet <edumazet@google.com>


* Re: [net-next v5 9/9] mlx4: Add support for persistent NAPI config to RX CQs
  2024-10-09  0:55 ` [net-next v5 9/9] mlx4: Add support for persistent NAPI config to RX CQs Joe Damato
@ 2024-10-10  4:28   ` Eric Dumazet
  2024-10-10 16:07     ` Joe Damato
  0 siblings, 1 reply; 7+ messages in thread
From: Eric Dumazet @ 2024-10-10  4:28 UTC (permalink / raw)
  To: Joe Damato
  Cc: netdev, mkarsten, skhawaja, sdf, bjorn, amritha.nambiar,
	sridhar.samudrala, willemdebruijn.kernel, Tariq Toukan,
	David S. Miller, Jakub Kicinski, Paolo Abeni,
	open list:MELLANOX MLX4 core VPI driver, open list

On Wed, Oct 9, 2024 at 2:56 AM Joe Damato <jdamato@fastly.com> wrote:
>
> Use netif_napi_add_config to assign persistent per-NAPI config when
> initializing RX CQ NAPIs.
>
> Presently, struct napi_config only has two fields, both used for RX,
> so there is no need to support TX CQs yet.
>
> Signed-off-by: Joe Damato <jdamato@fastly.com>
> ---

nit: technically, the napi_defer_hard_irqs could benefit TX completions as well.

Reviewed-by: Eric Dumazet <edumazet@google.com>


* Re: [net-next v5 9/9] mlx4: Add support for persistent NAPI config to RX CQs
  2024-10-10  4:28   ` Eric Dumazet
@ 2024-10-10 16:07     ` Joe Damato
  0 siblings, 0 replies; 7+ messages in thread
From: Joe Damato @ 2024-10-10 16:07 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: netdev, mkarsten, skhawaja, sdf, bjorn, amritha.nambiar,
	sridhar.samudrala, willemdebruijn.kernel, Tariq Toukan,
	David S. Miller, Jakub Kicinski, Paolo Abeni,
	open list:MELLANOX MLX4 core VPI driver, open list

On Thu, Oct 10, 2024 at 06:28:59AM +0200, Eric Dumazet wrote:
> On Wed, Oct 9, 2024 at 2:56 AM Joe Damato <jdamato@fastly.com> wrote:
> >
> > Use netif_napi_add_config to assign persistent per-NAPI config when
> > initializing RX CQ NAPIs.
> >
> > Presently, struct napi_config only has two fields, both used for RX,
> > so there is no need to support TX CQs yet.
> >
> > Signed-off-by: Joe Damato <jdamato@fastly.com>
> > ---
> 
> nit: technically, the napi_defer_hard_irqs could benefit TX completions as well.

That's true - I think I missed updating this commit message when I
realized it. I can correct the commit message while retaining your
Reviewed-by for the v6.

Note: This adds to the confusion I have around allocating
max(rxqs, txqs) config structs; it would seem we'd be missing config
structs for some queues if the system is configured to use the maximum
number of each (e.g., 32 RX and 32 TX queues on separate NAPIs would
mean 64 NAPIs but only 32 config slots)? Perhaps that configuration is
uncommon enough that it doesn't matter?
 
> Reviewed-by: Eric Dumazet <edumazet@google.com>

