* [RFC net-next v3 0/4] virtio_net: Link queues to NAPIs
@ 2025-01-21 19:10 Joe Damato
  2025-01-21 19:10 ` [RFC net-next v3 1/4] net: protect queue -> napi linking with netdev_lock() Joe Damato
                   ` (3 more replies)
  0 siblings, 4 replies; 24+ messages in thread
From: Joe Damato @ 2025-01-21 19:10 UTC (permalink / raw)
  To: netdev
  Cc: gerhard, jasowang, leiyang, xuanzhuo, mkarsten, Joe Damato,
	Alexander Lobakin, Alexei Starovoitov, Andrew Lunn,
	open list:XDP (eXpress Data Path):Keyword:(?:\b|_)xdp(?:\b|_),
	Daniel Borkmann, David S. Miller, Eric Dumazet,
	Eugenio Pérez, Jakub Kicinski, Jesper Dangaard Brouer,
	John Fastabend, Kuniyuki Iwashima, open list, Lorenzo Bianconi,
	Michael S. Tsirkin, Paolo Abeni, Sebastian Andrzej Siewior,
	Simon Horman, open list:VIRTIO CORE AND NET DRIVERS

Greetings:

Welcome to RFC v3; it's an RFC because net-next is closed. See the
changelog below.

Recently [1], Jakub mentioned that there are a few drivers that do
not yet map queues to NAPIs.

While I don't have any of the other hardware mentioned, I do happen to
have a virtio_net lying around ;)

I've attempted to link queues to NAPIs using the new locking Jakub
introduced, which avoids taking RTNL.

Note: It seems virtio_net uses TX-only NAPIs which do not have NAPI IDs.
As such, I've left the TX NAPIs unset (as opposed to setting them to 0).

As per the discussion on the v2 [2], all RX queues now have their NAPIs
linked.

See the commit message of patch 3 for an example of how to get the NAPI
to queue mapping information.

See the commit message of patch 4 for an example of how NAPI IDs are
persistent despite queue count changes.
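
Once queues are linked to NAPIs and the NAPI IDs are persistent,
per-NAPI settings can be driven from user space by NAPI ID. As a rough
sketch (assuming the per-NAPI configuration attributes already exposed
by the netdev netlink family, e.g. defer-hard-irqs and
gro-flush-timeout), something like:

$ ./tools/net/ynl/pyynl/cli.py \
    --spec Documentation/netlink/specs/netdev.yaml \
    --do napi-set \
    --json='{"id": 8193, "defer-hard-irqs": 10, "gro-flush-timeout": 20000}'

should keep applying to the same queue across queue count changes,
because the NAPI ID is no longer renumbered.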

Thanks,
Joe

[1]: https://lore.kernel.org/netdev/20250109084301.2445a3e3@kernel.org/
[2]: https://lore.kernel.org/netdev/f8fe5618-af94-4f5b-8dbc-e8cae744aedf@engleder-embedded.com/

v3:
  - patch 3:
    - Removed the XDP checks completely; as Gerhard Engleder pointed
      out, they are likely not necessary.

  - patch 4:
    - Added Xuan Zhuo's Reviewed-by.

v2:
  - patch 1:
    - New in v2, from Jakub.

  - patch 2:
    - Previously patch 1, unchanged from v1.
    - Added Gerhard Engleder's Reviewed-by.
    - Added Lei Yang's Tested-by.

  - patch 3:
    - Introduced virtnet_napi_disable to eliminate duplicated code
      in virtnet_xdp_set, virtnet_rx_pause, virtnet_disable_queue_pair,
      and refill_work, as suggested by Jason Wang.
    - As a result of the above refactor, dropped Reviewed-by and
      Tested-by from patch 3.

  - patch 4:
    - New in v2. Adds persistent NAPI configuration. See commit message
      for more details.

Jakub Kicinski (1):
  net: protect queue -> napi linking with netdev_lock()

Joe Damato (3):
  virtio_net: Prepare for NAPI to queue mapping
  virtio_net: Map NAPIs to queues
  virtio_net: Use persistent NAPI config

 drivers/net/virtio_net.c      | 38 +++++++++++++++++++++++++++--------
 include/linux/netdevice.h     |  9 +++++++--
 include/net/netdev_rx_queue.h |  2 +-
 net/core/dev.c                | 16 ++++++++++++---
 4 files changed, 51 insertions(+), 14 deletions(-)


base-commit: cf33d96f50903214226b379b3f10d1f262dae018
-- 
2.25.1



* [RFC net-next v3 1/4] net: protect queue -> napi linking with netdev_lock()
  2025-01-21 19:10 [RFC net-next v3 0/4] virtio_net: Link queues to NAPIs Joe Damato
@ 2025-01-21 19:10 ` Joe Damato
  2025-01-27 21:37   ` Jakub Kicinski
  2025-01-21 19:10 ` [RFC net-next v3 2/4] virtio_net: Prepare for NAPI to queue mapping Joe Damato
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 24+ messages in thread
From: Joe Damato @ 2025-01-21 19:10 UTC (permalink / raw)
  To: netdev
  Cc: gerhard, jasowang, leiyang, xuanzhuo, mkarsten, Jakub Kicinski,
	Joe Damato, Andrew Lunn, David S. Miller, Eric Dumazet,
	Paolo Abeni, Simon Horman, Kuniyuki Iwashima,
	Sebastian Andrzej Siewior, Lorenzo Bianconi, Alexander Lobakin,
	open list

From: Jakub Kicinski <kuba@kernel.org>

netdev netlink is the only reader of netdev_{,rx_}queue->napi,
and it already holds netdev->lock. Switch protection of the
writes to netdev->lock as well.

Add netif_queue_set_napi_locked() for API completeness,
but the expectation is that most current drivers won't have
to worry about locking any more. Today they jump thru hoops
to take rtnl_lock.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Joe Damato <jdamato@fastly.com>
---
 v2:
   - Added in v2 from Jakub.

 include/linux/netdevice.h     |  9 +++++++--
 include/net/netdev_rx_queue.h |  2 +-
 net/core/dev.c                | 16 +++++++++++++---
 3 files changed, 21 insertions(+), 6 deletions(-)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 8da4c61f97b9..4709d16bada5 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -691,7 +691,7 @@ struct netdev_queue {
  * slow- / control-path part
  */
 	/* NAPI instance for the queue
-	 * Readers and writers must hold RTNL
+	 * Readers and writers must hold netdev->lock
 	 */
 	struct napi_struct	*napi;
 
@@ -2467,7 +2467,8 @@ struct net_device {
 	 * Partially protects (writers must hold both @lock and rtnl_lock):
 	 *	@up
 	 *
-	 * Also protects some fields in struct napi_struct.
+	 * Also protects some fields in:
+	 *	struct napi_struct, struct netdev_queue, struct netdev_rx_queue
 	 *
 	 * Ordering: take after rtnl_lock.
 	 */
@@ -2694,6 +2695,10 @@ static inline void *netdev_priv(const struct net_device *dev)
 void netif_queue_set_napi(struct net_device *dev, unsigned int queue_index,
 			  enum netdev_queue_type type,
 			  struct napi_struct *napi);
+void netif_queue_set_napi_locked(struct net_device *dev,
+				 unsigned int queue_index,
+				 enum netdev_queue_type type,
+				 struct napi_struct *napi);
 
 static inline void netdev_lock(struct net_device *dev)
 {
diff --git a/include/net/netdev_rx_queue.h b/include/net/netdev_rx_queue.h
index 596836abf7bf..9fcac0b43b71 100644
--- a/include/net/netdev_rx_queue.h
+++ b/include/net/netdev_rx_queue.h
@@ -23,7 +23,7 @@ struct netdev_rx_queue {
 	struct xsk_buff_pool            *pool;
 #endif
 	/* NAPI instance for the queue
-	 * Readers and writers must hold RTNL
+	 * Readers and writers must hold netdev->lock
 	 */
 	struct napi_struct		*napi;
 	struct pp_memory_provider_params mp_params;
diff --git a/net/core/dev.c b/net/core/dev.c
index afa2282f2604..ab361fd9efd9 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -6842,14 +6842,24 @@ EXPORT_SYMBOL(dev_set_threaded);
  */
 void netif_queue_set_napi(struct net_device *dev, unsigned int queue_index,
 			  enum netdev_queue_type type, struct napi_struct *napi)
+{
+	netdev_lock(dev);
+	netif_queue_set_napi_locked(dev, queue_index, type, napi);
+	netdev_unlock(dev);
+}
+EXPORT_SYMBOL(netif_queue_set_napi);
+
+void netif_queue_set_napi_locked(struct net_device *dev,
+				 unsigned int queue_index,
+				 enum netdev_queue_type type,
+				 struct napi_struct *napi)
 {
 	struct netdev_rx_queue *rxq;
 	struct netdev_queue *txq;
 
 	if (WARN_ON_ONCE(napi && !napi->dev))
 		return;
-	if (dev->reg_state >= NETREG_REGISTERED)
-		ASSERT_RTNL();
+	netdev_assert_locked_or_invisible(dev);
 
 	switch (type) {
 	case NETDEV_QUEUE_TYPE_RX:
@@ -6864,7 +6874,7 @@ void netif_queue_set_napi(struct net_device *dev, unsigned int queue_index,
 		return;
 	}
 }
-EXPORT_SYMBOL(netif_queue_set_napi);
+EXPORT_SYMBOL(netif_queue_set_napi_locked);
 
 static void napi_restore_config(struct napi_struct *n)
 {
-- 
2.25.1



* [RFC net-next v3 2/4] virtio_net: Prepare for NAPI to queue mapping
  2025-01-21 19:10 [RFC net-next v3 0/4] virtio_net: Link queues to NAPIs Joe Damato
  2025-01-21 19:10 ` [RFC net-next v3 1/4] net: protect queue -> napi linking with netdev_lock() Joe Damato
@ 2025-01-21 19:10 ` Joe Damato
  2025-01-22  6:12   ` Jason Wang
  2025-01-21 19:10 ` [RFC net-next v3 3/4] virtio_net: Map NAPIs to queues Joe Damato
  2025-01-21 19:10 ` [RFC net-next v3 4/4] virtio_net: Use persistent NAPI config Joe Damato
  3 siblings, 1 reply; 24+ messages in thread
From: Joe Damato @ 2025-01-21 19:10 UTC (permalink / raw)
  To: netdev
  Cc: gerhard, jasowang, leiyang, xuanzhuo, mkarsten, Joe Damato,
	Michael S. Tsirkin, Eugenio Pérez, Andrew Lunn,
	David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	open list:VIRTIO CORE AND NET DRIVERS, open list

Slight refactor to prepare the code for NAPI to queue mapping. No
functional changes.

Signed-off-by: Joe Damato <jdamato@fastly.com>
Reviewed-by: Gerhard Engleder <gerhard@engleder-embedded.com>
Tested-by: Lei Yang <leiyang@redhat.com>
---
 v2:
   - Previously patch 1 in the v1.
   - Added Reviewed-by and Tested-by tags to commit message. No
     functional changes.

 drivers/net/virtio_net.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 7646ddd9bef7..cff18c66b54a 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -2789,7 +2789,8 @@ static void skb_recv_done(struct virtqueue *rvq)
 	virtqueue_napi_schedule(&rq->napi, rvq);
 }
 
-static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
+static void virtnet_napi_do_enable(struct virtqueue *vq,
+				   struct napi_struct *napi)
 {
 	napi_enable(napi);
 
@@ -2802,6 +2803,11 @@ static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
 	local_bh_enable();
 }
 
+static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
+{
+	virtnet_napi_do_enable(vq, napi);
+}
+
 static void virtnet_napi_tx_enable(struct virtnet_info *vi,
 				   struct virtqueue *vq,
 				   struct napi_struct *napi)
@@ -2817,7 +2823,7 @@ static void virtnet_napi_tx_enable(struct virtnet_info *vi,
 		return;
 	}
 
-	return virtnet_napi_enable(vq, napi);
+	virtnet_napi_do_enable(vq, napi);
 }
 
 static void virtnet_napi_tx_disable(struct napi_struct *napi)
-- 
2.25.1



* [RFC net-next v3 3/4] virtio_net: Map NAPIs to queues
  2025-01-21 19:10 [RFC net-next v3 0/4] virtio_net: Link queues to NAPIs Joe Damato
  2025-01-21 19:10 ` [RFC net-next v3 1/4] net: protect queue -> napi linking with netdev_lock() Joe Damato
  2025-01-21 19:10 ` [RFC net-next v3 2/4] virtio_net: Prepare for NAPI to queue mapping Joe Damato
@ 2025-01-21 19:10 ` Joe Damato
  2025-01-21 20:13   ` Gerhard Engleder
  2025-01-22  6:13   ` Jason Wang
  2025-01-21 19:10 ` [RFC net-next v3 4/4] virtio_net: Use persistent NAPI config Joe Damato
  3 siblings, 2 replies; 24+ messages in thread
From: Joe Damato @ 2025-01-21 19:10 UTC (permalink / raw)
  To: netdev
  Cc: gerhard, jasowang, leiyang, xuanzhuo, mkarsten, Joe Damato,
	Michael S. Tsirkin, Eugenio Pérez, Andrew Lunn,
	David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	open list:VIRTIO CORE AND NET DRIVERS, open list

Use netif_queue_set_napi to map NAPIs to queue IDs so that the mapping
can be accessed by user apps.

$ ethtool -i ens4 | grep driver
driver: virtio_net

$ sudo ethtool -L ens4 combined 4

$ ./tools/net/ynl/pyynl/cli.py \
       --spec Documentation/netlink/specs/netdev.yaml \
       --dump queue-get --json='{"ifindex": 2}'
[{'id': 0, 'ifindex': 2, 'napi-id': 8289, 'type': 'rx'},
 {'id': 1, 'ifindex': 2, 'napi-id': 8290, 'type': 'rx'},
 {'id': 2, 'ifindex': 2, 'napi-id': 8291, 'type': 'rx'},
 {'id': 3, 'ifindex': 2, 'napi-id': 8292, 'type': 'rx'},
 {'id': 0, 'ifindex': 2, 'type': 'tx'},
 {'id': 1, 'ifindex': 2, 'type': 'tx'},
 {'id': 2, 'ifindex': 2, 'type': 'tx'},
 {'id': 3, 'ifindex': 2, 'type': 'tx'}]

Note that virtio_net has TX-only NAPIs which do not have NAPI IDs, so
the lack of 'napi-id' in the above output is expected.

Signed-off-by: Joe Damato <jdamato@fastly.com>
---
 rfcv3:
   - Eliminated XDP checks; NAPIs will always be mapped to RX queues, as
     Gerhard Engleder suggested.

 v2:
   - Eliminate RTNL code paths using the API Jakub introduced in patch 1
     of this v2.
   - Added virtnet_napi_disable to reduce code duplication as
     suggested by Jason Wang.

 drivers/net/virtio_net.c | 23 +++++++++++++++++++----
 1 file changed, 19 insertions(+), 4 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index cff18c66b54a..c120cb2106c0 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -2805,7 +2805,11 @@ static void virtnet_napi_do_enable(struct virtqueue *vq,
 
 static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
 {
+	struct virtnet_info *vi = vq->vdev->priv;
+
 	virtnet_napi_do_enable(vq, napi);
+
+	netif_queue_set_napi(vi->dev, vq2rxq(vq), NETDEV_QUEUE_TYPE_RX, napi);
 }
 
 static void virtnet_napi_tx_enable(struct virtnet_info *vi,
@@ -2826,6 +2830,16 @@ static void virtnet_napi_tx_enable(struct virtnet_info *vi,
 	virtnet_napi_do_enable(vq, napi);
 }
 
+static void virtnet_napi_disable(struct virtqueue *vq,
+				 struct napi_struct *napi)
+{
+	struct virtnet_info *vi = vq->vdev->priv;
+
+	netif_queue_set_napi(vi->dev, vq2rxq(vq), NETDEV_QUEUE_TYPE_RX, NULL);
+
+	napi_disable(napi);
+}
+
 static void virtnet_napi_tx_disable(struct napi_struct *napi)
 {
 	if (napi->weight)
@@ -2842,7 +2856,8 @@ static void refill_work(struct work_struct *work)
 	for (i = 0; i < vi->curr_queue_pairs; i++) {
 		struct receive_queue *rq = &vi->rq[i];
 
-		napi_disable(&rq->napi);
+		virtnet_napi_disable(rq->vq, &rq->napi);
+
 		still_empty = !try_fill_recv(vi, rq, GFP_KERNEL);
 		virtnet_napi_enable(rq->vq, &rq->napi);
 
@@ -3042,7 +3057,7 @@ static int virtnet_poll(struct napi_struct *napi, int budget)
 static void virtnet_disable_queue_pair(struct virtnet_info *vi, int qp_index)
 {
 	virtnet_napi_tx_disable(&vi->sq[qp_index].napi);
-	napi_disable(&vi->rq[qp_index].napi);
+	virtnet_napi_disable(vi->rq[qp_index].vq, &vi->rq[qp_index].napi);
 	xdp_rxq_info_unreg(&vi->rq[qp_index].xdp_rxq);
 }
 
@@ -3313,7 +3328,7 @@ static void virtnet_rx_pause(struct virtnet_info *vi, struct receive_queue *rq)
 	bool running = netif_running(vi->dev);
 
 	if (running) {
-		napi_disable(&rq->napi);
+		virtnet_napi_disable(rq->vq, &rq->napi);
 		virtnet_cancel_dim(vi, &rq->dim);
 	}
 }
@@ -5932,7 +5947,7 @@ static int virtnet_xdp_set(struct net_device *dev, struct bpf_prog *prog,
 	/* Make sure NAPI is not using any XDP TX queues for RX. */
 	if (netif_running(dev)) {
 		for (i = 0; i < vi->max_queue_pairs; i++) {
-			napi_disable(&vi->rq[i].napi);
+			virtnet_napi_disable(vi->rq[i].vq, &vi->rq[i].napi);
 			virtnet_napi_tx_disable(&vi->sq[i].napi);
 		}
 	}
-- 
2.25.1



* [RFC net-next v3 4/4] virtio_net: Use persistent NAPI config
  2025-01-21 19:10 [RFC net-next v3 0/4] virtio_net: Link queues to NAPIs Joe Damato
                   ` (2 preceding siblings ...)
  2025-01-21 19:10 ` [RFC net-next v3 3/4] virtio_net: Map NAPIs to queues Joe Damato
@ 2025-01-21 19:10 ` Joe Damato
  2025-01-21 20:18   ` Gerhard Engleder
  2025-01-22  6:13   ` Jason Wang
  3 siblings, 2 replies; 24+ messages in thread
From: Joe Damato @ 2025-01-21 19:10 UTC (permalink / raw)
  To: netdev
  Cc: gerhard, jasowang, leiyang, xuanzhuo, mkarsten, Joe Damato,
	Michael S. Tsirkin, Eugenio Pérez, Andrew Lunn,
	David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	open list:VIRTIO CORE AND NET DRIVERS, open list

Use persistent NAPI config so that NAPI IDs are not renumbered as queue
counts change.

$ sudo ethtool -l ens4  | tail -5 | egrep -i '(current|combined)'
Current hardware settings:
Combined:       4

$ ./tools/net/ynl/pyynl/cli.py \
    --spec Documentation/netlink/specs/netdev.yaml \
    --dump queue-get --json='{"ifindex": 2}'
[{'id': 0, 'ifindex': 2, 'napi-id': 8193, 'type': 'rx'},
 {'id': 1, 'ifindex': 2, 'napi-id': 8194, 'type': 'rx'},
 {'id': 2, 'ifindex': 2, 'napi-id': 8195, 'type': 'rx'},
 {'id': 3, 'ifindex': 2, 'napi-id': 8196, 'type': 'rx'},
 {'id': 0, 'ifindex': 2, 'type': 'tx'},
 {'id': 1, 'ifindex': 2, 'type': 'tx'},
 {'id': 2, 'ifindex': 2, 'type': 'tx'},
 {'id': 3, 'ifindex': 2, 'type': 'tx'}]

Now adjust the queue count, note that the NAPI IDs are not renumbered:

$ sudo ethtool -L ens4 combined 1
$ ./tools/net/ynl/pyynl/cli.py \
    --spec Documentation/netlink/specs/netdev.yaml \
    --dump queue-get --json='{"ifindex": 2}'
[{'id': 0, 'ifindex': 2, 'napi-id': 8193, 'type': 'rx'},
 {'id': 0, 'ifindex': 2, 'type': 'tx'}]

$ sudo ethtool -L ens4 combined 8
$ ./tools/net/ynl/pyynl/cli.py \
    --spec Documentation/netlink/specs/netdev.yaml \
    --dump queue-get --json='{"ifindex": 2}'
[{'id': 0, 'ifindex': 2, 'napi-id': 8193, 'type': 'rx'},
 {'id': 1, 'ifindex': 2, 'napi-id': 8194, 'type': 'rx'},
 {'id': 2, 'ifindex': 2, 'napi-id': 8195, 'type': 'rx'},
 {'id': 3, 'ifindex': 2, 'napi-id': 8196, 'type': 'rx'},
 {'id': 4, 'ifindex': 2, 'napi-id': 8197, 'type': 'rx'},
 {'id': 5, 'ifindex': 2, 'napi-id': 8198, 'type': 'rx'},
 {'id': 6, 'ifindex': 2, 'napi-id': 8199, 'type': 'rx'},
 {'id': 7, 'ifindex': 2, 'napi-id': 8200, 'type': 'rx'},
 [...]

Signed-off-by: Joe Damato <jdamato@fastly.com>
Reviewed-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 rfcv3:
   - Added Xuan Zhuo's Reviewed-by tag. No functional changes.

 v2:
   - Eliminate RTNL code paths using the API Jakub introduced in patch 1
     of this v2.
   - Added virtnet_napi_disable to reduce code duplication as
     suggested by Jason Wang.

 drivers/net/virtio_net.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index c120cb2106c0..e0752a856adf 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -6411,8 +6411,9 @@ static int virtnet_alloc_queues(struct virtnet_info *vi)
 	INIT_DELAYED_WORK(&vi->refill, refill_work);
 	for (i = 0; i < vi->max_queue_pairs; i++) {
 		vi->rq[i].pages = NULL;
-		netif_napi_add_weight(vi->dev, &vi->rq[i].napi, virtnet_poll,
-				      napi_weight);
+		netif_napi_add_config(vi->dev, &vi->rq[i].napi, virtnet_poll,
+				      i);
+		vi->rq[i].napi.weight = napi_weight;
 		netif_napi_add_tx_weight(vi->dev, &vi->sq[i].napi,
 					 virtnet_poll_tx,
 					 napi_tx ? napi_weight : 0);
-- 
2.25.1



* Re: [RFC net-next v3 3/4] virtio_net: Map NAPIs to queues
  2025-01-21 19:10 ` [RFC net-next v3 3/4] virtio_net: Map NAPIs to queues Joe Damato
@ 2025-01-21 20:13   ` Gerhard Engleder
  2025-01-22  6:13   ` Jason Wang
  1 sibling, 0 replies; 24+ messages in thread
From: Gerhard Engleder @ 2025-01-21 20:13 UTC (permalink / raw)
  To: Joe Damato, netdev
  Cc: jasowang, leiyang, xuanzhuo, mkarsten, Michael S. Tsirkin,
	Eugenio Pérez, Andrew Lunn, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni,
	open list:VIRTIO CORE AND NET DRIVERS, open list

On 21.01.25 20:10, Joe Damato wrote:
> Use netif_queue_set_napi to map NAPIs to queue IDs so that the mapping
> can be accessed by user apps.
> 
> $ ethtool -i ens4 | grep driver
> driver: virtio_net
> 
> $ sudo ethtool -L ens4 combined 4
> 
> $ ./tools/net/ynl/pyynl/cli.py \
>         --spec Documentation/netlink/specs/netdev.yaml \
>         --dump queue-get --json='{"ifindex": 2}'
> [{'id': 0, 'ifindex': 2, 'napi-id': 8289, 'type': 'rx'},
>   {'id': 1, 'ifindex': 2, 'napi-id': 8290, 'type': 'rx'},
>   {'id': 2, 'ifindex': 2, 'napi-id': 8291, 'type': 'rx'},
>   {'id': 3, 'ifindex': 2, 'napi-id': 8292, 'type': 'rx'},
>   {'id': 0, 'ifindex': 2, 'type': 'tx'},
>   {'id': 1, 'ifindex': 2, 'type': 'tx'},
>   {'id': 2, 'ifindex': 2, 'type': 'tx'},
>   {'id': 3, 'ifindex': 2, 'type': 'tx'}]
> 
> Note that virtio_net has TX-only NAPIs which do not have NAPI IDs, so
> the lack of 'napi-id' in the above output is expected.
> 
> Signed-off-by: Joe Damato <jdamato@fastly.com>
> ---
>   rfcv3:
>     - Eliminated XDP checks; NAPIs will always be mapped to RX queues, as
>       Gerhard Engleder suggested.
> 
>   v2:
>     - Eliminate RTNL code paths using the API Jakub introduced in patch 1
>       of this v2.
>     - Added virtnet_napi_disable to reduce code duplication as
>       suggested by Jason Wang.
> 
>   drivers/net/virtio_net.c | 23 +++++++++++++++++++----
>   1 file changed, 19 insertions(+), 4 deletions(-)

Looks good to me. I hope I didn't get you on the wrong track!

Reviewed-by: Gerhard Engleder <gerhard@engleder-embedded.com>


* Re: [RFC net-next v3 4/4] virtio_net: Use persistent NAPI config
  2025-01-21 19:10 ` [RFC net-next v3 4/4] virtio_net: Use persistent NAPI config Joe Damato
@ 2025-01-21 20:18   ` Gerhard Engleder
  2025-01-22  6:13   ` Jason Wang
  1 sibling, 0 replies; 24+ messages in thread
From: Gerhard Engleder @ 2025-01-21 20:18 UTC (permalink / raw)
  To: Joe Damato, netdev
  Cc: jasowang, leiyang, xuanzhuo, mkarsten, Michael S. Tsirkin,
	Eugenio Pérez, Andrew Lunn, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni,
	open list:VIRTIO CORE AND NET DRIVERS, open list

On 21.01.25 20:10, Joe Damato wrote:
> Use persistent NAPI config so that NAPI IDs are not renumbered as queue
> counts change.
> 
> $ sudo ethtool -l ens4  | tail -5 | egrep -i '(current|combined)'
> Current hardware settings:
> Combined:       4
> 
> $ ./tools/net/ynl/pyynl/cli.py \
>      --spec Documentation/netlink/specs/netdev.yaml \
>      --dump queue-get --json='{"ifindex": 2}'
> [{'id': 0, 'ifindex': 2, 'napi-id': 8193, 'type': 'rx'},
>   {'id': 1, 'ifindex': 2, 'napi-id': 8194, 'type': 'rx'},
>   {'id': 2, 'ifindex': 2, 'napi-id': 8195, 'type': 'rx'},
>   {'id': 3, 'ifindex': 2, 'napi-id': 8196, 'type': 'rx'},
>   {'id': 0, 'ifindex': 2, 'type': 'tx'},
>   {'id': 1, 'ifindex': 2, 'type': 'tx'},
>   {'id': 2, 'ifindex': 2, 'type': 'tx'},
>   {'id': 3, 'ifindex': 2, 'type': 'tx'}]
> 
> Now adjust the queue count, note that the NAPI IDs are not renumbered:
> 
> $ sudo ethtool -L ens4 combined 1
> $ ./tools/net/ynl/pyynl/cli.py \
>      --spec Documentation/netlink/specs/netdev.yaml \
>      --dump queue-get --json='{"ifindex": 2}'
> [{'id': 0, 'ifindex': 2, 'napi-id': 8193, 'type': 'rx'},
>   {'id': 0, 'ifindex': 2, 'type': 'tx'}]
> 
> $ sudo ethtool -L ens4 combined 8
> $ ./tools/net/ynl/pyynl/cli.py \
>      --spec Documentation/netlink/specs/netdev.yaml \
>      --dump queue-get --json='{"ifindex": 2}'
> [{'id': 0, 'ifindex': 2, 'napi-id': 8193, 'type': 'rx'},
>   {'id': 1, 'ifindex': 2, 'napi-id': 8194, 'type': 'rx'},
>   {'id': 2, 'ifindex': 2, 'napi-id': 8195, 'type': 'rx'},
>   {'id': 3, 'ifindex': 2, 'napi-id': 8196, 'type': 'rx'},
>   {'id': 4, 'ifindex': 2, 'napi-id': 8197, 'type': 'rx'},
>   {'id': 5, 'ifindex': 2, 'napi-id': 8198, 'type': 'rx'},
>   {'id': 6, 'ifindex': 2, 'napi-id': 8199, 'type': 'rx'},
>   {'id': 7, 'ifindex': 2, 'napi-id': 8200, 'type': 'rx'},
>   [...]
> 
> Signed-off-by: Joe Damato <jdamato@fastly.com>
> Reviewed-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> ---
>   rfcv3:
>     - Added Xuan Zhuo's Reviewed-by tag. No functional changes.
> 
>   v2:
>     - Eliminate RTNL code paths using the API Jakub introduced in patch 1
>       of this v2.
>     - Added virtnet_napi_disable to reduce code duplication as
>       suggested by Jason Wang.
> 
>   drivers/net/virtio_net.c | 5 +++--
>   1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index c120cb2106c0..e0752a856adf 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -6411,8 +6411,9 @@ static int virtnet_alloc_queues(struct virtnet_info *vi)
>   	INIT_DELAYED_WORK(&vi->refill, refill_work);
>   	for (i = 0; i < vi->max_queue_pairs; i++) {
>   		vi->rq[i].pages = NULL;
> -		netif_napi_add_weight(vi->dev, &vi->rq[i].napi, virtnet_poll,
> -				      napi_weight);
> +		netif_napi_add_config(vi->dev, &vi->rq[i].napi, virtnet_poll,
> +				      i);
> +		vi->rq[i].napi.weight = napi_weight;
>   		netif_napi_add_tx_weight(vi->dev, &vi->sq[i].napi,
>   					 virtnet_poll_tx,
>   					 napi_tx ? napi_weight : 0);

Reviewed-by: Gerhard Engleder <gerhard@engleder-embedded.com>


* Re: [RFC net-next v3 2/4] virtio_net: Prepare for NAPI to queue mapping
  2025-01-21 19:10 ` [RFC net-next v3 2/4] virtio_net: Prepare for NAPI to queue mapping Joe Damato
@ 2025-01-22  6:12   ` Jason Wang
  2025-01-22 17:40     ` Joe Damato
  0 siblings, 1 reply; 24+ messages in thread
From: Jason Wang @ 2025-01-22  6:12 UTC (permalink / raw)
  To: Joe Damato
  Cc: netdev, gerhard, leiyang, xuanzhuo, mkarsten, Michael S. Tsirkin,
	Eugenio Pérez, Andrew Lunn, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni,
	open list:VIRTIO CORE AND NET DRIVERS, open list

On Wed, Jan 22, 2025 at 3:11 AM Joe Damato <jdamato@fastly.com> wrote:
>
> Slight refactor to prepare the code for NAPI to queue mapping. No
> functional changes.
>
> Signed-off-by: Joe Damato <jdamato@fastly.com>
> Reviewed-by: Gerhard Engleder <gerhard@engleder-embedded.com>
> Tested-by: Lei Yang <leiyang@redhat.com>
> ---
>  v2:
>    - Previously patch 1 in the v1.
>    - Added Reviewed-by and Tested-by tags to commit message. No
>      functional changes.
>
>  drivers/net/virtio_net.c | 10 ++++++++--
>  1 file changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index 7646ddd9bef7..cff18c66b54a 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -2789,7 +2789,8 @@ static void skb_recv_done(struct virtqueue *rvq)
>         virtqueue_napi_schedule(&rq->napi, rvq);
>  }
>
> -static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
> +static void virtnet_napi_do_enable(struct virtqueue *vq,
> +                                  struct napi_struct *napi)
>  {
>         napi_enable(napi);

Nit: it might be better to not have this helper to avoid a misuse of
this function directly.

Other than this.

Acked-by: Jason Wang <jasowang@redhat.com>

Thanks

>
> @@ -2802,6 +2803,11 @@ static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
>         local_bh_enable();
>  }
>
> +static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
> +{
> +       virtnet_napi_do_enable(vq, napi);
> +}
> +
>  static void virtnet_napi_tx_enable(struct virtnet_info *vi,
>                                    struct virtqueue *vq,
>                                    struct napi_struct *napi)
> @@ -2817,7 +2823,7 @@ static void virtnet_napi_tx_enable(struct virtnet_info *vi,
>                 return;
>         }
>
> -       return virtnet_napi_enable(vq, napi);
> +       virtnet_napi_do_enable(vq, napi);
>  }
>
>  static void virtnet_napi_tx_disable(struct napi_struct *napi)
> --
> 2.25.1
>



* Re: [RFC net-next v3 3/4] virtio_net: Map NAPIs to queues
  2025-01-21 19:10 ` [RFC net-next v3 3/4] virtio_net: Map NAPIs to queues Joe Damato
  2025-01-21 20:13   ` Gerhard Engleder
@ 2025-01-22  6:13   ` Jason Wang
  1 sibling, 0 replies; 24+ messages in thread
From: Jason Wang @ 2025-01-22  6:13 UTC (permalink / raw)
  To: Joe Damato
  Cc: netdev, gerhard, leiyang, xuanzhuo, mkarsten, Michael S. Tsirkin,
	Eugenio Pérez, Andrew Lunn, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni,
	open list:VIRTIO CORE AND NET DRIVERS, open list

On Wed, Jan 22, 2025 at 3:11 AM Joe Damato <jdamato@fastly.com> wrote:
>
> Use netif_queue_set_napi to map NAPIs to queue IDs so that the mapping
> can be accessed by user apps.
>
> $ ethtool -i ens4 | grep driver
> driver: virtio_net
>
> $ sudo ethtool -L ens4 combined 4
>
> $ ./tools/net/ynl/pyynl/cli.py \
>        --spec Documentation/netlink/specs/netdev.yaml \
>        --dump queue-get --json='{"ifindex": 2}'
> [{'id': 0, 'ifindex': 2, 'napi-id': 8289, 'type': 'rx'},
>  {'id': 1, 'ifindex': 2, 'napi-id': 8290, 'type': 'rx'},
>  {'id': 2, 'ifindex': 2, 'napi-id': 8291, 'type': 'rx'},
>  {'id': 3, 'ifindex': 2, 'napi-id': 8292, 'type': 'rx'},
>  {'id': 0, 'ifindex': 2, 'type': 'tx'},
>  {'id': 1, 'ifindex': 2, 'type': 'tx'},
>  {'id': 2, 'ifindex': 2, 'type': 'tx'},
>  {'id': 3, 'ifindex': 2, 'type': 'tx'}]
>
> Note that virtio_net has TX-only NAPIs which do not have NAPI IDs, so
> the lack of 'napi-id' in the above output is expected.
>
> Signed-off-by: Joe Damato <jdamato@fastly.com>

Acked-by: Jason Wang <jasowang@redhat.com>

Thanks



* Re: [RFC net-next v3 4/4] virtio_net: Use persistent NAPI config
  2025-01-21 19:10 ` [RFC net-next v3 4/4] virtio_net: Use persistent NAPI config Joe Damato
  2025-01-21 20:18   ` Gerhard Engleder
@ 2025-01-22  6:13   ` Jason Wang
  1 sibling, 0 replies; 24+ messages in thread
From: Jason Wang @ 2025-01-22  6:13 UTC (permalink / raw)
  To: Joe Damato
  Cc: netdev, gerhard, leiyang, xuanzhuo, mkarsten, Michael S. Tsirkin,
	Eugenio Pérez, Andrew Lunn, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni,
	open list:VIRTIO CORE AND NET DRIVERS, open list

On Wed, Jan 22, 2025 at 3:11 AM Joe Damato <jdamato@fastly.com> wrote:
>
> Use persistent NAPI config so that NAPI IDs are not renumbered as queue
> counts change.
>
> $ sudo ethtool -l ens4  | tail -5 | egrep -i '(current|combined)'
> Current hardware settings:
> Combined:       4
>
> $ ./tools/net/ynl/pyynl/cli.py \
>     --spec Documentation/netlink/specs/netdev.yaml \
>     --dump queue-get --json='{"ifindex": 2}'
> [{'id': 0, 'ifindex': 2, 'napi-id': 8193, 'type': 'rx'},
>  {'id': 1, 'ifindex': 2, 'napi-id': 8194, 'type': 'rx'},
>  {'id': 2, 'ifindex': 2, 'napi-id': 8195, 'type': 'rx'},
>  {'id': 3, 'ifindex': 2, 'napi-id': 8196, 'type': 'rx'},
>  {'id': 0, 'ifindex': 2, 'type': 'tx'},
>  {'id': 1, 'ifindex': 2, 'type': 'tx'},
>  {'id': 2, 'ifindex': 2, 'type': 'tx'},
>  {'id': 3, 'ifindex': 2, 'type': 'tx'}]
>
> Now adjust the queue count, note that the NAPI IDs are not renumbered:
>
> $ sudo ethtool -L ens4 combined 1
> $ ./tools/net/ynl/pyynl/cli.py \
>     --spec Documentation/netlink/specs/netdev.yaml \
>     --dump queue-get --json='{"ifindex": 2}'
> [{'id': 0, 'ifindex': 2, 'napi-id': 8193, 'type': 'rx'},
>  {'id': 0, 'ifindex': 2, 'type': 'tx'}]
>
> $ sudo ethtool -L ens4 combined 8
> $ ./tools/net/ynl/pyynl/cli.py \
>     --spec Documentation/netlink/specs/netdev.yaml \
>     --dump queue-get --json='{"ifindex": 2}'
> [{'id': 0, 'ifindex': 2, 'napi-id': 8193, 'type': 'rx'},
>  {'id': 1, 'ifindex': 2, 'napi-id': 8194, 'type': 'rx'},
>  {'id': 2, 'ifindex': 2, 'napi-id': 8195, 'type': 'rx'},
>  {'id': 3, 'ifindex': 2, 'napi-id': 8196, 'type': 'rx'},
>  {'id': 4, 'ifindex': 2, 'napi-id': 8197, 'type': 'rx'},
>  {'id': 5, 'ifindex': 2, 'napi-id': 8198, 'type': 'rx'},
>  {'id': 6, 'ifindex': 2, 'napi-id': 8199, 'type': 'rx'},
>  {'id': 7, 'ifindex': 2, 'napi-id': 8200, 'type': 'rx'},
>  [...]
>
> Signed-off-by: Joe Damato <jdamato@fastly.com>
> Reviewed-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>

Acked-by: Jason Wang <jasowang@redhat.com>

Thanks



* Re: [RFC net-next v3 2/4] virtio_net: Prepare for NAPI to queue mapping
  2025-01-22  6:12   ` Jason Wang
@ 2025-01-22 17:40     ` Joe Damato
  2025-01-23  2:40       ` Jason Wang
  0 siblings, 1 reply; 24+ messages in thread
From: Joe Damato @ 2025-01-22 17:40 UTC (permalink / raw)
  To: Jason Wang
  Cc: netdev, gerhard, leiyang, xuanzhuo, mkarsten, Michael S. Tsirkin,
	Eugenio Pérez, Andrew Lunn, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni,
	open list:VIRTIO CORE AND NET DRIVERS, open list

On Wed, Jan 22, 2025 at 02:12:46PM +0800, Jason Wang wrote:
> On Wed, Jan 22, 2025 at 3:11 AM Joe Damato <jdamato@fastly.com> wrote:
> >
> > Slight refactor to prepare the code for NAPI to queue mapping. No
> > functional changes.
> >
> > Signed-off-by: Joe Damato <jdamato@fastly.com>
> > Reviewed-by: Gerhard Engleder <gerhard@engleder-embedded.com>
> > Tested-by: Lei Yang <leiyang@redhat.com>
> > ---
> >  v2:
> >    - Previously patch 1 in the v1.
> >    - Added Reviewed-by and Tested-by tags to commit message. No
> >      functional changes.
> >
> >  drivers/net/virtio_net.c | 10 ++++++++--
> >  1 file changed, 8 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > index 7646ddd9bef7..cff18c66b54a 100644
> > --- a/drivers/net/virtio_net.c
> > +++ b/drivers/net/virtio_net.c
> > @@ -2789,7 +2789,8 @@ static void skb_recv_done(struct virtqueue *rvq)
> >         virtqueue_napi_schedule(&rq->napi, rvq);
> >  }
> >
> > -static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
> > +static void virtnet_napi_do_enable(struct virtqueue *vq,
> > +                                  struct napi_struct *napi)
> >  {
> >         napi_enable(napi);
> 
> Nit: it might be better to not have this helper to avoid a misuse of
> this function directly.

Sorry, I'm probably missing something here.

Both virtnet_napi_enable and virtnet_napi_tx_enable need the logic
in virtnet_napi_do_enable.

Are you suggesting that I remove virtnet_napi_do_enable and repeat
the block of code in there twice (in virtnet_napi_enable and
virtnet_napi_tx_enable)?

Just seemed like a lot of code to repeat twice and a helper would be
cleaner?

Let me know; since net-next is closed there is plenty of time
get this to where you'd like it to be before new code is accepted.

> Other than this.
> 
> Acked-by: Jason Wang <jasowang@redhat.com>

Thanks.


* Re: [RFC net-next v3 2/4] virtio_net: Prepare for NAPI to queue mapping
  2025-01-22 17:40     ` Joe Damato
@ 2025-01-23  2:40       ` Jason Wang
  2025-01-23  2:47         ` Joe Damato
  0 siblings, 1 reply; 24+ messages in thread
From: Jason Wang @ 2025-01-23  2:40 UTC (permalink / raw)
  To: Joe Damato, Jason Wang, netdev, gerhard, leiyang, xuanzhuo,
	mkarsten, Michael S. Tsirkin, Eugenio Pérez, Andrew Lunn,
	David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	open list:VIRTIO CORE AND NET DRIVERS, open list

On Thu, Jan 23, 2025 at 1:41 AM Joe Damato <jdamato@fastly.com> wrote:
>
> On Wed, Jan 22, 2025 at 02:12:46PM +0800, Jason Wang wrote:
> > On Wed, Jan 22, 2025 at 3:11 AM Joe Damato <jdamato@fastly.com> wrote:
> > >
> > > Slight refactor to prepare the code for NAPI to queue mapping. No
> > > functional changes.
> > >
> > > Signed-off-by: Joe Damato <jdamato@fastly.com>
> > > Reviewed-by: Gerhard Engleder <gerhard@engleder-embedded.com>
> > > Tested-by: Lei Yang <leiyang@redhat.com>
> > > ---
> > >  v2:
> > >    - Previously patch 1 in the v1.
> > >    - Added Reviewed-by and Tested-by tags to commit message. No
> > >      functional changes.
> > >
> > >  drivers/net/virtio_net.c | 10 ++++++++--
> > >  1 file changed, 8 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > > index 7646ddd9bef7..cff18c66b54a 100644
> > > --- a/drivers/net/virtio_net.c
> > > +++ b/drivers/net/virtio_net.c
> > > @@ -2789,7 +2789,8 @@ static void skb_recv_done(struct virtqueue *rvq)
> > >         virtqueue_napi_schedule(&rq->napi, rvq);
> > >  }
> > >
> > > -static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
> > > +static void virtnet_napi_do_enable(struct virtqueue *vq,
> > > +                                  struct napi_struct *napi)
> > >  {
> > >         napi_enable(napi);
> >
> > Nit: it might be better to not have this helper to avoid a misuse of
> > this function directly.
>
> Sorry, I'm probably missing something here.
>
> Both virtnet_napi_enable and virtnet_napi_tx_enable need the logic
> in virtnet_napi_do_enable.
>
> Are you suggesting that I remove virtnet_napi_do_enable and repeat
> the block of code in there twice (in virtnet_napi_enable and
> virtnet_napi_tx_enable)?

I think I miss something here, it looks like virtnet_napi_tx_enable()
calls virtnet_napi_do_enable() directly.

I would like to know why we don't call netif_queue_set_napi() for TX NAPI here?

Thanks

>
> Just seemed like a lot of code to repeat twice and a helper would be
> cleaner?
>
> Let me know; since net-next is closed there is plenty of time
> get this to where you'd like it to be before new code is accepted.
>
> > Other than this.
> >
> > Acked-by: Jason Wang <jasowang@redhat.com>
>
> Thanks.
>



* Re: [RFC net-next v3 2/4] virtio_net: Prepare for NAPI to queue mapping
  2025-01-23  2:40       ` Jason Wang
@ 2025-01-23  2:47         ` Joe Damato
  2025-01-24  1:14           ` Jason Wang
  0 siblings, 1 reply; 24+ messages in thread
From: Joe Damato @ 2025-01-23  2:47 UTC (permalink / raw)
  To: Jason Wang
  Cc: netdev, gerhard, leiyang, xuanzhuo, mkarsten, Michael S. Tsirkin,
	Eugenio Pérez, Andrew Lunn, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni,
	open list:VIRTIO CORE AND NET DRIVERS, open list

On Thu, Jan 23, 2025 at 10:40:43AM +0800, Jason Wang wrote:
> On Thu, Jan 23, 2025 at 1:41 AM Joe Damato <jdamato@fastly.com> wrote:
> >
> > On Wed, Jan 22, 2025 at 02:12:46PM +0800, Jason Wang wrote:
> > > On Wed, Jan 22, 2025 at 3:11 AM Joe Damato <jdamato@fastly.com> wrote:
> > > >
> > > > Slight refactor to prepare the code for NAPI to queue mapping. No
> > > > functional changes.
> > > >
> > > > Signed-off-by: Joe Damato <jdamato@fastly.com>
> > > > Reviewed-by: Gerhard Engleder <gerhard@engleder-embedded.com>
> > > > Tested-by: Lei Yang <leiyang@redhat.com>
> > > > ---
> > > >  v2:
> > > >    - Previously patch 1 in the v1.
> > > >    - Added Reviewed-by and Tested-by tags to commit message. No
> > > >      functional changes.
> > > >
> > > >  drivers/net/virtio_net.c | 10 ++++++++--
> > > >  1 file changed, 8 insertions(+), 2 deletions(-)
> > > >
> > > > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > > > index 7646ddd9bef7..cff18c66b54a 100644
> > > > --- a/drivers/net/virtio_net.c
> > > > +++ b/drivers/net/virtio_net.c
> > > > @@ -2789,7 +2789,8 @@ static void skb_recv_done(struct virtqueue *rvq)
> > > >         virtqueue_napi_schedule(&rq->napi, rvq);
> > > >  }
> > > >
> > > > -static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
> > > > +static void virtnet_napi_do_enable(struct virtqueue *vq,
> > > > +                                  struct napi_struct *napi)
> > > >  {
> > > >         napi_enable(napi);
> > >
> > > Nit: it might be better to not have this helper to avoid a misuse of
> > > this function directly.
> >
> > Sorry, I'm probably missing something here.
> >
> > Both virtnet_napi_enable and virtnet_napi_tx_enable need the logic
> > in virtnet_napi_do_enable.
> >
> > Are you suggesting that I remove virtnet_napi_do_enable and repeat
> > the block of code in there twice (in virtnet_napi_enable and
> > virtnet_napi_tx_enable)?
> 
> I think I miss something here, it looks like virtnet_napi_tx_enable()
> calls virtnet_napi_do_enable() directly.
> 
> I would like to know why we don't call netif_queue_set_napi() for TX NAPI here?

Please see both the cover letter and the commit message of the next
commit which addresses this question.

TX-only NAPIs do not have NAPI IDs so there is nothing to map.


* Re: [RFC net-next v3 2/4] virtio_net: Prepare for NAPI to queue mapping
  2025-01-23  2:47         ` Joe Damato
@ 2025-01-24  1:14           ` Jason Wang
  2025-01-24 20:19             ` Joe Damato
  0 siblings, 1 reply; 24+ messages in thread
From: Jason Wang @ 2025-01-24  1:14 UTC (permalink / raw)
  To: Joe Damato, Jason Wang, netdev, gerhard, leiyang, xuanzhuo,
	mkarsten, Michael S. Tsirkin, Eugenio Pérez, Andrew Lunn,
	David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	open list:VIRTIO CORE AND NET DRIVERS, open list

On Thu, Jan 23, 2025 at 10:47 AM Joe Damato <jdamato@fastly.com> wrote:
>
> On Thu, Jan 23, 2025 at 10:40:43AM +0800, Jason Wang wrote:
> > On Thu, Jan 23, 2025 at 1:41 AM Joe Damato <jdamato@fastly.com> wrote:
> > >
> > > On Wed, Jan 22, 2025 at 02:12:46PM +0800, Jason Wang wrote:
> > > > On Wed, Jan 22, 2025 at 3:11 AM Joe Damato <jdamato@fastly.com> wrote:
> > > > >
> > > > > Slight refactor to prepare the code for NAPI to queue mapping. No
> > > > > functional changes.
> > > > >
> > > > > Signed-off-by: Joe Damato <jdamato@fastly.com>
> > > > > Reviewed-by: Gerhard Engleder <gerhard@engleder-embedded.com>
> > > > > Tested-by: Lei Yang <leiyang@redhat.com>
> > > > > ---
> > > > >  v2:
> > > > >    - Previously patch 1 in the v1.
> > > > >    - Added Reviewed-by and Tested-by tags to commit message. No
> > > > >      functional changes.
> > > > >
> > > > >  drivers/net/virtio_net.c | 10 ++++++++--
> > > > >  1 file changed, 8 insertions(+), 2 deletions(-)
> > > > >
> > > > > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > > > > index 7646ddd9bef7..cff18c66b54a 100644
> > > > > --- a/drivers/net/virtio_net.c
> > > > > +++ b/drivers/net/virtio_net.c
> > > > > @@ -2789,7 +2789,8 @@ static void skb_recv_done(struct virtqueue *rvq)
> > > > >         virtqueue_napi_schedule(&rq->napi, rvq);
> > > > >  }
> > > > >
> > > > > -static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
> > > > > +static void virtnet_napi_do_enable(struct virtqueue *vq,
> > > > > +                                  struct napi_struct *napi)
> > > > >  {
> > > > >         napi_enable(napi);
> > > >
> > > > Nit: it might be better to not have this helper to avoid a misuse of
> > > > this function directly.
> > >
> > > Sorry, I'm probably missing something here.
> > >
> > > Both virtnet_napi_enable and virtnet_napi_tx_enable need the logic
> > > in virtnet_napi_do_enable.
> > >
> > > Are you suggesting that I remove virtnet_napi_do_enable and repeat
> > > the block of code in there twice (in virtnet_napi_enable and
> > > virtnet_napi_tx_enable)?
> >
> > I think I miss something here, it looks like virtnet_napi_tx_enable()
> > calls virtnet_napi_do_enable() directly.
> >
> > I would like to know why we don't call netif_queue_set_napi() for TX NAPI here?
>
> Please see both the cover letter and the commit message of the next
> commit which addresses this question.
>
> TX-only NAPIs do not have NAPI IDs so there is nothing to map.

Interesting, but I have more questions

1) why need a driver to know the NAPI implementation like this?

2) does NAPI know (or why it needs to know) whether or not it's a TX
or not? I only see the following code in napi_hash_add():

static void napi_hash_add(struct napi_struct *napi)
{
        unsigned long flags;

        if (test_bit(NAPI_STATE_NO_BUSY_POLL, &napi->state))
                return;

...

        __napi_hash_add_with_id(napi, napi_gen_id);

        spin_unlock_irqrestore(&napi_hash_lock, flags);
}

It seems it only matters with NAPI_STATE_NO_BUSY_POLL.

And if NAPI knows everything, should it be better to just do the
linking in napi_enable/disable() instead of letting each driver do it
by itself?

Thanks

>



* Re: [RFC net-next v3 2/4] virtio_net: Prepare for NAPI to queue mapping
  2025-01-24  1:14           ` Jason Wang
@ 2025-01-24 20:19             ` Joe Damato
  2025-01-26  8:04               ` Jason Wang
  0 siblings, 1 reply; 24+ messages in thread
From: Joe Damato @ 2025-01-24 20:19 UTC (permalink / raw)
  To: Jason Wang
  Cc: netdev, gerhard, leiyang, xuanzhuo, mkarsten, Michael S. Tsirkin,
	Eugenio Pérez, Andrew Lunn, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni,
	open list:VIRTIO CORE AND NET DRIVERS, open list

On Fri, Jan 24, 2025 at 09:14:54AM +0800, Jason Wang wrote:
> On Thu, Jan 23, 2025 at 10:47 AM Joe Damato <jdamato@fastly.com> wrote:
> >
> > On Thu, Jan 23, 2025 at 10:40:43AM +0800, Jason Wang wrote:
> > > On Thu, Jan 23, 2025 at 1:41 AM Joe Damato <jdamato@fastly.com> wrote:
> > > >
> > > > On Wed, Jan 22, 2025 at 02:12:46PM +0800, Jason Wang wrote:
> > > > > On Wed, Jan 22, 2025 at 3:11 AM Joe Damato <jdamato@fastly.com> wrote:
> > > > > >
> > > > > > Slight refactor to prepare the code for NAPI to queue mapping. No
> > > > > > functional changes.
> > > > > >
> > > > > > Signed-off-by: Joe Damato <jdamato@fastly.com>
> > > > > > Reviewed-by: Gerhard Engleder <gerhard@engleder-embedded.com>
> > > > > > Tested-by: Lei Yang <leiyang@redhat.com>
> > > > > > ---
> > > > > >  v2:
> > > > > >    - Previously patch 1 in the v1.
> > > > > >    - Added Reviewed-by and Tested-by tags to commit message. No
> > > > > >      functional changes.
> > > > > >
> > > > > >  drivers/net/virtio_net.c | 10 ++++++++--
> > > > > >  1 file changed, 8 insertions(+), 2 deletions(-)
> > > > > >
> > > > > > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > > > > > index 7646ddd9bef7..cff18c66b54a 100644
> > > > > > --- a/drivers/net/virtio_net.c
> > > > > > +++ b/drivers/net/virtio_net.c
> > > > > > @@ -2789,7 +2789,8 @@ static void skb_recv_done(struct virtqueue *rvq)
> > > > > >         virtqueue_napi_schedule(&rq->napi, rvq);
> > > > > >  }
> > > > > >
> > > > > > -static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
> > > > > > +static void virtnet_napi_do_enable(struct virtqueue *vq,
> > > > > > +                                  struct napi_struct *napi)
> > > > > >  {
> > > > > >         napi_enable(napi);
> > > > >
> > > > > Nit: it might be better to not have this helper to avoid a misuse of
> > > > > this function directly.
> > > >
> > > > Sorry, I'm probably missing something here.
> > > >
> > > > Both virtnet_napi_enable and virtnet_napi_tx_enable need the logic
> > > > in virtnet_napi_do_enable.
> > > >
> > > > Are you suggesting that I remove virtnet_napi_do_enable and repeat
> > > > the block of code in there twice (in virtnet_napi_enable and
> > > > virtnet_napi_tx_enable)?
> > >
> > > I think I miss something here, it looks like virtnet_napi_tx_enable()
> > > calls virtnet_napi_do_enable() directly.
> > >
> > > I would like to know why we don't call netif_queue_set_napi() for TX NAPI here?
> >
> > Please see both the cover letter and the commit message of the next
> > commit which addresses this question.
> >
> > TX-only NAPIs do not have NAPI IDs so there is nothing to map.
> 
> Interesting, but I have more questions
> 
> 1) why need a driver to know the NAPI implementation like this?

I'm not sure I understand the question, but I'll try to give an
answer and please let me know if you have another question.

Mapping the NAPI IDs to queue IDs is useful for applications that
use epoll based busy polling (which relies on the NAPI ID, see also
SO_INCOMING_NAPI_ID and [1]), IRQ suspension [2], and generally
per-NAPI configuration [3].

Without this code added to the driver, the user application can get
the NAPI ID of an incoming connection, but has no way to know which
queue (or NIC) that NAPI ID is associated with or to set per-NAPI
configuration settings.

[1]: https://lore.kernel.org/all/20240213061652.6342-1-jdamato@fastly.com/
[2]: https://lore.kernel.org/netdev/20241109050245.191288-5-jdamato@fastly.com/T/
[3]: https://lore.kernel.org/lkml/20241011184527.16393-1-jdamato@fastly.com/
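
As a concrete (if minimal) userspace sketch, not part of this series:
an application can discover which NAPI last serviced a connection via
SO_INCOMING_NAPI_ID and then match that ID against the queue-get dump
shown in patch 3:

#include <stdio.h>
#include <sys/socket.h>

#ifndef SO_INCOMING_NAPI_ID
#define SO_INCOMING_NAPI_ID 56	/* see asm-generic/socket.h */
#endif

/* Print the NAPI ID that last handled traffic for conn_fd. */
static void print_napi_id(int conn_fd)
{
	unsigned int napi_id = 0;
	socklen_t len = sizeof(napi_id);

	if (getsockopt(conn_fd, SOL_SOCKET, SO_INCOMING_NAPI_ID,
		       &napi_id, &len) == 0)
		printf("fd %d last handled by NAPI ID %u\n",
		       conn_fd, napi_id);
}

The returned ID is what shows up in the 'napi-id' field of the
queue-get dump in patch 3.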

> 2) does NAPI know (or why it needs to know) whether or not it's a TX
> or not? I only see the following code in napi_hash_add():

Note that I did not write the original implementation of NAPI IDs or
epoll-based busy poll, so I can only comment on what I know :)

I don't know why TX-only NAPIs do not have NAPI IDs. My guess is
that in the original implementation, the code was designed only for
RX busy polling, so TX-only NAPIs were not assigned NAPI IDs.

Perhaps in the future, TX-only NAPIs will be assigned NAPI IDs, but
currently they do not have NAPI IDs.

> static void napi_hash_add(struct napi_struct *napi)
> {
>         unsigned long flags;
> 
>         if (test_bit(NAPI_STATE_NO_BUSY_POLL, &napi->state))
>                 return;
> 
> ...
> 
>         __napi_hash_add_with_id(napi, napi_gen_id);
> 
>         spin_unlock_irqrestore(&napi_hash_lock, flags);
> }
> 
> It seems it only matters with NAPI_STATE_NO_BUSY_POLL.
> 
> And if NAPI knows everything, should it be better to just do the
> linking in napi_enable/disable() instead of letting each driver do it
> by itself?

It would be nice if this were possible, I agree. Perhaps in the
future some work could be done to make this possible.

I believe that this is not currently possible because the NAPI does
not know which queue ID it is associated with. That mapping of which
queue is associated with which NAPI is established in patch 3
(please see the commit message of patch 3 to see an example of the
output).

The driver knows both the queue ID and the NAPI for that queue, so
the mapping can be established only by the driver.
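
Concretely, that's what patch 3 does on the RX enable/disable paths
(abridged from the diff there):

static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
{
	struct virtnet_info *vi = vq->vdev->priv;

	virtnet_napi_do_enable(vq, napi);

	netif_queue_set_napi(vi->dev, vq2rxq(vq), NETDEV_QUEUE_TYPE_RX, napi);
}

static void virtnet_napi_disable(struct virtqueue *vq, struct napi_struct *napi)
{
	struct virtnet_info *vi = vq->vdev->priv;

	netif_queue_set_napi(vi->dev, vq2rxq(vq), NETDEV_QUEUE_TYPE_RX, NULL);

	napi_disable(napi);
}

Only the driver can supply the vq2rxq(vq) half of that pairing.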

Let me know if that helps.

- Joe


* Re: [RFC net-next v3 2/4] virtio_net: Prepare for NAPI to queue mapping
  2025-01-24 20:19             ` Joe Damato
@ 2025-01-26  8:04               ` Jason Wang
  2025-01-27 17:52                 ` Joe Damato
  0 siblings, 1 reply; 24+ messages in thread
From: Jason Wang @ 2025-01-26  8:04 UTC (permalink / raw)
  To: Joe Damato, Jason Wang, netdev, gerhard, leiyang, xuanzhuo,
	mkarsten, Michael S. Tsirkin, Eugenio Pérez, Andrew Lunn,
	David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	open list:VIRTIO CORE AND NET DRIVERS, open list

On Sat, Jan 25, 2025 at 4:19 AM Joe Damato <jdamato@fastly.com> wrote:
>
> On Fri, Jan 24, 2025 at 09:14:54AM +0800, Jason Wang wrote:
> > On Thu, Jan 23, 2025 at 10:47 AM Joe Damato <jdamato@fastly.com> wrote:
> > >
> > > On Thu, Jan 23, 2025 at 10:40:43AM +0800, Jason Wang wrote:
> > > > On Thu, Jan 23, 2025 at 1:41 AM Joe Damato <jdamato@fastly.com> wrote:
> > > > >
> > > > > On Wed, Jan 22, 2025 at 02:12:46PM +0800, Jason Wang wrote:
> > > > > > On Wed, Jan 22, 2025 at 3:11 AM Joe Damato <jdamato@fastly.com> wrote:
> > > > > > >
> > > > > > > Slight refactor to prepare the code for NAPI to queue mapping. No
> > > > > > > functional changes.
> > > > > > >
> > > > > > > Signed-off-by: Joe Damato <jdamato@fastly.com>
> > > > > > > Reviewed-by: Gerhard Engleder <gerhard@engleder-embedded.com>
> > > > > > > Tested-by: Lei Yang <leiyang@redhat.com>
> > > > > > > ---
> > > > > > >  v2:
> > > > > > >    - Previously patch 1 in the v1.
> > > > > > >    - Added Reviewed-by and Tested-by tags to commit message. No
> > > > > > >      functional changes.
> > > > > > >
> > > > > > >  drivers/net/virtio_net.c | 10 ++++++++--
> > > > > > >  1 file changed, 8 insertions(+), 2 deletions(-)
> > > > > > >
> > > > > > > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > > > > > > index 7646ddd9bef7..cff18c66b54a 100644
> > > > > > > --- a/drivers/net/virtio_net.c
> > > > > > > +++ b/drivers/net/virtio_net.c
> > > > > > > @@ -2789,7 +2789,8 @@ static void skb_recv_done(struct virtqueue *rvq)
> > > > > > >         virtqueue_napi_schedule(&rq->napi, rvq);
> > > > > > >  }
> > > > > > >
> > > > > > > -static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
> > > > > > > +static void virtnet_napi_do_enable(struct virtqueue *vq,
> > > > > > > +                                  struct napi_struct *napi)
> > > > > > >  {
> > > > > > >         napi_enable(napi);
> > > > > >
> > > > > > Nit: it might be better to not have this helper to avoid a misuse of
> > > > > > this function directly.
> > > > >
> > > > > Sorry, I'm probably missing something here.
> > > > >
> > > > > Both virtnet_napi_enable and virtnet_napi_tx_enable need the logic
> > > > > in virtnet_napi_do_enable.
> > > > >
> > > > > Are you suggesting that I remove virtnet_napi_do_enable and repeat
> > > > > the block of code in there twice (in virtnet_napi_enable and
> > > > > virtnet_napi_tx_enable)?
> > > >
> > > > I think I miss something here, it looks like virtnet_napi_tx_enable()
> > > > calls virtnet_napi_do_enable() directly.
> > > >
> > > > I would like to know why we don't call netif_queue_set_napi() for TX NAPI here?
> > >
> > > Please see both the cover letter and the commit message of the next
> > > commit which addresses this question.
> > >
> > > TX-only NAPIs do not have NAPI IDs so there is nothing to map.
> >
> > Interesting, but I have more questions
> >
> > 1) why need a driver to know the NAPI implementation like this?
>
> I'm not sure I understand the question, but I'll try to give an
> answer and please let me know if you have another question.
>
> Mapping the NAPI IDs to queue IDs is useful for applications that
> use epoll based busy polling (which relies on the NAPI ID, see also
> SO_INCOMING_NAPI_ID and [1]), IRQ suspension [2], and generally
> per-NAPI configuration [3].
>
> Without this code added to the driver, the user application can get
> the NAPI ID of an incoming connection, but has no way to know which
> queue (or NIC) that NAPI ID is associated with or to set per-NAPI
> configuration settings.
>
> [1]: https://lore.kernel.org/all/20240213061652.6342-1-jdamato@fastly.com/
> [2]: https://lore.kernel.org/netdev/20241109050245.191288-5-jdamato@fastly.com/T/
> [3]: https://lore.kernel.org/lkml/20241011184527.16393-1-jdamato@fastly.com/

Yes, exactly. Sorry for being unclear, what I want to ask is actually:

1) TX NAPI doesn't have a NAPI ID, this seems more like a NAPI
implementation details which should be hidden from the driver.
2) If 1 is true, in the netif_queue_set_napi(), should it be better to
add and check for whether or not NAPI has an ID and return early if it
doesn't have one
3) Then driver doesn't need to know NAPI implementation details like
NAPI stuffs?
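
To make (2) concrete, purely as a hypothetical sketch (not something
this series implements): the core helper could filter such NAPIs
itself, so drivers would not need to care, e.g.:

void netif_queue_set_napi(struct net_device *dev, unsigned int queue_index,
			  enum netdev_queue_type type, struct napi_struct *napi)
{
	/* hypothetical early return: skip NAPIs that were never assigned
	 * an ID (e.g. TX-only NAPIs); MIN_NAPI_ID is from
	 * include/net/busy_poll.h
	 */
	if (napi && napi->napi_id < MIN_NAPI_ID)
		return;

	/* ... existing RX/TX queue assignment ... */
}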

>
> > 2) does NAPI know (or why it needs to know) whether or not it's a TX
> > or not? I only see the following code in napi_hash_add():
>
> Note that I did not write the original implementation of NAPI IDs or
> epoll-based busy poll, so I can only comment on what I know :)
>
> I don't know why TX-only NAPIs do not have NAPI IDs. My guess is
> that in the original implementation, the code was designed only for
> RX busy polling, so TX-only NAPIs were not assigned NAPI IDs.
>
> Perhaps in the future, TX-only NAPIs will be assigned NAPI IDs, but
> currently they do not have NAPI IDs.

Jakub, could you please help to clarify this part?

>
> > static void napi_hash_add(struct napi_struct *napi)
> > {
> >         unsigned long flags;
> >
> >         if (test_bit(NAPI_STATE_NO_BUSY_POLL, &napi->state))
> >                 return;
> >
> > ...
> >
> >         __napi_hash_add_with_id(napi, napi_gen_id);
> >
> >         spin_unlock_irqrestore(&napi_hash_lock, flags);
> > }
> >
> > It seems it only matters with NAPI_STATE_NO_BUSY_POLL.
> >
> > And if NAPI knows everything, should it be better to just do the
> > linking in napi_enable/disable() instead of letting each driver do it
> > by itself?
>
> It would be nice if this were possible, I agree. Perhaps in the
> future some work could be done to make this possible.
>
> I believe that this is not currently possible because the NAPI does
> not know which queue ID it is associated with. That mapping of which
> queue is associated with which NAPI is established in patch 3
> (please see the commit message of patch 3 to see an example of the
> output).
>
> The driver knows both the queue ID and the NAPI for that queue, so
> the mapping can be established only by the driver.
>
> Let me know if that helps.

Yes, definitely.

Let's see Jakub's comment.

Thanks

>
> - Joe
>



* Re: [RFC net-next v3 2/4] virtio_net: Prepare for NAPI to queue mapping
  2025-01-26  8:04               ` Jason Wang
@ 2025-01-27 17:52                 ` Joe Damato
  2025-01-27 19:31                   ` Joe Damato
  0 siblings, 1 reply; 24+ messages in thread
From: Joe Damato @ 2025-01-27 17:52 UTC (permalink / raw)
  To: Jason Wang
  Cc: netdev, gerhard, leiyang, xuanzhuo, mkarsten, Michael S. Tsirkin,
	Eugenio Pérez, Andrew Lunn, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni,
	open list:VIRTIO CORE AND NET DRIVERS, open list

On Sun, Jan 26, 2025 at 04:04:02PM +0800, Jason Wang wrote:
> On Sat, Jan 25, 2025 at 4:19 AM Joe Damato <jdamato@fastly.com> wrote:
> >
> > On Fri, Jan 24, 2025 at 09:14:54AM +0800, Jason Wang wrote:
> > > On Thu, Jan 23, 2025 at 10:47 AM Joe Damato <jdamato@fastly.com> wrote:
> > > >
> > > > On Thu, Jan 23, 2025 at 10:40:43AM +0800, Jason Wang wrote:
> > > > > On Thu, Jan 23, 2025 at 1:41 AM Joe Damato <jdamato@fastly.com> wrote:
> > > > > >
> > > > > > On Wed, Jan 22, 2025 at 02:12:46PM +0800, Jason Wang wrote:
> > > > > > > On Wed, Jan 22, 2025 at 3:11 AM Joe Damato <jdamato@fastly.com> wrote:

[...]

> > > > > > > >
> > > > > > > > -static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
> > > > > > > > +static void virtnet_napi_do_enable(struct virtqueue *vq,
> > > > > > > > +                                  struct napi_struct *napi)
> > > > > > > >  {
> > > > > > > >         napi_enable(napi);
> > > > > > >
> > > > > > > Nit: it might be better to not have this helper to avoid a misuse of
> > > > > > > this function directly.
> > > > > >
> > > > > > Sorry, I'm probably missing something here.
> > > > > >
> > > > > > Both virtnet_napi_enable and virtnet_napi_tx_enable need the logic
> > > > > > in virtnet_napi_do_enable.
> > > > > >
> > > > > > Are you suggesting that I remove virtnet_napi_do_enable and repeat
> > > > > > the block of code in there twice (in virtnet_napi_enable and
> > > > > > virtnet_napi_tx_enable)?
> > > > >
> > > > > I think I missed something here; it looks like virtnet_napi_tx_enable()
> > > > > calls virtnet_napi_do_enable() directly.
> > > > >
> > > > > I would like to know why we don't call netif_queue_set_napi() for TX NAPI here?
> > > >
> > > > Please see both the cover letter and the commit message of the next
> > > > commit which addresses this question.
> > > >
> > > > TX-only NAPIs do not have NAPI IDs so there is nothing to map.
> > >
> > > Interesting, but I have more questions
> > >
> > > 1) why does a driver need to know about the NAPI implementation like this?
> >
> > I'm not sure I understand the question, but I'll try to give an
> > answer and please let me know if you have another question.
> >
> > Mapping the NAPI IDs to queue IDs is useful for applications that
> > use epoll based busy polling (which relies on the NAPI ID, see also
> > SO_INCOMING_NAPI_ID and [1]), IRQ suspension [2], and generally
> > per-NAPI configuration [3].
> >
> > Without this code added to the driver, the user application can get
> > the NAPI ID of an incoming connection, but has no way to know which
> > queue (or NIC) that NAPI ID is associated with or to set per-NAPI
> > configuration settings.
> >
> > [1]: https://lore.kernel.org/all/20240213061652.6342-1-jdamato@fastly.com/
> > [2]: https://lore.kernel.org/netdev/20241109050245.191288-5-jdamato@fastly.com/T/
> > [3]: https://lore.kernel.org/lkml/20241011184527.16393-1-jdamato@fastly.com/
> 
> Yes, exactly. Sorry for being unclear; what I actually want to ask is:
> 
> 1) A TX NAPI doesn't have a NAPI ID, which seems like a NAPI
> implementation detail that should be hidden from the driver.
> 2) If 1 is true, would it be better for netif_queue_set_napi() to
> check whether the NAPI has an ID and return early if it doesn't?
> 3) Then the driver wouldn't need to know about NAPI implementation
> details at all.

Sorry it just feels like this conversation is getting off track.

This change is about mapping virtio_net RX queues to NAPI IDs to
allow for RX busy polling, per-NAPI config settings, etc.
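
For context, the userspace side looks roughly like the snippet below:
an application reads SO_INCOMING_NAPI_ID off an accepted socket and
then wants to know which queue (and NIC) that NAPI ID belongs to.
This is only an illustrative sketch (error handling trimmed), not
code from this series:

#include <stdio.h>
#include <sys/socket.h>

#ifndef SO_INCOMING_NAPI_ID
#define SO_INCOMING_NAPI_ID 56  /* value from asm-generic/socket.h */
#endif

/* Print the NAPI ID servicing an accepted socket; with the queue ->
 * NAPI mapping from this series, userspace can then look up which
 * queue (and NIC) that ID belongs to via the netdev netlink family.
 */
static void print_napi_id(int fd)
{
        unsigned int napi_id = 0;
        socklen_t len = sizeof(napi_id);

        if (!getsockopt(fd, SOL_SOCKET, SO_INCOMING_NAPI_ID, &napi_id, &len))
                printf("fd %d -> napi id %u\n", fd, napi_id);
}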

If you try to use netif_queue_set_napi with a TX-only NAPI, it will
set the NAPI ID to 0.

I already addressed this in the cover letter; would you mind
carefully re-reading my cover letter and commit messages?

If your main concern is that you want me to call
netif_queue_set_napi for TX-only NAPIs in addition to the RX NAPIs
in virtio_net, I can do that and resend an RFC.

In that case, the output will show "0" for NAPI ID for the TX-only
NAPIs. See the commit message of patch 3 and imagine that the output
shows this instead:

$ ./tools/net/ynl/pyynl/cli.py \
       --spec Documentation/netlink/specs/netdev.yaml \
       --dump queue-get --json='{"ifindex": 2}'
[{'id': 0, 'ifindex': 2, 'napi-id': 8289, 'type': 'rx'},
 {'id': 1, 'ifindex': 2, 'napi-id': 8290, 'type': 'rx'},
 {'id': 2, 'ifindex': 2, 'napi-id': 8291, 'type': 'rx'},
 {'id': 3, 'ifindex': 2, 'napi-id': 8292, 'type': 'rx'},
 {'id': 0, 'ifindex': 2, 'napi-id': 0, 'type': 'tx'},
 {'id': 1, 'ifindex': 2, 'napi-id': 0, 'type': 'tx'},
 {'id': 2, 'ifindex': 2, 'napi-id': 0, 'type': 'tx'},
 {'id': 3, 'ifindex': 2, 'napi-id': 0, 'type': 'tx'}]

If in the future the TX-only NAPIs get NAPI IDs, then nothing would
need to be updated in the driver and the NAPI IDs would "just work"
and appear.

> >
> > > 2) does NAPI know (or why it needs to know) whether or not it's a TX
> > > or not? I only see the following code in napi_hash_add():
> >
> > Note that I did not write the original implementation of NAPI IDs or
> > epoll-based busy poll, so I can only comment on what I know :)
> >
> > I don't know why TX-only NAPIs do not have NAPI IDs. My guess is
> > that in the original implementation, the code was designed only for
> > RX busy polling, so TX-only NAPIs were not assigned NAPI IDs.
> >
> > Perhaps in the future, TX-only NAPIs will be assigned NAPI IDs, but
> > currently they do not have NAPI IDs.
> 
> Jakub, could you please help to clarify this part?

Can you please explain what part needs clarification?

Regardless of TX-only NAPIs, we can still link virtio_net RX queues
to their NAPI IDs, and that would be immensely useful for users.

There are two options for virtio_net, as I've outlined above and in my
cover letter and commit messages:

1. This implementation as-is. Then if one day in the future
   TX-only NAPIs get NAPI IDs, this driver (and others like mlx4)
   can be updated.

- OR -

2. Calling netif_queue_set_napi for all NAPIs, which results in the
   TX-only NAPIs displaying "0" as shown above.

Please let me know which option you'd like to see. I don't have a
preference; I just want to get support for this API in virtio_net.
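
To make option 2 concrete, it would look roughly like the sketch
below. The helper signatures are my approximation of what the next
revision might carry (and the existing napi_tx weight handling is
omitted), so please read it as a sketch rather than the actual patch:

static void virtnet_napi_enable(struct virtnet_info *vi, int qidx,
                                struct napi_struct *napi)
{
        virtnet_napi_do_enable(vi->rq[qidx].vq, napi);
        /* RX NAPIs have IDs, so queue qidx gets linked to a real ID */
        netif_queue_set_napi(vi->dev, qidx, NETDEV_QUEUE_TYPE_RX, napi);
}

static void virtnet_napi_tx_enable(struct virtnet_info *vi, int qidx,
                                   struct napi_struct *napi)
{
        /* existing napi_tx weight handling omitted for brevity */
        virtnet_napi_do_enable(vi->sq[qidx].vq, napi);
        /* TX-only NAPIs currently have napi_id == 0, which is what
         * the queue-get dump above would report for the tx entries.
         */
        netif_queue_set_napi(vi->dev, qidx, NETDEV_QUEUE_TYPE_TX, napi);
}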

> >
> > > static void napi_hash_add(struct napi_struct *napi)
> > > {
> > >         unsigned long flags;
> > >
> > >         if (test_bit(NAPI_STATE_NO_BUSY_POLL, &napi->state))
> > >                 return;
> > >
> > > ...
> > >
> > >         __napi_hash_add_with_id(napi, napi_gen_id);
> > >
> > >         spin_unlock_irqrestore(&napi_hash_lock, flags);
> > > }
> > >
> > > It seems it only matters with NAPI_STATE_NO_BUSY_POLL.
> > >
> > > And if NAPI knows everything, should it be better to just do the
> > > linking in napi_enable/disable() instead of letting each driver do it
> > > by itself?
> >
> > It would be nice if this were possible, I agree. Perhaps in the
> > future some work could be done to make this possible.
> >
> > I believe that this is not currently possible because the NAPI does
> > not know which queue ID it is associated with. That mapping of which
> > queue is associated with which NAPI is established in patch 3
> > (please see the commit message of patch 3 to see an example of the
> > output).
> >
> > The driver knows both the queue ID and the NAPI for that queue, so
> > the mapping can be established only by the driver.
> >
> > Let me know if that helps.
> 
> Yes, definitely.
> 
> Let's see Jakub's comment.

As mentioned above, I'm not sure if we need to worry about TX-only
NAPIs getting NAPI IDs.

That seems pretty unrelated to this change as I've explained above.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [RFC net-next v3 2/4] virtio_net: Prepare for NAPI to queue mapping
  2025-01-27 17:52                 ` Joe Damato
@ 2025-01-27 19:31                   ` Joe Damato
  2025-01-27 21:33                     ` Jakub Kicinski
  0 siblings, 1 reply; 24+ messages in thread
From: Joe Damato @ 2025-01-27 19:31 UTC (permalink / raw)
  To: Jason Wang, netdev, gerhard, leiyang, xuanzhuo, mkarsten,
	Michael S. Tsirkin, Eugenio Pérez, Andrew Lunn,
	David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	open list:VIRTIO CORE AND NET DRIVERS, open list

On Mon, Jan 27, 2025 at 12:52:06PM -0500, Joe Damato wrote:
> On Sun, Jan 26, 2025 at 04:04:02PM +0800, Jason Wang wrote:
> > On Sat, Jan 25, 2025 at 4:19 AM Joe Damato <jdamato@fastly.com> wrote:
> > >
> > > On Fri, Jan 24, 2025 at 09:14:54AM +0800, Jason Wang wrote:
> > > > On Thu, Jan 23, 2025 at 10:47 AM Joe Damato <jdamato@fastly.com> wrote:
> > > > >
> > > > > On Thu, Jan 23, 2025 at 10:40:43AM +0800, Jason Wang wrote:
> > > > > > On Thu, Jan 23, 2025 at 1:41 AM Joe Damato <jdamato@fastly.com> wrote:
> > > > > > >
> > > > > > > On Wed, Jan 22, 2025 at 02:12:46PM +0800, Jason Wang wrote:
> > > > > > > > On Wed, Jan 22, 2025 at 3:11 AM Joe Damato <jdamato@fastly.com> wrote:
> 
> [...]
> 
> > > > > > > > >
> > > > > > > > > -static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
> > > > > > > > > +static void virtnet_napi_do_enable(struct virtqueue *vq,
> > > > > > > > > +                                  struct napi_struct *napi)
> > > > > > > > >  {
> > > > > > > > >         napi_enable(napi);
> > > > > > > >
> > > > > > > > Nit: it might be better to not have this helper to avoid a misuse of
> > > > > > > > this function directly.
> > > > > > >
> > > > > > > Sorry, I'm probably missing something here.
> > > > > > >
> > > > > > > Both virtnet_napi_enable and virtnet_napi_tx_enable need the logic
> > > > > > > in virtnet_napi_do_enable.
> > > > > > >
> > > > > > > Are you suggesting that I remove virtnet_napi_do_enable and repeat
> > > > > > > the block of code in there twice (in virtnet_napi_enable and
> > > > > > > virtnet_napi_tx_enable)?
> > > > > >
> > > > > > I think I missed something here; it looks like virtnet_napi_tx_enable()
> > > > > > calls virtnet_napi_do_enable() directly.
> > > > > >
> > > > > > I would like to know why we don't call netif_queue_set_napi() for TX NAPI here?
> > > > >
> > > > > Please see both the cover letter and the commit message of the next
> > > > > commit which addresses this question.
> > > > >
> > > > > TX-only NAPIs do not have NAPI IDs so there is nothing to map.
> > > >
> > > > Interesting, but I have more questions
> > > >
> > > > 1) why does a driver need to know about the NAPI implementation like this?
> > >
> > > I'm not sure I understand the question, but I'll try to give an
> > > answer and please let me know if you have another question.
> > >
> > > Mapping the NAPI IDs to queue IDs is useful for applications that
> > > use epoll based busy polling (which relies on the NAPI ID, see also
> > > SO_INCOMING_NAPI_ID and [1]), IRQ suspension [2], and generally
> > > per-NAPI configuration [3].
> > >
> > > Without this code added to the driver, the user application can get
> > > the NAPI ID of an incoming connection, but has no way to know which
> > > queue (or NIC) that NAPI ID is associated with or to set per-NAPI
> > > configuration settings.
> > >
> > > [1]: https://lore.kernel.org/all/20240213061652.6342-1-jdamato@fastly.com/
> > > [2]: https://lore.kernel.org/netdev/20241109050245.191288-5-jdamato@fastly.com/T/
> > > [3]: https://lore.kernel.org/lkml/20241011184527.16393-1-jdamato@fastly.com/
> > 
> > Yes, exactly. Sorry for being unclear; what I actually want to ask is:
> > 
> > 1) A TX NAPI doesn't have a NAPI ID, which seems like a NAPI
> > implementation detail that should be hidden from the driver.
> > 2) If 1 is true, would it be better for netif_queue_set_napi() to
> > check whether the NAPI has an ID and return early if it doesn't?
> > 3) Then the driver wouldn't need to know about NAPI implementation
> > details at all.
> 
> Sorry it just feels like this conversation is getting off track.
> 
> This change is about mapping virtio_net RX queues to NAPI IDs to
> allow for RX busy polling, per-NAPI config settings, etc.
> 
> If you try to use netif_queue_set_napi with a TX-only NAPI, it will
> set the NAPI ID to 0.
> 
> I already addressed this in the cover letter; would you mind
> carefully re-reading my cover letter and commit messages?
> 
> If your main concern is that you want me to call
> netif_queue_set_napi for TX-only NAPIs in addition to the RX NAPIs
> in virtio_net, I can do that and resend an RFC.
> 
> In that case, the output will show "0" for NAPI ID for the TX-only
> NAPIs. See the commit message of patch 3 and imagine that the output
> shows this instead:
> 
> $ ./tools/net/ynl/pyynl/cli.py \
>        --spec Documentation/netlink/specs/netdev.yaml \
>        --dump queue-get --json='{"ifindex": 2}'
> [{'id': 0, 'ifindex': 2, 'napi-id': 8289, 'type': 'rx'},
>  {'id': 1, 'ifindex': 2, 'napi-id': 8290, 'type': 'rx'},
>  {'id': 2, 'ifindex': 2, 'napi-id': 8291, 'type': 'rx'},
>  {'id': 3, 'ifindex': 2, 'napi-id': 8292, 'type': 'rx'},
>  {'id': 0, 'ifindex': 2, 'napi-id': 0, 'type': 'tx'},
>  {'id': 1, 'ifindex': 2, 'napi-id': 0, 'type': 'tx'},
>  {'id': 2, 'ifindex': 2, 'napi-id': 0, 'type': 'tx'},
>  {'id': 3, 'ifindex': 2, 'napi-id': 0, 'type': 'tx'}]
> 
> If in the future the TX-only NAPIs get NAPI IDs, then nothing would
> need to be updated in the driver and the NAPI IDs would "just work"
> and appear.

Actually, I missed a patch Jakub submitted to net [1], which prevents
dumping TX-only NAPIs.

So, I think this RFC as-is (only calling netif_queue_set_napi
for RX NAPIs) should be fine without changes.

Please let me know.

[1]: https://lore.kernel.org/netdev/20250103183207.1216004-1-kuba@kernel.org/

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [RFC net-next v3 2/4] virtio_net: Prepare for NAPI to queue mapping
  2025-01-27 19:31                   ` Joe Damato
@ 2025-01-27 21:33                     ` Jakub Kicinski
  2025-01-27 22:07                       ` Joe Damato
  0 siblings, 1 reply; 24+ messages in thread
From: Jakub Kicinski @ 2025-01-27 21:33 UTC (permalink / raw)
  To: Joe Damato
  Cc: Jason Wang, netdev, gerhard, leiyang, xuanzhuo, mkarsten,
	Michael S. Tsirkin, Eugenio Pérez, Andrew Lunn,
	David S. Miller, Eric Dumazet, Paolo Abeni,
	open list:VIRTIO CORE AND NET DRIVERS, open list

On Mon, 27 Jan 2025 14:31:21 -0500 Joe Damato wrote:
> Actually, I missed a patch Jakub submitted to net [1], which prevents
> dumping TX-only NAPIs.

That patch only addresses NAPI ops, here I think we're talking about
attributes of the queue object.

> So, I think this RFC as-is (only calling netif_queue_set_napi
> for RX NAPIs) should be fine without changes.

Weak preference towards making netdev_nl_queue_fill_one() "do the right
thing" when NAPI does not have ID assigned. And right thing IMO would
be to skip reporting the NAPI_ID attribute.

Tx NAPIs are one aspect, whether they have ID or not we may want direct
access to the struct somewhere in the core, via txq, at some point, and
then people may forget the linking has an unintended effect of also
changing the netlink attrs. The other aspect is that driver may link
queue to a Rx NAPI instance before napi_enable(), so before ID is
assigned. Again, we don't want to report ID of 0 in that case.
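
Roughly something like this in netdev_nl_queue_fill_one() (a sketch
only, untested; the attribute and constant names are as I recall them,
so treat them as assumptions):

        rxq = __netif_get_rx_queue(netdev, q_idx);
        /* Only report a NAPI ID once one has actually been assigned;
         * IDs below MIN_NAPI_ID (including 0) are not valid busy-poll
         * IDs, so skip the attribute entirely in that case.
         */
        if (rxq->napi && rxq->napi->napi_id >= MIN_NAPI_ID &&
            nla_put_u32(rsp, NETDEV_A_QUEUE_NAPI_ID, rxq->napi->napi_id))
                goto nla_put_failure;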

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [RFC net-next v3 1/4] net: protect queue -> napi linking with netdev_lock()
  2025-01-21 19:10 ` [RFC net-next v3 1/4] net: protect queue -> napi linking with netdev_lock() Joe Damato
@ 2025-01-27 21:37   ` Jakub Kicinski
  2025-01-27 22:21     ` Joe Damato
  0 siblings, 1 reply; 24+ messages in thread
From: Jakub Kicinski @ 2025-01-27 21:37 UTC (permalink / raw)
  To: Joe Damato
  Cc: netdev, gerhard, jasowang, leiyang, xuanzhuo, mkarsten,
	Andrew Lunn, David S. Miller, Eric Dumazet, Paolo Abeni,
	Simon Horman, Kuniyuki Iwashima, Sebastian Andrzej Siewior,
	Lorenzo Bianconi, Alexander Lobakin, open list

On Tue, 21 Jan 2025 19:10:41 +0000 Joe Damato wrote:
> From: Jakub Kicinski <kuba@kernel.org>
> 
> netdev netlink is the only reader of netdev_{,rx_}queue->napi,
> and it already holds netdev->lock. Switch protection of the
> writes to netdev->lock as well.
> 
> Add netif_queue_set_napi_locked() for API completeness,
> but the expectation is that most current drivers won't have
> to worry about locking any more. Today they jump thru hoops
> to take rtnl_lock.

I started having second thoughts about this patch, sorry to say.
NAPI objects were easy to protect with the lock because there's
a clear registration and unregistration API. Queues OTOH are made
visible by the netif_set_real_num_queues() call, which is tricky 
to protect with the instance lock. Queues are made visible, then
we configure them.

My thinking changed a bit, I think we should aim to protect all
ndos and ethtool ops with the instance lock. Stanislav and Saeed
seem to be working on that:
https://lore.kernel.org/all/Z5LhKdNMO5CvAvZf@mini-arch/
so hopefully that doesn't cause too much of a delay.
But you may need to rework this series further :(

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [RFC net-next v3 2/4] virtio_net: Prepare for NAPI to queue mapping
  2025-01-27 21:33                     ` Jakub Kicinski
@ 2025-01-27 22:07                       ` Joe Damato
  2025-01-27 22:24                         ` Jakub Kicinski
  0 siblings, 1 reply; 24+ messages in thread
From: Joe Damato @ 2025-01-27 22:07 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: Jason Wang, netdev, gerhard, leiyang, xuanzhuo, mkarsten,
	Michael S. Tsirkin, Eugenio Pérez, Andrew Lunn,
	David S. Miller, Eric Dumazet, Paolo Abeni,
	open list:VIRTIO CORE AND NET DRIVERS, open list

On Mon, Jan 27, 2025 at 01:33:04PM -0800, Jakub Kicinski wrote:
> On Mon, 27 Jan 2025 14:31:21 -0500 Joe Damato wrote:
> > Actually, I missed a patch Jakub submitted to net [1], which prevents
> > dumping TX-only NAPIs.
> 
> That patch only addresses NAPI ops, here I think we're talking about
> attributes of the queue object.

Yea, that's true.

> > So, I think this RFC as-is (only calling netif_queue_set_napi
> > for RX NAPIs) should be fine without changes.
> 
> Weak preference towards making netdev_nl_queue_fill_one() "do the right
> thing" when NAPI does not have ID assigned. And right thing IMO would
> be to skip reporting the NAPI_ID attribute.

Ah, right. Your patch was for netdev_nl_napi_fill_one, so presumably
a similar patch would need to be written for
netdev_nl_queue_fill_one.

> Tx NAPIs are one aspect, whether they have ID or not we may want direct
> access to the struct somewhere in the core, via txq, at some point, and
> then people may forget the linking has an unintended effect of also
> changing the netlink attrs. The other aspect is that driver may link
> queue to a Rx NAPI instance before napi_enable(), so before ID is
> assigned. Again, we don't want to report ID of 0 in that case.

I'm sorry I'm not sure I'm following what you are saying here; I
think there might be separate threads concurrently and I'm probably
just confused :)

I think you are saying that netdev_nl_napi_fill_one should not
report 0, which I think is fine but probably a separate patch?

I think, but am not sure, that Jason was asking for guidance on
TX-only NAPIs and linking them with calls to netif_queue_set_napi.
It seems that Jason may be suggesting that the driver shouldn't have
to know that TX-only NAPIs have a NAPI ID of 0 and thus should call
netif_queue_set_napi for all NAPIs and not have to think about
TX-only NAPIs at all.

From what you've written, Jakub, I think you are suggesting you agree
with that, but with the caveat that netdev_nl_napi_fill_one should
not report 0.

Then, one day in the future, if TX-only NAPIs get an ID they will
magically start to show up.

Is that right?

If so, I'll re-spin the RFC to call netif_queue_set_napi for all
NAPIs in virtio_net, including TX-only NAPIs and see about including
a patch to tweak netdev_nl_napi_fill_one, if necessary.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [RFC net-next v3 1/4] net: protect queue -> napi linking with netdev_lock()
  2025-01-27 21:37   ` Jakub Kicinski
@ 2025-01-27 22:21     ` Joe Damato
  0 siblings, 0 replies; 24+ messages in thread
From: Joe Damato @ 2025-01-27 22:21 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: netdev, gerhard, jasowang, leiyang, xuanzhuo, mkarsten,
	Andrew Lunn, David S. Miller, Eric Dumazet, Paolo Abeni,
	Simon Horman, Kuniyuki Iwashima, Sebastian Andrzej Siewior,
	Lorenzo Bianconi, Alexander Lobakin, open list

On Mon, Jan 27, 2025 at 01:37:56PM -0800, Jakub Kicinski wrote:
> On Tue, 21 Jan 2025 19:10:41 +0000 Joe Damato wrote:
> > From: Jakub Kicinski <kuba@kernel.org>
> > 
> > netdev netlink is the only reader of netdev_{,rx_}queue->napi,
> > and it already holds netdev->lock. Switch protection of the
> > writes to netdev->lock as well.
> > 
> > Add netif_queue_set_napi_locked() for API completeness,
> > but the expectation is that most current drivers won't have
> > to worry about locking any more. Today they jump thru hoops
> > to take rtnl_lock.
> 
> I started having second thoughts about this patch, sorry to say.
> NAPI objects were easy to protect with the lock because there's
> a clear registration and unregistration API. Queues OTOH are made
> visible by the netif_set_real_num_queues() call, which is tricky 
> to protect with the instance lock. Queues are made visible, then
> we configure them.
> 
> My thinking changed a bit, I think we should aim to protect all
> ndos and ethtool ops with the instance lock. Stanislav and Saeed
> seem to be working on that:
> https://lore.kernel.org/all/Z5LhKdNMO5CvAvZf@mini-arch/
> so hopefully that doesn't cause too much of a delay.
> But you may need to rework this series further :(

OK, I'll wait for something to emerge from that work before
re-spinning this.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [RFC net-next v3 2/4] virtio_net: Prepare for NAPI to queue mapping
  2025-01-27 22:07                       ` Joe Damato
@ 2025-01-27 22:24                         ` Jakub Kicinski
  2025-01-27 22:32                           ` Joe Damato
  0 siblings, 1 reply; 24+ messages in thread
From: Jakub Kicinski @ 2025-01-27 22:24 UTC (permalink / raw)
  To: Joe Damato
  Cc: Jason Wang, netdev, gerhard, leiyang, xuanzhuo, mkarsten,
	Michael S. Tsirkin, Eugenio Pérez, Andrew Lunn,
	David S. Miller, Eric Dumazet, Paolo Abeni,
	open list:VIRTIO CORE AND NET DRIVERS, open list

On Mon, 27 Jan 2025 17:07:54 -0500 Joe Damato wrote:
> > Tx NAPIs are one aspect, whether they have ID or not we may want direct
> > access to the struct somewhere in the core, via txq, at some point, and
> > then people may forget the linking has an unintended effect of also
> > changing the netlink attrs. The other aspect is that driver may link
> > queue to a Rx NAPI instance before napi_enable(), so before ID is
> > assigned. Again, we don't want to report ID of 0 in that case.  
> 
> I'm sorry I'm not sure I'm following what you are saying here; I
> think there might be separate threads concurrently and I'm probably
> just confused :)
> 
> I think you are saying that netdev_nl_napi_fill_one should not
> report 0, which I think is fine but probably a separate patch?
> 
> I think, but am not sure, that Jason was asking for guidance on
> TX-only NAPIs and linking them with calls to netif_queue_set_napi.
> It seems that Jason may be suggesting that the driver shouldn't have
> to know that TX-only NAPIs have a NAPI ID of 0 and thus should call
> > netif_queue_set_napi for all NAPIs and not have to think about
> TX-only NAPIs at all.
> 
> > From what you've written, Jakub, I think you are suggesting you agree
> with that, but with the caveat that netdev_nl_napi_fill_one should
> not report 0.

Right up to this point.

> Then, one day in the future, if TX-only NAPIs get an ID they will
> magically start to show up.
> 
> Is that right?

Sort of. I was trying to point out corner cases which would also
benefit from netdev_nl_queue_fill_one() being more careful about 
the NAPI IDs it reports. But the conclusion is the same.

> If so, I'll re-spin the RFC to call netif_queue_set_napi for all
> NAPIs in virtio_net, including TX-only NAPIs and see about including
> a patch to tweak netdev_nl_napi_fill_one, if necessary.

netdev_nl_queue_fill_one(), not netdev_nl_napi_fill_one()

Otherwise SG.

After net-next reopens I think the patch to netdev_nl_queue_fill_one()
could be posted separately. There may be drivers out there which already
link Tx NAPIs; we shouldn't delay making the reporting more careful.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [RFC net-next v3 2/4] virtio_net: Prepare for NAPI to queue mapping
  2025-01-27 22:24                         ` Jakub Kicinski
@ 2025-01-27 22:32                           ` Joe Damato
  0 siblings, 0 replies; 24+ messages in thread
From: Joe Damato @ 2025-01-27 22:32 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: Jason Wang, netdev, gerhard, leiyang, xuanzhuo, mkarsten,
	Michael S. Tsirkin, Eugenio Pérez, Andrew Lunn,
	David S. Miller, Eric Dumazet, Paolo Abeni,
	open list:VIRTIO CORE AND NET DRIVERS, open list

On Mon, Jan 27, 2025 at 02:24:00PM -0800, Jakub Kicinski wrote:
> On Mon, 27 Jan 2025 17:07:54 -0500 Joe Damato wrote:
> > > Tx NAPIs are one aspect, whether they have ID or not we may want direct
> > > access to the struct somewhere in the core, via txq, at some point, and
> > > then people may forget the linking has an unintended effect of also
> > > changing the netlink attrs. The other aspect is that driver may link
> > > queue to a Rx NAPI instance before napi_enable(), so before ID is
> > > assigned. Again, we don't want to report ID of 0 in that case.  
> > 
> > I'm sorry I'm not sure I'm following what you are saying here; I
> > think there might be separate threads concurrently and I'm probably
> > just confused :)
> > 
> > I think you are saying that netdev_nl_napi_fill_one should not
> > report 0, which I think is fine but probably a separate patch?
> > 
> > I think, but am not sure, that Jason was asking for guidance on
> > TX-only NAPIs and linking them with calls to netif_queue_set_napi.
> > It seems that Jason may be suggesting that the driver shouldn't have
> > to know that TX-only NAPIs have a NAPI ID of 0 and thus should call
> > > netif_queue_set_napi for all NAPIs and not have to think about
> > TX-only NAPIs at all.
> > 
> > > From what you've written, Jakub, I think you are suggesting you agree
> > with that, but with the caveat that netdev_nl_napi_fill_one should
> > not report 0.
> 
> Right up to this point.
> 
> > Then, one day in the future, if TX-only NAPIs get an ID they will
> > magically start to show up.
> > 
> > Is that right?
> 
> Sort of. I was trying to point out corner cases which would also
> benefit from netdev_nl_queue_fill_one() being more careful about 
> the NAPI IDs it reports. But the conclusion is the same.
> 
> > If so, I'll re-spin the RFC to call netif_queue_set_napi for all
> > NAPIs in virtio_net, including TX-only NAPIs and see about including
> > a patch to tweak netdev_nl_napi_fill_one, if necessary.
> 
> netdev_nl_queue_fill_one(), not netdev_nl_napi_fill_one()

Right, sorry for the typo/added confusion.
 
> Otherwise SG.
> 
> After net-next reopens I think the patch to netdev_nl_queue_fill_one()
> could be posted separately. There may be drivers out there which already
> link Tx NAPIs, we shouldn't delay making the reporting more careful.

OK, I'll start with that when net-next reopens while waiting on the
locking changes to come later and do the actual linking.

^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2025-01-27 22:32 UTC | newest]

Thread overview: 24+ messages
2025-01-21 19:10 [RFC net-next v3 0/4] virtio_net: Link queues to NAPIs Joe Damato
2025-01-21 19:10 ` [RFC net-next v3 1/4] net: protect queue -> napi linking with netdev_lock() Joe Damato
2025-01-27 21:37   ` Jakub Kicinski
2025-01-27 22:21     ` Joe Damato
2025-01-21 19:10 ` [RFC net-next v3 2/4] virtio_net: Prepare for NAPI to queue mapping Joe Damato
2025-01-22  6:12   ` Jason Wang
2025-01-22 17:40     ` Joe Damato
2025-01-23  2:40       ` Jason Wang
2025-01-23  2:47         ` Joe Damato
2025-01-24  1:14           ` Jason Wang
2025-01-24 20:19             ` Joe Damato
2025-01-26  8:04               ` Jason Wang
2025-01-27 17:52                 ` Joe Damato
2025-01-27 19:31                   ` Joe Damato
2025-01-27 21:33                     ` Jakub Kicinski
2025-01-27 22:07                       ` Joe Damato
2025-01-27 22:24                         ` Jakub Kicinski
2025-01-27 22:32                           ` Joe Damato
2025-01-21 19:10 ` [RFC net-next v3 3/4] virtio_net: Map NAPIs to queues Joe Damato
2025-01-21 20:13   ` Gerhard Engleder
2025-01-22  6:13   ` Jason Wang
2025-01-21 19:10 ` [RFC net-next v3 4/4] virtio_net: Use persistent NAPI config Joe Damato
2025-01-21 20:18   ` Gerhard Engleder
2025-01-22  6:13   ` Jason Wang
