* [PATCH net-next v2 0/4] virtio_net: Link queues to NAPIs
From: Joe Damato @ 2025-01-16  5:52 UTC
  To: netdev
  Cc: gerhard, jasowang, leiyang, mkarsten, Joe Damato,
	Alexander Lobakin, Alexei Starovoitov, Andrew Lunn,
	open list:XDP (eXpress Data Path):Keyword:(?:\b|_)xdp(?:\b|_),
	Daniel Borkmann, David S. Miller, Eric Dumazet,
	Eugenio Pérez, Jakub Kicinski, Jesper Dangaard Brouer,
	John Fastabend, open list, Lorenzo Bianconi, Michael S. Tsirkin,
	Paolo Abeni, Sebastian Andrzej Siewior, Simon Horman,
	open list:VIRTIO CORE AND NET DRIVERS, Xuan Zhuo

Greetings:

Welcome to v2.

Recently [1], Jakub mentioned that there were a few drivers that are not
yet mapping queues to NAPIs.

While I don't have any of the other hardware mentioned, I do happen to
have a virtio_net lying around ;)

I've attempted to link queues to NAPIs using the new locking Jakub
introduced, which avoids RTNL.

Note: It seems virtio_net uses TX-only NAPIs which do not have NAPI IDs.
As such, I've left the TX NAPIs unset (as opposed to setting them to 0).

Note: I tried to handle the XDP case correctly (namely, XDP queues
should not have NAPIs registered, but AF_XDP/XSK queues should have
NAPIs registered, IIUC). I would appreciate reviewers familiar with
virtio_net double-checking me on that.

See the commit message of patch 3 for an example of how to get the NAPI
to queue mapping information.

See the commit message of patch 4 for an example of how NAPI IDs are
persistent despite queue count changes.

Thanks,
Joe

[1]: https://lore.kernel.org/netdev/20250109084301.2445a3e3@kernel.org/

v2:
  - patch 1:
    - New in v2, from Jakub.

  - patch 2:
    - Previously patch 1, unchanged from v1.
    - Added Gerhard Engleder's Reviewed-by.
    - Added Lei Yang's Tested-by.

  - patch 3:
    - Introduced virtnet_napi_disable to eliminate duplicated code
      in virtnet_xdp_set, virtnet_rx_pause, virtnet_disable_queue_pair,
      refill_work as suggested by Jason Wang.
    - As a result of the above refactor, dropped Reviewed-by and
      Tested-by from patch 3.

  - patch 4:
    - New in v2. Adds persistent NAPI configuration. See commit message
      for more details.

Jakub Kicinski (1):
  net: protect queue -> napi linking with netdev_lock()

Joe Damato (3):
  virtio_net: Prepare for NAPI to queue mapping
  virtio_net: Map NAPIs to queues
  virtio_net: Use persistent NAPI config

 drivers/net/virtio_net.c      | 47 +++++++++++++++++++++++++++++------
 include/linux/netdevice.h     |  9 +++++--
 include/net/netdev_rx_queue.h |  2 +-
 net/core/dev.c                | 16 +++++++++---
 4 files changed, 60 insertions(+), 14 deletions(-)


base-commit: 0b21051a4a6208c721615bb0285a035b416a4383
-- 
2.25.1



* [PATCH net-next v2 1/4] net: protect queue -> napi linking with netdev_lock()
From: Joe Damato @ 2025-01-16  5:52 UTC
  To: netdev
  Cc: gerhard, jasowang, leiyang, mkarsten, Jakub Kicinski, Joe Damato,
	David S. Miller, Eric Dumazet, Paolo Abeni, Simon Horman,
	Andrew Lunn, Sebastian Andrzej Siewior, Lorenzo Bianconi,
	Alexander Lobakin, open list

From: Jakub Kicinski <kuba@kernel.org>

netdev netlink is the only reader of netdev_{,rx_}queue->napi,
and it already holds netdev->lock. Switch protection of the
writes to netdev->lock as well.

Add netif_queue_set_napi_locked() for API completeness,
but the expectation is that most current drivers won't have
to worry about locking any more. Today they jump through hoops
to take rtnl_lock.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Joe Damato <jdamato@fastly.com>
---
 v2:
   - Added in v2 from Jakub.

 include/linux/netdevice.h     |  9 +++++++--
 include/net/netdev_rx_queue.h |  2 +-
 net/core/dev.c                | 16 +++++++++++++---
 3 files changed, 21 insertions(+), 6 deletions(-)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 8308d9c75918..c7201642e9fb 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -690,7 +690,7 @@ struct netdev_queue {
  * slow- / control-path part
  */
 	/* NAPI instance for the queue
-	 * Readers and writers must hold RTNL
+	 * Readers and writers must hold netdev->lock
 	 */
 	struct napi_struct	*napi;
 
@@ -2458,7 +2458,8 @@ struct net_device {
 	 * Partially protects (writers must hold both @lock and rtnl_lock):
 	 *	@up
 	 *
-	 * Also protects some fields in struct napi_struct.
+	 * Also protects some fields in:
+	 *	struct napi_struct, struct netdev_queue, struct netdev_rx_queue
 	 *
 	 * Ordering: take after rtnl_lock.
 	 */
@@ -2685,6 +2686,10 @@ static inline void *netdev_priv(const struct net_device *dev)
 void netif_queue_set_napi(struct net_device *dev, unsigned int queue_index,
 			  enum netdev_queue_type type,
 			  struct napi_struct *napi);
+void netif_queue_set_napi_locked(struct net_device *dev,
+				 unsigned int queue_index,
+				 enum netdev_queue_type type,
+				 struct napi_struct *napi);
 
 static inline void netdev_lock(struct net_device *dev)
 {
diff --git a/include/net/netdev_rx_queue.h b/include/net/netdev_rx_queue.h
index 596836abf7bf..9fcac0b43b71 100644
--- a/include/net/netdev_rx_queue.h
+++ b/include/net/netdev_rx_queue.h
@@ -23,7 +23,7 @@ struct netdev_rx_queue {
 	struct xsk_buff_pool            *pool;
 #endif
 	/* NAPI instance for the queue
-	 * Readers and writers must hold RTNL
+	 * Readers and writers must hold netdev->lock
 	 */
 	struct napi_struct		*napi;
 	struct pp_memory_provider_params mp_params;
diff --git a/net/core/dev.c b/net/core/dev.c
index 782ae3ff3f8d..528478cd8615 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -6851,14 +6851,24 @@ EXPORT_SYMBOL(dev_set_threaded);
  */
 void netif_queue_set_napi(struct net_device *dev, unsigned int queue_index,
 			  enum netdev_queue_type type, struct napi_struct *napi)
+{
+	netdev_lock(dev);
+	netif_queue_set_napi_locked(dev, queue_index, type, napi);
+	netdev_unlock(dev);
+}
+EXPORT_SYMBOL(netif_queue_set_napi);
+
+void netif_queue_set_napi_locked(struct net_device *dev,
+				 unsigned int queue_index,
+				 enum netdev_queue_type type,
+				 struct napi_struct *napi)
 {
 	struct netdev_rx_queue *rxq;
 	struct netdev_queue *txq;
 
 	if (WARN_ON_ONCE(napi && !napi->dev))
 		return;
-	if (dev->reg_state >= NETREG_REGISTERED)
-		ASSERT_RTNL();
+	netdev_assert_locked_or_invisible(dev);
 
 	switch (type) {
 	case NETDEV_QUEUE_TYPE_RX:
@@ -6873,7 +6883,7 @@ void netif_queue_set_napi(struct net_device *dev, unsigned int queue_index,
 		return;
 	}
 }
-EXPORT_SYMBOL(netif_queue_set_napi);
+EXPORT_SYMBOL(netif_queue_set_napi_locked);
 
 static void napi_restore_config(struct napi_struct *n)
 {
-- 
2.25.1
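
For illustration, the resulting driver-side calling convention looks
roughly like this (a hypothetical example, not part of the patch):

/* Hypothetical driver code: linking an RX queue's NAPI no longer
 * requires rtnl_lock, since netif_queue_set_napi() now takes
 * netdev->lock internally.
 */
static void example_link_rx_napi(struct net_device *dev, unsigned int qid,
				 struct napi_struct *napi)
{
	netif_queue_set_napi(dev, qid, NETDEV_QUEUE_TYPE_RX, napi);
}

/* If the caller already holds netdev->lock, use the _locked variant: */
static void example_link_rx_napi_locked(struct net_device *dev,
					unsigned int qid,
					struct napi_struct *napi)
{
	netif_queue_set_napi_locked(dev, qid, NETDEV_QUEUE_TYPE_RX, napi);
}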



* [PATCH net-next v2 2/4] virtio_net: Prepare for NAPI to queue mapping
From: Joe Damato @ 2025-01-16  5:52 UTC
  To: netdev
  Cc: gerhard, jasowang, leiyang, mkarsten, Joe Damato,
	Michael S. Tsirkin, Xuan Zhuo, Eugenio Pérez, Andrew Lunn,
	David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	open list:VIRTIO CORE AND NET DRIVERS, open list

Slight refactor to prepare the code for NAPI to queue mapping. No
functional changes.

Signed-off-by: Joe Damato <jdamato@fastly.com>
Reviewed-by: Gerhard Engleder <gerhard@engleder-embedded.com>
Tested-by: Lei Yang <leiyang@redhat.com>
---
 v2:
   - Previously patch 1 in v1.
   - Added Reviewed-by and Tested-by tags to commit message. No
     functional changes.

 drivers/net/virtio_net.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 7646ddd9bef7..cff18c66b54a 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -2789,7 +2789,8 @@ static void skb_recv_done(struct virtqueue *rvq)
 	virtqueue_napi_schedule(&rq->napi, rvq);
 }
 
-static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
+static void virtnet_napi_do_enable(struct virtqueue *vq,
+				   struct napi_struct *napi)
 {
 	napi_enable(napi);
 
@@ -2802,6 +2803,11 @@ static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
 	local_bh_enable();
 }
 
+static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
+{
+	virtnet_napi_do_enable(vq, napi);
+}
+
 static void virtnet_napi_tx_enable(struct virtnet_info *vi,
 				   struct virtqueue *vq,
 				   struct napi_struct *napi)
@@ -2817,7 +2823,7 @@ static void virtnet_napi_tx_enable(struct virtnet_info *vi,
 		return;
 	}
 
-	return virtnet_napi_enable(vq, napi);
+	virtnet_napi_do_enable(vq, napi);
 }
 
 static void virtnet_napi_tx_disable(struct napi_struct *napi)
-- 
2.25.1



* [PATCH net-next v2 3/4] virtio_net: Map NAPIs to queues
From: Joe Damato @ 2025-01-16  5:52 UTC
  To: netdev
  Cc: gerhard, jasowang, leiyang, mkarsten, Joe Damato,
	Michael S. Tsirkin, Xuan Zhuo, Eugenio Pérez, Andrew Lunn,
	David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer,
	John Fastabend, open list:VIRTIO CORE AND NET DRIVERS, open list,
	open list:XDP (eXpress Data Path):Keyword:(?:\b|_)xdp(?:\b|_)

Use netif_queue_set_napi to map NAPIs to queue IDs so that the mapping
can be accessed by user apps.

$ ethtool -i ens4 | grep driver
driver: virtio_net

$ sudo ethtool -L ens4 combined 4

$ ./tools/net/ynl/pyynl/cli.py \
       --spec Documentation/netlink/specs/netdev.yaml \
       --dump queue-get --json='{"ifindex": 2}'
[{'id': 0, 'ifindex': 2, 'napi-id': 8289, 'type': 'rx'},
 {'id': 1, 'ifindex': 2, 'napi-id': 8290, 'type': 'rx'},
 {'id': 2, 'ifindex': 2, 'napi-id': 8291, 'type': 'rx'},
 {'id': 3, 'ifindex': 2, 'napi-id': 8292, 'type': 'rx'},
 {'id': 0, 'ifindex': 2, 'type': 'tx'},
 {'id': 1, 'ifindex': 2, 'type': 'tx'},
 {'id': 2, 'ifindex': 2, 'type': 'tx'},
 {'id': 3, 'ifindex': 2, 'type': 'tx'}]

Note that virtio_net has TX-only NAPIs which do not have NAPI IDs, so
the lack of 'napi-id' in the above output is expected.

Signed-off-by: Joe Damato <jdamato@fastly.com>
---
 v2:
   - Eliminate RTNL code paths using the API Jakub introduced in patch 1
     of this v2.
   - Added virtnet_napi_disable to reduce code duplication as
     suggested by Jason Wang.

 drivers/net/virtio_net.c | 34 +++++++++++++++++++++++++++++-----
 1 file changed, 29 insertions(+), 5 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index cff18c66b54a..c6fda756dd07 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -2803,9 +2803,18 @@ static void virtnet_napi_do_enable(struct virtqueue *vq,
 	local_bh_enable();
 }
 
-static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
+static void virtnet_napi_enable(struct virtqueue *vq,
+				struct napi_struct *napi)
 {
+	struct virtnet_info *vi = vq->vdev->priv;
+	int q = vq2rxq(vq);
+	u16 curr_qs;
+
 	virtnet_napi_do_enable(vq, napi);
+
+	curr_qs = vi->curr_queue_pairs - vi->xdp_queue_pairs;
+	if (!vi->xdp_enabled || q < curr_qs)
+		netif_queue_set_napi(vi->dev, q, NETDEV_QUEUE_TYPE_RX, napi);
 }
 
 static void virtnet_napi_tx_enable(struct virtnet_info *vi,
@@ -2826,6 +2835,20 @@ static void virtnet_napi_tx_enable(struct virtnet_info *vi,
 	virtnet_napi_do_enable(vq, napi);
 }
 
+static void virtnet_napi_disable(struct virtqueue *vq,
+				 struct napi_struct *napi)
+{
+	struct virtnet_info *vi = vq->vdev->priv;
+	int q = vq2rxq(vq);
+	u16 curr_qs;
+
+	curr_qs = vi->curr_queue_pairs - vi->xdp_queue_pairs;
+	if (!vi->xdp_enabled || q < curr_qs)
+		netif_queue_set_napi(vi->dev, q, NETDEV_QUEUE_TYPE_RX, NULL);
+
+	napi_disable(napi);
+}
+
 static void virtnet_napi_tx_disable(struct napi_struct *napi)
 {
 	if (napi->weight)
@@ -2842,7 +2865,8 @@ static void refill_work(struct work_struct *work)
 	for (i = 0; i < vi->curr_queue_pairs; i++) {
 		struct receive_queue *rq = &vi->rq[i];
 
-		napi_disable(&rq->napi);
+		virtnet_napi_disable(rq->vq, &rq->napi);
+
 		still_empty = !try_fill_recv(vi, rq, GFP_KERNEL);
 		virtnet_napi_enable(rq->vq, &rq->napi);
 
@@ -3042,7 +3066,7 @@ static int virtnet_poll(struct napi_struct *napi, int budget)
 static void virtnet_disable_queue_pair(struct virtnet_info *vi, int qp_index)
 {
 	virtnet_napi_tx_disable(&vi->sq[qp_index].napi);
-	napi_disable(&vi->rq[qp_index].napi);
+	virtnet_napi_disable(vi->rq[qp_index].vq, &vi->rq[qp_index].napi);
 	xdp_rxq_info_unreg(&vi->rq[qp_index].xdp_rxq);
 }
 
@@ -3313,7 +3337,7 @@ static void virtnet_rx_pause(struct virtnet_info *vi, struct receive_queue *rq)
 	bool running = netif_running(vi->dev);
 
 	if (running) {
-		napi_disable(&rq->napi);
+		virtnet_napi_disable(rq->vq, &rq->napi);
 		virtnet_cancel_dim(vi, &rq->dim);
 	}
 }
@@ -5932,7 +5956,7 @@ static int virtnet_xdp_set(struct net_device *dev, struct bpf_prog *prog,
 	/* Make sure NAPI is not using any XDP TX queues for RX. */
 	if (netif_running(dev)) {
 		for (i = 0; i < vi->max_queue_pairs; i++) {
-			napi_disable(&vi->rq[i].napi);
+			virtnet_napi_disable(vi->rq[i].vq, &vi->rq[i].napi);
 			virtnet_napi_tx_disable(&vi->sq[i].napi);
 		}
 	}
-- 
2.25.1
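
As an example of how a user app might consume this mapping, the NAPI
ID reported for a connected socket via SO_INCOMING_NAPI_ID can be
matched against the 'napi-id' values in the queue-get dump above. A
minimal sketch (illustrative only, not part of the patch):

#include <stdio.h>
#include <sys/socket.h>

#ifndef SO_INCOMING_NAPI_ID
#define SO_INCOMING_NAPI_ID 56	/* from include/uapi/asm-generic/socket.h */
#endif

/* Return the ID of the NAPI that last delivered traffic to this
 * socket; match it against 'napi-id' from the queue-get dump to find
 * the corresponding RX queue.
 */
static unsigned int socket_napi_id(int fd)
{
	unsigned int napi_id = 0;
	socklen_t len = sizeof(napi_id);

	if (getsockopt(fd, SOL_SOCKET, SO_INCOMING_NAPI_ID, &napi_id, &len))
		perror("getsockopt(SO_INCOMING_NAPI_ID)");
	return napi_id;	/* 0 if no NAPI has handled the socket yet */
}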



* [PATCH net-next v2 4/4] virtio_net: Use persistent NAPI config
From: Joe Damato @ 2025-01-16  5:52 UTC
  To: netdev
  Cc: gerhard, jasowang, leiyang, mkarsten, Joe Damato,
	Michael S. Tsirkin, Xuan Zhuo, Eugenio Pérez, Andrew Lunn,
	David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	open list:VIRTIO CORE AND NET DRIVERS, open list

Use persistent NAPI config so that NAPI IDs are not renumbered as queue
counts change.

$ sudo ethtool -l ens4  | tail -5 | egrep -i '(current|combined)'
Current hardware settings:
Combined:	4

$ ./tools/net/ynl/pyynl/cli.py \
    --spec Documentation/netlink/specs/netdev.yaml \
    --dump queue-get --json='{"ifindex": 2}'
[{'id': 0, 'ifindex': 2, 'napi-id': 8193, 'type': 'rx'},
 {'id': 1, 'ifindex': 2, 'napi-id': 8194, 'type': 'rx'},
 {'id': 2, 'ifindex': 2, 'napi-id': 8195, 'type': 'rx'},
 {'id': 3, 'ifindex': 2, 'napi-id': 8196, 'type': 'rx'},
 {'id': 0, 'ifindex': 2, 'type': 'tx'},
 {'id': 1, 'ifindex': 2, 'type': 'tx'},
 {'id': 2, 'ifindex': 2, 'type': 'tx'},
 {'id': 3, 'ifindex': 2, 'type': 'tx'}]

Now adjust the queue count, note that the NAPI IDs are not renumbered:

$ sudo ethtool -L ens4 combined 1
$ ./tools/net/ynl/pyynl/cli.py \
    --spec Documentation/netlink/specs/netdev.yaml \
    --dump queue-get --json='{"ifindex": 2}'
[{'id': 0, 'ifindex': 2, 'napi-id': 8193, 'type': 'rx'},
 {'id': 0, 'ifindex': 2, 'type': 'tx'}]

$ sudo ethtool -L ens4 combined 8
$ ./tools/net/ynl/pyynl/cli.py \
    --spec Documentation/netlink/specs/netdev.yaml \
    --dump queue-get --json='{"ifindex": 2}'
[{'id': 0, 'ifindex': 2, 'napi-id': 8193, 'type': 'rx'},
 {'id': 1, 'ifindex': 2, 'napi-id': 8194, 'type': 'rx'},
 {'id': 2, 'ifindex': 2, 'napi-id': 8195, 'type': 'rx'},
 {'id': 3, 'ifindex': 2, 'napi-id': 8196, 'type': 'rx'},
 {'id': 4, 'ifindex': 2, 'napi-id': 8197, 'type': 'rx'},
 {'id': 5, 'ifindex': 2, 'napi-id': 8198, 'type': 'rx'},
 {'id': 6, 'ifindex': 2, 'napi-id': 8199, 'type': 'rx'},
 {'id': 7, 'ifindex': 2, 'napi-id': 8200, 'type': 'rx'},
 [...]

Signed-off-by: Joe Damato <jdamato@fastly.com>
---
 v2:
   - New in this v2. Adds persistent NAPI config so that NAPI IDs are
     not renumbered and napi_config settings are persisted.

 drivers/net/virtio_net.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index c6fda756dd07..52094596e94b 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -6420,8 +6420,9 @@ static int virtnet_alloc_queues(struct virtnet_info *vi)
 	INIT_DELAYED_WORK(&vi->refill, refill_work);
 	for (i = 0; i < vi->max_queue_pairs; i++) {
 		vi->rq[i].pages = NULL;
-		netif_napi_add_weight(vi->dev, &vi->rq[i].napi, virtnet_poll,
-				      napi_weight);
+		netif_napi_add_config(vi->dev, &vi->rq[i].napi, virtnet_poll,
+				      i);
+		vi->rq[i].napi.weight = napi_weight;
 		netif_napi_add_tx_weight(vi->dev, &vi->sq[i].napi,
 					 virtnet_poll_tx,
 					 napi_tx ? napi_weight : 0);
-- 
2.25.1
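
Roughly, what makes the IDs stick (a sketch of the core behavior this
patch relies on, as I understand net-next's persistent NAPI config):
the index passed as the last argument of netif_napi_add_config()
selects a per-device config slot that survives queue teardown, so a
re-created RX queue i gets its old NAPI ID and settings back:

	/* Before: every add hashes a brand-new NAPI ID, so
	 * ethtool -L renumbers the IDs.
	 */
	netif_napi_add_weight(vi->dev, &vi->rq[i].napi, virtnet_poll,
			      napi_weight);

	/* After: config slot i keeps the NAPI ID (and per-NAPI settings
	 * such as gro_flush_timeout) across queue count changes.
	 */
	netif_napi_add_config(vi->dev, &vi->rq[i].napi, virtnet_poll, i);
	vi->rq[i].napi.weight = napi_weight;	/* weight set separately */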



* Re: [PATCH net-next v2 3/4] virtio_net: Map NAPIs to queues
From: Xuan Zhuo @ 2025-01-16  7:53 UTC
  To: Joe Damato
  Cc: gerhard, jasowang, leiyang, mkarsten, Joe Damato,
	Michael S. Tsirkin, Eugenio Pérez, Andrew Lunn,
	David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer,
	John Fastabend, open list:VIRTIO CORE AND NET DRIVERS, open list,
	open list:XDP (eXpress Data Path):Keyword:(?:\b|_)xdp(?:\b|_),
	netdev

On Thu, 16 Jan 2025 05:52:58 +0000, Joe Damato <jdamato@fastly.com> wrote:
> Use netif_queue_set_napi to map NAPIs to queue IDs so that the mapping
> can be accessed by user apps.
>
> $ ethtool -i ens4 | grep driver
> driver: virtio_net
>
> $ sudo ethtool -L ens4 combined 4
>
> $ ./tools/net/ynl/pyynl/cli.py \
>        --spec Documentation/netlink/specs/netdev.yaml \
>        --dump queue-get --json='{"ifindex": 2}'
> [{'id': 0, 'ifindex': 2, 'napi-id': 8289, 'type': 'rx'},
>  {'id': 1, 'ifindex': 2, 'napi-id': 8290, 'type': 'rx'},
>  {'id': 2, 'ifindex': 2, 'napi-id': 8291, 'type': 'rx'},
>  {'id': 3, 'ifindex': 2, 'napi-id': 8292, 'type': 'rx'},
>  {'id': 0, 'ifindex': 2, 'type': 'tx'},
>  {'id': 1, 'ifindex': 2, 'type': 'tx'},
>  {'id': 2, 'ifindex': 2, 'type': 'tx'},
>  {'id': 3, 'ifindex': 2, 'type': 'tx'}]
>
> Note that virtio_net has TX-only NAPIs which do not have NAPI IDs, so
> the lack of 'napi-id' in the above output is expected.
>
> Signed-off-by: Joe Damato <jdamato@fastly.com>
> ---
>  v2:
>    - Eliminate RTNL code paths using the API Jakub introduced in patch 1
>      of this v2.
>    - Added virtnet_napi_disable to reduce code duplication as
>      suggested by Jason Wang.
>
>  drivers/net/virtio_net.c | 34 +++++++++++++++++++++++++++++-----
>  1 file changed, 29 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index cff18c66b54a..c6fda756dd07 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -2803,9 +2803,18 @@ static void virtnet_napi_do_enable(struct virtqueue *vq,
>  	local_bh_enable();
>  }
>
> -static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
> +static void virtnet_napi_enable(struct virtqueue *vq,
> +				struct napi_struct *napi)
>  {
> +	struct virtnet_info *vi = vq->vdev->priv;
> +	int q = vq2rxq(vq);
> +	u16 curr_qs;
> +
>  	virtnet_napi_do_enable(vq, napi);
> +
> +	curr_qs = vi->curr_queue_pairs - vi->xdp_queue_pairs;
> +	if (!vi->xdp_enabled || q < curr_qs)
> +		netif_queue_set_napi(vi->dev, q, NETDEV_QUEUE_TYPE_RX, napi);

So which case is the check of xdp_enabled for?

And I think we should merge this into the last commit.

Thanks.

>  }
>
>  static void virtnet_napi_tx_enable(struct virtnet_info *vi,
> @@ -2826,6 +2835,20 @@ static void virtnet_napi_tx_enable(struct virtnet_info *vi,
>  	virtnet_napi_do_enable(vq, napi);
>  }
>
> +static void virtnet_napi_disable(struct virtqueue *vq,
> +				 struct napi_struct *napi)
> +{
> +	struct virtnet_info *vi = vq->vdev->priv;
> +	int q = vq2rxq(vq);
> +	u16 curr_qs;
> +
> +	curr_qs = vi->curr_queue_pairs - vi->xdp_queue_pairs;
> +	if (!vi->xdp_enabled || q < curr_qs)
> +		netif_queue_set_napi(vi->dev, q, NETDEV_QUEUE_TYPE_RX, NULL);
> +
> +	napi_disable(napi);
> +}
> +
>  static void virtnet_napi_tx_disable(struct napi_struct *napi)
>  {
>  	if (napi->weight)
> @@ -2842,7 +2865,8 @@ static void refill_work(struct work_struct *work)
>  	for (i = 0; i < vi->curr_queue_pairs; i++) {
>  		struct receive_queue *rq = &vi->rq[i];
>
> -		napi_disable(&rq->napi);
> +		virtnet_napi_disable(rq->vq, &rq->napi);
> +
>  		still_empty = !try_fill_recv(vi, rq, GFP_KERNEL);
>  		virtnet_napi_enable(rq->vq, &rq->napi);
>
> @@ -3042,7 +3066,7 @@ static int virtnet_poll(struct napi_struct *napi, int budget)
>  static void virtnet_disable_queue_pair(struct virtnet_info *vi, int qp_index)
>  {
>  	virtnet_napi_tx_disable(&vi->sq[qp_index].napi);
> -	napi_disable(&vi->rq[qp_index].napi);
> +	virtnet_napi_disable(vi->rq[qp_index].vq, &vi->rq[qp_index].napi);
>  	xdp_rxq_info_unreg(&vi->rq[qp_index].xdp_rxq);
>  }
>
> @@ -3313,7 +3337,7 @@ static void virtnet_rx_pause(struct virtnet_info *vi, struct receive_queue *rq)
>  	bool running = netif_running(vi->dev);
>
>  	if (running) {
> -		napi_disable(&rq->napi);
> +		virtnet_napi_disable(rq->vq, &rq->napi);
>  		virtnet_cancel_dim(vi, &rq->dim);
>  	}
>  }
> @@ -5932,7 +5956,7 @@ static int virtnet_xdp_set(struct net_device *dev, struct bpf_prog *prog,
>  	/* Make sure NAPI is not using any XDP TX queues for RX. */
>  	if (netif_running(dev)) {
>  		for (i = 0; i < vi->max_queue_pairs; i++) {
> -			napi_disable(&vi->rq[i].napi);
> +			virtnet_napi_disable(vi->rq[i].vq, &vi->rq[i].napi);
>  			virtnet_napi_tx_disable(&vi->sq[i].napi);
>  		}
>  	}
> --
> 2.25.1
>


* Re: [PATCH net-next v2 4/4] virtio_net: Use persistent NAPI config
From: Xuan Zhuo @ 2025-01-16  7:56 UTC
  To: Joe Damato
  Cc: gerhard, jasowang, leiyang, mkarsten, Joe Damato,
	Michael S. Tsirkin, Eugenio Pérez, Andrew Lunn,
	David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	open list:VIRTIO CORE AND NET DRIVERS, open list, netdev

On Thu, 16 Jan 2025 05:52:59 +0000, Joe Damato <jdamato@fastly.com> wrote:
> Use persistent NAPI config so that NAPI IDs are not renumbered as queue
> counts change.
>
> $ sudo ethtool -l ens4  | tail -5 | egrep -i '(current|combined)'
> Current hardware settings:
> Combined:	4
>
> $ ./tools/net/ynl/pyynl/cli.py \
>     --spec Documentation/netlink/specs/netdev.yaml \
>     --dump queue-get --json='{"ifindex": 2}'
> [{'id': 0, 'ifindex': 2, 'napi-id': 8193, 'type': 'rx'},
>  {'id': 1, 'ifindex': 2, 'napi-id': 8194, 'type': 'rx'},
>  {'id': 2, 'ifindex': 2, 'napi-id': 8195, 'type': 'rx'},
>  {'id': 3, 'ifindex': 2, 'napi-id': 8196, 'type': 'rx'},
>  {'id': 0, 'ifindex': 2, 'type': 'tx'},
>  {'id': 1, 'ifindex': 2, 'type': 'tx'},
>  {'id': 2, 'ifindex': 2, 'type': 'tx'},
>  {'id': 3, 'ifindex': 2, 'type': 'tx'}]
>
> Now adjust the queue count, note that the NAPI IDs are not renumbered:
>
> $ sudo ethtool -L ens4 combined 1
> $ ./tools/net/ynl/pyynl/cli.py \
>     --spec Documentation/netlink/specs/netdev.yaml \
>     --dump queue-get --json='{"ifindex": 2}'
> [{'id': 0, 'ifindex': 2, 'napi-id': 8193, 'type': 'rx'},
>  {'id': 0, 'ifindex': 2, 'type': 'tx'}]
>
> $ sudo ethtool -L ens4 combined 8
> $ ./tools/net/ynl/pyynl/cli.py \
>     --spec Documentation/netlink/specs/netdev.yaml \
>     --dump queue-get --json='{"ifindex": 2}'
> [{'id': 0, 'ifindex': 2, 'napi-id': 8193, 'type': 'rx'},
>  {'id': 1, 'ifindex': 2, 'napi-id': 8194, 'type': 'rx'},
>  {'id': 2, 'ifindex': 2, 'napi-id': 8195, 'type': 'rx'},
>  {'id': 3, 'ifindex': 2, 'napi-id': 8196, 'type': 'rx'},
>  {'id': 4, 'ifindex': 2, 'napi-id': 8197, 'type': 'rx'},
>  {'id': 5, 'ifindex': 2, 'napi-id': 8198, 'type': 'rx'},
>  {'id': 6, 'ifindex': 2, 'napi-id': 8199, 'type': 'rx'},
>  {'id': 7, 'ifindex': 2, 'napi-id': 8200, 'type': 'rx'},
>  [...]
>
> Signed-off-by: Joe Damato <jdamato@fastly.com>

Reviewed-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>

> ---
>  v2:
>    - New in this v2. Adds persistent NAPI config so that NAPI IDs are
>      not renumbered and napi_config settings are persisted.
>
>  drivers/net/virtio_net.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index c6fda756dd07..52094596e94b 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -6420,8 +6420,9 @@ static int virtnet_alloc_queues(struct virtnet_info *vi)
>  	INIT_DELAYED_WORK(&vi->refill, refill_work);
>  	for (i = 0; i < vi->max_queue_pairs; i++) {
>  		vi->rq[i].pages = NULL;
> -		netif_napi_add_weight(vi->dev, &vi->rq[i].napi, virtnet_poll,
> -				      napi_weight);
> +		netif_napi_add_config(vi->dev, &vi->rq[i].napi, virtnet_poll,
> +				      i);
> +		vi->rq[i].napi.weight = napi_weight;
>  		netif_napi_add_tx_weight(vi->dev, &vi->sq[i].napi,
>  					 virtnet_poll_tx,
>  					 napi_tx ? napi_weight : 0);
> --
> 2.25.1
>


* Re: [PATCH net-next v2 3/4] virtio_net: Map NAPIs to queues
From: Joe Damato @ 2025-01-16 16:09 UTC
  To: Xuan Zhuo
  Cc: gerhard, jasowang, leiyang, mkarsten, Michael S. Tsirkin,
	Eugenio Pérez, Andrew Lunn, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend,
	open list:VIRTIO CORE AND NET DRIVERS, open list,
	open list:XDP (eXpress Data Path):Keyword:(?:\b|_)xdp(?:\b|_),
	netdev

On Thu, Jan 16, 2025 at 03:53:14PM +0800, Xuan Zhuo wrote:
> On Thu, 16 Jan 2025 05:52:58 +0000, Joe Damato <jdamato@fastly.com> wrote:
> > Use netif_queue_set_napi to map NAPIs to queue IDs so that the mapping
> > can be accessed by user apps.
> >
> > $ ethtool -i ens4 | grep driver
> > driver: virtio_net
> >
> > $ sudo ethtool -L ens4 combined 4
> >
> > $ ./tools/net/ynl/pyynl/cli.py \
> >        --spec Documentation/netlink/specs/netdev.yaml \
> >        --dump queue-get --json='{"ifindex": 2}'
> > [{'id': 0, 'ifindex': 2, 'napi-id': 8289, 'type': 'rx'},
> >  {'id': 1, 'ifindex': 2, 'napi-id': 8290, 'type': 'rx'},
> >  {'id': 2, 'ifindex': 2, 'napi-id': 8291, 'type': 'rx'},
> >  {'id': 3, 'ifindex': 2, 'napi-id': 8292, 'type': 'rx'},
> >  {'id': 0, 'ifindex': 2, 'type': 'tx'},
> >  {'id': 1, 'ifindex': 2, 'type': 'tx'},
> >  {'id': 2, 'ifindex': 2, 'type': 'tx'},
> >  {'id': 3, 'ifindex': 2, 'type': 'tx'}]
> >
> > Note that virtio_net has TX-only NAPIs which do not have NAPI IDs, so
> > the lack of 'napi-id' in the above output is expected.
> >
> > Signed-off-by: Joe Damato <jdamato@fastly.com>
> > ---
> >  v2:
> >    - Eliminate RTNL code paths using the API Jakub introduced in patch 1
> >      of this v2.
> >    - Added virtnet_napi_disable to reduce code duplication as
> >      suggested by Jason Wang.
> >
> >  drivers/net/virtio_net.c | 34 +++++++++++++++++++++++++++++-----
> >  1 file changed, 29 insertions(+), 5 deletions(-)
> >
> > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > index cff18c66b54a..c6fda756dd07 100644
> > --- a/drivers/net/virtio_net.c
> > +++ b/drivers/net/virtio_net.c
> > @@ -2803,9 +2803,18 @@ static void virtnet_napi_do_enable(struct virtqueue *vq,
> >  	local_bh_enable();
> >  }
> >
> > -static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
> > +static void virtnet_napi_enable(struct virtqueue *vq,
> > +				struct napi_struct *napi)
> >  {
> > +	struct virtnet_info *vi = vq->vdev->priv;
> > +	int q = vq2rxq(vq);
> > +	u16 curr_qs;
> > +
> >  	virtnet_napi_do_enable(vq, napi);
> > +
> > +	curr_qs = vi->curr_queue_pairs - vi->xdp_queue_pairs;
> > +	if (!vi->xdp_enabled || q < curr_qs)
> > +		netif_queue_set_napi(vi->dev, q, NETDEV_QUEUE_TYPE_RX, napi);
> 
> So which case is the check of xdp_enabled for?

Based on a previous discussion [1], the NAPIs should not be linked
for in-kernel XDP, but they _should_ be linked for XSK.

I could certainly have misread the virtio_net code (please let me
know if I've gotten it wrong, I'm not an expert), but the three
cases I have in mind are:

  - vi->xdp_enabled = false, which happens when no XDP is being
    used, so the queue number will be < vi->curr_queue_pairs.

  - vi->xdp_enabled = false, which I believe is what happens in the
    XSK case. In this case, the NAPI is linked.

  - vi->xdp_enabled = true, which I believe only happens for
    in-kernel XDP - but not XSK - and in this case, the NAPI should
    NOT be linked.

Thank you for your review and questions about this, I definitely
want to make sure I've gotten it right :)
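
Put as code, the check being discussed reduces to this (a simplified
sketch of the patch's condition, not new logic):

static bool virtnet_should_link_napi(const struct virtnet_info *vi, int q)
{
	u16 curr_qs = vi->curr_queue_pairs - vi->xdp_queue_pairs;

	/* no in-kernel XDP program attached: link every RX queue */
	if (!vi->xdp_enabled)
		return true;

	/* in-kernel XDP attached: link only the stack-visible RX queues */
	return q < curr_qs;
}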

> And I think we should merge this into the last commit.

I kept them separate for two reasons:
  1. Easier to review :)
  2. If a bug were to appear, it'll be easier to bisect the code to
     determine whether the bug is caused by linking the queues to
     NAPIs or by adding support for persistent NAPI config
     parameters.

Having the two features separated makes it easier to understand and
fix, as there have been minor bugs in other drivers with NAPI config
[2].

[1]: https://lore.kernel.org/netdev/20250113135609.13883897@kernel.org/
[2]: https://lore.kernel.org/lkml/38d019dd-b876-4fc1-ba7e-f1eb85ad7360@nvidia.com/


* Re: [PATCH net-next v2 3/4] virtio_net: Map NAPIs to queues
From: Gerhard Engleder @ 2025-01-16 20:28 UTC
  To: Joe Damato, Xuan Zhuo
  Cc: open list:XDP (eXpress Data Path):Keyword:(?:\b|_)xdp(?:\b|_),
	jasowang, leiyang, mkarsten, Michael S. Tsirkin,
	Eugenio Pérez, Andrew Lunn, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend,
	open list:VIRTIO CORE AND NET DRIVERS, open list, netdev

On 16.01.25 17:09, Joe Damato wrote:
> On Thu, Jan 16, 2025 at 03:53:14PM +0800, Xuan Zhuo wrote:
>> On Thu, 16 Jan 2025 05:52:58 +0000, Joe Damato <jdamato@fastly.com> wrote:
>>> Use netif_queue_set_napi to map NAPIs to queue IDs so that the mapping
>>> can be accessed by user apps.
>>>
>>> $ ethtool -i ens4 | grep driver
>>> driver: virtio_net
>>>
>>> $ sudo ethtool -L ens4 combined 4
>>>
>>> $ ./tools/net/ynl/pyynl/cli.py \
>>>         --spec Documentation/netlink/specs/netdev.yaml \
>>>         --dump queue-get --json='{"ifindex": 2}'
>>> [{'id': 0, 'ifindex': 2, 'napi-id': 8289, 'type': 'rx'},
>>>   {'id': 1, 'ifindex': 2, 'napi-id': 8290, 'type': 'rx'},
>>>   {'id': 2, 'ifindex': 2, 'napi-id': 8291, 'type': 'rx'},
>>>   {'id': 3, 'ifindex': 2, 'napi-id': 8292, 'type': 'rx'},
>>>   {'id': 0, 'ifindex': 2, 'type': 'tx'},
>>>   {'id': 1, 'ifindex': 2, 'type': 'tx'},
>>>   {'id': 2, 'ifindex': 2, 'type': 'tx'},
>>>   {'id': 3, 'ifindex': 2, 'type': 'tx'}]
>>>
>>> Note that virtio_net has TX-only NAPIs which do not have NAPI IDs, so
>>> the lack of 'napi-id' in the above output is expected.
>>>
>>> Signed-off-by: Joe Damato <jdamato@fastly.com>
>>> ---
>>>   v2:
>>>     - Eliminate RTNL code paths using the API Jakub introduced in patch 1
>>>       of this v2.
>>>     - Added virtnet_napi_disable to reduce code duplication as
>>>       suggested by Jason Wang.
>>>
>>>   drivers/net/virtio_net.c | 34 +++++++++++++++++++++++++++++-----
>>>   1 file changed, 29 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
>>> index cff18c66b54a..c6fda756dd07 100644
>>> --- a/drivers/net/virtio_net.c
>>> +++ b/drivers/net/virtio_net.c
>>> @@ -2803,9 +2803,18 @@ static void virtnet_napi_do_enable(struct virtqueue *vq,
>>>   	local_bh_enable();
>>>   }
>>>
>>> -static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
>>> +static void virtnet_napi_enable(struct virtqueue *vq,
>>> +				struct napi_struct *napi)
>>>   {
>>> +	struct virtnet_info *vi = vq->vdev->priv;
>>> +	int q = vq2rxq(vq);
>>> +	u16 curr_qs;
>>> +
>>>   	virtnet_napi_do_enable(vq, napi);
>>> +
>>> +	curr_qs = vi->curr_queue_pairs - vi->xdp_queue_pairs;
>>> +	if (!vi->xdp_enabled || q < curr_qs)
>>> +		netif_queue_set_napi(vi->dev, q, NETDEV_QUEUE_TYPE_RX, napi);
>>
>> So which case is the check of xdp_enabled for?
> 
> Based on a previous discussion [1], the NAPIs should not be linked
> for in-kernel XDP, but they _should_ be linked for XSK.
> 
> I could certainly have misread the virtio_net code (please let me
> know if I've gotten it wrong, I'm not an expert), but the three
> cases I have in mind are:
> 
>    - vi->xdp_enabled = false, which happens when no XDP is being
>      used, so the queue number will be < vi->curr_queue_pairs.
> 
>    - vi->xdp_enabled = false, which I believe is what happens in the
>      XSK case. In this case, the NAPI is linked.
> 
>    - vi->xdp_enabled = true, which I believe only happens for
>      in-kernel XDP - but not XSK - and in this case, the NAPI should
>      NOT be linked.

My interpretation based on [1] is that an in-kernel XDP Tx queue is a
queue that is only used if XDP is attached and is not visible to
userspace. The in-kernel XDP Tx queue exists so that XDP packets do
not load the stack's Tx queues. IIRC fbnic has additional queues used
only for XDP Tx. So for stack RX queues I would always link the NAPI,
no matter whether XDP is attached or not. I think most drivers do not
have in-kernel XDP Tx queues. But I'm also not an expert.

Gerhard


* Re: [PATCH net-next v2 3/4] virtio_net: Map NAPIs to queues
From: Jason Wang @ 2025-01-20  1:58 UTC
  To: Xuan Zhuo
  Cc: Joe Damato, gerhard, leiyang, mkarsten, Michael S. Tsirkin,
	Eugenio Pérez, Andrew Lunn, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend,
	open list:VIRTIO CORE AND NET DRIVERS, open list,
	open list:XDP (eXpress Data Path):Keyword:(?:\b|_)xdp(?:\b|_),
	netdev

On Thu, Jan 16, 2025 at 3:57 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> On Thu, 16 Jan 2025 05:52:58 +0000, Joe Damato <jdamato@fastly.com> wrote:
> > Use netif_queue_set_napi to map NAPIs to queue IDs so that the mapping
> > can be accessed by user apps.
> >
> > $ ethtool -i ens4 | grep driver
> > driver: virtio_net
> >
> > $ sudo ethtool -L ens4 combined 4
> >
> > $ ./tools/net/ynl/pyynl/cli.py \
> >        --spec Documentation/netlink/specs/netdev.yaml \
> >        --dump queue-get --json='{"ifindex": 2}'
> > [{'id': 0, 'ifindex': 2, 'napi-id': 8289, 'type': 'rx'},
> >  {'id': 1, 'ifindex': 2, 'napi-id': 8290, 'type': 'rx'},
> >  {'id': 2, 'ifindex': 2, 'napi-id': 8291, 'type': 'rx'},
> >  {'id': 3, 'ifindex': 2, 'napi-id': 8292, 'type': 'rx'},
> >  {'id': 0, 'ifindex': 2, 'type': 'tx'},
> >  {'id': 1, 'ifindex': 2, 'type': 'tx'},
> >  {'id': 2, 'ifindex': 2, 'type': 'tx'},
> >  {'id': 3, 'ifindex': 2, 'type': 'tx'}]
> >
> > Note that virtio_net has TX-only NAPIs which do not have NAPI IDs, so
> > the lack of 'napi-id' in the above output is expected.
> >
> > Signed-off-by: Joe Damato <jdamato@fastly.com>
> > ---
> >  v2:
> >    - Eliminate RTNL code paths using the API Jakub introduced in patch 1
> >      of this v2.
> >    - Added virtnet_napi_disable to reduce code duplication as
> >      suggested by Jason Wang.
> >
> >  drivers/net/virtio_net.c | 34 +++++++++++++++++++++++++++++-----
> >  1 file changed, 29 insertions(+), 5 deletions(-)
> >
> > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > index cff18c66b54a..c6fda756dd07 100644
> > --- a/drivers/net/virtio_net.c
> > +++ b/drivers/net/virtio_net.c
> > @@ -2803,9 +2803,18 @@ static void virtnet_napi_do_enable(struct virtqueue *vq,
> >       local_bh_enable();
> >  }
> >
> > -static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
> > +static void virtnet_napi_enable(struct virtqueue *vq,
> > +                             struct napi_struct *napi)
> >  {
> > +     struct virtnet_info *vi = vq->vdev->priv;
> > +     int q = vq2rxq(vq);
> > +     u16 curr_qs;
> > +
> >       virtnet_napi_do_enable(vq, napi);
> > +
> > +     curr_qs = vi->curr_queue_pairs - vi->xdp_queue_pairs;
> > +     if (!vi->xdp_enabled || q < curr_qs)
> > +             netif_queue_set_napi(vi->dev, q, NETDEV_QUEUE_TYPE_RX, napi);
>
> So which case is the check of xdp_enabled for?

+1, and I think the XDP-related checks should be done by the caller, not here.

>
> And I think we should merge this into the last commit.
>
> Thanks.
>

Thanks

> >  }
> >
> >  static void virtnet_napi_tx_enable(struct virtnet_info *vi,
> > @@ -2826,6 +2835,20 @@ static void virtnet_napi_tx_enable(struct virtnet_info *vi,
> >       virtnet_napi_do_enable(vq, napi);
> >  }
> >
> > +static void virtnet_napi_disable(struct virtqueue *vq,
> > +                              struct napi_struct *napi)
> > +{
> > +     struct virtnet_info *vi = vq->vdev->priv;
> > +     int q = vq2rxq(vq);
> > +     u16 curr_qs;
> > +
> > +     curr_qs = vi->curr_queue_pairs - vi->xdp_queue_pairs;
> > +     if (!vi->xdp_enabled || q < curr_qs)
> > +             netif_queue_set_napi(vi->dev, q, NETDEV_QUEUE_TYPE_RX, NULL);
> > +
> > +     napi_disable(napi);
> > +}
> > +
> >  static void virtnet_napi_tx_disable(struct napi_struct *napi)
> >  {
> >       if (napi->weight)
> > @@ -2842,7 +2865,8 @@ static void refill_work(struct work_struct *work)
> >       for (i = 0; i < vi->curr_queue_pairs; i++) {
> >               struct receive_queue *rq = &vi->rq[i];
> >
> > -             napi_disable(&rq->napi);
> > +             virtnet_napi_disable(rq->vq, &rq->napi);
> > +
> >               still_empty = !try_fill_recv(vi, rq, GFP_KERNEL);
> >               virtnet_napi_enable(rq->vq, &rq->napi);
> >
> > @@ -3042,7 +3066,7 @@ static int virtnet_poll(struct napi_struct *napi, int budget)
> >  static void virtnet_disable_queue_pair(struct virtnet_info *vi, int qp_index)
> >  {
> >       virtnet_napi_tx_disable(&vi->sq[qp_index].napi);
> > -     napi_disable(&vi->rq[qp_index].napi);
> > +     virtnet_napi_disable(vi->rq[qp_index].vq, &vi->rq[qp_index].napi);
> >       xdp_rxq_info_unreg(&vi->rq[qp_index].xdp_rxq);
> >  }
> >
> > @@ -3313,7 +3337,7 @@ static void virtnet_rx_pause(struct virtnet_info *vi, struct receive_queue *rq)
> >       bool running = netif_running(vi->dev);
> >
> >       if (running) {
> > -             napi_disable(&rq->napi);
> > +             virtnet_napi_disable(rq->vq, &rq->napi);
> >               virtnet_cancel_dim(vi, &rq->dim);
> >       }
> >  }
> > @@ -5932,7 +5956,7 @@ static int virtnet_xdp_set(struct net_device *dev, struct bpf_prog *prog,
> >       /* Make sure NAPI is not using any XDP TX queues for RX. */
> >       if (netif_running(dev)) {
> >               for (i = 0; i < vi->max_queue_pairs; i++) {
> > -                     napi_disable(&vi->rq[i].napi);
> > +                     virtnet_napi_disable(vi->rq[i].vq, &vi->rq[i].napi);
> >                       virtnet_napi_tx_disable(&vi->sq[i].napi);
> >               }
> >       }
> > --
> > 2.25.1
> >
>



* Re: [PATCH net-next v2 3/4] virtio_net: Map NAPIs to queues
From: Joe Damato @ 2025-01-21 17:55 UTC
  To: Jason Wang
  Cc: Xuan Zhuo, gerhard, leiyang, mkarsten, Michael S. Tsirkin,
	Eugenio Pérez, Andrew Lunn, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend,
	open list:VIRTIO CORE AND NET DRIVERS, open list,
	open list:XDP (eXpress Data Path):Keyword:(?:\b|_)xdp(?:\b|_),
	netdev

On Mon, Jan 20, 2025 at 09:58:13AM +0800, Jason Wang wrote:
> On Thu, Jan 16, 2025 at 3:57 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> >
> > On Thu, 16 Jan 2025 05:52:58 +0000, Joe Damato <jdamato@fastly.com> wrote:
> > > Use netif_queue_set_napi to map NAPIs to queue IDs so that the mapping
> > > can be accessed by user apps.
> > >
> > > $ ethtool -i ens4 | grep driver
> > > driver: virtio_net
> > >
> > > $ sudo ethtool -L ens4 combined 4
> > >
> > > $ ./tools/net/ynl/pyynl/cli.py \
> > >        --spec Documentation/netlink/specs/netdev.yaml \
> > >        --dump queue-get --json='{"ifindex": 2}'
> > > [{'id': 0, 'ifindex': 2, 'napi-id': 8289, 'type': 'rx'},
> > >  {'id': 1, 'ifindex': 2, 'napi-id': 8290, 'type': 'rx'},
> > >  {'id': 2, 'ifindex': 2, 'napi-id': 8291, 'type': 'rx'},
> > >  {'id': 3, 'ifindex': 2, 'napi-id': 8292, 'type': 'rx'},
> > >  {'id': 0, 'ifindex': 2, 'type': 'tx'},
> > >  {'id': 1, 'ifindex': 2, 'type': 'tx'},
> > >  {'id': 2, 'ifindex': 2, 'type': 'tx'},
> > >  {'id': 3, 'ifindex': 2, 'type': 'tx'}]
> > >
> > > Note that virtio_net has TX-only NAPIs which do not have NAPI IDs, so
> > > the lack of 'napi-id' in the above output is expected.
> > >
> > > Signed-off-by: Joe Damato <jdamato@fastly.com>
> > > ---
> > >  v2:
> > >    - Eliminate RTNL code paths using the API Jakub introduced in patch 1
> > >      of this v2.
> > >    - Added virtnet_napi_disable to reduce code duplication as
> > >      suggested by Jason Wang.
> > >
> > >  drivers/net/virtio_net.c | 34 +++++++++++++++++++++++++++++-----
> > >  1 file changed, 29 insertions(+), 5 deletions(-)
> > >
> > > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > > index cff18c66b54a..c6fda756dd07 100644
> > > --- a/drivers/net/virtio_net.c
> > > +++ b/drivers/net/virtio_net.c
> > > @@ -2803,9 +2803,18 @@ static void virtnet_napi_do_enable(struct virtqueue *vq,
> > >       local_bh_enable();
> > >  }
> > >
> > > -static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
> > > +static void virtnet_napi_enable(struct virtqueue *vq,
> > > +                             struct napi_struct *napi)
> > >  {
> > > +     struct virtnet_info *vi = vq->vdev->priv;
> > > +     int q = vq2rxq(vq);
> > > +     u16 curr_qs;
> > > +
> > >       virtnet_napi_do_enable(vq, napi);
> > > +
> > > +     curr_qs = vi->curr_queue_pairs - vi->xdp_queue_pairs;
> > > +     if (!vi->xdp_enabled || q < curr_qs)
> > > +             netif_queue_set_napi(vi->dev, q, NETDEV_QUEUE_TYPE_RX, napi);
> >
> > So which case is the check of xdp_enabled for?
> 
> +1, and I think the XDP-related checks should be done by the caller, not here.

Based on the reply further down in the thread, it seems that these
queues should be mapped regardless of whether an XDP program is
attached or not, IIUC.

Feel free to reply there, if you disagree/have comments.

> >
> > And I think we should merge this into the last commit.
> >
> > Thanks.
> >
> 
> Thanks

FWIW, I don't plan to merge the commits, due to the reason mentioned
further down in the thread.

Thanks.


* Re: [PATCH net-next v2 3/4] virtio_net: Map NAPIs to queues
From: Joe Damato @ 2025-01-21 17:57 UTC
  To: Gerhard Engleder
  Cc: Xuan Zhuo,
	open list:XDP (eXpress Data Path):Keyword:(?:\b|_)xdp(?:\b|_),
	jasowang, leiyang, mkarsten, Michael S. Tsirkin,
	Eugenio Pérez, Andrew Lunn, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend,
	open list:VIRTIO CORE AND NET DRIVERS, open list, netdev

On Thu, Jan 16, 2025 at 09:28:07PM +0100, Gerhard Engleder wrote:
> On 16.01.25 17:09, Joe Damato wrote:
> > On Thu, Jan 16, 2025 at 03:53:14PM +0800, Xuan Zhuo wrote:
> > > On Thu, 16 Jan 2025 05:52:58 +0000, Joe Damato <jdamato@fastly.com> wrote:
> > > > Use netif_queue_set_napi to map NAPIs to queue IDs so that the mapping
> > > > can be accessed by user apps.
> > > > 
> > > > $ ethtool -i ens4 | grep driver
> > > > driver: virtio_net
> > > > 
> > > > $ sudo ethtool -L ens4 combined 4
> > > > 
> > > > $ ./tools/net/ynl/pyynl/cli.py \
> > > >         --spec Documentation/netlink/specs/netdev.yaml \
> > > >         --dump queue-get --json='{"ifindex": 2}'
> > > > [{'id': 0, 'ifindex': 2, 'napi-id': 8289, 'type': 'rx'},
> > > >   {'id': 1, 'ifindex': 2, 'napi-id': 8290, 'type': 'rx'},
> > > >   {'id': 2, 'ifindex': 2, 'napi-id': 8291, 'type': 'rx'},
> > > >   {'id': 3, 'ifindex': 2, 'napi-id': 8292, 'type': 'rx'},
> > > >   {'id': 0, 'ifindex': 2, 'type': 'tx'},
> > > >   {'id': 1, 'ifindex': 2, 'type': 'tx'},
> > > >   {'id': 2, 'ifindex': 2, 'type': 'tx'},
> > > >   {'id': 3, 'ifindex': 2, 'type': 'tx'}]
> > > > 
> > > > Note that virtio_net has TX-only NAPIs which do not have NAPI IDs, so
> > > > the lack of 'napi-id' in the above output is expected.
> > > > 
> > > > Signed-off-by: Joe Damato <jdamato@fastly.com>
> > > > ---
> > > >   v2:
> > > >     - Eliminate RTNL code paths using the API Jakub introduced in patch 1
> > > >       of this v2.
> > > >     - Added virtnet_napi_disable to reduce code duplication as
> > > >       suggested by Jason Wang.
> > > > 
> > > >   drivers/net/virtio_net.c | 34 +++++++++++++++++++++++++++++-----
> > > >   1 file changed, 29 insertions(+), 5 deletions(-)
> > > > 
> > > > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > > > index cff18c66b54a..c6fda756dd07 100644
> > > > --- a/drivers/net/virtio_net.c
> > > > +++ b/drivers/net/virtio_net.c
> > > > @@ -2803,9 +2803,18 @@ static void virtnet_napi_do_enable(struct virtqueue *vq,
> > > >   	local_bh_enable();
> > > >   }
> > > > 
> > > > -static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
> > > > +static void virtnet_napi_enable(struct virtqueue *vq,
> > > > +				struct napi_struct *napi)
> > > >   {
> > > > +	struct virtnet_info *vi = vq->vdev->priv;
> > > > +	int q = vq2rxq(vq);
> > > > +	u16 curr_qs;
> > > > +
> > > >   	virtnet_napi_do_enable(vq, napi);
> > > > +
> > > > +	curr_qs = vi->curr_queue_pairs - vi->xdp_queue_pairs;
> > > > +	if (!vi->xdp_enabled || q < curr_qs)
> > > > +		netif_queue_set_napi(vi->dev, q, NETDEV_QUEUE_TYPE_RX, napi);
> > > 
> > > So which case is the check of xdp_enabled for?
> > 
> > Based on a previous discussion [1], the NAPIs should not be linked
> > for in-kernel XDP, but they _should_ be linked for XSK.
> > 
> > I could certainly have misread the virtio_net code (please let me
> > know if I've gotten it wrong, I'm not an expert), but the three
> > cases I have in mind are:
> > 
> >    - vi->xdp_enabled = false, which happens when no XDP is being
> >      used, so the queue number will be < vi->curr_queue_pairs.
> > 
> >    - vi->xdp_enabled = false, which I believe is what happens in the
> >      XSK case. In this case, the NAPI is linked.
> > 
> >    - vi->xdp_enabled = true, which I believe only happens for
> >      in-kernel XDP - but not XSK - and in this case, the NAPI should
> >      NOT be linked.
> 
> My interpretation based on [1] is that an in-kernel XDP Tx queue is a
> queue that is only used if XDP is attached and is not visible to
> userspace. The in-kernel XDP Tx queue exists so that XDP packets do
> not load the stack's Tx queues. IIRC fbnic has additional queues used
> only for XDP Tx. So for stack RX queues I would always link the NAPI,
> no matter whether XDP is attached or not. I think most drivers do not
> have in-kernel XDP Tx queues. But I'm also not an expert.

I think you are probably right, so I'll send an RFC (since net-next
is now closed) with the change you've suggested, after I test it.

In this case, the change is simply removing the if statement
altogether and always mapping the NAPIs to queues.
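
Concretely, that would look roughly like this (an untested sketch,
pending the RFC):

static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
{
	struct virtnet_info *vi = vq->vdev->priv;

	virtnet_napi_do_enable(vq, napi);

	/* always link stack RX queues, whether or not XDP is attached */
	netif_queue_set_napi(vi->dev, vq2rxq(vq), NETDEV_QUEUE_TYPE_RX, napi);
}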

