* [PATCH net-next 0/3] virtio_net: Link queues to NAPIs
From: Joe Damato @ 2025-01-10 20:26 UTC (permalink / raw)
To: netdev
Cc: mkarsten, Joe Damato, Andrew Lunn, David S. Miller, Eric Dumazet,
Eugenio Pérez, Jakub Kicinski, Jason Wang, open list,
Michael S. Tsirkin, Paolo Abeni,
open list:VIRTIO CORE AND NET DRIVERS, Xuan Zhuo
Greetings:
Recently [1], Jakub mentioned that there are a few drivers which do not
yet map queues to NAPIs.
While I don't have any of the other hardware mentioned, I do happen to
have a virtio_net lying around ;)
I've attempted to link queues to NAPIs, taking care to hold RTNL when it
seemed that the path was not already holding it.
Note: It seems virtio_net uses TX-only NAPIs which do not have NAPI IDs.
As such, I've left the TX NAPIs unset (as opposed to setting them to 0).
See the commit message of patch 3 for example output showing what I mean.
Thanks,
Joe
[1]: https://lore.kernel.org/netdev/20250109084301.2445a3e3@kernel.org/
Joe Damato (3):
virtio_net: Prepare for NAPI to queue mapping
virtio_net: Hold RTNL for NAPI to queue mapping
virtio_net: Map NAPIs to queues
drivers/net/virtio_net.c | 48 ++++++++++++++++++++++++++++++++++++----
1 file changed, 44 insertions(+), 4 deletions(-)
base-commit: 7b24f164cf005b9649138ef6de94aaac49c9f3d1
--
2.25.1
* [PATCH net-next 1/3] virtio_net: Prepare for NAPI to queue mapping
From: Joe Damato @ 2025-01-10 20:26 UTC (permalink / raw)
To: netdev
Cc: mkarsten, Joe Damato, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Andrew Lunn, David S. Miller, Eric Dumazet,
Jakub Kicinski, Paolo Abeni,
open list:VIRTIO CORE AND NET DRIVERS, open list
Slight refactor to prepare the code for NAPI to queue mapping. No
functional changes.
Signed-off-by: Joe Damato <jdamato@fastly.com>
---
drivers/net/virtio_net.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 7646ddd9bef7..cff18c66b54a 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -2789,7 +2789,8 @@ static void skb_recv_done(struct virtqueue *rvq)
virtqueue_napi_schedule(&rq->napi, rvq);
}
-static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
+static void virtnet_napi_do_enable(struct virtqueue *vq,
+ struct napi_struct *napi)
{
napi_enable(napi);
@@ -2802,6 +2803,11 @@ static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
local_bh_enable();
}
+static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
+{
+ virtnet_napi_do_enable(vq, napi);
+}
+
static void virtnet_napi_tx_enable(struct virtnet_info *vi,
struct virtqueue *vq,
struct napi_struct *napi)
@@ -2817,7 +2823,7 @@ static void virtnet_napi_tx_enable(struct virtnet_info *vi,
return;
}
- return virtnet_napi_enable(vq, napi);
+ virtnet_napi_do_enable(vq, napi);
}
static void virtnet_napi_tx_disable(struct napi_struct *napi)
--
2.25.1
* [PATCH net-next 2/3] virtio_net: Hold RTNL for NAPI to queue mapping
From: Joe Damato @ 2025-01-10 20:26 UTC (permalink / raw)
To: netdev
Cc: mkarsten, Joe Damato, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Andrew Lunn, David S. Miller, Eric Dumazet,
Jakub Kicinski, Paolo Abeni,
open list:VIRTIO CORE AND NET DRIVERS, open list
Prepare for NAPI to queue mapping by holding RTNL in code paths where
NAPIs will be mapped to queue IDs and RTNL is not currently held.
Signed-off-by: Joe Damato <jdamato@fastly.com>
---
drivers/net/virtio_net.c | 17 ++++++++++++++---
1 file changed, 14 insertions(+), 3 deletions(-)
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index cff18c66b54a..4e88d352d3eb 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -2803,11 +2803,17 @@ static void virtnet_napi_do_enable(struct virtqueue *vq,
local_bh_enable();
}
-static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
+static void virtnet_napi_enable_lock(struct virtqueue *vq,
+ struct napi_struct *napi)
{
virtnet_napi_do_enable(vq, napi);
}
+static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
+{
+ virtnet_napi_enable_lock(vq, napi);
+}
+
static void virtnet_napi_tx_enable(struct virtnet_info *vi,
struct virtqueue *vq,
struct napi_struct *napi)
@@ -2844,7 +2850,7 @@ static void refill_work(struct work_struct *work)
napi_disable(&rq->napi);
still_empty = !try_fill_recv(vi, rq, GFP_KERNEL);
- virtnet_napi_enable(rq->vq, &rq->napi);
+ virtnet_napi_enable_lock(rq->vq, &rq->napi);
/* In theory, this can happen: if we don't get any buffers in
* we will *never* try to fill again.
@@ -5621,8 +5627,11 @@ static void virtnet_freeze_down(struct virtio_device *vdev)
netif_tx_lock_bh(vi->dev);
netif_device_detach(vi->dev);
netif_tx_unlock_bh(vi->dev);
- if (netif_running(vi->dev))
+ if (netif_running(vi->dev)) {
+ rtnl_lock();
virtnet_close(vi->dev);
+ rtnl_unlock();
+ }
}
static int init_vqs(struct virtnet_info *vi);
@@ -5642,7 +5651,9 @@ static int virtnet_restore_up(struct virtio_device *vdev)
enable_rx_mode_work(vi);
if (netif_running(vi->dev)) {
+ rtnl_lock();
err = virtnet_open(vi->dev);
+ rtnl_unlock();
if (err)
return err;
}
--
2.25.1
* [PATCH net-next 3/3] virtio_net: Map NAPIs to queues
From: Joe Damato @ 2025-01-10 20:26 UTC (permalink / raw)
To: netdev
Cc: mkarsten, Joe Damato, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Andrew Lunn, David S. Miller, Eric Dumazet,
Jakub Kicinski, Paolo Abeni,
open list:VIRTIO CORE AND NET DRIVERS, open list
Use netif_queue_set_napi to map NAPIs to queue IDs so that the mapping
can be accessed by user apps.
$ ethtool -i ens4 | grep driver
driver: virtio_net
$ sudo ethtool -L ens4 combined 4
$ ./tools/net/ynl/pyynl/cli.py \
--spec Documentation/netlink/specs/netdev.yaml \
--dump queue-get --json='{"ifindex": 2}'
[{'id': 0, 'ifindex': 2, 'napi-id': 8289, 'type': 'rx'},
{'id': 1, 'ifindex': 2, 'napi-id': 8290, 'type': 'rx'},
{'id': 2, 'ifindex': 2, 'napi-id': 8291, 'type': 'rx'},
{'id': 3, 'ifindex': 2, 'napi-id': 8292, 'type': 'rx'},
{'id': 0, 'ifindex': 2, 'type': 'tx'},
{'id': 1, 'ifindex': 2, 'type': 'tx'},
{'id': 2, 'ifindex': 2, 'type': 'tx'},
{'id': 3, 'ifindex': 2, 'type': 'tx'}]
Note that virtio_net has TX-only NAPIs which do not have NAPI IDs, so
the lack of 'napi-id' in the above output is expected.
Signed-off-by: Joe Damato <jdamato@fastly.com>
---
drivers/net/virtio_net.c | 29 ++++++++++++++++++++++++++---
1 file changed, 26 insertions(+), 3 deletions(-)
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 4e88d352d3eb..8f0f26cc5a94 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -2804,14 +2804,28 @@ static void virtnet_napi_do_enable(struct virtqueue *vq,
}
static void virtnet_napi_enable_lock(struct virtqueue *vq,
- struct napi_struct *napi)
+ struct napi_struct *napi,
+ bool need_rtnl)
{
+ struct virtnet_info *vi = vq->vdev->priv;
+ int q = vq2rxq(vq);
+
virtnet_napi_do_enable(vq, napi);
+
+ if (q < vi->curr_queue_pairs) {
+ if (need_rtnl)
+ rtnl_lock();
+
+ netif_queue_set_napi(vi->dev, q, NETDEV_QUEUE_TYPE_RX, napi);
+
+ if (need_rtnl)
+ rtnl_unlock();
+ }
}
static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
{
- virtnet_napi_enable_lock(vq, napi);
+ virtnet_napi_enable_lock(vq, napi, false);
}
static void virtnet_napi_tx_enable(struct virtnet_info *vi,
@@ -2848,9 +2862,13 @@ static void refill_work(struct work_struct *work)
for (i = 0; i < vi->curr_queue_pairs; i++) {
struct receive_queue *rq = &vi->rq[i];
+ rtnl_lock();
+ netif_queue_set_napi(vi->dev, i, NETDEV_QUEUE_TYPE_RX, NULL);
+ rtnl_unlock();
napi_disable(&rq->napi);
+
still_empty = !try_fill_recv(vi, rq, GFP_KERNEL);
- virtnet_napi_enable_lock(rq->vq, &rq->napi);
+ virtnet_napi_enable_lock(rq->vq, &rq->napi, true);
/* In theory, this can happen: if we don't get any buffers in
* we will *never* try to fill again.
@@ -3048,6 +3066,7 @@ static int virtnet_poll(struct napi_struct *napi, int budget)
static void virtnet_disable_queue_pair(struct virtnet_info *vi, int qp_index)
{
virtnet_napi_tx_disable(&vi->sq[qp_index].napi);
+ netif_queue_set_napi(vi->dev, qp_index, NETDEV_QUEUE_TYPE_RX, NULL);
napi_disable(&vi->rq[qp_index].napi);
xdp_rxq_info_unreg(&vi->rq[qp_index].xdp_rxq);
}
@@ -3317,8 +3336,10 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
static void virtnet_rx_pause(struct virtnet_info *vi, struct receive_queue *rq)
{
bool running = netif_running(vi->dev);
+ int q = vq2rxq(rq->vq);
if (running) {
+ netif_queue_set_napi(vi->dev, q, NETDEV_QUEUE_TYPE_RX, NULL);
napi_disable(&rq->napi);
virtnet_cancel_dim(vi, &rq->dim);
}
@@ -5943,6 +5964,8 @@ static int virtnet_xdp_set(struct net_device *dev, struct bpf_prog *prog,
/* Make sure NAPI is not using any XDP TX queues for RX. */
if (netif_running(dev)) {
for (i = 0; i < vi->max_queue_pairs; i++) {
+ netif_queue_set_napi(vi->dev, i, NETDEV_QUEUE_TYPE_RX,
+ NULL);
napi_disable(&vi->rq[i].napi);
virtnet_napi_tx_disable(&vi->sq[i].napi);
}
--
2.25.1
* Re: [PATCH net-next 1/3] virtio_net: Prepare for NAPI to queue mapping
From: Gerhard Engleder @ 2025-01-10 22:21 UTC (permalink / raw)
To: Joe Damato, netdev
Cc: mkarsten, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Andrew Lunn, David S. Miller, Eric Dumazet,
Jakub Kicinski, Paolo Abeni,
open list:VIRTIO CORE AND NET DRIVERS, open list
On 10.01.25 21:26, Joe Damato wrote:
> Slight refactor to prepare the code for NAPI to queue mapping. No
> functional changes.
>
> Signed-off-by: Joe Damato <jdamato@fastly.com>
> ---
> drivers/net/virtio_net.c | 10 ++++++++--
> 1 file changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index 7646ddd9bef7..cff18c66b54a 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -2789,7 +2789,8 @@ static void skb_recv_done(struct virtqueue *rvq)
> virtqueue_napi_schedule(&rq->napi, rvq);
> }
>
> -static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
> +static void virtnet_napi_do_enable(struct virtqueue *vq,
> + struct napi_struct *napi)
> {
> napi_enable(napi);
>
> @@ -2802,6 +2803,11 @@ static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
> local_bh_enable();
> }
>
> +static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
> +{
> + virtnet_napi_do_enable(vq, napi);
> +}
> +
> static void virtnet_napi_tx_enable(struct virtnet_info *vi,
> struct virtqueue *vq,
> struct napi_struct *napi)
> @@ -2817,7 +2823,7 @@ static void virtnet_napi_tx_enable(struct virtnet_info *vi,
> return;
> }
>
> - return virtnet_napi_enable(vq, napi);
> + virtnet_napi_do_enable(vq, napi);
> }
>
> static void virtnet_napi_tx_disable(struct napi_struct *napi)
Reviewed-by: Gerhard Engleder <gerhard@engleder-embedded.com>
* Re: [PATCH net-next 2/3] virtio_net: Hold RTNL for NAPI to queue mapping
From: Gerhard Engleder @ 2025-01-10 22:22 UTC (permalink / raw)
To: Joe Damato, netdev
Cc: mkarsten, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Andrew Lunn, David S. Miller, Eric Dumazet,
Jakub Kicinski, Paolo Abeni,
open list:VIRTIO CORE AND NET DRIVERS, open list
On 10.01.25 21:26, Joe Damato wrote:
> Prepare for NAPI to queue mapping by holding RTNL in code paths where
> NAPIs will be mapped to queue IDs and RTNL is not currently held.
>
> Signed-off-by: Joe Damato <jdamato@fastly.com>
> ---
> drivers/net/virtio_net.c | 17 ++++++++++++++---
> 1 file changed, 14 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index cff18c66b54a..4e88d352d3eb 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -2803,11 +2803,17 @@ static void virtnet_napi_do_enable(struct virtqueue *vq,
> local_bh_enable();
> }
>
> -static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
> +static void virtnet_napi_enable_lock(struct virtqueue *vq,
> + struct napi_struct *napi)
> {
> virtnet_napi_do_enable(vq, napi);
> }
>
> +static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
> +{
> + virtnet_napi_enable_lock(vq, napi);
> +}
> +
> static void virtnet_napi_tx_enable(struct virtnet_info *vi,
> struct virtqueue *vq,
> struct napi_struct *napi)
> @@ -2844,7 +2850,7 @@ static void refill_work(struct work_struct *work)
>
> napi_disable(&rq->napi);
> still_empty = !try_fill_recv(vi, rq, GFP_KERNEL);
> - virtnet_napi_enable(rq->vq, &rq->napi);
> + virtnet_napi_enable_lock(rq->vq, &rq->napi);
>
> /* In theory, this can happen: if we don't get any buffers in
> * we will *never* try to fill again.
> @@ -5621,8 +5627,11 @@ static void virtnet_freeze_down(struct virtio_device *vdev)
> netif_tx_lock_bh(vi->dev);
> netif_device_detach(vi->dev);
> netif_tx_unlock_bh(vi->dev);
> - if (netif_running(vi->dev))
> + if (netif_running(vi->dev)) {
> + rtnl_lock();
> virtnet_close(vi->dev);
> + rtnl_unlock();
> + }
> }
>
> static int init_vqs(struct virtnet_info *vi);
> @@ -5642,7 +5651,9 @@ static int virtnet_restore_up(struct virtio_device *vdev)
> enable_rx_mode_work(vi);
>
> if (netif_running(vi->dev)) {
> + rtnl_lock();
> err = virtnet_open(vi->dev);
> + rtnl_unlock();
> if (err)
> return err;
> }
Reviewed-by: Gerhard Engleder <gerhard@engleder-embedded.com>
* Re: [PATCH net-next 3/3] virtio_net: Map NAPIs to queues
From: Gerhard Engleder @ 2025-01-10 22:25 UTC (permalink / raw)
To: Joe Damato, netdev
Cc: mkarsten, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Andrew Lunn, David S. Miller, Eric Dumazet,
Jakub Kicinski, Paolo Abeni,
open list:VIRTIO CORE AND NET DRIVERS, open list
On 10.01.25 21:26, Joe Damato wrote:
> Use netif_queue_set_napi to map NAPIs to queue IDs so that the mapping
> can be accessed by user apps.
>
> $ ethtool -i ens4 | grep driver
> driver: virtio_net
>
> $ sudo ethtool -L ens4 combined 4
>
> $ ./tools/net/ynl/pyynl/cli.py \
> --spec Documentation/netlink/specs/netdev.yaml \
> --dump queue-get --json='{"ifindex": 2}'
> [{'id': 0, 'ifindex': 2, 'napi-id': 8289, 'type': 'rx'},
> {'id': 1, 'ifindex': 2, 'napi-id': 8290, 'type': 'rx'},
> {'id': 2, 'ifindex': 2, 'napi-id': 8291, 'type': 'rx'},
> {'id': 3, 'ifindex': 2, 'napi-id': 8292, 'type': 'rx'},
> {'id': 0, 'ifindex': 2, 'type': 'tx'},
> {'id': 1, 'ifindex': 2, 'type': 'tx'},
> {'id': 2, 'ifindex': 2, 'type': 'tx'},
> {'id': 3, 'ifindex': 2, 'type': 'tx'}]
>
> Note that virtio_net has TX-only NAPIs which do not have NAPI IDs, so
> the lack of 'napi-id' in the above output is expected.
>
> Signed-off-by: Joe Damato <jdamato@fastly.com>
> ---
> drivers/net/virtio_net.c | 29 ++++++++++++++++++++++++++---
> 1 file changed, 26 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index 4e88d352d3eb..8f0f26cc5a94 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -2804,14 +2804,28 @@ static void virtnet_napi_do_enable(struct virtqueue *vq,
> }
>
> static void virtnet_napi_enable_lock(struct virtqueue *vq,
> - struct napi_struct *napi)
> + struct napi_struct *napi,
> + bool need_rtnl)
> {
> + struct virtnet_info *vi = vq->vdev->priv;
> + int q = vq2rxq(vq);
> +
> virtnet_napi_do_enable(vq, napi);
> +
> + if (q < vi->curr_queue_pairs) {
> + if (need_rtnl)
> + rtnl_lock();
> +
> + netif_queue_set_napi(vi->dev, q, NETDEV_QUEUE_TYPE_RX, napi);
> +
> + if (need_rtnl)
> + rtnl_unlock();
> + }
> }
>
> static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
> {
> - virtnet_napi_enable_lock(vq, napi);
> + virtnet_napi_enable_lock(vq, napi, false);
> }
>
> static void virtnet_napi_tx_enable(struct virtnet_info *vi,
> @@ -2848,9 +2862,13 @@ static void refill_work(struct work_struct *work)
> for (i = 0; i < vi->curr_queue_pairs; i++) {
> struct receive_queue *rq = &vi->rq[i];
>
> + rtnl_lock();
> + netif_queue_set_napi(vi->dev, i, NETDEV_QUEUE_TYPE_RX, NULL);
> + rtnl_unlock();
> napi_disable(&rq->napi);
> +
> still_empty = !try_fill_recv(vi, rq, GFP_KERNEL);
> - virtnet_napi_enable_lock(rq->vq, &rq->napi);
> + virtnet_napi_enable_lock(rq->vq, &rq->napi, true);
>
> /* In theory, this can happen: if we don't get any buffers in
> * we will *never* try to fill again.
> @@ -3048,6 +3066,7 @@ static int virtnet_poll(struct napi_struct *napi, int budget)
> static void virtnet_disable_queue_pair(struct virtnet_info *vi, int qp_index)
> {
> virtnet_napi_tx_disable(&vi->sq[qp_index].napi);
> + netif_queue_set_napi(vi->dev, qp_index, NETDEV_QUEUE_TYPE_RX, NULL);
> napi_disable(&vi->rq[qp_index].napi);
> xdp_rxq_info_unreg(&vi->rq[qp_index].xdp_rxq);
> }
> @@ -3317,8 +3336,10 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
> static void virtnet_rx_pause(struct virtnet_info *vi, struct receive_queue *rq)
> {
> bool running = netif_running(vi->dev);
> + int q = vq2rxq(rq->vq);
>
> if (running) {
> + netif_queue_set_napi(vi->dev, q, NETDEV_QUEUE_TYPE_RX, NULL);
> napi_disable(&rq->napi);
> virtnet_cancel_dim(vi, &rq->dim);
> }
> @@ -5943,6 +5964,8 @@ static int virtnet_xdp_set(struct net_device *dev, struct bpf_prog *prog,
> /* Make sure NAPI is not using any XDP TX queues for RX. */
> if (netif_running(dev)) {
> for (i = 0; i < vi->max_queue_pairs; i++) {
> + netif_queue_set_napi(vi->dev, i, NETDEV_QUEUE_TYPE_RX,
> + NULL);
> napi_disable(&vi->rq[i].napi);
> virtnet_napi_tx_disable(&vi->sq[i].napi);
> }
Reviewed-by: Gerhard Engleder <gerhard@engleder-embedded.com>
* Re: [PATCH net-next 3/3] virtio_net: Map NAPIs to queues
From: Jason Wang @ 2025-01-13 4:05 UTC (permalink / raw)
To: Joe Damato
Cc: netdev, mkarsten, Michael S. Tsirkin, Xuan Zhuo,
Eugenio Pérez, Andrew Lunn, David S. Miller, Eric Dumazet,
Jakub Kicinski, Paolo Abeni,
open list:VIRTIO CORE AND NET DRIVERS, open list
On Sat, Jan 11, 2025 at 4:26 AM Joe Damato <jdamato@fastly.com> wrote:
>
> Use netif_queue_set_napi to map NAPIs to queue IDs so that the mapping
> can be accessed by user apps.
>
> $ ethtool -i ens4 | grep driver
> driver: virtio_net
>
> $ sudo ethtool -L ens4 combined 4
>
> $ ./tools/net/ynl/pyynl/cli.py \
> --spec Documentation/netlink/specs/netdev.yaml \
> --dump queue-get --json='{"ifindex": 2}'
> [{'id': 0, 'ifindex': 2, 'napi-id': 8289, 'type': 'rx'},
> {'id': 1, 'ifindex': 2, 'napi-id': 8290, 'type': 'rx'},
> {'id': 2, 'ifindex': 2, 'napi-id': 8291, 'type': 'rx'},
> {'id': 3, 'ifindex': 2, 'napi-id': 8292, 'type': 'rx'},
> {'id': 0, 'ifindex': 2, 'type': 'tx'},
> {'id': 1, 'ifindex': 2, 'type': 'tx'},
> {'id': 2, 'ifindex': 2, 'type': 'tx'},
> {'id': 3, 'ifindex': 2, 'type': 'tx'}]
>
> Note that virtio_net has TX-only NAPIs which do not have NAPI IDs, so
> the lack of 'napi-id' in the above output is expected.
>
> Signed-off-by: Joe Damato <jdamato@fastly.com>
> ---
> drivers/net/virtio_net.c | 29 ++++++++++++++++++++++++++---
> 1 file changed, 26 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index 4e88d352d3eb..8f0f26cc5a94 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -2804,14 +2804,28 @@ static void virtnet_napi_do_enable(struct virtqueue *vq,
> }
>
> static void virtnet_napi_enable_lock(struct virtqueue *vq,
> - struct napi_struct *napi)
> + struct napi_struct *napi,
> + bool need_rtnl)
> {
> + struct virtnet_info *vi = vq->vdev->priv;
> + int q = vq2rxq(vq);
> +
> virtnet_napi_do_enable(vq, napi);
> +
> + if (q < vi->curr_queue_pairs) {
> + if (need_rtnl)
> + rtnl_lock();
Can we tweak the caller to call rtnl_lock() instead to avoid this trick?
> +
> + netif_queue_set_napi(vi->dev, q, NETDEV_QUEUE_TYPE_RX, napi);
> +
> + if (need_rtnl)
> + rtnl_unlock();
> + }
> }
>
> static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
> {
> - virtnet_napi_enable_lock(vq, napi);
> + virtnet_napi_enable_lock(vq, napi, false);
> }
>
> static void virtnet_napi_tx_enable(struct virtnet_info *vi,
> @@ -2848,9 +2862,13 @@ static void refill_work(struct work_struct *work)
> for (i = 0; i < vi->curr_queue_pairs; i++) {
> struct receive_queue *rq = &vi->rq[i];
>
> + rtnl_lock();
> + netif_queue_set_napi(vi->dev, i, NETDEV_QUEUE_TYPE_RX, NULL);
> + rtnl_unlock();
> napi_disable(&rq->napi);
I wonder if it's better to have a helper that sets the napi to NULL as
well as calling napi_disable().
> +
> still_empty = !try_fill_recv(vi, rq, GFP_KERNEL);
> - virtnet_napi_enable_lock(rq->vq, &rq->napi);
> + virtnet_napi_enable_lock(rq->vq, &rq->napi, true);
>
> /* In theory, this can happen: if we don't get any buffers in
> * we will *never* try to fill again.
> @@ -3048,6 +3066,7 @@ static int virtnet_poll(struct napi_struct *napi, int budget)
> static void virtnet_disable_queue_pair(struct virtnet_info *vi, int qp_index)
> {
> virtnet_napi_tx_disable(&vi->sq[qp_index].napi);
> + netif_queue_set_napi(vi->dev, qp_index, NETDEV_QUEUE_TYPE_RX, NULL);
> napi_disable(&vi->rq[qp_index].napi);
> xdp_rxq_info_unreg(&vi->rq[qp_index].xdp_rxq);
> }
> @@ -3317,8 +3336,10 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
> static void virtnet_rx_pause(struct virtnet_info *vi, struct receive_queue *rq)
> {
> bool running = netif_running(vi->dev);
> + int q = vq2rxq(rq->vq);
>
> if (running) {
> + netif_queue_set_napi(vi->dev, q, NETDEV_QUEUE_TYPE_RX, NULL);
> napi_disable(&rq->napi);
> virtnet_cancel_dim(vi, &rq->dim);
> }
> @@ -5943,6 +5964,8 @@ static int virtnet_xdp_set(struct net_device *dev, struct bpf_prog *prog,
> /* Make sure NAPI is not using any XDP TX queues for RX. */
> if (netif_running(dev)) {
> for (i = 0; i < vi->max_queue_pairs; i++) {
> + netif_queue_set_napi(vi->dev, i, NETDEV_QUEUE_TYPE_RX,
> + NULL);
> napi_disable(&vi->rq[i].napi);
> virtnet_napi_tx_disable(&vi->sq[i].napi);
> }
> --
> 2.25.1
>
Thanks
* Re: [PATCH net-next 0/3] virtio_net: Link queues to NAPIs
From: Lei Yang @ 2025-01-13 6:03 UTC (permalink / raw)
To: Joe Damato
Cc: netdev, mkarsten, Andrew Lunn, David S. Miller, Eric Dumazet,
Eugenio Pérez, Jakub Kicinski, Jason Wang, open list,
Michael S. Tsirkin, Paolo Abeni,
open list:VIRTIO CORE AND NET DRIVERS, Xuan Zhuo
I tested this series of patches with virtio-net regression tests;
everything works fine.
Tested-by: Lei Yang <leiyang@redhat.com>
On Sat, Jan 11, 2025 at 4:26 AM Joe Damato <jdamato@fastly.com> wrote:
>
> Greetings:
>
> Recently [1], Jakub mentioned that there are a few drivers which do not
> yet map queues to NAPIs.
>
> While I don't have any of the other hardware mentioned, I do happen to
> have a virtio_net lying around ;)
>
> I've attempted to link queues to NAPIs, taking care to hold RTNL when it
> seemed that the path was not already holding it.
>
> Note: It seems virtio_net uses TX-only NAPIs which do not have NAPI IDs.
> As such, I've left the TX NAPIs unset (as opposed to setting them to 0).
>
> See the commit message of patch 3 for example output showing what I mean.
>
> Thanks,
> Joe
>
> [1]: https://lore.kernel.org/netdev/20250109084301.2445a3e3@kernel.org/
>
> Joe Damato (3):
> virtio_net: Prepare for NAPI to queue mapping
> virtio_net: Hold RTNL for NAPI to queue mapping
> virtio_net: Map NAPIs to queues
>
> drivers/net/virtio_net.c | 48 ++++++++++++++++++++++++++++++++++++----
> 1 file changed, 44 insertions(+), 4 deletions(-)
>
>
> base-commit: 7b24f164cf005b9649138ef6de94aaac49c9f3d1
> --
> 2.25.1
>
>
* Re: [PATCH net-next 3/3] virtio_net: Map NAPIs to queues
From: Joe Damato @ 2025-01-13 17:30 UTC (permalink / raw)
To: Jason Wang
Cc: netdev, mkarsten, Michael S. Tsirkin, Xuan Zhuo,
Eugenio Pérez, Andrew Lunn, David S. Miller, Eric Dumazet,
Jakub Kicinski, Paolo Abeni,
open list:VIRTIO CORE AND NET DRIVERS, open list
On Mon, Jan 13, 2025 at 12:05:51PM +0800, Jason Wang wrote:
> On Sat, Jan 11, 2025 at 4:26 AM Joe Damato <jdamato@fastly.com> wrote:
> >
> > Use netif_queue_set_napi to map NAPIs to queue IDs so that the mapping
> > can be accessed by user apps.
> >
> > $ ethtool -i ens4 | grep driver
> > driver: virtio_net
> >
> > $ sudo ethtool -L ens4 combined 4
> >
> > $ ./tools/net/ynl/pyynl/cli.py \
> > --spec Documentation/netlink/specs/netdev.yaml \
> > --dump queue-get --json='{"ifindex": 2}'
> > [{'id': 0, 'ifindex': 2, 'napi-id': 8289, 'type': 'rx'},
> > {'id': 1, 'ifindex': 2, 'napi-id': 8290, 'type': 'rx'},
> > {'id': 2, 'ifindex': 2, 'napi-id': 8291, 'type': 'rx'},
> > {'id': 3, 'ifindex': 2, 'napi-id': 8292, 'type': 'rx'},
> > {'id': 0, 'ifindex': 2, 'type': 'tx'},
> > {'id': 1, 'ifindex': 2, 'type': 'tx'},
> > {'id': 2, 'ifindex': 2, 'type': 'tx'},
> > {'id': 3, 'ifindex': 2, 'type': 'tx'}]
> >
> > Note that virtio_net has TX-only NAPIs which do not have NAPI IDs, so
> > the lack of 'napi-id' in the above output is expected.
> >
> > Signed-off-by: Joe Damato <jdamato@fastly.com>
> > ---
> > drivers/net/virtio_net.c | 29 ++++++++++++++++++++++++++---
> > 1 file changed, 26 insertions(+), 3 deletions(-)
> >
> > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > index 4e88d352d3eb..8f0f26cc5a94 100644
> > --- a/drivers/net/virtio_net.c
> > +++ b/drivers/net/virtio_net.c
> > @@ -2804,14 +2804,28 @@ static void virtnet_napi_do_enable(struct virtqueue *vq,
> > }
> >
> > static void virtnet_napi_enable_lock(struct virtqueue *vq,
> > - struct napi_struct *napi)
> > + struct napi_struct *napi,
> > + bool need_rtnl)
> > {
> > + struct virtnet_info *vi = vq->vdev->priv;
> > + int q = vq2rxq(vq);
> > +
> > virtnet_napi_do_enable(vq, napi);
> > +
> > + if (q < vi->curr_queue_pairs) {
> > + if (need_rtnl)
> > + rtnl_lock();
>
> Can we tweak the caller to call rtnl_lock() instead to avoid this trick?
The major problem is that if the caller calls rtnl_lock() before
calling virtnet_napi_enable_lock, then virtnet_napi_do_enable (and
thus napi_enable) happen under the lock.
Jakub mentioned in a recent change [1] that napi_enable may soon
need to sleep.
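(In other words, the ordering being avoided is roughly:

	rtnl_lock();
	virtnet_napi_enable_lock(vq, napi); /* napi_enable() now under RTNL */
	rtnl_unlock();

hence the flag, so that only the netif_queue_set_napi() call is wrapped.)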
Given the above constraints, the only way to avoid the "need_rtnl"
would be to refactor the code much more, placing calls (or wrappers)
to netif_queue_set_napi in many locations.
IMHO: This implementation seemed cleaner than putting calls to
netif_queue_set_napi throughout the driver.
Please let me know how you'd like to proceed on this.
[1]: https://lore.kernel.org/netdev/20250111024742.3680902-1-kuba@kernel.org/
> > +
> > + netif_queue_set_napi(vi->dev, q, NETDEV_QUEUE_TYPE_RX, napi);
> > +
> > + if (need_rtnl)
> > + rtnl_unlock();
> > + }
> > }
> >
> > static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
> > {
> > - virtnet_napi_enable_lock(vq, napi);
> > + virtnet_napi_enable_lock(vq, napi, false);
> > }
> >
> > static void virtnet_napi_tx_enable(struct virtnet_info *vi,
> > @@ -2848,9 +2862,13 @@ static void refill_work(struct work_struct *work)
> > for (i = 0; i < vi->curr_queue_pairs; i++) {
> > struct receive_queue *rq = &vi->rq[i];
> >
> > + rtnl_lock();
> > + netif_queue_set_napi(vi->dev, i, NETDEV_QUEUE_TYPE_RX, NULL);
> > + rtnl_unlock();
> > napi_disable(&rq->napi);
>
> I wonder if it's better to have a helper that sets the napi to NULL as
> well as calling napi_disable().
There are a couple places where this code is repeated, so I could do
that, but I'd probably employ the same "trick" as above with a flag
for "need_rtnl" in the helper.
I can send a v2 which adds a virtnet_napi_disable_lock and call it
from the 4 sites I see that can use it (virtnet_xdp_set,
virtnet_rx_pause, virtnet_disable_queue_pair, refill_work).
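Roughly something like this (sketch only; the exact signature, and the
curr_queue_pairs check mirroring the enable path, are assumptions rather
than final code):

	static void virtnet_napi_disable_lock(struct receive_queue *rq,
					      bool need_rtnl)
	{
		struct virtnet_info *vi = rq->vq->vdev->priv;
		int q = vq2rxq(rq->vq);

		if (q < vi->curr_queue_pairs) {
			if (need_rtnl)
				rtnl_lock();

			/* Unpublish the queue -> NAPI mapping before disabling. */
			netif_queue_set_napi(vi->dev, q, NETDEV_QUEUE_TYPE_RX,
					     NULL);

			if (need_rtnl)
				rtnl_unlock();
		}

		napi_disable(&rq->napi);
	}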
But first.... we need to agree on the flag being passed in to hold
rtnl :)
Please let me know.
Thanks for the review.
* Re: [PATCH net-next 3/3] virtio_net: Map NAPIs to queues
From: Jakub Kicinski @ 2025-01-13 22:04 UTC (permalink / raw)
To: Joe Damato
Cc: Jason Wang, netdev, mkarsten, Michael S. Tsirkin, Xuan Zhuo,
Eugenio Pérez, Andrew Lunn, David S. Miller, Eric Dumazet,
Paolo Abeni, open list:VIRTIO CORE AND NET DRIVERS, open list
On Mon, 13 Jan 2025 09:30:20 -0800 Joe Damato wrote:
> > > static void virtnet_napi_enable_lock(struct virtqueue *vq,
> > > - struct napi_struct *napi)
> > > + struct napi_struct *napi,
> > > + bool need_rtnl)
> > > {
> > > + struct virtnet_info *vi = vq->vdev->priv;
> > > + int q = vq2rxq(vq);
> > > +
> > > virtnet_napi_do_enable(vq, napi);
> > > +
> > > + if (q < vi->curr_queue_pairs) {
> > > + if (need_rtnl)
> > > + rtnl_lock();
> >
> > Can we tweak the caller to call rtnl_lock() instead to avoid this trick?
>
> The major problem is that if the caller calls rtnl_lock() before
> calling virtnet_napi_enable_lock, then virtnet_napi_do_enable (and
> thus napi_enable) happen under the lock.
>
> Jakub mentioned in a recent change [1] that napi_enable may soon
> need to sleep.
>
> Given the above constraints, the only way to avoid the "need_rtnl"
> would be to refactor the code much more, placing calls (or wrappers)
> to netif_queue_set_napi in many locations.
>
> IMHO: This implementation seemed cleaner than putting calls to
> netif_queue_set_napi throughout the driver.
>
> Please let me know how you'd like to proceed on this.
>
> [1]: https://lore.kernel.org/netdev/20250111024742.3680902-1-kuba@kernel.org/
I'm going to make netif_queue_set_napi() take netdev->lock, and remove
the rtnl_lock requirement ~this week. If we need conditional locking
perhaps we're better off waiting?
* Re: [PATCH net-next 3/3] virtio_net: Map NAPIs to queues
From: Joe Damato @ 2025-01-13 22:23 UTC (permalink / raw)
To: Jakub Kicinski
Cc: Jason Wang, netdev, mkarsten, Michael S. Tsirkin, Xuan Zhuo,
Eugenio Pérez, Andrew Lunn, David S. Miller, Eric Dumazet,
Paolo Abeni, open list:VIRTIO CORE AND NET DRIVERS, open list
On Mon, Jan 13, 2025 at 02:04:46PM -0800, Jakub Kicinski wrote:
> On Mon, 13 Jan 2025 09:30:20 -0800 Joe Damato wrote:
> > > > static void virtnet_napi_enable_lock(struct virtqueue *vq,
> > > > - struct napi_struct *napi)
> > > > + struct napi_struct *napi,
> > > > + bool need_rtnl)
> > > > {
> > > > + struct virtnet_info *vi = vq->vdev->priv;
> > > > + int q = vq2rxq(vq);
> > > > +
> > > > virtnet_napi_do_enable(vq, napi);
> > > > +
> > > > + if (q < vi->curr_queue_pairs) {
> > > > + if (need_rtnl)
> > > > + rtnl_lock();
> > >
> > > Can we tweak the caller to call rtnl_lock() instead to avoid this trick?
> >
> > The major problem is that if the caller calls rtnl_lock() before
> > calling virtnet_napi_enable_lock, then virtnet_napi_do_enable (and
> > thus napi_enable) happen under the lock.
> >
> > Jakub mentioned in a recent change [1] that napi_enable may soon
> > need to sleep.
> >
> > Given the above constraints, the only way to avoid the "need_rtnl"
> > would be to refactor the code much more, placing calls (or wrappers)
> > to netif_queue_set_napi in many locations.
> >
> > IMHO: This implementation seemed cleaner than putting calls to
> > netif_queue_set_napi throughout the driver.
> >
> > Please let me know how you'd like to proceed on this.
> >
> > [1]: https://lore.kernel.org/netdev/20250111024742.3680902-1-kuba@kernel.org/
>
> I'm going to make netif_queue_set_napi() take netdev->lock, and remove
> the rtnl_lock requirement ~this week. If we need conditional locking
> perhaps we're better off waiting?
That seems reasonable to me and I can wait.
Please CC me on that series so I can take a look and I'll adjust the
v2 of this series to avoid the locking once your series is merged.
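For reference, I'd expect the enable path in v2 to collapse back to
roughly the following once the rtnl requirement is gone (sketch only,
assuming netif_queue_set_napi() handles its own locking internally):

	static void virtnet_napi_enable(struct virtqueue *vq,
					struct napi_struct *napi)
	{
		struct virtnet_info *vi = vq->vdev->priv;
		int q = vq2rxq(vq);

		virtnet_napi_do_enable(vq, napi);

		/* No rtnl_lock() here; the core takes netdev->lock itself. */
		if (q < vi->curr_queue_pairs)
			netif_queue_set_napi(vi->dev, q, NETDEV_QUEUE_TYPE_RX,
					     napi);
	}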
Thanks!
* Re: [PATCH net-next 3/3] virtio_net: Map NAPIs to queues
From: Jakub Kicinski @ 2025-01-13 22:32 UTC (permalink / raw)
To: Joe Damato
Cc: Jason Wang, netdev, mkarsten, Michael S. Tsirkin, Xuan Zhuo,
Eugenio Pérez, Andrew Lunn, David S. Miller, Eric Dumazet,
Paolo Abeni, open list:VIRTIO CORE AND NET DRIVERS, open list
On Mon, 13 Jan 2025 14:23:56 -0800 Joe Damato wrote:
> Please CC me on that series so I can take a look and I'll adjust the
> v2 of this series to avoid the locking once your series is merged.
Will do! I'll send the first chunk as soon as Comcast restores
the internet at my home :|