* [PATCH net-next v6 0/4] virtio-net: Link queues to NAPIs
@ 2025-03-07 1:12 Joe Damato
From: Joe Damato @ 2025-03-07 1:12 UTC (permalink / raw)
To: netdev
Cc: mkarsten, gerhard, jasowang, xuanzhuo, kuba, mst, leiyang,
Joe Damato, Alexei Starovoitov, Andrew Lunn,
open list:XDP (eXpress Data Path):Keyword:(?:\b|_)xdp(?:\b|_),
Daniel Borkmann, David S. Miller, Eric Dumazet,
Eugenio Pérez, Jesper Dangaard Brouer, John Fastabend,
open list, Paolo Abeni, open list:VIRTIO CORE AND NET DRIVERS
Greetings:
Welcome to v6. The only patch updated is patch 3. See changelog below.
Jakub recently commented [1] that I should not hold this series, which
links virtio-net queues to NAPIs, behind other important ongoing work,
and suggested I re-spin it. So here we are :)
As per the discussion on v3 [2], both RX and TX NAPIs now use the
API to link queues to NAPIs. Since TX-only NAPIs don't have a NAPI ID,
commit 6597e8d35851 ("netdev-genl: Elide napi_id when not present") now
correctly elides the napi-id for TX-only NAPIs (instead of printing
zero) when the queues and NAPIs are linked.
As per the discussion on v4 [3], patch 3 has been refactored to hold
RTNL only in the specific locations that need it, as Jason requested.
As per the discussion on v5 [4], patch 3 now leaves refill_work
as-is and does not use the API to unlink and relink queues and NAPIs. A
comment has been left, as suggested by Jakub [5], to guide future work.
See the commit message of patch 3 for an example of how to get the NAPI
to queue mapping information.
See the commit message of patch 4 for an example of how NAPI IDs are
persistent despite queue count changes.
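As a sketch of why user applications care (not part of this series;
SO_INCOMING_NAPI_ID is a long-standing Linux socket option, and the
fallback define below is an assumption for older headers): once queues
and NAPIs are linked, a program can map a connected socket back to its
RX queue by matching the socket's NAPI ID against the queue-get dump
shown in patch 3:

#include <stdio.h>
#include <sys/socket.h>

#ifndef SO_INCOMING_NAPI_ID
#define SO_INCOMING_NAPI_ID 56 /* value on most Linux architectures */
#endif

/* Return the NAPI ID that last handled RX for this socket, or 0 if
 * none has been recorded yet.
 */
static unsigned int socket_napi_id(int fd)
{
	unsigned int napi_id = 0;
	socklen_t len = sizeof(napi_id);

	if (getsockopt(fd, SOL_SOCKET, SO_INCOMING_NAPI_ID, &napi_id, &len))
		perror("getsockopt(SO_INCOMING_NAPI_ID)");
	return napi_id;
}

Matching the returned value against the 'napi-id' fields in the
queue-get output identifies the RX queue feeding the socket.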
Thanks,
Joe
[1]: https://lore.kernel.org/netdev/20250221142650.3c74dcac@kernel.org/
[2]: https://lore.kernel.org/netdev/20250127142400.24eca319@kernel.org/
[3]: https://lore.kernel.org/netdev/CACGkMEv=ejJnOWDnAu7eULLvrqXjkMkTL4cbi-uCTUhCpKN_GA@mail.gmail.com/
[4]: https://lore.kernel.org/lkml/Z8X15hxz8t-vXpPU@LQ3V64L9R2/
[5]: https://lore.kernel.org/lkml/20250303160355.5f8d82d8@kernel.org/
v6:
- Patch 3 has been updated to avoid using the queue linking API from
refill_work, and a comment has been added to guide future work on
the code.
v5: https://lore.kernel.org/lkml/20250227185017.206785-1-jdamato@fastly.com/
- Patch 1 added Acked-by's from Michael and Jason. Added Tested-by
from Lei. No functional changes.
- Patch 2 added Acked-by's from Michael and Jason. Added Tested-by
from Lei. No functional changes.
- Patch 3:
- Refactored as Jason requested, eliminating the
virtnet_queue_set_napi helper entirely, and explicitly holding
RTNL in the 3 locations where needed (refill_work, freeze, and
restore).
- Commit message updated to outline the known paths at the time the
commit was written.
- Patch 4 added Acked-by from Michael. Added Tested-by from Lei. No
functional changes.
v4: https://lore.kernel.org/lkml/20250225020455.212895-1-jdamato@fastly.com/
- Dropped Jakub's patch (previously patch 1).
- Significant refactor from v3 affecting patches 1-3.
- Patch 4 added tags from Jason and Gerhard.
rfcv3: https://lore.kernel.org/netdev/20250121191047.269844-1-jdamato@fastly.com/
- patch 3:
- Removed the xdp checks completely since, as Gerhard Engleder
pointed out, they are likely not necessary.
- patch 4:
- Added Xuan Zhuo's Reviewed-by.
v2: https://lore.kernel.org/netdev/20250116055302.14308-1-jdamato@fastly.com/
- patch 1:
- New in v2, from Jakub.
- patch 2:
- Previously patch 1, unchanged from v1.
- Added Gerhard Engleder's Reviewed-by.
- Added Lei Yang's Tested-by.
- patch 3:
- Introduced virtnet_napi_disable to eliminate duplicated code
in virtnet_xdp_set, virtnet_rx_pause, virtnet_disable_queue_pair,
refill_work as suggested by Jason Wang.
- As a result of the above refactor, dropped Reviewed-by and
Tested-by from patch 3.
- patch 4:
- New in v2. Adds persistent NAPI configuration. See commit message
for more details.
v1: https://lore.kernel.org/netdev/20250110202605.429475-1-jdamato@fastly.com/
Joe Damato (4):
virtio-net: Refactor napi_enable paths
virtio-net: Refactor napi_disable paths
virtio-net: Map NAPIs to queues
virtio_net: Use persistent NAPI config
drivers/net/virtio_net.c | 101 ++++++++++++++++++++++++++++-----------
1 file changed, 74 insertions(+), 27 deletions(-)
base-commit: 8e0e8bef484160ac01ea7bcc3122cc1f0405c982
--
2.45.2
* [PATCH net-next v6 1/4] virtio-net: Refactor napi_enable paths
@ 2025-03-07 1:12 ` Joe Damato
From: Joe Damato @ 2025-03-07 1:12 UTC (permalink / raw)
To: netdev
Cc: mkarsten, gerhard, jasowang, xuanzhuo, kuba, mst, leiyang,
Joe Damato, Eugenio Pérez, Andrew Lunn, David S. Miller,
Eric Dumazet, Paolo Abeni, open list:VIRTIO CORE AND NET DRIVERS,
open list
Refactor virtnet_napi_enable and virtnet_napi_tx_enable to take a struct
receive_queue and a struct send_queue, respectively. Create a helper,
virtnet_napi_do_enable, which contains the logic to enable a NAPI.
Signed-off-by: Joe Damato <jdamato@fastly.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Tested-by: Lei Yang <leiyang@redhat.com>
---
drivers/net/virtio_net.c | 37 +++++++++++++++++++++----------------
1 file changed, 21 insertions(+), 16 deletions(-)
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index ac26a6201c44..133b004c7a9a 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -2807,7 +2807,8 @@ static void skb_recv_done(struct virtqueue *rvq)
virtqueue_napi_schedule(&rq->napi, rvq);
}
-static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
+static void virtnet_napi_do_enable(struct virtqueue *vq,
+ struct napi_struct *napi)
{
napi_enable(napi);
@@ -2820,10 +2821,16 @@ static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
local_bh_enable();
}
-static void virtnet_napi_tx_enable(struct virtnet_info *vi,
- struct virtqueue *vq,
- struct napi_struct *napi)
+static void virtnet_napi_enable(struct receive_queue *rq)
{
+ virtnet_napi_do_enable(rq->vq, &rq->napi);
+}
+
+static void virtnet_napi_tx_enable(struct send_queue *sq)
+{
+ struct virtnet_info *vi = sq->vq->vdev->priv;
+ struct napi_struct *napi = &sq->napi;
+
if (!napi->weight)
return;
@@ -2835,7 +2842,7 @@ static void virtnet_napi_tx_enable(struct virtnet_info *vi,
return;
}
- return virtnet_napi_enable(vq, napi);
+ virtnet_napi_do_enable(sq->vq, napi);
}
static void virtnet_napi_tx_disable(struct napi_struct *napi)
@@ -2856,7 +2863,7 @@ static void refill_work(struct work_struct *work)
napi_disable(&rq->napi);
still_empty = !try_fill_recv(vi, rq, GFP_KERNEL);
- virtnet_napi_enable(rq->vq, &rq->napi);
+ virtnet_napi_enable(rq);
/* In theory, this can happen: if we don't get any buffers in
* we will *never* try to fill again.
@@ -3073,8 +3080,8 @@ static int virtnet_enable_queue_pair(struct virtnet_info *vi, int qp_index)
if (err < 0)
goto err_xdp_reg_mem_model;
- virtnet_napi_enable(vi->rq[qp_index].vq, &vi->rq[qp_index].napi);
- virtnet_napi_tx_enable(vi, vi->sq[qp_index].vq, &vi->sq[qp_index].napi);
+ virtnet_napi_enable(&vi->rq[qp_index]);
+ virtnet_napi_tx_enable(&vi->sq[qp_index]);
return 0;
@@ -3339,7 +3346,7 @@ static void virtnet_rx_resume(struct virtnet_info *vi, struct receive_queue *rq)
schedule_delayed_work(&vi->refill, 0);
if (running)
- virtnet_napi_enable(rq->vq, &rq->napi);
+ virtnet_napi_enable(rq);
}
static int virtnet_rx_resize(struct virtnet_info *vi,
@@ -3402,7 +3409,7 @@ static void virtnet_tx_resume(struct virtnet_info *vi, struct send_queue *sq)
__netif_tx_unlock_bh(txq);
if (running)
- virtnet_napi_tx_enable(vi, sq->vq, &sq->napi);
+ virtnet_napi_tx_enable(sq);
}
static int virtnet_tx_resize(struct virtnet_info *vi, struct send_queue *sq,
@@ -5983,9 +5990,8 @@ static int virtnet_xdp_set(struct net_device *dev, struct bpf_prog *prog,
if (old_prog)
bpf_prog_put(old_prog);
if (netif_running(dev)) {
- virtnet_napi_enable(vi->rq[i].vq, &vi->rq[i].napi);
- virtnet_napi_tx_enable(vi, vi->sq[i].vq,
- &vi->sq[i].napi);
+ virtnet_napi_enable(&vi->rq[i]);
+ virtnet_napi_tx_enable(&vi->sq[i]);
}
}
@@ -6000,9 +6006,8 @@ static int virtnet_xdp_set(struct net_device *dev, struct bpf_prog *prog,
if (netif_running(dev)) {
for (i = 0; i < vi->max_queue_pairs; i++) {
- virtnet_napi_enable(vi->rq[i].vq, &vi->rq[i].napi);
- virtnet_napi_tx_enable(vi, vi->sq[i].vq,
- &vi->sq[i].napi);
+ virtnet_napi_enable(&vi->rq[i]);
+ virtnet_napi_tx_enable(&vi->sq[i]);
}
}
if (prog)
--
2.45.2
* [PATCH net-next v6 2/4] virtio-net: Refactor napi_disable paths
@ 2025-03-07 1:12 ` Joe Damato
From: Joe Damato @ 2025-03-07 1:12 UTC (permalink / raw)
To: netdev
Cc: mkarsten, gerhard, jasowang, xuanzhuo, kuba, mst, leiyang,
Joe Damato, Eugenio Pérez, Andrew Lunn, David S. Miller,
Eric Dumazet, Paolo Abeni, open list:VIRTIO CORE AND NET DRIVERS,
open list
Create a virtnet_napi_disable helper and refactor virtnet_napi_tx_disable
to take a struct send_queue.
Signed-off-by: Joe Damato <jdamato@fastly.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Tested-by: Lei Yang <leiyang@redhat.com>
---
drivers/net/virtio_net.c | 25 +++++++++++++++++--------
1 file changed, 17 insertions(+), 8 deletions(-)
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 133b004c7a9a..e578885c1093 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -2845,12 +2845,21 @@ static void virtnet_napi_tx_enable(struct send_queue *sq)
virtnet_napi_do_enable(sq->vq, napi);
}
-static void virtnet_napi_tx_disable(struct napi_struct *napi)
+static void virtnet_napi_tx_disable(struct send_queue *sq)
{
+ struct napi_struct *napi = &sq->napi;
+
if (napi->weight)
napi_disable(napi);
}
+static void virtnet_napi_disable(struct receive_queue *rq)
+{
+ struct napi_struct *napi = &rq->napi;
+
+ napi_disable(napi);
+}
+
static void refill_work(struct work_struct *work)
{
struct virtnet_info *vi =
@@ -2861,7 +2870,7 @@ static void refill_work(struct work_struct *work)
for (i = 0; i < vi->curr_queue_pairs; i++) {
struct receive_queue *rq = &vi->rq[i];
- napi_disable(&rq->napi);
+ virtnet_napi_disable(rq);
still_empty = !try_fill_recv(vi, rq, GFP_KERNEL);
virtnet_napi_enable(rq);
@@ -3060,8 +3069,8 @@ static int virtnet_poll(struct napi_struct *napi, int budget)
static void virtnet_disable_queue_pair(struct virtnet_info *vi, int qp_index)
{
- virtnet_napi_tx_disable(&vi->sq[qp_index].napi);
- napi_disable(&vi->rq[qp_index].napi);
+ virtnet_napi_tx_disable(&vi->sq[qp_index]);
+ virtnet_napi_disable(&vi->rq[qp_index]);
xdp_rxq_info_unreg(&vi->rq[qp_index].xdp_rxq);
}
@@ -3333,7 +3342,7 @@ static void virtnet_rx_pause(struct virtnet_info *vi, struct receive_queue *rq)
bool running = netif_running(vi->dev);
if (running) {
- napi_disable(&rq->napi);
+ virtnet_napi_disable(rq);
virtnet_cancel_dim(vi, &rq->dim);
}
}
@@ -3375,7 +3384,7 @@ static void virtnet_tx_pause(struct virtnet_info *vi, struct send_queue *sq)
qindex = sq - vi->sq;
if (running)
- virtnet_napi_tx_disable(&sq->napi);
+ virtnet_napi_tx_disable(sq);
txq = netdev_get_tx_queue(vi->dev, qindex);
@@ -5952,8 +5961,8 @@ static int virtnet_xdp_set(struct net_device *dev, struct bpf_prog *prog,
/* Make sure NAPI is not using any XDP TX queues for RX. */
if (netif_running(dev)) {
for (i = 0; i < vi->max_queue_pairs; i++) {
- napi_disable(&vi->rq[i].napi);
- virtnet_napi_tx_disable(&vi->sq[i].napi);
+ virtnet_napi_disable(&vi->rq[i]);
+ virtnet_napi_tx_disable(&vi->sq[i]);
}
}
--
2.45.2
* [PATCH net-next v6 3/4] virtio-net: Map NAPIs to queues
@ 2025-03-07 1:12 ` Joe Damato
From: Joe Damato @ 2025-03-07 1:12 UTC (permalink / raw)
To: netdev
Cc: mkarsten, gerhard, jasowang, xuanzhuo, kuba, mst, leiyang,
Joe Damato, Eugenio Pérez, Andrew Lunn, David S. Miller,
Eric Dumazet, Paolo Abeni, open list:VIRTIO CORE AND NET DRIVERS,
open list
Use netif_queue_set_napi to map NAPIs to queue IDs so that the mapping
can be accessed by user apps. Note that netif_queue_set_napi currently
requires RTNL, so care must be taken to ensure RTNL is held on paths
where this API might be reached.
The paths in the driver where this API can be reached appear to be:
- ndo_open, ndo_close, which hold RTNL so no driver change is needed.
- rx_pause, rx_resume, tx_pause, tx_resume are reached either via
an ethtool ioctl or via XSK - neither path requires a driver change.
- power management paths (which call open and close), which have been
updated to hold/release RTNL.
$ ethtool -i ens4 | grep driver
driver: virtio_net
$ sudo ethtool -L ens4 combined 4
$ ./tools/net/ynl/pyynl/cli.py \
--spec Documentation/netlink/specs/netdev.yaml \
--dump queue-get --json='{"ifindex": 2}'
[{'id': 0, 'ifindex': 2, 'napi-id': 8289, 'type': 'rx'},
{'id': 1, 'ifindex': 2, 'napi-id': 8290, 'type': 'rx'},
{'id': 2, 'ifindex': 2, 'napi-id': 8291, 'type': 'rx'},
{'id': 3, 'ifindex': 2, 'napi-id': 8292, 'type': 'rx'},
{'id': 0, 'ifindex': 2, 'type': 'tx'},
{'id': 1, 'ifindex': 2, 'type': 'tx'},
{'id': 2, 'ifindex': 2, 'type': 'tx'},
{'id': 3, 'ifindex': 2, 'type': 'tx'}]
Note that virtio_net has TX-only NAPIs which do not have NAPI IDs, so
the lack of 'napi-id' in the above output is expected.
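As an aside (hedged; napi-get is a separate netdev-genl op and not part
of this series): the same CLI can dump the NAPIs themselves, which on
recent kernels also reports each NAPI's IRQ, so the queue -> NAPI -> IRQ
chain can be walked:

$ ./tools/net/ynl/pyynl/cli.py \
    --spec Documentation/netlink/specs/netdev.yaml \
    --dump napi-get --json='{"ifindex": 2}'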
Signed-off-by: Joe Damato <jdamato@fastly.com>
---
drivers/net/virtio_net.c | 40 ++++++++++++++++++++++++++++++++++++----
1 file changed, 36 insertions(+), 4 deletions(-)
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index e578885c1093..7bd63a677123 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -2823,13 +2823,18 @@ static void virtnet_napi_do_enable(struct virtqueue *vq,
static void virtnet_napi_enable(struct receive_queue *rq)
{
+ struct virtnet_info *vi = rq->vq->vdev->priv;
+ int qidx = vq2rxq(rq->vq);
+
virtnet_napi_do_enable(rq->vq, &rq->napi);
+ netif_queue_set_napi(vi->dev, qidx, NETDEV_QUEUE_TYPE_RX, &rq->napi);
}
static void virtnet_napi_tx_enable(struct send_queue *sq)
{
struct virtnet_info *vi = sq->vq->vdev->priv;
struct napi_struct *napi = &sq->napi;
+ int qidx = vq2txq(sq->vq);
if (!napi->weight)
return;
@@ -2843,20 +2848,28 @@ static void virtnet_napi_tx_enable(struct send_queue *sq)
}
virtnet_napi_do_enable(sq->vq, napi);
+ netif_queue_set_napi(vi->dev, qidx, NETDEV_QUEUE_TYPE_TX, napi);
}
static void virtnet_napi_tx_disable(struct send_queue *sq)
{
+ struct virtnet_info *vi = sq->vq->vdev->priv;
struct napi_struct *napi = &sq->napi;
+ int qidx = vq2txq(sq->vq);
- if (napi->weight)
+ if (napi->weight) {
+ netif_queue_set_napi(vi->dev, qidx, NETDEV_QUEUE_TYPE_TX, NULL);
napi_disable(napi);
+ }
}
static void virtnet_napi_disable(struct receive_queue *rq)
{
+ struct virtnet_info *vi = rq->vq->vdev->priv;
struct napi_struct *napi = &rq->napi;
+ int qidx = vq2rxq(rq->vq);
+ netif_queue_set_napi(vi->dev, qidx, NETDEV_QUEUE_TYPE_RX, NULL);
napi_disable(napi);
}
@@ -2870,9 +2883,23 @@ static void refill_work(struct work_struct *work)
for (i = 0; i < vi->curr_queue_pairs; i++) {
struct receive_queue *rq = &vi->rq[i];
- virtnet_napi_disable(rq);
+ /*
+ * When queue API support is added in the future and the call
+ * below becomes napi_disable_locked, this driver will need to
+ * be refactored.
+ *
+ * One possible solution would be to:
+ * - cancel refill_work with cancel_delayed_work (note:
+ * non-sync)
+ * - cancel refill_work with cancel_delayed_work_sync in
+ * virtnet_remove after the netdev is unregistered
+ * - wrap all of the work in a lock (perhaps the netdev
+ * instance lock)
+ * - check netif_running() and return early to avoid a race
+ */
+ napi_disable(&rq->napi);
still_empty = !try_fill_recv(vi, rq, GFP_KERNEL);
- virtnet_napi_enable(rq);
+ virtnet_napi_do_enable(rq->vq, &rq->napi);
/* In theory, this can happen: if we don't get any buffers in
* we will *never* try to fill again.
@@ -5650,8 +5677,11 @@ static void virtnet_freeze_down(struct virtio_device *vdev)
netif_tx_lock_bh(vi->dev);
netif_device_detach(vi->dev);
netif_tx_unlock_bh(vi->dev);
- if (netif_running(vi->dev))
+ if (netif_running(vi->dev)) {
+ rtnl_lock();
virtnet_close(vi->dev);
+ rtnl_unlock();
+ }
}
static int init_vqs(struct virtnet_info *vi);
@@ -5671,7 +5701,9 @@ static int virtnet_restore_up(struct virtio_device *vdev)
enable_rx_mode_work(vi);
if (netif_running(vi->dev)) {
+ rtnl_lock();
err = virtnet_open(vi->dev);
+ rtnl_unlock();
if (err)
return err;
}
--
2.45.2
* [PATCH net-next v6 4/4] virtio_net: Use persistent NAPI config
@ 2025-03-07 1:12 ` Joe Damato
From: Joe Damato @ 2025-03-07 1:12 UTC (permalink / raw)
To: netdev
Cc: mkarsten, gerhard, jasowang, xuanzhuo, kuba, mst, leiyang,
Joe Damato, Eugenio Pérez, Andrew Lunn, David S. Miller,
Eric Dumazet, Paolo Abeni, open list:VIRTIO CORE AND NET DRIVERS,
open list
Use persistent NAPI config so that NAPI IDs are not renumbered as queue
counts change.
$ sudo ethtool -l ens4 | tail -5 | egrep -i '(current|combined)'
Current hardware settings:
Combined: 4
$ ./tools/net/ynl/pyynl/cli.py \
--spec Documentation/netlink/specs/netdev.yaml \
--dump queue-get --json='{"ifindex": 2}'
[{'id': 0, 'ifindex': 2, 'napi-id': 8193, 'type': 'rx'},
{'id': 1, 'ifindex': 2, 'napi-id': 8194, 'type': 'rx'},
{'id': 2, 'ifindex': 2, 'napi-id': 8195, 'type': 'rx'},
{'id': 3, 'ifindex': 2, 'napi-id': 8196, 'type': 'rx'},
{'id': 0, 'ifindex': 2, 'type': 'tx'},
{'id': 1, 'ifindex': 2, 'type': 'tx'},
{'id': 2, 'ifindex': 2, 'type': 'tx'},
{'id': 3, 'ifindex': 2, 'type': 'tx'}]
Now adjust the queue count; note that the NAPI IDs are not renumbered:
$ sudo ethtool -L ens4 combined 1
$ ./tools/net/ynl/pyynl/cli.py \
--spec Documentation/netlink/specs/netdev.yaml \
--dump queue-get --json='{"ifindex": 2}'
[{'id': 0, 'ifindex': 2, 'napi-id': 8193, 'type': 'rx'},
{'id': 0, 'ifindex': 2, 'type': 'tx'}]
$ sudo ethtool -L ens4 combined 8
$ ./tools/net/ynl/pyynl/cli.py \
--spec Documentation/netlink/specs/netdev.yaml \
--dump queue-get --json='{"ifindex": 2}'
[{'id': 0, 'ifindex': 2, 'napi-id': 8193, 'type': 'rx'},
{'id': 1, 'ifindex': 2, 'napi-id': 8194, 'type': 'rx'},
{'id': 2, 'ifindex': 2, 'napi-id': 8195, 'type': 'rx'},
{'id': 3, 'ifindex': 2, 'napi-id': 8196, 'type': 'rx'},
{'id': 4, 'ifindex': 2, 'napi-id': 8197, 'type': 'rx'},
{'id': 5, 'ifindex': 2, 'napi-id': 8198, 'type': 'rx'},
{'id': 6, 'ifindex': 2, 'napi-id': 8199, 'type': 'rx'},
{'id': 7, 'ifindex': 2, 'napi-id': 8200, 'type': 'rx'},
[...]
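One practical consequence (a hedged aside, not part of this change):
applications that pin busy-poll state to NAPI IDs, for example via the
epoll EPIOCSPARAMS ioctl, no longer need to re-learn IDs after a channel
count change. A minimal sketch, assuming uapi headers that provide
struct epoll_params; the parameter values are illustrative only:

#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/eventpoll.h> /* EPIOCSPARAMS, struct epoll_params */

/* Enable busy polling on an epoll fd whose sockets share one NAPI. */
static int enable_busy_poll(int epfd)
{
	struct epoll_params params = {
		.busy_poll_usecs = 64,
		.busy_poll_budget = 8,
		.prefer_busy_poll = 1,
	};

	if (ioctl(epfd, EPIOCSPARAMS, &params)) {
		perror("ioctl(EPIOCSPARAMS)");
		return -1;
	}
	return 0;
}

Because the NAPI IDs persist across ethtool -L, any NAPI-to-queue
association such an application caches stays valid.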
Signed-off-by: Joe Damato <jdamato@fastly.com>
Reviewed-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Reviewed-by: Gerhard Engleder <gerhard@engleder-embedded.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Lei Yang <leiyang@redhat.com>
---
drivers/net/virtio_net.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 7bd63a677123..34cec2b11b74 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -6455,8 +6455,9 @@ static int virtnet_alloc_queues(struct virtnet_info *vi)
INIT_DELAYED_WORK(&vi->refill, refill_work);
for (i = 0; i < vi->max_queue_pairs; i++) {
vi->rq[i].pages = NULL;
- netif_napi_add_weight(vi->dev, &vi->rq[i].napi, virtnet_poll,
- napi_weight);
+ netif_napi_add_config(vi->dev, &vi->rq[i].napi, virtnet_poll,
+ i);
+ vi->rq[i].napi.weight = napi_weight;
netif_napi_add_tx_weight(vi->dev, &vi->sq[i].napi,
virtnet_poll_tx,
napi_tx ? napi_weight : 0);
--
2.45.2
* Re: [PATCH net-next v6 0/4] virtio-net: Link queues to NAPIs
@ 2025-03-07 6:48 ` Xuan Zhuo
From: Xuan Zhuo @ 2025-03-07 6:48 UTC (permalink / raw)
To: Joe Damato
Cc: mkarsten, gerhard, jasowang, kuba, mst, leiyang, Joe Damato,
Alexei Starovoitov, Andrew Lunn,
open list:XDP (eXpress Data Path):Keyword:(?:\b|_)xdp(?:\b|_),
Daniel Borkmann, David S. Miller, Eric Dumazet,
Eugenio Pérez, Jesper Dangaard Brouer, John Fastabend,
open list, Paolo Abeni, open list:VIRTIO CORE AND NET DRIVERS,
netdev
Reviewed-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
On Fri, 7 Mar 2025 01:12:08 +0000, Joe Damato <jdamato@fastly.com> wrote:
> Greetings:
>
> Welcome to v6. The only patch updated is patch 3. See changelog below.
>
> [...]
* Re: [PATCH net-next v6 3/4] virtio-net: Map NAPIs to queues
@ 2025-03-10 2:19 ` Jason Wang
From: Jason Wang @ 2025-03-10 2:19 UTC (permalink / raw)
To: Joe Damato
Cc: netdev, mkarsten, gerhard, xuanzhuo, kuba, mst, leiyang,
Eugenio Pérez, Andrew Lunn, David S. Miller, Eric Dumazet,
Paolo Abeni, open list:VIRTIO CORE AND NET DRIVERS, open list
On Fri, Mar 7, 2025 at 9:12 AM Joe Damato <jdamato@fastly.com> wrote:
>
> Use netif_queue_set_napi to map NAPIs to queue IDs so that the mapping
> can be accessed by user apps.
>
> [...]
Acked-by: Jason Wang <jasowang@redhat.com>
Thanks
* Re: [PATCH net-next v6 0/4] virtio-net: Link queues to NAPIs
@ 2025-03-10 20:20 ` patchwork-bot+netdevbpf
From: patchwork-bot+netdevbpf @ 2025-03-10 20:20 UTC (permalink / raw)
To: Joe Damato
Cc: netdev, mkarsten, gerhard, jasowang, xuanzhuo, kuba, mst, leiyang,
ast, andrew+netdev, bpf, daniel, davem, edumazet, eperezma, hawk,
john.fastabend, linux-kernel, pabeni, virtualization
Hello:
This series was applied to netdev/net-next.git (main)
by Jakub Kicinski <kuba@kernel.org>:
On Fri, 7 Mar 2025 01:12:08 +0000 you wrote:
> Greetings:
>
> Welcome to v6. The only patch updated is patch 3. See changelog below.
>
> Jakub recently commented [1] that I should not hold this series, which
> links virtio-net queues to NAPIs, behind other important ongoing work,
> and suggested I re-spin it. So here we are :)
>
> [...]
Here is the summary with links:
- [net-next,v6,1/4] virtio-net: Refactor napi_enable paths
https://git.kernel.org/netdev/net-next/c/2af5adf962d4
- [net-next,v6,2/4] virtio-net: Refactor napi_disable paths
https://git.kernel.org/netdev/net-next/c/986a93045183
- [net-next,v6,3/4] virtio-net: Map NAPIs to queues
https://git.kernel.org/netdev/net-next/c/e7231f49d526
- [net-next,v6,4/4] virtio_net: Use persistent NAPI config
https://git.kernel.org/netdev/net-next/c/d5d715207e29
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html