From: Jason Wang <jasowang@redhat.com>
To: qemu-devel@nongnu.org
Cc: Akihiko Odaki <akihiko.odaki@daynix.com>,
Lei Yang <leiyang@redhat.com>,
qemu-stable@nongnu.org, Jason Wang <jasowang@redhat.com>
Subject: [PULL 02/13] virtio-net: Add queues for RSS during migration
Date: Mon, 14 Jul 2025 13:34:12 +0800
Message-ID: <20250714053423.10415-3-jasowang@redhat.com>
In-Reply-To: <20250714053423.10415-1-jasowang@redhat.com>
From: Akihiko Odaki <akihiko.odaki@daynix.com>
virtio_net_pre_load_queues() inspects vdev->guest_features to tell
whether VIRTIO_NET_F_RSS or VIRTIO_NET_F_MQ is enabled and to infer the
required number of queues. This works for VIRTIO_NET_F_MQ, but not for
VIRTIO_NET_F_RSS: only the lowest 32 bits of vdev->guest_features are
set at that point, and VIRTIO_NET_F_RSS uses bit 60 while
VIRTIO_NET_F_MQ uses bit 22.
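To make the bit-width problem concrete, here is a minimal standalone
sketch (not the actual QEMU code; has_feature() merely mirrors the
semantics of virtio_has_feature(), and the masking emulates a
guest_features value whose upper 32 bits have not been loaded yet):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define VIRTIO_NET_F_MQ  22  /* multiqueue feature bit */
    #define VIRTIO_NET_F_RSS 60  /* RSS feature bit */

    static bool has_feature(uint64_t features, unsigned int fbit)
    {
        return features & (1ULL << fbit);
    }

    int main(void)
    {
        /* The guest negotiated both MQ and RSS ... */
        uint64_t negotiated = (1ULL << VIRTIO_NET_F_MQ) |
                              (1ULL << VIRTIO_NET_F_RSS);
        /* ... but only the low 32 bits are visible when
         * pre_load_queues() runs. */
        uint64_t loaded = negotiated & 0xffffffffULL;

        printf("MQ:  %d\n", has_feature(loaded, VIRTIO_NET_F_MQ));  /* prints 1 */
        printf("RSS: %d\n", has_feature(loaded, VIRTIO_NET_F_RSS)); /* prints 0 */
        return 0;
    }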
Instead of inferring the required number of queues from
vdev->guest_features, use the number loaded from the vm state. This
change also has a nice side effect: it removes a duplicate peer queue
pair change by bypassing virtio_net_set_multiqueue().
Also update the comment in include/hw/virtio/virtio.h to prevent future
implementations of pre_load_queues() from accidentally referring to
fields that are still being loaded during migration.
Fixes: 8c49756825da ("virtio-net: Add only one queue pair when realizing")
Tested-by: Lei Yang <leiyang@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
hw/net/virtio-net.c | 11 ++++-------
hw/virtio/virtio.c | 14 +++++++-------
include/hw/virtio/virtio.h | 10 ++++++++--
3 files changed, 19 insertions(+), 16 deletions(-)
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index eb93607b8c..351377c025 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -3022,11 +3022,10 @@ static void virtio_net_del_queue(VirtIONet *n, int index)
virtio_del_queue(vdev, index * 2 + 1);
}
-static void virtio_net_change_num_queue_pairs(VirtIONet *n, int new_max_queue_pairs)
+static void virtio_net_change_num_queues(VirtIONet *n, int new_num_queues)
{
VirtIODevice *vdev = VIRTIO_DEVICE(n);
int old_num_queues = virtio_get_num_queues(vdev);
- int new_num_queues = new_max_queue_pairs * 2 + 1;
int i;
assert(old_num_queues >= 3);
@@ -3062,16 +3061,14 @@ static void virtio_net_set_multiqueue(VirtIONet *n, int multiqueue)
int max = multiqueue ? n->max_queue_pairs : 1;
n->multiqueue = multiqueue;
- virtio_net_change_num_queue_pairs(n, max);
+ virtio_net_change_num_queues(n, max * 2 + 1);
virtio_net_set_queue_pairs(n);
}
-static int virtio_net_pre_load_queues(VirtIODevice *vdev)
+static int virtio_net_pre_load_queues(VirtIODevice *vdev, uint32_t n)
{
- virtio_net_set_multiqueue(VIRTIO_NET(vdev),
- virtio_has_feature(vdev->guest_features, VIRTIO_NET_F_RSS) ||
- virtio_has_feature(vdev->guest_features, VIRTIO_NET_F_MQ));
+ virtio_net_change_num_queues(VIRTIO_NET(vdev), n);
return 0;
}
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 82a285a31d..7e38b1ca97 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -3270,13 +3270,6 @@ virtio_load(VirtIODevice *vdev, QEMUFile *f, int version_id)
config_len--;
}
- if (vdc->pre_load_queues) {
- ret = vdc->pre_load_queues(vdev);
- if (ret) {
- return ret;
- }
- }
-
num = qemu_get_be32(f);
if (num > VIRTIO_QUEUE_MAX) {
@@ -3284,6 +3277,13 @@ virtio_load(VirtIODevice *vdev, QEMUFile *f, int version_id)
return -1;
}
+ if (vdc->pre_load_queues) {
+ ret = vdc->pre_load_queues(vdev, num);
+ if (ret) {
+ return ret;
+ }
+ }
+
for (i = 0; i < num; i++) {
vdev->vq[i].vring.num = qemu_get_be32(f);
if (k->has_variable_vring_alignment) {
diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
index 214d4a77e9..c594764f23 100644
--- a/include/hw/virtio/virtio.h
+++ b/include/hw/virtio/virtio.h
@@ -210,8 +210,14 @@ struct VirtioDeviceClass {
void (*guest_notifier_mask)(VirtIODevice *vdev, int n, bool mask);
int (*start_ioeventfd)(VirtIODevice *vdev);
void (*stop_ioeventfd)(VirtIODevice *vdev);
- /* Called before loading queues. Useful to add queues before loading. */
- int (*pre_load_queues)(VirtIODevice *vdev);
+ /*
+ * Called before loading queues.
+ * If the number of queues change at runtime, use @n to know the
+ * number and add or remove queues accordingly.
+ * Note that this function is called in the middle of loading vmsd;
+ * no assumption should be made on states being loaded from vmsd.
+ */
+ int (*pre_load_queues)(VirtIODevice *vdev, uint32_t n);
/* Saving and loading of a device; trying to deprecate save/load
* use vmsd for new devices.
*/
--
2.42.0