* [PATCH v3 1/5] vdpa: use first queue SVQ state for CVQ default
2023-08-22 8:53 [PATCH v3 0/5] Enable vdpa net migration with features depending on CVQ Eugenio Pérez
@ 2023-08-22 8:53 ` Eugenio Pérez
2023-08-22 8:53 ` [PATCH v3 2/5] vdpa: export vhost_vdpa_set_vring_ready Eugenio Pérez
` (5 subsequent siblings)
6 siblings, 0 replies; 9+ messages in thread
From: Eugenio Pérez @ 2023-08-22 8:53 UTC (permalink / raw)
To: qemu-devel
Cc: Laurent Vivier, Harpreet Singh Anand, Shannon Nelson,
Stefano Garzarella, Lei Yang, Michael S. Tsirkin, Hawkins Jiawei,
Dragos Tatulea, Gautam Dawar, si-wei.liu, Zhu Lingshan,
Jason Wang, Parav Pandit, Cindy Lu
Before this patch, the only way CVQ would be shadowed was if the device
supported isolating the CVQ group or if all vqs were shadowed from the
beginning. The second condition was only checked at startup, and no
further configuration was done.
After this series we also need to check whether the data queues are
shadowed because they are in the middle of a migration. As checking
whether they are shadowed already covers the previous case, simply
mirror their state for CVQ.
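For context, a minimal sketch of the relevant lines of vhost_vdpa_net_cvq_start()
after this change (simplified from the diff below; error handling and the rest of
the function are omitted):
    /* s is the CVQ net client; s0 is the client of the first data queue
     * pair, whose SVQ state was decided when the device was started. */
    s0 = vhost_vdpa_net_first_nc_vdpa(s);
    v->shadow_data = s0->vhost_vdpa.shadow_vqs_enabled;
    /* CVQ now defaults to the same shadowing state as the data vqs,
     * instead of only following the x-svq (always_svq) option. */
    v->shadow_vqs_enabled = s0->vhost_vdpa.shadow_vqs_enabled;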
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
net/vhost-vdpa.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 9795306742..a772540250 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -505,7 +505,7 @@ static int vhost_vdpa_net_cvq_start(NetClientState *nc)
s0 = vhost_vdpa_net_first_nc_vdpa(s);
v->shadow_data = s0->vhost_vdpa.shadow_vqs_enabled;
- v->shadow_vqs_enabled = s->always_svq;
+ v->shadow_vqs_enabled = s0->vhost_vdpa.shadow_vqs_enabled;
s->vhost_vdpa.address_space_id = VHOST_VDPA_GUEST_PA_ASID;
if (s->vhost_vdpa.shadow_data) {
--
2.39.3
^ permalink raw reply related [flat|nested] 9+ messages in thread
* [PATCH v3 2/5] vdpa: export vhost_vdpa_set_vring_ready
2023-08-22 8:53 [PATCH v3 0/5] Enable vdpa net migration with features depending on CVQ Eugenio Pérez
2023-08-22 8:53 ` [PATCH v3 1/5] vdpa: use first queue SVQ state for CVQ default Eugenio Pérez
@ 2023-08-22 8:53 ` Eugenio Pérez
2023-08-22 8:53 ` [PATCH v3 3/5] vdpa: rename vhost_vdpa_net_load to vhost_vdpa_net_cvq_load Eugenio Pérez
` (4 subsequent siblings)
6 siblings, 0 replies; 9+ messages in thread
From: Eugenio Pérez @ 2023-08-22 8:53 UTC (permalink / raw)
To: qemu-devel
Cc: Laurent Vivier, Harpreet Singh Anand, Shannon Nelson,
Stefano Garzarella, Lei Yang, Michael S. Tsirkin, Hawkins Jiawei,
Dragos Tatulea, Gautam Dawar, si-wei.liu, Zhu Lingshan,
Jason Wang, Parav Pandit, Cindy Lu
The vhost-vdpa net backend needs to enable vrings in a different order
than the default, so export the function.
No functional change is intended except for tracing, which now includes
the (virtio) index being enabled and the return value of the ioctl.
The return value of this function is still ignored when it is called from
vhost_vdpa_dev_start, as reorganizing the calling code around it is out of
the scope of this series.
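For illustration only, a hypothetical caller (not part of this patch) could now
drive the helper per virtqueue and act on the propagated return value:
    /* Enable every vring of a vhost device one by one, reporting failures
     * instead of silently ignoring them. */
    for (int i = 0; i < dev->nvqs; ++i) {
        int r = vhost_vdpa_set_vring_ready(v, dev->vq_index + i);
        if (unlikely(r < 0)) {
            error_report("Cannot enable vring %d: %d", dev->vq_index + i, r);
        }
    }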
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
include/hw/virtio/vhost-vdpa.h | 1 +
hw/virtio/vhost-vdpa.c | 25 +++++++++++++------------
hw/virtio/trace-events | 2 +-
3 files changed, 15 insertions(+), 13 deletions(-)
diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
index e64bfc7f98..5407d54fd7 100644
--- a/include/hw/virtio/vhost-vdpa.h
+++ b/include/hw/virtio/vhost-vdpa.h
@@ -57,6 +57,7 @@ typedef struct vhost_vdpa {
} VhostVDPA;
int vhost_vdpa_get_iova_range(int fd, struct vhost_vdpa_iova_range *iova_range);
+int vhost_vdpa_set_vring_ready(struct vhost_vdpa *v, unsigned idx);
int vhost_vdpa_dma_map(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
hwaddr size, void *vaddr, bool readonly);
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 42f2a4bae9..0d9975b5b5 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -876,18 +876,17 @@ static int vhost_vdpa_get_vq_index(struct vhost_dev *dev, int idx)
return idx;
}
-static int vhost_vdpa_set_vring_ready(struct vhost_dev *dev)
+int vhost_vdpa_set_vring_ready(struct vhost_vdpa *v, unsigned idx)
{
- int i;
- trace_vhost_vdpa_set_vring_ready(dev);
- for (i = 0; i < dev->nvqs; ++i) {
- struct vhost_vring_state state = {
- .index = dev->vq_index + i,
- .num = 1,
- };
- vhost_vdpa_call(dev, VHOST_VDPA_SET_VRING_ENABLE, &state);
- }
- return 0;
+ struct vhost_dev *dev = v->dev;
+ struct vhost_vring_state state = {
+ .index = idx,
+ .num = 1,
+ };
+ int r = vhost_vdpa_call(dev, VHOST_VDPA_SET_VRING_ENABLE, &state);
+
+ trace_vhost_vdpa_set_vring_ready(dev, idx, r);
+ return r;
}
static int vhost_vdpa_set_config_call(struct vhost_dev *dev,
@@ -1298,7 +1297,9 @@ static int vhost_vdpa_dev_start(struct vhost_dev *dev, bool started)
if (unlikely(!ok)) {
return -1;
}
- vhost_vdpa_set_vring_ready(dev);
+ for (int i = 0; i < dev->nvqs; ++i) {
+ vhost_vdpa_set_vring_ready(v, dev->vq_index + i);
+ }
} else {
vhost_vdpa_suspend(dev);
vhost_vdpa_svqs_stop(dev);
diff --git a/hw/virtio/trace-events b/hw/virtio/trace-events
index 7109cf1a3b..1cb9027d1e 100644
--- a/hw/virtio/trace-events
+++ b/hw/virtio/trace-events
@@ -48,7 +48,7 @@ vhost_vdpa_set_features(void *dev, uint64_t features) "dev: %p features: 0x%"PRI
vhost_vdpa_get_device_id(void *dev, uint32_t device_id) "dev: %p device_id %"PRIu32
vhost_vdpa_reset_device(void *dev) "dev: %p"
vhost_vdpa_get_vq_index(void *dev, int idx, int vq_idx) "dev: %p idx: %d vq idx: %d"
-vhost_vdpa_set_vring_ready(void *dev) "dev: %p"
+vhost_vdpa_set_vring_ready(void *dev, unsigned i, int r) "dev: %p, idx: %u, r: %d"
vhost_vdpa_dump_config(void *dev, const char *line) "dev: %p %s"
vhost_vdpa_set_config(void *dev, uint32_t offset, uint32_t size, uint32_t flags) "dev: %p offset: %"PRIu32" size: %"PRIu32" flags: 0x%"PRIx32
vhost_vdpa_get_config(void *dev, void *config, uint32_t config_len) "dev: %p config: %p config_len: %"PRIu32
--
2.39.3
^ permalink raw reply related [flat|nested] 9+ messages in thread
* [PATCH v3 3/5] vdpa: rename vhost_vdpa_net_load to vhost_vdpa_net_cvq_load
2023-08-22 8:53 [PATCH v3 0/5] Enable vdpa net migration with features depending on CVQ Eugenio Pérez
2023-08-22 8:53 ` [PATCH v3 1/5] vdpa: use first queue SVQ state for CVQ default Eugenio Pérez
2023-08-22 8:53 ` [PATCH v3 2/5] vdpa: export vhost_vdpa_set_vring_ready Eugenio Pérez
@ 2023-08-22 8:53 ` Eugenio Pérez
2023-08-22 8:53 ` [PATCH v3 4/5] vdpa: move vhost_vdpa_set_vring_ready to the caller Eugenio Pérez
` (3 subsequent siblings)
6 siblings, 0 replies; 9+ messages in thread
From: Eugenio Pérez @ 2023-08-22 8:53 UTC (permalink / raw)
To: qemu-devel
Cc: Laurent Vivier, Harpreet Singh Anand, Shannon Nelson,
Stefano Garzarella, Lei Yang, Michael S. Tsirkin, Hawkins Jiawei,
Dragos Tatulea, Gautam Dawar, si-wei.liu, Zhu Lingshan,
Jason Wang, Parav Pandit, Cindy Lu
The next patches will add the corresponding dataplane load callback.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
net/vhost-vdpa.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index a772540250..9251351b4b 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -965,7 +965,7 @@ static int vhost_vdpa_net_load_rx(VhostVDPAState *s,
return 0;
}
-static int vhost_vdpa_net_load(NetClientState *nc)
+static int vhost_vdpa_net_cvq_load(NetClientState *nc)
{
VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
struct vhost_vdpa *v = &s->vhost_vdpa;
@@ -1004,7 +1004,7 @@ static NetClientInfo net_vhost_vdpa_cvq_info = {
.size = sizeof(VhostVDPAState),
.receive = vhost_vdpa_receive,
.start = vhost_vdpa_net_cvq_start,
- .load = vhost_vdpa_net_load,
+ .load = vhost_vdpa_net_cvq_load,
.stop = vhost_vdpa_net_cvq_stop,
.cleanup = vhost_vdpa_cleanup,
.has_vnet_hdr = vhost_vdpa_has_vnet_hdr,
--
2.39.3
^ permalink raw reply related [flat|nested] 9+ messages in thread
* [PATCH v3 4/5] vdpa: move vhost_vdpa_set_vring_ready to the caller
2023-08-22 8:53 [PATCH v3 0/5] Enable vdpa net migration with features depending on CVQ Eugenio Pérez
` (2 preceding siblings ...)
2023-08-22 8:53 ` [PATCH v3 3/5] vdpa: rename vhost_vdpa_net_load to vhost_vdpa_net_cvq_load Eugenio Pérez
@ 2023-08-22 8:53 ` Eugenio Pérez
2023-08-22 8:55 ` Jason Wang
2023-08-22 8:53 ` [PATCH v3 5/5] vdpa: remove net cvq migration blocker Eugenio Pérez
` (2 subsequent siblings)
6 siblings, 1 reply; 9+ messages in thread
From: Eugenio Pérez @ 2023-08-22 8:53 UTC (permalink / raw)
To: qemu-devel
Cc: Laurent Vivier, Harpreet Singh Anand, Shannon Nelson,
Stefano Garzarella, Lei Yang, Michael S. Tsirkin, Hawkins Jiawei,
Dragos Tatulea, Gautam Dawar, si-wei.liu, Zhu Lingshan,
Jason Wang, Parav Pandit, Cindy Lu
Doing it this way allows CVQ to be enabled before the dataplane vqs, so
state such as MQ or MAC addresses is restored properly in the case of a
migration.
The patch does this by also defining a ->load NetClientInfo callback for
the dataplane. Ideally, this should be done in an independent patch, but
the function is already static, so it would only add an empty
vhost_vdpa_net_data_load stub.
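A simplified sketch of the resulting enable ordering (names taken from the diff
below; the shadow-CVQ state replay and error paths are only hinted at):
    static int vhost_vdpa_net_cvq_load_sketch(struct vhost_vdpa *v)
    {
        /* 1) Enable CVQ first so the device can accept control commands. */
        vhost_vdpa_set_vring_ready(v, v->dev->vq_index);
        /* 2) If CVQ is shadowed, MAC/MQ/offloads/RX state is replayed here. */
        /* 3) Only then enable the dataplane vqs [0, vq_index). */
        for (int i = 0; i < v->dev->vq_index; ++i) {
            vhost_vdpa_set_vring_ready(v, i);
        }
        return 0;
    }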
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
v3:
* Fix subject typo
* Expand patch message so it explains why
---
hw/virtio/vdpa-dev.c | 3 +++
hw/virtio/vhost-vdpa.c | 3 ---
net/vhost-vdpa.c | 57 +++++++++++++++++++++++++++++-------------
3 files changed, 42 insertions(+), 21 deletions(-)
diff --git a/hw/virtio/vdpa-dev.c b/hw/virtio/vdpa-dev.c
index 363b625243..f22d5d5bc0 100644
--- a/hw/virtio/vdpa-dev.c
+++ b/hw/virtio/vdpa-dev.c
@@ -255,6 +255,9 @@ static int vhost_vdpa_device_start(VirtIODevice *vdev, Error **errp)
error_setg_errno(errp, -ret, "Error starting vhost");
goto err_guest_notifiers;
}
+ for (i = 0; i < s->dev.nvqs; ++i) {
+ vhost_vdpa_set_vring_ready(&s->vdpa, i);
+ }
s->started = true;
/*
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 0d9975b5b5..8ca2e3800c 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -1297,9 +1297,6 @@ static int vhost_vdpa_dev_start(struct vhost_dev *dev, bool started)
if (unlikely(!ok)) {
return -1;
}
- for (int i = 0; i < dev->nvqs; ++i) {
- vhost_vdpa_set_vring_ready(v, dev->vq_index + i);
- }
} else {
vhost_vdpa_suspend(dev);
vhost_vdpa_svqs_stop(dev);
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 9251351b4b..3bf60f9431 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -371,6 +371,22 @@ static int vhost_vdpa_net_data_start(NetClientState *nc)
return 0;
}
+static int vhost_vdpa_net_data_load(NetClientState *nc)
+{
+ VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
+ struct vhost_vdpa *v = &s->vhost_vdpa;
+ bool has_cvq = v->dev->vq_index_end % 2;
+
+ if (has_cvq) {
+ return 0;
+ }
+
+ for (int i = 0; i < v->dev->nvqs; ++i) {
+ vhost_vdpa_set_vring_ready(v, i + v->dev->vq_index);
+ }
+ return 0;
+}
+
static void vhost_vdpa_net_client_stop(NetClientState *nc)
{
VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
@@ -393,6 +409,7 @@ static NetClientInfo net_vhost_vdpa_info = {
.size = sizeof(VhostVDPAState),
.receive = vhost_vdpa_receive,
.start = vhost_vdpa_net_data_start,
+ .load = vhost_vdpa_net_data_load,
.stop = vhost_vdpa_net_client_stop,
.cleanup = vhost_vdpa_cleanup,
.has_vnet_hdr = vhost_vdpa_has_vnet_hdr,
@@ -974,26 +991,30 @@ static int vhost_vdpa_net_cvq_load(NetClientState *nc)
assert(nc->info->type == NET_CLIENT_DRIVER_VHOST_VDPA);
- if (!v->shadow_vqs_enabled) {
- return 0;
- }
+ vhost_vdpa_set_vring_ready(v, v->dev->vq_index);
- n = VIRTIO_NET(v->dev->vdev);
- r = vhost_vdpa_net_load_mac(s, n);
- if (unlikely(r < 0)) {
- return r;
- }
- r = vhost_vdpa_net_load_mq(s, n);
- if (unlikely(r)) {
- return r;
- }
- r = vhost_vdpa_net_load_offloads(s, n);
- if (unlikely(r)) {
- return r;
+ if (v->shadow_vqs_enabled) {
+ n = VIRTIO_NET(v->dev->vdev);
+ r = vhost_vdpa_net_load_mac(s, n);
+ if (unlikely(r < 0)) {
+ return r;
+ }
+ r = vhost_vdpa_net_load_mq(s, n);
+ if (unlikely(r)) {
+ return r;
+ }
+ r = vhost_vdpa_net_load_offloads(s, n);
+ if (unlikely(r)) {
+ return r;
+ }
+ r = vhost_vdpa_net_load_rx(s, n);
+ if (unlikely(r)) {
+ return r;
+ }
}
- r = vhost_vdpa_net_load_rx(s, n);
- if (unlikely(r)) {
- return r;
+
+ for (int i = 0; i < v->dev->vq_index; ++i) {
+ vhost_vdpa_set_vring_ready(v, i);
}
return 0;
--
2.39.3
^ permalink raw reply related [flat|nested] 9+ messages in thread
* Re: [PATCH v3 4/5] vdpa: move vhost_vdpa_set_vring_ready to the caller
2023-08-22 8:53 ` [PATCH v3 4/5] vdpa: move vhost_vdpa_set_vring_ready to the caller Eugenio Pérez
@ 2023-08-22 8:55 ` Jason Wang
0 siblings, 0 replies; 9+ messages in thread
From: Jason Wang @ 2023-08-22 8:55 UTC (permalink / raw)
To: Eugenio Pérez
Cc: qemu-devel, Laurent Vivier, Harpreet Singh Anand, Shannon Nelson,
Stefano Garzarella, Lei Yang, Michael S. Tsirkin, Hawkins Jiawei,
Dragos Tatulea, Gautam Dawar, si-wei.liu, Zhu Lingshan,
Parav Pandit, Cindy Lu
On Tue, Aug 22, 2023 at 4:53 PM Eugenio Pérez <eperezma@redhat.com> wrote:
>
> Doing it this way allows CVQ to be enabled before the dataplane vqs, so
> state such as MQ or MAC addresses is restored properly in the case of a
> migration.
>
> The patch does this by also defining a ->load NetClientInfo callback for
> the dataplane. Ideally, this should be done in an independent patch, but
> the function is already static, so it would only add an empty
> vhost_vdpa_net_data_load stub.
>
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Thanks
> ---
> v3:
> * Fix subject typo
> * Expand patch message so it explains why
> ---
> hw/virtio/vdpa-dev.c | 3 +++
> hw/virtio/vhost-vdpa.c | 3 ---
> net/vhost-vdpa.c | 57 +++++++++++++++++++++++++++++-------------
> 3 files changed, 42 insertions(+), 21 deletions(-)
>
> diff --git a/hw/virtio/vdpa-dev.c b/hw/virtio/vdpa-dev.c
> index 363b625243..f22d5d5bc0 100644
> --- a/hw/virtio/vdpa-dev.c
> +++ b/hw/virtio/vdpa-dev.c
> @@ -255,6 +255,9 @@ static int vhost_vdpa_device_start(VirtIODevice *vdev, Error **errp)
> error_setg_errno(errp, -ret, "Error starting vhost");
> goto err_guest_notifiers;
> }
> + for (i = 0; i < s->dev.nvqs; ++i) {
> + vhost_vdpa_set_vring_ready(&s->vdpa, i);
> + }
> s->started = true;
>
> /*
> diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
> index 0d9975b5b5..8ca2e3800c 100644
> --- a/hw/virtio/vhost-vdpa.c
> +++ b/hw/virtio/vhost-vdpa.c
> @@ -1297,9 +1297,6 @@ static int vhost_vdpa_dev_start(struct vhost_dev *dev, bool started)
> if (unlikely(!ok)) {
> return -1;
> }
> - for (int i = 0; i < dev->nvqs; ++i) {
> - vhost_vdpa_set_vring_ready(v, dev->vq_index + i);
> - }
> } else {
> vhost_vdpa_suspend(dev);
> vhost_vdpa_svqs_stop(dev);
> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> index 9251351b4b..3bf60f9431 100644
> --- a/net/vhost-vdpa.c
> +++ b/net/vhost-vdpa.c
> @@ -371,6 +371,22 @@ static int vhost_vdpa_net_data_start(NetClientState *nc)
> return 0;
> }
>
> +static int vhost_vdpa_net_data_load(NetClientState *nc)
> +{
> + VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
> + struct vhost_vdpa *v = &s->vhost_vdpa;
> + bool has_cvq = v->dev->vq_index_end % 2;
> +
> + if (has_cvq) {
> + return 0;
> + }
> +
> + for (int i = 0; i < v->dev->nvqs; ++i) {
> + vhost_vdpa_set_vring_ready(v, i + v->dev->vq_index);
> + }
> + return 0;
> +}
> +
> static void vhost_vdpa_net_client_stop(NetClientState *nc)
> {
> VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
> @@ -393,6 +409,7 @@ static NetClientInfo net_vhost_vdpa_info = {
> .size = sizeof(VhostVDPAState),
> .receive = vhost_vdpa_receive,
> .start = vhost_vdpa_net_data_start,
> + .load = vhost_vdpa_net_data_load,
> .stop = vhost_vdpa_net_client_stop,
> .cleanup = vhost_vdpa_cleanup,
> .has_vnet_hdr = vhost_vdpa_has_vnet_hdr,
> @@ -974,26 +991,30 @@ static int vhost_vdpa_net_cvq_load(NetClientState *nc)
>
> assert(nc->info->type == NET_CLIENT_DRIVER_VHOST_VDPA);
>
> - if (!v->shadow_vqs_enabled) {
> - return 0;
> - }
> + vhost_vdpa_set_vring_ready(v, v->dev->vq_index);
>
> - n = VIRTIO_NET(v->dev->vdev);
> - r = vhost_vdpa_net_load_mac(s, n);
> - if (unlikely(r < 0)) {
> - return r;
> - }
> - r = vhost_vdpa_net_load_mq(s, n);
> - if (unlikely(r)) {
> - return r;
> - }
> - r = vhost_vdpa_net_load_offloads(s, n);
> - if (unlikely(r)) {
> - return r;
> + if (v->shadow_vqs_enabled) {
> + n = VIRTIO_NET(v->dev->vdev);
> + r = vhost_vdpa_net_load_mac(s, n);
> + if (unlikely(r < 0)) {
> + return r;
> + }
> + r = vhost_vdpa_net_load_mq(s, n);
> + if (unlikely(r)) {
> + return r;
> + }
> + r = vhost_vdpa_net_load_offloads(s, n);
> + if (unlikely(r)) {
> + return r;
> + }
> + r = vhost_vdpa_net_load_rx(s, n);
> + if (unlikely(r)) {
> + return r;
> + }
> }
> - r = vhost_vdpa_net_load_rx(s, n);
> - if (unlikely(r)) {
> - return r;
> +
> + for (int i = 0; i < v->dev->vq_index; ++i) {
> + vhost_vdpa_set_vring_ready(v, i);
> }
>
> return 0;
> --
> 2.39.3
>
^ permalink raw reply [flat|nested] 9+ messages in thread
* [PATCH v3 5/5] vdpa: remove net cvq migration blocker
2023-08-22 8:53 [PATCH v3 0/5] Enable vdpa net migration with features depending on CVQ Eugenio Pérez
` (3 preceding siblings ...)
2023-08-22 8:53 ` [PATCH v3 4/5] vdpa: move vhost_vdpa_set_vring_ready to the caller Eugenio Pérez
@ 2023-08-22 8:53 ` Eugenio Pérez
2023-08-28 6:10 ` [PATCH v3 0/5] Enable vdpa net migration with features depending on CVQ Lei Yang
2023-09-15 6:39 ` Si-Wei Liu
6 siblings, 0 replies; 9+ messages in thread
From: Eugenio Pérez @ 2023-08-22 8:53 UTC (permalink / raw)
To: qemu-devel
Cc: Laurent Vivier, Harpreet Singh Anand, Shannon Nelson,
Stefano Garzarella, Lei Yang, Michael S. Tsirkin, Hawkins Jiawei,
Dragos Tatulea, Gautam Dawar, si-wei.liu, Zhu Lingshan,
Jason Wang, Parav Pandit, Cindy Lu
Now that we add migration blockers when the device does not support all
the needed features, remove the general blocker applied to all net
devices with CVQ.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
net/vhost-vdpa.c | 12 ------------
1 file changed, 12 deletions(-)
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 3bf60f9431..6bb56f7d94 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -1413,18 +1413,6 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
s->vhost_vdpa.shadow_vq_ops = &vhost_vdpa_net_svq_ops;
s->vhost_vdpa.shadow_vq_ops_opaque = s;
s->cvq_isolated = cvq_isolated;
-
- /*
- * TODO: We cannot migrate devices with CVQ and no x-svq enabled as
- * there is no way to set the device state (MAC, MQ, etc) before
- * starting the datapath.
- *
- * Migration blocker ownership now belongs to s->vhost_vdpa.
- */
- if (!svq) {
- error_setg(&s->vhost_vdpa.migration_blocker,
- "net vdpa cannot migrate with CVQ feature");
- }
}
ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa, queue_pair_index, nvqs);
if (ret) {
--
2.39.3
^ permalink raw reply related [flat|nested] 9+ messages in thread
* Re: [PATCH v3 0/5] Enable vdpa net migration with features depending on CVQ
2023-08-22 8:53 [PATCH v3 0/5] Enable vdpa net migration with features depending on CVQ Eugenio Pérez
` (4 preceding siblings ...)
2023-08-22 8:53 ` [PATCH v3 5/5] vdpa: remove net cvq migration blocker Eugenio Pérez
@ 2023-08-28 6:10 ` Lei Yang
2023-09-15 6:39 ` Si-Wei Liu
6 siblings, 0 replies; 9+ messages in thread
From: Lei Yang @ 2023-08-28 6:10 UTC (permalink / raw)
To: Eugenio Pérez
Cc: qemu-devel, Laurent Vivier, Harpreet Singh Anand, Shannon Nelson,
Stefano Garzarella, Michael S. Tsirkin, Hawkins Jiawei,
Dragos Tatulea, Gautam Dawar, si-wei.liu, Zhu Lingshan,
Jason Wang, Parav Pandit, Cindy Lu
QE tested this series with MAC and MQ changes, and the guest migrated
successfully with "x-svq=off" or without this parameter.
Tested-by: Lei Yang <leiyang@redhat.com>
On Tue, Aug 22, 2023 at 4:53 PM Eugenio Pérez <eperezma@redhat.com> wrote:
>
> At this moment the migration of net features that depend on CVQ is not
> possible, as there is no reliable way to restore device state like the mac
> address, number of enabled queues, etc. to the destination. This is mainly
> because the device must read only CVQ, and process all of its commands,
> before resuming the dataplane.
>
> This series lifts that requirement, sending the VHOST_VDPA_SET_VRING_ENABLE
> ioctl for dataplane vqs only after the device has processed all commands.
> ---
> v3:
> * Fix subject typo and expand message of patch ("vdpa: move
> vhost_vdpa_set_vring_ready to the caller").
>
> v2:
> * Factor out VRING_ENABLE ioctls from vhost_vdpa_dev_start to the caller,
> instead of providing a callback to know if it must be called or not.
> * at https://lists.nongnu.org/archive/html/qemu-devel/2023-07/msg05447.html
>
> RFC:
> * Enable vqs early in case CVQ cannot be shadowed.
> * at https://lists.gnu.org/archive/html/qemu-devel/2023-07/msg01325.html
>
> Eugenio Pérez (5):
> vdpa: use first queue SVQ state for CVQ default
> vdpa: export vhost_vdpa_set_vring_ready
> vdpa: rename vhost_vdpa_net_load to vhost_vdpa_net_cvq_load
> vdpa: move vhost_vdpa_set_vring_ready to the caller
> vdpa: remove net cvq migration blocker
>
> include/hw/virtio/vhost-vdpa.h | 1 +
> hw/virtio/vdpa-dev.c | 3 ++
> hw/virtio/vhost-vdpa.c | 22 +++++-----
> net/vhost-vdpa.c | 75 +++++++++++++++++++---------------
> hw/virtio/trace-events | 2 +-
> 5 files changed, 57 insertions(+), 46 deletions(-)
>
> --
> 2.39.3
>
>
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH v3 0/5] Enable vdpa net migration with features depending on CVQ
2023-08-22 8:53 [PATCH v3 0/5] Enable vdpa net migration with features depending on CVQ Eugenio Pérez
` (5 preceding siblings ...)
2023-08-28 6:10 ` [PATCH v3 0/5] Enable vdpa net migration with features depending on CVQ Lei Yang
@ 2023-09-15 6:39 ` Si-Wei Liu
6 siblings, 0 replies; 9+ messages in thread
From: Si-Wei Liu @ 2023-09-15 6:39 UTC (permalink / raw)
To: Eugenio Pérez, qemu-devel
Cc: Laurent Vivier, Harpreet Singh Anand, Shannon Nelson,
Stefano Garzarella, Lei Yang, Michael S. Tsirkin, Hawkins Jiawei,
Dragos Tatulea, Gautam Dawar, Zhu Lingshan, Jason Wang,
Parav Pandit, Cindy Lu
Does this series need to work with the recently merged
ENABLE_AFTER_DRIVER_OK series from the kernel?
-Siwei
On 8/22/2023 1:53 AM, Eugenio Pérez wrote:
> At this moment the migration of net features that depend on CVQ is not
> possible, as there is no reliable way to restore device state like the mac
> address, number of enabled queues, etc. to the destination. This is mainly
> because the device must read only CVQ, and process all of its commands,
> before resuming the dataplane.
>
> This series lifts that requirement, sending the VHOST_VDPA_SET_VRING_ENABLE
> ioctl for dataplane vqs only after the device has processed all commands.
> ---
> v3:
> * Fix subject typo and expand message of patch ("vdpa: move
> vhost_vdpa_set_vring_ready to the caller").
>
> v2:
> * Factor out VRING_ENABLE ioctls from vhost_vdpa_dev_start to the caller,
> instead of providing a callback to know if it must be called or not.
> * at https://lists.nongnu.org/archive/html/qemu-devel/2023-07/msg05447.html
>
> RFC:
> * Enable vqs early in case CVQ cannot be shadowed.
> * at https://lists.gnu.org/archive/html/qemu-devel/2023-07/msg01325.html
>
> Eugenio Pérez (5):
> vdpa: use first queue SVQ state for CVQ default
> vdpa: export vhost_vdpa_set_vring_ready
> vdpa: rename vhost_vdpa_net_load to vhost_vdpa_net_cvq_load
> vdpa: move vhost_vdpa_set_vring_ready to the caller
> vdpa: remove net cvq migration blocker
>
> include/hw/virtio/vhost-vdpa.h | 1 +
> hw/virtio/vdpa-dev.c | 3 ++
> hw/virtio/vhost-vdpa.c | 22 +++++-----
> net/vhost-vdpa.c | 75 +++++++++++++++++++---------------
> hw/virtio/trace-events | 2 +-
> 5 files changed, 57 insertions(+), 46 deletions(-)
>
^ permalink raw reply [flat|nested] 9+ messages in thread