From: Si-Wei Liu <si-wei.liu@oracle.com>
To: "Eugenio Pérez" <eperezma@redhat.com>, qemu-devel@nongnu.org
Cc: Harpreet Singh Anand <hanand@xilinx.com>,
"Gonglei (Arei)" <arei.gonglei@huawei.com>,
"Michael S. Tsirkin" <mst@redhat.com>,
Jason Wang <jasowang@redhat.com>, Cindy Lu <lulu@redhat.com>,
alvaro.karsz@solid-run.com, Zhu Lingshan <lingshan.zhu@intel.com>,
Lei Yang <leiyang@redhat.com>,
Liuxiangdong <liuxiangdong5@huawei.com>,
Shannon Nelson <snelson@pensando.io>,
Parav Pandit <parav@mellanox.com>,
Gautam Dawar <gdawar@xilinx.com>, Eli Cohen <eli@mellanox.com>,
Stefan Hajnoczi <stefanha@redhat.com>,
Laurent Vivier <lvivier@redhat.com>,
longpeng2@huawei.com, virtualization@lists.linux-foundation.org,
Stefano Garzarella <sgarzare@redhat.com>
Subject: Re: [PATCH v2 07/13] vdpa: add vdpa net migration state notifier
Date: Sun, 12 Feb 2023 22:50:34 -0800
Message-ID: <2901fd82-5c0e-c830-5288-a72b8c08d628@oracle.com>
In-Reply-To: <20230208094253.702672-8-eperezma@redhat.com>

On 2/8/2023 1:42 AM, Eugenio Pérez wrote:
> This allows net to restart the device backend to configure SVQ on it.
>
> Ideally, these changes should not be net specific. However, the vdpa net
> backend is the one with enough knowledge to configure everything, for a
> few reasons:
> * Queues might need to be shadowed or not depending on their kind (control
> vs data).
> * Queues need to share the same map translations (iova tree).
>
> Because of that it is cleaner to restart the whole net backend and
> configure it again as expected, similar to how vhost-kernel moves between
> userspace and passthrough.
>
> If more kinds of devices need dynamic switching to SVQ we can create a
> callback struct like VhostOps and move most of the code there.
> VhostOps cannot be reused since all vdpa backends share them, and
> specializing them just for networking would be too heavy.
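(Not an objection, just making the idea concrete: such a callback struct
could look roughly like the sketch below. VhostSVQSwitchOps and its
members are hypothetical names, not existing QEMU API.)

/* Hypothetical VhostOps-like struct for dynamic SVQ switching */
typedef struct VhostSVQSwitchOps {
    /* Stop the backend, toggle SVQ on every vq, and restart it */
    void (*log_global_enable)(struct vhost_dev *dev, bool enable);
    /* Decide per queue whether it must be shadowed (control vs data) */
    bool (*must_shadow_vq)(struct vhost_dev *dev, int vq_index);
} VhostSVQSwitchOps;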
>
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> ---
> v3:
> * Add TODO to use the resume operation in the future.
> * Use migration_in_setup and migration_has_failed instead of a
> complicated switch case.
> ---
> net/vhost-vdpa.c | 76 ++++++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 76 insertions(+)
>
> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> index dd686b4514..bca13f97fd 100644
> --- a/net/vhost-vdpa.c
> +++ b/net/vhost-vdpa.c
> @@ -26,12 +26,14 @@
> #include <err.h>
> #include "standard-headers/linux/virtio_net.h"
> #include "monitor/monitor.h"
> +#include "migration/misc.h"
> #include "hw/virtio/vhost.h"
>
> /* Todo:need to add the multiqueue support here */
> typedef struct VhostVDPAState {
> NetClientState nc;
> struct vhost_vdpa vhost_vdpa;
> + Notifier migration_state;
> VHostNetState *vhost_net;
>
> /* Control commands shadow buffers */
> @@ -241,10 +243,79 @@ static VhostVDPAState *vhost_vdpa_net_first_nc_vdpa(VhostVDPAState *s)
> return DO_UPCAST(VhostVDPAState, nc, nc0);
> }
>
> +static void vhost_vdpa_net_log_global_enable(VhostVDPAState *s, bool enable)
> +{
> + struct vhost_vdpa *v = &s->vhost_vdpa;
> + VirtIONet *n;
> + VirtIODevice *vdev;
> + int data_queue_pairs, cvq, r;
> + NetClientState *peer;
> +
> + /* We are only called on the first data vqs and only if x-svq is not set */
> + if (s->vhost_vdpa.shadow_vqs_enabled == enable) {
> + return;
> + }
> +
> + vdev = v->dev->vdev;
> + n = VIRTIO_NET(vdev);
> + if (!n->vhost_started) {
> + return;
What if vhost gets started after the migration has already started? Will
SVQ still be (dynamically) enabled during vhost_dev_start()? I don't see
relevant code that deals with that case.
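Something like the sketch below, run from a data-start hook, could close
that window. Note this is an untested sketch: vhost_vdpa_net_data_start()
is a hypothetical per-queue start callback, and it assumes
migration_is_setup_or_active() from migration/misc.h plus the
migrate_get_current() accessor are available here:

static int vhost_vdpa_net_data_start(NetClientState *nc)
{
    VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
    struct vhost_vdpa *v = &s->vhost_vdpa;

    /*
     * If migration was already in setup/active when vhost starts, the
     * notifier has fired before n->vhost_started was set, so enable
     * SVQ here as well instead of relying on the notifier alone.
     */
    if (s->always_svq ||
        migration_is_setup_or_active(migrate_get_current()->state)) {
        v->shadow_vqs_enabled = true;
        v->shadow_data = true;
    }

    return 0;
}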
> + }
> +
> + data_queue_pairs = n->multiqueue ? n->max_queue_pairs : 1;
> + cvq = virtio_vdev_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ) ?
> + n->max_ncs - n->max_queue_pairs : 0;
> + /*
> + * TODO: vhost_net_stop does suspend, get_base and reset. We can be smarter
> + * in the future and resume the device if read-only operations between
> + * suspend and reset go wrong.
> + */
> + vhost_net_stop(vdev, n->nic->ncs, data_queue_pairs, cvq);
> +
> + peer = s->nc.peer;
> + for (int i = 0; i < data_queue_pairs + cvq; i++) {
> + VhostVDPAState *vdpa_state;
> + NetClientState *nc;
> +
> + if (i < data_queue_pairs) {
> + nc = qemu_get_peer(peer, i);
> + } else {
> + nc = qemu_get_peer(peer, n->max_queue_pairs);
> + }
> +
> + vdpa_state = DO_UPCAST(VhostVDPAState, nc, nc);
> + vdpa_state->vhost_vdpa.shadow_data = enable;
I don't get why shadow_data is set on the CVQ's vhost_vdpa. This may
result in an address space collision: data vq iova getting improperly
allocated in the CVQ's address space by
vhost_vdpa_listener_region_{add,del}(). Note that there is currently an
issue where the guest VM's memory listener registration is always hooked
to the last vq, which could be the CVQ in a different iova address space
(VHOST_VDPA_NET_CVQ_ASID).
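If the CVQ is meant to keep its own ASID, a guard like the following (an
untested sketch against this patch, folding the two assignments in the
loop together) would confine shadow_data to the data vqs:

    if (i < data_queue_pairs) {
        /*
         * Only the data vqs share the iova tree translations; CVQ may
         * live in VHOST_VDPA_NET_CVQ_ASID with its own mappings.
         */
        vdpa_state->vhost_vdpa.shadow_data = enable;
        vdpa_state->vhost_vdpa.shadow_vqs_enabled = enable;
    }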
Thanks,
-Siwei
> +
> + if (i < data_queue_pairs) {
> + /* Do not override CVQ shadow_vqs_enabled */
> + vdpa_state->vhost_vdpa.shadow_vqs_enabled = enable;
> + }
> + }
> +
> + r = vhost_net_start(vdev, n->nic->ncs, data_queue_pairs, cvq);
> + if (unlikely(r < 0)) {
> + error_report("unable to start vhost net: %s(%d)", g_strerror(-r), -r);
> + }
> +}
> +
> +static void vdpa_net_migration_state_notifier(Notifier *notifier, void *data)
> +{
> + MigrationState *migration = data;
> + VhostVDPAState *s = container_of(notifier, VhostVDPAState,
> + migration_state);
> +
> + if (migration_in_setup(migration)) {
> + vhost_vdpa_net_log_global_enable(s, true);
> + } else if (migration_has_failed(migration)) {
> + vhost_vdpa_net_log_global_enable(s, false);
> + }
> +}
> +
> static void vhost_vdpa_net_data_start_first(VhostVDPAState *s)
> {
> struct vhost_vdpa *v = &s->vhost_vdpa;
>
> + add_migration_state_change_notifier(&s->migration_state);
> if (v->shadow_vqs_enabled) {
> v->iova_tree = vhost_iova_tree_new(v->iova_range.first,
> v->iova_range.last);
> @@ -278,6 +349,10 @@ static void vhost_vdpa_net_client_stop(NetClientState *nc)
>
> assert(nc->info->type == NET_CLIENT_DRIVER_VHOST_VDPA);
>
> + if (s->vhost_vdpa.index == 0) {
> + remove_migration_state_change_notifier(&s->migration_state);
> + }
> +
> dev = s->vhost_vdpa.dev;
> if (dev->vq_index + dev->nvqs == dev->vq_index_end) {
> g_clear_pointer(&s->vhost_vdpa.iova_tree, vhost_iova_tree_delete);
> @@ -741,6 +816,7 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
> s->vhost_vdpa.device_fd = vdpa_device_fd;
> s->vhost_vdpa.index = queue_pair_index;
> s->always_svq = svq;
> + s->migration_state.notify = vdpa_net_migration_state_notifier;
> s->vhost_vdpa.shadow_vqs_enabled = svq;
> s->vhost_vdpa.iova_range = iova_range;
> s->vhost_vdpa.shadow_data = svq;