From: Jason Wang <jasowang@redhat.com>
To: Eugenio Perez Martin <eperezma@redhat.com>
Cc: qemu-devel@nongnu.org, Gautam Dawar <gdawar@xilinx.com>,
si-wei.liu@oracle.com, Zhu Lingshan <lingshan.zhu@intel.com>,
Stefano Garzarella <sgarzare@redhat.com>,
Parav Pandit <parav@mellanox.com>, Cindy Lu <lulu@redhat.com>,
"Michael S. Tsirkin" <mst@redhat.com>,
Harpreet Singh Anand <hanand@xilinx.com>,
Laurent Vivier <lvivier@redhat.com>,
Shannon Nelson <snelson@pensando.io>,
Lei Yang <leiyang@redhat.com>,
Dragos Tatulea <dtatulea@nvidia.com>
Subject: Re: [PATCH 0/7] Enable vdpa net migration with features depending on CVQ
Date: Tue, 1 Aug 2023 11:48:28 +0800
Message-ID: <CACGkMEsNVOajUObv_5Stnn4wtQtSdLNcxVqiB7_6xAnN1OSjQQ@mail.gmail.com>
In-Reply-To: <CAJaqyWccXD1PcA=jV59LCkxzCbnvghtPrk_ShFscdDXe1Aj4uQ@mail.gmail.com>
On Mon, Jul 31, 2023 at 6:15 PM Eugenio Perez Martin
<eperezma@redhat.com> wrote:
>
> On Mon, Jul 31, 2023 at 8:41 AM Jason Wang <jasowang@redhat.com> wrote:
> >
> > On Sat, Jul 29, 2023 at 1:20 AM Eugenio Pérez <eperezma@redhat.com> wrote:
> > >
> > > At this moment the migration of net features that depend on CVQ is not
> > > possible, as there is no reliable way to restore device state such as the
> > > mac address, the number of enabled queues, etc. to the destination. This
> > > is mainly because the device must read only CVQ, and process all of its
> > > commands, before resuming the dataplane.
> > >
> > > This series lifts that requirement, sending the VHOST_VDPA_SET_VRING_ENABLE
> > > ioctl for the dataplane vqs only after the device has processed all commands.
> >
> > I think it's better to explain (that is the part I don't understand) why
> > we cannot simply reorder vhost_net_start_one() in vhost_net_start()?
> >
> >     for (i = 0; i < nvhosts; i++) {
> >         if (i < data_queue_pairs) {
> >             peer = qemu_get_peer(ncs, i);
> >         } else {
> >             peer = qemu_get_peer(ncs, n->max_queue_pairs);
> >         }
> >
> >         if (peer->vring_enable) {
> >             /* restore vring enable state */
> >             r = vhost_set_vring_enable(peer, peer->vring_enable);
> >
> >             if (r < 0) {
> >                 goto err_start;
> >             }
> >         }
> >
> > =>      r = vhost_net_start_one(get_vhost_net(peer), dev);
> >         if (r < 0) {
> >             goto err_start;
> >         }
> >     }
> >
> > Can we simply start cvq first here?
> >
>
> Well, the current order is:
> * Set dev features (conditioned by
> * Configure all vq addresses
> * Configure all vq sizes
> ...
> * Enable cvq
> * DRIVER_OK
> * Enable all the rest of the queues.
>
> If we just start CVQ first, we need to modify vhost_vdpa_set_features
> at a minimum. A lot of code that depends on vdev->vq_index{,_end} may
> be affected.
>
> Also, I'm not sure all the devices will support configuring the vq
> address, vq size, etc. after DRIVER_OK.
Ok, so basically what I meant is to look for a way to refactor
vhost_net_start() instead of introducing new ops (e.g. introducing
virtio ops in vhost seems like a layer violation anyhow).

Can we simply factor VRING_ENABLE out, so that we can enable the
vrings in whatever order we want in vhost_net_start()?
Thanks
>
> > Thanks
> >
> > > ---
> > > From RFC:
> > > * Enable vqs early in case CVQ cannot be shadowed.
> > >
> > > Eugenio Pérez (7):
> > > vdpa: export vhost_vdpa_set_vring_ready
> > > vdpa: add should_enable op
> > > vdpa: use virtio_ops->should_enable at vhost_vdpa_set_vrings_ready
> > > vdpa: add stub vhost_vdpa_should_enable
> > > vdpa: delay enable of data vqs
> > > vdpa: enable cvq svq if data vq are shadowed
> > > vdpa: remove net cvq migration blocker
> > >
> > > include/hw/virtio/vhost-vdpa.h | 9 +++++
> > > hw/virtio/vhost-vdpa.c | 33 ++++++++++++----
> > > net/vhost-vdpa.c | 69 ++++++++++++++++++++++++++--------
> > > hw/virtio/trace-events | 2 +-
> > > 4 files changed, 89 insertions(+), 24 deletions(-)
> > >
> > > --
> > > 2.39.3
> > >
> > >
> >
>