From: Eugenio Perez Martin <eperezma@redhat.com>
To: Jason Wang <jasowang@redhat.com>
Cc: "Longpeng (Mike,
Cloud Infrastructure Service Product Dept.)"
<longpeng2@huawei.com>, qemu-level <qemu-devel@nongnu.org>,
Michael Tsirkin <mst@redhat.com>,
Si-Wei Liu <si-wei.liu@oracle.com>,
"Gonglei (Arei)" <arei.gonglei@huawei.com>,
Eli Cohen <elic@nvidia.com>, Parav Pandit <parav@nvidia.com>,
Juan Quintela <quintela@redhat.com>,
David Gilbert <dgilbert@redhat.com>
Subject: Re: Reducing vdpa migration downtime because of memory pin / maps
Date: Tue, 11 Apr 2023 08:28:34 +0200
Message-ID: <CAJaqyWcF6NMBB+MmgmMnnpKovMDYLoUaDAmVharW33FPmebaMQ@mail.gmail.com>
In-Reply-To: <CACGkMEsP_CTz9Mapps9bkUSfU2yMuBQd6jFxpRbLVcvfDh_awA@mail.gmail.com>

On Tue, Apr 11, 2023 at 4:26 AM Jason Wang <jasowang@redhat.com> wrote:
>
> On Mon, Apr 10, 2023 at 5:05 PM Eugenio Perez Martin
> <eperezma@redhat.com> wrote:
> >
> > On Mon, Apr 10, 2023 at 5:22 AM Jason Wang <jasowang@redhat.com> wrote:
> > >
> > > On Mon, Apr 10, 2023 at 11:17 AM Longpeng (Mike, Cloud Infrastructure
> > > Service Product Dept.) <longpeng2@huawei.com> wrote:
> > > >
> > > >
> > > >
> > > > On 2023/4/10 10:14, Jason Wang wrote:
> > > > > On Wed, Apr 5, 2023 at 7:38 PM Eugenio Perez Martin <eperezma@redhat.com> wrote:
> > > > >>
> > > > >> Hi!
> > > > >>
> > > > >> As mentioned in the last upstream virtio-networking meeting, one of
> > > > >> the factors that adds more downtime to migration is the handling of
> > > > >> the guest memory (pin, map, etc.). At this moment this handling is
> > > > >> bound to the virtio life cycle (DRIVER_OK, RESET). In that sense, the
> > > > >> destination device waits until all the guest memory / state has been
> > > > >> migrated before it starts pinning all the memory.
> > > > >>
> > > > >> The proposal is to bind it to the char device life cycle (open vs
> > > > >> close) instead, so all the guest memory can stay pinned for the whole
> > > > >> guest / QEMU lifecycle.
> > > > >>
> > > > >> This has two main problems:
> > > > >> * At this moment the reset semantics force the vdpa device to unmap
> > > > >> all the memory. So this change needs a vhost-vdpa feature flag.
> > > > >
> > > > > Is this true? I didn't find any code that unmaps the memory in
> > > > > vhost_vdpa_set_status().
> > > > >
> > > >
> > > > It could depend on the vendor driver; for example, vdpasim does
> > > > something like that:
> > > >
> > > > vhost_vdpa_set_status->vdpa_reset->vdpasim_reset->vdpasim_do_reset->vhost_iotlb_reset
> > >
> > > This looks like a bug. Or I wonder if any userspace depends on this
> > > behaviour; if so, we really need a new flag then.
> > >
> >
> > My understanding was that we depend on this for cases like QEMU
> > crashes: we don't do an unmap(-1ULL) or anything like that to make
> > sure the device is clean when we bind a second QEMU to the same
> > device. That's why I think that close() should clean them up.
>
> In vhost_vdpa_release() we do:
>
> vhost_vdpa_release()
>     vhost_vdpa_cleanup()
>         for_each_as()
>             vhost_vdpa_remove_as()
>                 vhost_vdpa_iotlb_unmap(0ULL, 0ULL - 1)
>     vhost_vdpa_free_domain()
>
> Anything wrong here?
>
No, I think we just trusted in the "semantics" of different
pre-existing cleanup points.
> Conceptually, the address mapping is not a part of the abstraction for
> a virtio device now. So resetting the memory mapping during virtio
> device reset seems wrong.
>
I agree. So no change in the kernel should be needed other than
reverting this cleanup on device reset. I guess we should document it
in ops->reset just in case?
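
Something along these lines in the vdpa_config_ops kernel-doc, for
example (just a first draft of the wording, not a patch):

/*
 * Draft for include/linux/vdpa.h, struct vdpa_config_ops:
 *
 * @reset:    Reset device (mandatory)
 *            Resetting the device must not tear down the address
 *            mappings (IOTLB) or unpin guest memory; those are bound
 *            to the vhost-vdpa file descriptor lifetime and are only
 *            cleaned up on release().
 */
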
Thanks!