From: Alex Williamson <alex.williamson@redhat.com>
To: "Liu, Yi L" <yi.l.liu@intel.com>
Cc: "Tian, Kevin" <kevin.tian@intel.com>,
"Zhao, Yan Y" <yan.y.zhao@intel.com>,
"He, Shaopeng" <shaopeng.he@intel.com>,
"Xia, Chenbo" <chenbo.xia@intel.com>,
"kvm@vger.kernel.org" <kvm@vger.kernel.org>
Subject: Re: mdev live migration support with vfio-mdev-pci
Date: Thu, 12 Sep 2019 15:41:27 +0100 [thread overview]
Message-ID: <20190912154127.04ed3951@x1.home> (raw)
In-Reply-To: <A2975661238FB949B60364EF0F2C25743A08FC3F@SHSMSX104.ccr.corp.intel.com>
On Mon, 9 Sep 2019 11:41:45 +0000
"Liu, Yi L" <yi.l.liu@intel.com> wrote:
> Hi Alex,
>
> Recently, we had an internal discussion on mdev live migration support
> for SR-IOV. The usage is to wrap a VF as an mdev and make it migratable
> when passed through to VMs. It is very similar to the vfio-mdev-pci sample
> driver work, which also wraps a PF/VF as an mdev. But there is a gap: the
> current vfio-mdev-pci driver is a generic driver which has no ability to
> support customized regions, e.g. a state save/restore or dirty page region,
> which is important for live migration. To support this usage, there are
> two directions:
>
> 1) extend the vfio-mdev-pci driver to expose an interface that lets a
> vendor specific in-kernel module (not a driver) register some ops for live
> migration, thus supporting customized regions. In this direction, the
> vfio-mdev-pci driver is in charge of the hardware; the in-kernel vendor
> specific module just provides customized region emulation.
> - Pros: it will be helpful if we want to expose some user-space ABI in
> future, since it is a generic driver.
> - Cons: no apparent cons from my perspective; my folks may keep me honest.
>
> 2) further abstract the generic parts of the vfio-mdev-pci driver into a
> library and let the vendor driver call the interfaces exposed by this
> library, e.g. APIs to wrap a VF as an mdev and to make a non-singleton
> iommu group vfio-viable when a vendor driver wants to wrap a VF as an
> mdev. In this direction, the device driver is still in charge of the
> hardware.
> - Pros: the device driver still owns the device, which looks more
> "reasonable".
> - Cons: no apparent cons, but we may be unable to have a unified user
> space ABI if one is needed in future.
>
> Any thoughts on the above usage and the two directions? Also, Kevin, Yan
> and Shaopeng can keep me honest if anything is missed.
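[To make direction 1) above concrete, here is a rough userspace C model of the ops-registration interface it implies: the generic driver owns the device and dispatches accesses to a vendor-registered set of region callbacks. Every name here is hypothetical, invented for illustration; none of this is existing kernel API.]

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>
#include <sys/types.h>

/* Ops a vendor in-kernel module would register for its customized
 * regions (e.g. state save/restore, dirty page tracking). */
struct mdev_pci_vendor_ops {
	const char *name;
	/* emulate a read from the migration/state region */
	ssize_t (*region_read)(void *priv, char *buf, size_t count, size_t off);
	/* emulate a write (e.g. restore previously saved state) */
	ssize_t (*region_write)(void *priv, const char *buf, size_t count,
				size_t off);
};

/* Per-mdev state the generic vfio-mdev-pci driver would keep. */
struct mdev_pci_dev {
	const struct mdev_pci_vendor_ops *ops;
	void *vendor_priv;
};

/* Registration entry point the generic driver would expose to the
 * vendor module. Returns -1 (think -EINVAL) on an incomplete ops set. */
static int mdev_pci_register_vendor_ops(struct mdev_pci_dev *mdev,
					const struct mdev_pci_vendor_ops *ops,
					void *priv)
{
	if (!ops || !ops->region_read || !ops->region_write)
		return -1;
	mdev->ops = ops;
	mdev->vendor_priv = priv;
	return 0;
}

/* The generic driver stays in charge of the device and merely routes
 * customized-region accesses to the vendor ops. */
static ssize_t mdev_pci_region_rw(struct mdev_pci_dev *mdev, char *buf,
				  size_t count, size_t off, int is_write)
{
	if (!mdev->ops)
		return -1;
	return is_write ?
		mdev->ops->region_write(mdev->vendor_priv, buf, count, off) :
		mdev->ops->region_read(mdev->vendor_priv, buf, count, off);
}
```

The point of the sketch is the split of responsibility: the generic driver keeps the PCI device, and the vendor module contributes only region emulation through the registered ops.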
A concern with 1) is that we specifically made the vfio-mdev-pci driver
a sample driver to avoid user confusion over when to use vfio-pci vs
when to use vfio-mdev-pci. This use case suggests vfio-mdev-pci
becoming a peer of vfio-pci, when really I think it was meant only as a
demo of IOMMU-backed mdev devices and perhaps a starting point for
vendors wanting to create an mdev wrapper around real hardware. I
had assumed that in the latter case the sample driver would be forked.

Do these new suggestions indicate we're deprecating vfio-pci? I'm not
necessarily in favor of that. Couldn't we also have device specific
extensions of vfio-pci that could provide migration support for a
physical device? Do we really want to add the usage burden of the mdev
sysfs interface if we're only adding migration to a VF? Maybe instead
we should add common helpers for migration that could be used by either
vfio-pci or vendor specific mdev drivers.

Ideally I think that if we're not trying to multiplex a device into
multiple mdevs, or trying to supplement a device that would be
incomplete without mdev, and only want to enable migration for a PF/VF,
we'd bind it to vfio-pci and those features would simply appear for
devices we've enlightened vfio-pci to migrate. Thanks,
Alex
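[The "common helpers for migration" idea Alex floats above could look something like the following userspace C model: a small shared library of migration-state bookkeeping that either vfio-pci (via a device-specific extension) or a vendor mdev driver plugs device hooks into. All names and the state set are invented for illustration; this is not existing kernel or vfio uAPI.]

```c
#include <assert.h>
#include <stddef.h>

/* A toy migration state set; a real interface would follow whatever
 * uAPI the community settles on. */
enum vfio_mig_state { MIG_RUNNING, MIG_STOP_COPY, MIG_RESUMING, MIG_STOPPED };

struct vfio_mig_ctx {
	enum vfio_mig_state state;
	/* device-specific hook: quiesce HW and start emitting state */
	int (*save_begin)(void *priv);
	/* device-specific hook: accept and apply incoming state */
	int (*resume_begin)(void *priv);
	void *priv;
};

/* Shared helper: validate the transition and call the driver-supplied
 * hook where the device itself must act. Both a vfio-pci extension and
 * a vendor mdev driver could reuse this unchanged. */
static int vfio_mig_set_state(struct vfio_mig_ctx *ctx,
			      enum vfio_mig_state next)
{
	switch (next) {
	case MIG_STOP_COPY:
		if (ctx->state != MIG_RUNNING)
			return -1;	/* -EINVAL in a real helper */
		if (ctx->save_begin && ctx->save_begin(ctx->priv))
			return -1;
		break;
	case MIG_RESUMING:
		if (ctx->state != MIG_STOPPED)
			return -1;
		if (ctx->resume_begin && ctx->resume_begin(ctx->priv))
			return -1;
		break;
	case MIG_RUNNING:
	case MIG_STOPPED:
		break;
	}
	ctx->state = next;
	return 0;
}
```

The design point is that the transition validation lives in one shared place, while only the two hooks differ per device, which is what would let migration "simply appear" for any driver that fills them in.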