From: "Michael S. Tsirkin" <mst@redhat.com>
To: Jan Kiszka <jan.kiszka@siemens.com>
Cc: virtio-dev@lists.oasis-open.org, Claudio.Fontana@huawei.com,
qemu-devel@nongnu.org, virtualization@lists.linux-foundation.org,
"Nakajima, Jun" <jun.nakajima@intel.com>,
Varun Sethi <Varun.Sethi@freescale.com>,
opnfv-tech-discuss@lists.opnfv.org
Subject: Re: [Qemu-devel] rfc: vhost user enhancements for vm2vm communication
Date: Tue, 1 Sep 2015 17:34:22 +0300
Message-ID: <20150901172239-mutt-send-email-mst@redhat.com>
In-Reply-To: <55E5B1A8.9080506@siemens.com>

On Tue, Sep 01, 2015 at 04:09:44PM +0200, Jan Kiszka wrote:
> On 2015-09-01 11:24, Michael S. Tsirkin wrote:
> > On Tue, Sep 01, 2015 at 11:11:52AM +0200, Jan Kiszka wrote:
> >> On 2015-09-01 10:01, Michael S. Tsirkin wrote:
> >>> On Tue, Sep 01, 2015 at 09:35:21AM +0200, Jan Kiszka wrote:
> >>>> Leaving all the implementation and interface details aside, this
> >>>> discussion is first of all about two fundamentally different approaches:
> >>>> static shared memory windows vs. dynamically remapped shared windows (a
> >>>> third one would be copying in the hypervisor, but I suppose we all agree
> >>>> that the whole exercise is about avoiding that). Which way do we want or
> >>>> have to go?
> >>>>
> >>>> Jan
> >>>
> >>> Dynamic is a superset of static: you can always make it static if you
> >>> wish. Static has the advantage of simplicity, but that's lost once you
> >>> realize you need to invent interfaces to make it work. Since we can use
> >>> existing IOMMU interfaces for the dynamic one, what's the disadvantage?
> >>
> >> Complexity. Having to emulate even more of an IOMMU in the hypervisor
> >> (we already have to do a bit for VT-d IR in Jailhouse) and doing this
> >> per platform (AMD IOMMU, ARM SMMU, ...) is out of scope for us. In that
> >> sense, generic grant tables would be more appealing.
> >
> > That's not how we do things for KVM: PV features need to be
> > modular and interchangeable with emulation.
>
> I know, and we may have to make some compromise for Jailhouse if that
> brings us valuable standardization and broad guest support. But we will
> surely not support an arbitrary number of IOMMU models for that reason.
>
> >
> > If you just want something that's cross-platform and easy to
> > implement, just build a PV IOMMU. Maybe use virtio for this.
>
> That is likely required to keep the complexity manageable and to allow
> static preconfiguration.

Real IOMMUs allow static configuration just fine. This is exactly
what VFIO uses.

> Well, we could declare our virtio-shmem device to be an IOMMU device
> that controls access of a remote VM to RAM of the one that owns the
> device. In the static case, this access may at most be enabled/disabled
> but not moved around. The static regions would have to be discoverable
> for the VM (register read-back), and the guest's firmware will likely
> have to declare those ranges reserved to the guest OS.
> In the dynamic case, the guest would be able to create an alternative
> mapping.
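The static read-back variant described above could look roughly like the
following sketch. The register layout, field names, and flag are all
invented for illustration; nothing here is an existing virtio interface:

```c
#include <stdint.h>

/* Hypothetical config layout: the hypervisor fixes the shared window
 * and the guest only reads it back, instead of programming mappings. */
struct virtio_shmem_static_cfg {
    uint64_t peer_ram_base;  /* guest-physical base of the shared window */
    uint64_t peer_ram_size;  /* size of the window */
    uint32_t flags;          /* bit 0: peer access currently enabled */
};

#define VIRTIO_SHMEM_F_PEER_ENABLED (1u << 0)

/* Firmware would mark [peer_ram_base, peer_ram_base + peer_ram_size)
 * reserved so the guest OS does not hand it out as normal RAM. */
static int shmem_window_contains(const struct virtio_shmem_static_cfg *cfg,
                                 uint64_t addr)
{
    return (cfg->flags & VIRTIO_SHMEM_F_PEER_ENABLED) &&
           addr >= cfg->peer_ram_base &&
           addr - cfg->peer_ram_base < cfg->peer_ram_size;
}
```

In the static case the only run-time state transition is the enable bit;
the window itself never moves.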

I don't think we want a special device just to support the
static case. It might be a bit less code to write, but
eventually it should be up to the guest.
Fundamentally, it's policy that the host has no business
dictating.

> We would probably have to define a generic page table structure
> for that. Or do you rather have some MPU-like control structure in mind,
> more similar to the memory region descriptions vhost is already using?

I don't care much. Page tables use less memory if a lot of memory needs
to be covered. OTOH if you want to use virtio (e.g. to allow command
batching), that likely means commands to manipulate the IOMMU, and
maintaining it all on the host. You decide.

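The MPU-like alternative, a flat region table similar in spirit to the
memory table vhost-user already passes around, could be sketched as
below. All names are invented for illustration, not an existing
interface:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* One entry per contiguous mapping the device is allowed to reach. */
struct iommu_region {
    uint64_t iova;     /* bus address as seen by the device */
    uint64_t target;   /* guest-physical address it maps to */
    uint64_t size;
    bool     writable;
};

/* Linear lookup: translate iova to target, honoring write protection.
 * Returns false when the access has no mapping and must be rejected. */
static bool region_translate(const struct iommu_region *regions, size_t n,
                             uint64_t iova, bool write, uint64_t *target)
{
    for (size_t i = 0; i < n; i++) {
        const struct iommu_region *r = &regions[i];
        if (iova >= r->iova && iova - r->iova < r->size &&
            (!write || r->writable)) {
            *target = r->target + (iova - r->iova);
            return true;
        }
    }
    return false;
}
```

A handful of large regions keeps this table tiny; page tables only win
once the mapped set becomes large and fragmented.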
> Also not yet clear to me is how the vhost-pci device and the
> translations it will have to do should look for VM2.

I think we can use the vhost-pci BAR + VM1 bus address as the
VM2 physical address. In other words, all memory exposed to
virtio-pci by VM1 through its IOMMU is mapped into the BAR of
vhost-pci.
Bus addresses can be validated to make sure they fit
in the BAR.

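That translation plus validation step could be sketched as follows. This
is a sketch only: bar_base and bar_size stand for the vhost-pci BAR as
VM2 sees it, and the function name is made up:

```c
#include <stdbool.h>
#include <stdint.h>

/* Map a VM1 bus address (as programmed into VM1's IOMMU) to a VM2
 * physical address inside the vhost-pci BAR. Rejects ranges that do
 * not fit in the BAR, including arithmetic-overflow cases. */
static bool vm1_bus_to_vm2_phys(uint64_t bus_addr, uint64_t len,
                                uint64_t bar_base, uint64_t bar_size,
                                uint64_t *vm2_phys)
{
    if (len == 0 || bus_addr > bar_size || len > bar_size - bus_addr)
        return false;
    *vm2_phys = bar_base + bus_addr;
    return true;
}
```

The translation itself is just an offset; all the real work is the
bounds check, which is what prevents VM1 from steering VM2 outside the
window.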
One issue to consider is that VM1 can trick VM2 into writing
into a bus address that isn't mapped in the IOMMU, or
is mapped read-only.
We probably would have to teach KVM to handle this somehow,
e.g. exit to QEMU, or even just ignore the access. Maybe notify
the guest, e.g. by setting a bit in the device's config space,
to avoid an easy DoS.

> >
> >> But what we would
> >> actually need is an interface that is only *optionally* configured by a
> >> guest for dynamic scenarios, otherwise preconfigured by the hypervisor
> >> for static setups. And we need guests that support both. That's the
> >> challenge.
> >>
> >> Jan
> >
> > That's already there for IOMMUs: vfio does the static setup by default;
> > enabling the IOMMU in the guest is optional.
>
> Cannot follow yet how vfio comes into play regarding some preconfigured
> virtual IOMMU.
>
> Jan
>
> --
> Siemens AG, Corporate Technology, CT RTC ITP SES-DE
> Corporate Competence Center Embedded Linux