From: David Gibson <david@gibson.dropbear.id.au>
To: Peter Xu <peterx@redhat.com>
Cc: Auger Eric <eric.auger@redhat.com>,
	"Liu, Yi L" <yi.l.liu@linux.intel.com>,
	tianyu.lan@intel.com, kevin.tian@intel.com, yi.l.liu@intel.com,
	mst@redhat.com, jasowang@redhat.com, qemu-devel@nongnu.org,
	alex.williamson@redhat.com, pbonzini@redhat.com
Subject: Re: [Qemu-devel] [RESEND PATCH 2/6] memory: introduce AddressSpaceOps and IOMMUObject
Date: Mon, 18 Dec 2017 22:35:31 +1100	[thread overview]
Message-ID: <20171218113531.GC4786@umbus.fritz.box> (raw)
In-Reply-To: <20171115071632.GF6821@xz-mi>


On Wed, Nov 15, 2017 at 03:16:32PM +0800, Peter Xu wrote:
> On Tue, Nov 14, 2017 at 10:52:54PM +0100, Auger Eric wrote:
> 
> [...]
> 
> > I meant, in the current intel_iommu code, vtd_find_add_as() creates 1
> > IOMMU MR and 1 AS per PCIe device, right?
> 
> I think this is the trickiest point - in QEMU an IOMMU MR is not
> really a 1:1 relationship to devices.  For Intel, it's true; for
> Power, it's not.  On Power guests, one device's DMA address space
> can be split into different translation windows, while each window
> corresponds to one IOMMU MR.

Right.
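To make that concrete, here's a toy model of the shape Peter describes
(this is illustrative C, not QEMU code, and the window bounds are made
up rather than the real sPAPR values): one device DMA address space
containing several translation windows, where each window would be
backed by its own IOMMU MR.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy model: a device's DMA address space on a POWER guest is a set
 * of translation windows; each window corresponds to one IOMMU MR. */
typedef struct DmaWindow {
    uint64_t start;
    uint64_t size;
} DmaWindow;

typedef struct DeviceDmaAddressSpace {
    DmaWindow windows[2];   /* e.g. a 32-bit and a 64-bit window */
    size_t nwindows;
} DeviceDmaAddressSpace;

/* Return the index of the window (i.e. which IOMMU MR) that would
 * translate this IOVA, or -1 if the address hits no window. */
static int window_for_iova(const DeviceDmaAddressSpace *as, uint64_t iova)
{
    for (size_t i = 0; i < as->nwindows; i++) {
        const DmaWindow *w = &as->windows[i];
        if (iova >= w->start && iova - w->start < w->size) {
            return (int)i;
        }
    }
    return -1;
}
```

So a notifier hung off a single MR only ever sees one window's worth
of the device's DMA space.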

> So IMHO the real 1:1 mapping is between the device and its DMA address
> space, rather than MRs.

That's not true either.  With both POWER and Intel, several devices
can share a DMA address space: on POWER if they are in the same PE, on
Intel if they are placed in the same IOMMU domain.

On x86 and on POWER bare metal we generally try to make the minimum
granularity for each PE/domain be a single function.  However, that
may not be possible in the case of PCIe to PCI bridges, or
multifunction devices where the functions aren't properly isolated
from each other (e.g. function 0 debug registers which can affect
other functions are quite common).

For POWER guests we only have one PE/domain per virtual host bridge.
That's just a matter of implementation simplicity - if you want fine
grained isolation you can just create more virtual host bridges.
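The grouping rules above could be sketched like this (a toy model of
the policy, not QEMU's actual logic; the group-numbering scheme is
invented for illustration): one group per isolated function on bare
metal, a shared group for everything behind a PCIe-to-PCI bridge, and
on a POWER guest one PE per virtual host bridge.

```c
#include <assert.h>
#include <stdbool.h>

enum { BRIDGE_GROUP_BASE = 1000, GUEST_PE_BASE = 2000 };

typedef struct Dev {
    int id;           /* unique per function */
    int bridge_id;    /* -1 unless behind a PCIe-to-PCI bridge */
    int host_bridge;  /* which (virtual) host bridge it sits on */
} Dev;

static int isolation_group(const Dev *d, bool power_guest)
{
    if (power_guest) {
        /* POWER guest: one PE per virtual host bridge */
        return GUEST_PE_BASE + d->host_bridge;
    }
    if (d->bridge_id >= 0) {
        /* devices behind the bridge can't be told apart: share a group */
        return BRIDGE_GROUP_BASE + d->bridge_id;
    }
    /* finest granularity: one group per isolated function */
    return d->id;
}
```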

> It's been a long time since I drafted the patches.  I think at
> least it should be a more general notifier mechanism compared to
> the current IOMMUNotifier, which is bound to IOTLB notifications
> only.  AFAICT if we want to trap first-level translation changes,
> the current notifier interface is not even close - just see the
> definition of IOMMUTLBEntry: it is tailored only for MAP/UNMAP of
> translation addresses, not anything else.  And IMHO that's why it's
> tightly bound to MemoryRegions, and that's the root problem.  The
> dynamic IOMMU MR switching problem is related to this issue as well.

So, having read and thought a bunch more, I think I know where you
need to start hooking this in.  The thing is the current qemu PCI DMA
structure assumes that each device belongs to just a single PCI
address space - that's what pci_device_iommu_address_space() returns.

For virt-SVM that's just not true.  IIUC, a virt-SVM capable device
could simultaneously write to multiple process address spaces, since
the process IDs actually go over the bus.

So trying to hook notifiers at the AddressSpace OR MemoryRegion level
just doesn't make sense - if we've picked a single address space for
the device, we've already made a wrong step.

Instead, what you need, I think, is something like
pci_device_virtsvm_context().  virt-SVM capable devices would need to
call that *before* calling pci_device_iommu_address_space() - or
rather, the virt-SVM capable DMA helpers would need to call that.

That would return a new VirtSVMContext (or something) object, which
would roughly correspond to a single PASID table.  That's where the
methods and notifiers for managing that would need to go.
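Roughly along these lines - to be clear, none of these types or
functions exist in QEMU today; the names (VirtSVMContext,
virtsvm_bind_pasid, etc.) and the layout are assumptions purely for
illustration.  The point is that DMA gets routed per (device, PASID)
through the context's PASID table, not through one per-device address
space:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_PASID 8   /* real PASIDs are up to 20 bits; toy value */

typedef struct ToyAddressSpace {
    int id;   /* stand-in for one guest process address space */
} ToyAddressSpace;

/* Hypothetical per-device context, corresponding to one PASID table. */
typedef struct VirtSVMContext {
    /* One slot per PASID; NULL means the PASID is unbound. */
    ToyAddressSpace *pasid_table[MAX_PASID];
} VirtSVMContext;

static int virtsvm_bind_pasid(VirtSVMContext *ctx, uint32_t pasid,
                              ToyAddressSpace *as)
{
    if (pasid >= MAX_PASID) {
        return -1;
    }
    ctx->pasid_table[pasid] = as;   /* notifiers would fire here */
    return 0;
}

/* A virt-SVM aware DMA helper would consult the context first,
 * rather than assuming a single per-device address space. */
static ToyAddressSpace *virtsvm_as_for_dma(VirtSVMContext *ctx,
                                           uint32_t pasid)
{
    return pasid < MAX_PASID ? ctx->pasid_table[pasid] : NULL;
}
```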

> I am not sure current "get IOMMU object from address space" solution
> would be best, maybe it's "too big a scope"; I think it depends on
> whether in the future we'll have some requirement in such a big
> scope (say, something we want to trap from vIOMMU and deliver it to
> host IOMMU which may not even be device-related?  I don't know).  Now
> another alternative I am thinking is, whether we can provide a
> per-device notifier, then it can be bound to PCIDevice rather than
> MemoryRegions, then it will be in device scope.

I think that sounds like a version of what I've suggested above.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson


