From: Jacob Pan <jacob.jun.pan@linux.intel.com>
To: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Cc: Alex Williamson <alex.williamson@redhat.com>,
"iommu@lists.linux-foundation.org"
<iommu@lists.linux-foundation.org>,
LKML <linux-kernel@vger.kernel.org>,
Joerg Roedel <joro@8bytes.org>,
David Woodhouse <dwmw2@infradead.org>,
Yi Liu <yi.l.liu@intel.com>, "Tian, Kevin" <kevin.tian@intel.com>,
Raj Ashok <ashok.raj@intel.com>,
Christoph Hellwig <hch@infradead.org>,
Lu Baolu <baolu.lu@linux.intel.com>,
Andriy Shevchenko <andriy.shevchenko@linux.intel.com>,
jacob.jun.pan@linux.intel.com
Subject: Re: [PATCH 10/18] iommu/vt-d: Add custom allocator for IOASID
Date: Thu, 18 Apr 2019 21:29:58 -0700 [thread overview]
Message-ID: <20190418212958.65975103@jacob-builder> (raw)
In-Reply-To: <70c6b197-029e-94cd-2745-d4a193cbbcd0@arm.com>
On Thu, 18 Apr 2019 16:36:02 +0100
Jean-Philippe Brucker <jean-philippe.brucker@arm.com> wrote:
> On 16/04/2019 00:10, Jacob Pan wrote: [...]
> >> > +	/*
> >> > +	 * Register a custom ASID allocator if we are running
> >> > +	 * in a guest, the purpose is to have a system wide PASID
> >> > +	 * namespace among all PASID users.
> >> > +	 * Note that only one vIOMMU in each guest is supported.
> >>
> >> Why one vIOMMU per guest? This would prevent guests with multiple
> >> PCI domains aiui.
> >>
> > This is mainly for simplicity reasons. These are all virtual BDFs
> > anyway. As long as a guest BDF can be mapped to a host BDF, it
> > should be sufficient; am I missing anything?
> >
> > From PASID allocation perspective, it is not tied to any PCI device
> > until bind call. We only need to track PASID ownership per guest.
> >
> > The virtio-iommu spec does support multiple PCI domains, but I am
> > not sure whether that applies to assigned devices, i.e. whether all
> > assigned devices are under the same domain. Perhaps Jean can help
> > clarify what the PASID allocation API looks like on virtio-iommu.
>
> [Ugh, this is much longer than I hoped. In short I don't think
> multiple vIOMMUs is a problem, because the host uses the same
> allocator for all of them.]
>
Agreed, it is not an issue as far as PASID allocation is concerned.
> Yes there can be a single virtio-iommu instance for multiple PCI
> domains, or multiple instances each managing assigned devices. It's
> up to the hypervisor to decide on the topology.
>
> For Linux and QEMU I was assuming that choosing the vIOMMU used for
> PASID allocation isn't a big deal, since in the end they all use the
> same allocator in the host. It gets complicated when some vIOMMUs can
> be removed at runtime (unload the virtio-iommu module that was
> providing the PASID allocator, and then you can't allocate PASIDs for
> the VT-d instance anymore), so maybe limiting to one type of vIOMMU
> (don't mix VT-d and virtio-iommu in the same VM) is more reasonable.
>
I think we can deal with hot removal of a vIOMMU by keeping multiple
allocators in a list, i.e. when the second vIOMMU registers an
allocator, instead of returning -EBUSY we just keep it in a backup
list. If the first vIOMMU is removed, the second one can be popped
into action (and vice versa). Then we always have an allocator.
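The backup-list idea could be sketched roughly as follows; all names
here (pasid_allocator, register/unregister helpers) are illustrative,
not the actual IOASID API:

```c
/*
 * Sketch: keep every registered custom allocator in a list; the head
 * of the list is the active one. When the active allocator goes away
 * (vIOMMU hot removal), the next entry in the list takes over.
 */
#include <assert.h>
#include <stddef.h>

struct pasid_allocator {
	const char *name;
	struct pasid_allocator *next;
};

static struct pasid_allocator *allocators; /* head = active allocator */

static void register_allocator(struct pasid_allocator *a)
{
	/* Append instead of returning -EBUSY for the second vIOMMU. */
	struct pasid_allocator **p = &allocators;

	while (*p)
		p = &(*p)->next;
	a->next = NULL;
	*p = a;
}

static void unregister_allocator(struct pasid_allocator *a)
{
	struct pasid_allocator **p = &allocators;

	while (*p && *p != a)
		p = &(*p)->next;
	if (*p)
		*p = a->next; /* next allocator in line becomes active */
}

static struct pasid_allocator *active_allocator(void)
{
	return allocators;
}
```

With this, unloading the vIOMMU that registered the active allocator
does not leave the system without a PASID allocator, as long as at
least one other vIOMMU is still registered.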
> It's a bit more delicate from the virtio-iommu perspective. The
> interface is portable and I can't tie it down to the choices we're
> making for Linux and KVM. Having a system-wide PASID space is what we
> picked for Linux but the PCIe architecture allows for each device to
> have their own PASID space, and I suppose some hypervisors and guests
> might prefer implementing it that way.
>
> My plan for the moment is to implement global PASID allocation using
> one feature bit and two new requests, but leave space for a
> per-device PASID allocation, introduced with another feature bit if
> necessary. If it ever gets added, I expect the per-device allocation
> to be done during the bind request rather than with a separate
> PASID_ALLOC request.
>
> So currently I have a new feature bit and two commands:
>
> #define VIRTIO_IOMMU_F_PASID_ALLOC
> #define VIRTIO_IOMMU_T_ALLOC_PASID
> #define VIRTIO_IOMMU_T_FREE_PASID
>
> struct virtio_iommu_req_alloc_pasid {
> 	struct virtio_iommu_req_head	head;
> 	u32				reserved;
>
> 	/* Device-writeable */
> 	le32				pasid;
> 	struct virtio_iommu_req_tail	tail;
> };
>
> struct virtio_iommu_req_free_pasid {
> 	struct virtio_iommu_req_head	head;
> 	u32				reserved;
> 	le32				pasid;
>
> 	/* Device-writeable */
> 	struct virtio_iommu_req_tail	tail;
> };
>
> If the feature bit is offered it must be used, and the guest can only
> use PASIDs allocated via VIRTIO_IOMMU_T_ALLOC_PASID in its bind
> requests.
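For illustration, the two requests above can be modeled with host
fixed-width types: le32 is treated as uint32_t (little-endian host
assumed), and the head/tail structs below are placeholder stubs, since
their real definitions live in the virtio-iommu spec:

```c
/*
 * Layout-only model of the proposed requests. The head/tail stubs
 * are 4-byte stand-ins, not the spec definitions; the point is to
 * show where the device-writeable portion of each request begins.
 */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct virtio_iommu_req_head { uint8_t type; uint8_t reserved[3]; };
struct virtio_iommu_req_tail { uint8_t status; uint8_t reserved[3]; };

struct virtio_iommu_req_alloc_pasid {
	struct virtio_iommu_req_head head;
	uint32_t reserved;
	/* Device-writeable: the device fills in the allocated PASID */
	uint32_t pasid;
	struct virtio_iommu_req_tail tail;
};

struct virtio_iommu_req_free_pasid {
	struct virtio_iommu_req_head head;
	uint32_t reserved;
	uint32_t pasid;	/* driver-written: the PASID to free */
	/* Device-writeable */
	struct virtio_iommu_req_tail tail;
};
```

Note the asymmetry: for ALLOC the pasid field is device-writeable
(an output), while for FREE it is driver-written (an input), which is
why it sits on opposite sides of the device-writeable boundary.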
>
> The PASID space is global at the host scope. If multiple virtio-iommu
> devices in the VM offer the feature bit, then using either of their
> command queues to issue a VIRTIO_IOMMU_T_ALLOC_PASID or
> VIRTIO_IOMMU_T_FREE_PASID is equivalent. Another possibility is to
> require that only one of the virtio-iommu instances per VM offers the
> feature bit. I do prefer this option, but there is the vIOMMU removal
> problem mentioned above - which, with the first option, could be
> solved by keeping a list of PASID allocator functions rather than a
> single one.
>
> I'm considering adding a max_pasid field to
> virtio_iommu_req_alloc_pasid. If VIRTIO_IOMMU_T_ALLOC_PASID returns a
> random 20-bit value then a lot of space might be needed for storing
> PASID contexts (is that a real concern though? For internal data it
> can use a binary tree, and the guest is not in charge of hardware
> PASID tables here). If the guest is short on memory then it could
> benefit from a smaller number of PASID bits. That could either be
> globally configurable in the virtio-iommu config space, or using a
> max_pasid field in the VIRTIO_IOMMU_T_ALLOC_PASID request. The latter
> makes it possible to support devices with fewer than 20 PASID bits,
> though we're
> hoping that no one will implement that.
>
Space is not a concern for the VT-d vIOMMU in that we have a two-level
PASID table. And we need to shadow anyway.
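As a rough sketch of the max_pasid idea discussed above, assuming the
host supports the full 20-bit PASID space (names and limits below are
illustrative, not from either driver):

```c
/*
 * A guest-supplied max_pasid bounds the host-side allocation: the
 * host allocates from [1, min(host max, guest max)]. A trivial
 * bump allocator stands in for the real host allocator here.
 */
#include <assert.h>
#include <stdint.h>

#define HOST_PASID_BITS	20
#define HOST_MAX_PASID	((1u << HOST_PASID_BITS) - 1)

static uint32_t next_pasid = 1;	/* PASID 0 reserved */

/* Returns a PASID within the guest's limit, or 0 on failure. */
static uint32_t alloc_pasid(uint32_t guest_max_pasid)
{
	uint32_t limit = guest_max_pasid < HOST_MAX_PASID ?
			 guest_max_pasid : HOST_MAX_PASID;

	if (next_pasid > limit)
		return 0;
	return next_pasid++;
}
```

A guest short on memory could thus pass a small max_pasid and keep its
PASID contexts in a compact table instead of sizing for 2^20 entries.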
If it is OK with you, I will squash my changes into your ioasid
patches and address the review comments in v2 of this set, i.e.:
[PATCH 02/18] ioasid: Add custom IOASID allocator
[PATCH 03/18] ioasid: Convert ioasid_idr to XArray
> Thanks,
> Jean