* [PATCH] iommu/virtio: Advertise IOMMU_CAP_CACHE_COHERENCY
@ 2022-07-14 11:11 Jean-Philippe Brucker
2022-07-14 12:01 ` Robin Murphy
0 siblings, 1 reply; 5+ messages in thread
From: Jean-Philippe Brucker @ 2022-07-14 11:11 UTC (permalink / raw)
To: joro; +Cc: eric.auger, iommu, will, Jean-Philippe Brucker, virtualization
Fix virtio-iommu interaction with VFIO, as VFIO now requires
IOMMU_CAP_CACHE_COHERENCY. virtio-iommu does not support non-cacheable
mappings, and always expects to be called with IOMMU_CACHE.
Fixes: e8ae0e140c05 ("vfio: Require that devices support DMA cache coherence")
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
---
drivers/iommu/virtio-iommu.c | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
index 25be4b822aa0..bf340d779c10 100644
--- a/drivers/iommu/virtio-iommu.c
+++ b/drivers/iommu/virtio-iommu.c
@@ -1006,7 +1006,18 @@ static int viommu_of_xlate(struct device *dev, struct of_phandle_args *args)
return iommu_fwspec_add_ids(dev, args->args, 1);
}
+static bool viommu_capable(enum iommu_cap cap)
+{
+ switch (cap) {
+ case IOMMU_CAP_CACHE_COHERENCY:
+ return true;
+ default:
+ return false;
+ }
+}
+
static struct iommu_ops viommu_ops = {
+ .capable = viommu_capable,
.domain_alloc = viommu_domain_alloc,
.probe_device = viommu_probe_device,
.probe_finalize = viommu_probe_finalize,
--
2.36.1
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
* Re: [PATCH] iommu/virtio: Advertise IOMMU_CAP_CACHE_COHERENCY
2022-07-14 11:11 [PATCH] iommu/virtio: Advertise IOMMU_CAP_CACHE_COHERENCY Jean-Philippe Brucker
@ 2022-07-14 12:01 ` Robin Murphy
2022-07-14 13:00 ` Jean-Philippe Brucker
0 siblings, 1 reply; 5+ messages in thread
From: Robin Murphy @ 2022-07-14 12:01 UTC (permalink / raw)
To: Jean-Philippe Brucker, joro; +Cc: eric.auger, iommu, will, virtualization
On 2022-07-14 12:11, Jean-Philippe Brucker wrote:
> Fix virtio-iommu interaction with VFIO, as VFIO now requires
> IOMMU_CAP_CACHE_COHERENCY. virtio-iommu does not support non-cacheable
> mappings, and always expects to be called with IOMMU_CACHE.
Can we know this is actually true though? What if the virtio-iommu
implementation is backed by something other than VFIO, and the
underlying hardware isn't coherent? AFAICS the spec doesn't disallow that.
Thanks,
Robin.
> Fixes: e8ae0e140c05 ("vfio: Require that devices support DMA cache coherence")
> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
> ---
> drivers/iommu/virtio-iommu.c | 11 +++++++++++
> 1 file changed, 11 insertions(+)
>
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> index 25be4b822aa0..bf340d779c10 100644
> --- a/drivers/iommu/virtio-iommu.c
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -1006,7 +1006,18 @@ static int viommu_of_xlate(struct device *dev, struct of_phandle_args *args)
> return iommu_fwspec_add_ids(dev, args->args, 1);
> }
>
> +static bool viommu_capable(enum iommu_cap cap)
> +{
> + switch (cap) {
> + case IOMMU_CAP_CACHE_COHERENCY:
> + return true;
> + default:
> + return false;
> + }
> +}
> +
> static struct iommu_ops viommu_ops = {
> + .capable = viommu_capable,
> .domain_alloc = viommu_domain_alloc,
> .probe_device = viommu_probe_device,
> .probe_finalize = viommu_probe_finalize,
* Re: [PATCH] iommu/virtio: Advertise IOMMU_CAP_CACHE_COHERENCY
2022-07-14 12:01 ` Robin Murphy
@ 2022-07-14 13:00 ` Jean-Philippe Brucker
2022-07-14 13:39 ` Robin Murphy
0 siblings, 1 reply; 5+ messages in thread
From: Jean-Philippe Brucker @ 2022-07-14 13:00 UTC (permalink / raw)
To: Robin Murphy; +Cc: eric.auger, joro, will, iommu, virtualization
On Thu, Jul 14, 2022 at 01:01:37PM +0100, Robin Murphy wrote:
> On 2022-07-14 12:11, Jean-Philippe Brucker wrote:
> > Fix virtio-iommu interaction with VFIO, as VFIO now requires
> > IOMMU_CAP_CACHE_COHERENCY. virtio-iommu does not support non-cacheable
> > mappings, and always expects to be called with IOMMU_CACHE.
>
> Can we know this is actually true though? What if the virtio-iommu
> implementation is backed by something other than VFIO, and the underlying
> hardware isn't coherent? AFAICS the spec doesn't disallow that.
Right, I should add a note about that. If someone does actually want to
support non-coherent devices, I assume we'll add a per-device property, a
'non-cacheable' mapping flag, and IOMMU_CAP_CACHE_COHERENCY will hold.
I'm also planning to add a check on (IOMMU_CACHE && !IOMMU_NOEXEC) in
viommu_map(), but not as a fix.
In the meantime we do need to restore VFIO support under virtio-iommu,
since userspace still expects that to work, and the existing use-cases are
coherent devices.
Thanks,
Jean
>
> Thanks,
> Robin.
>
> > Fixes: e8ae0e140c05 ("vfio: Require that devices support DMA cache coherence")
> > Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
> > ---
> > drivers/iommu/virtio-iommu.c | 11 +++++++++++
> > 1 file changed, 11 insertions(+)
> >
> > diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> > index 25be4b822aa0..bf340d779c10 100644
> > --- a/drivers/iommu/virtio-iommu.c
> > +++ b/drivers/iommu/virtio-iommu.c
> > @@ -1006,7 +1006,18 @@ static int viommu_of_xlate(struct device *dev, struct of_phandle_args *args)
> > return iommu_fwspec_add_ids(dev, args->args, 1);
> > }
> > +static bool viommu_capable(enum iommu_cap cap)
> > +{
> > + switch (cap) {
> > + case IOMMU_CAP_CACHE_COHERENCY:
> > + return true;
> > + default:
> > + return false;
> > + }
> > +}
> > +
> > static struct iommu_ops viommu_ops = {
> > + .capable = viommu_capable,
> > .domain_alloc = viommu_domain_alloc,
> > .probe_device = viommu_probe_device,
> > .probe_finalize = viommu_probe_finalize,
* Re: [PATCH] iommu/virtio: Advertise IOMMU_CAP_CACHE_COHERENCY
2022-07-14 13:00 ` Jean-Philippe Brucker
@ 2022-07-14 13:39 ` Robin Murphy
2022-07-22 12:15 ` Jean-Philippe Brucker
0 siblings, 1 reply; 5+ messages in thread
From: Robin Murphy @ 2022-07-14 13:39 UTC (permalink / raw)
To: Jean-Philippe Brucker; +Cc: eric.auger, joro, will, iommu, virtualization
On 2022-07-14 14:00, Jean-Philippe Brucker wrote:
> On Thu, Jul 14, 2022 at 01:01:37PM +0100, Robin Murphy wrote:
>> On 2022-07-14 12:11, Jean-Philippe Brucker wrote:
>>> Fix virtio-iommu interaction with VFIO, as VFIO now requires
>>> IOMMU_CAP_CACHE_COHERENCY. virtio-iommu does not support non-cacheable
>>> mappings, and always expects to be called with IOMMU_CACHE.
>>
>> Can we know this is actually true though? What if the virtio-iommu
>> implementation is backed by something other than VFIO, and the underlying
>> hardware isn't coherent? AFAICS the spec doesn't disallow that.
>
> Right, I should add a note about that. If someone does actually want to
> support non-coherent devices, I assume we'll add a per-device property, a
> 'non-cacheable' mapping flag, and IOMMU_CAP_CACHE_COHERENCY will hold.
> I'm also planning to add a check on (IOMMU_CACHE && !IOMMU_NOEXEC) in
> viommu_map(), but not as a fix.
But what about all the I/O-coherent PL330s? :P (IIRC you can actually
make a Juno do that with S2CR.MTCFG hacks...)
> In the meantime we do need to restore VFIO support under virtio-iommu,
> since userspace still expects that to work, and the existing use-cases are
> coherent devices.
Yeah, I'm not necessarily against adding this as a horrible bodge for
now - the reality is that people using VFIO must be doing it on coherent
systems or it wouldn't be working properly anyway - as long as we all
agree that that's what it is.
Next cycle I'll be sending the follow-up patches to bring
device_iommu_capable() to its final form (hoping the outstanding VDPA
patch lands in the meantime), at which point we get to sort-of-fix the
SMMU drivers[1], and can do something similar here too. I guess the main
question for virtio-iommu is whether it needs to be described/negotiated
in the protocol itself, or can be reliably described by other standard
firmware properties (with maybe just a spec note to clarify that
coherency must be consistent).
Cheers,
Robin.
[1]
https://gitlab.arm.com/linux-arm/linux-rm/-/commit/d8256bf48c8606cbaa6f0815696c2a6dbb72f1b0
* Re: [PATCH] iommu/virtio: Advertise IOMMU_CAP_CACHE_COHERENCY
2022-07-14 13:39 ` Robin Murphy
@ 2022-07-22 12:15 ` Jean-Philippe Brucker
0 siblings, 0 replies; 5+ messages in thread
From: Jean-Philippe Brucker @ 2022-07-22 12:15 UTC (permalink / raw)
To: Robin Murphy; +Cc: joro, virtualization, eric.auger, iommu, will
On Thu, Jul 14, 2022 at 02:39:32PM +0100, Robin Murphy wrote:
> > In the meantime we do need to restore VFIO support under virtio-iommu,
> > since userspace still expects that to work, and the existing use-cases are
> > coherent devices.
>
> Yeah, I'm not necessarily against adding this as a horrible bodge for now -
> the reality is that people using VFIO must be doing it on coherent systems
> or it wouldn't be working properly anyway - as long as we all agree that
> that's what it is.
>
> Next cycle I'll be sending the follow-up patches to bring
> device_iommu_capable() to its final form (hoping the outstanding VDPA patch
> lands in the meantime), at which point we get to sort-of-fix the SMMU
> drivers[1], and can do something similar here too. I guess the main question
> for virtio-iommu is whether it needs to be described/negotiated in the
> protocol itself, or can be reliably described by other standard firmware
> properties (with maybe just a spec note to clarify that coherency must be
> consistent).
What consumers of IOMMU_CAP_CACHE_COHERENCY now want to know is whether
coherency is managed in HW for one particular endpoint, or if they need to
issue cache maintenance. The latter cannot be handled by VFIO since cache
maintenance is generally privileged.
So I had to list several possibilities regarding non-coherent accesses.
I don't think we need a spec change.
A. Accesses through physical IOMMU are never coherent
-----------------------------------------------------
In this case, translated accesses from the physical device can't access
memory coherently. The host would describe it using existing FW methods
(dma-coherent in DT, _CCA in ACPI) since it's also needed without a
vIOMMU.
No change needed for virtio-iommu, I think, it can support non-coherent
devices. It can also support mixing coherent and non-coherent devices in
the same domain, because at the moment domains simply multiplex map
requests. Since we already allow sending the same map request onto two
different domains, one with coherent devices and one with non-coherent
ones, we can also allow using a single domain for that. If the host
cannot handle this, it is allowed to reject attach requests for
incompatible devices.
In Linux I think compatible() should include dev->dma_coherent after your
change, or the callers should check dev->dma_coherent themselves
(vfio-platform in particular).
B. Non-cacheable mappings
-------------------------
Here, accesses are normally coherent but the pIOMMU mappings may be
configured to be non-coherent (non-cacheable access type on Arm). If there
is an actual need for this, we could add a feature bit, a probe request
property and a map flag.
In Linux we may want to disallow !IOMMU_CACHE if the device is coherent,
since we don't support this case.
C. PCIe devices performing no-snoop accesses
--------------------------------------------
Accesses are normally coherent but the device may set a transaction bit
requesting the transaction to be non-coherent.
A guest can't enable and use no-snoop in a PCIe device without knowing
whether the system supports it. It's not discoverable on Arm, so a guest
can't use it. On x86 I think it's always supported but the pIOMMU may
enforce snoop (and the guest may be unable to perform cache maintenance?
I didn't follow the whole wbinvd discussions for lack of time).
The problem is the same without a vIOMMU, so I'd defer that to some
firmware method describing no-snoop.
D. Non-coherent virtio-iommu
----------------------------
Non-coherent virtqueues. It's not forbidden by the spec, and a transport
driver could support it, but it's a transport problem and virtio-iommu
doesn't need to know about it.
Did I forget anything? Otherwise I don't think we need any spec change at
the moment. But when adding support for page tables, we'll have to
consider each of these cases since the guest will be able to set memory
attributes and will care about page walks coherency. That will be bundled
in a probe request along with the other page table information.
Thanks,
Jean