public inbox for kvm@vger.kernel.org
* VFIO iommu page size masking
@ 2015-02-13  2:41 Alexander Graf
  2015-02-13 16:31 ` Alex Williamson
  0 siblings, 1 reply; 2+ messages in thread
From: Alexander Graf @ 2015-02-13  2:41 UTC (permalink / raw)
  To: Alex Williamson; +Cc: KVM, Varun Sethi, Will Deacon

Hi Alex,

While trying to get VFIO-PCI working on AArch64 (with 64k page size), I
stumbled over the following piece of code:

> static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu)
> {
>         struct vfio_domain *domain;
>         unsigned long bitmap = PAGE_MASK;
> 
>         mutex_lock(&iommu->lock);
>         list_for_each_entry(domain, &iommu->domain_list, next)
>                 bitmap &= domain->domain->ops->pgsize_bitmap;
>         mutex_unlock(&iommu->lock);
> 
>         return bitmap;
> }

The SMMU page mask is

[    3.054302] arm-smmu e0a00000.smmu: 	Supported page sizes: 0x40201000

but after this function, we end up supporting only 2MB pages and above.
The reason for that is simple: You restrict the bitmap to PAGE_MASK and
above.

Now the big question is why you're doing that. I don't see why it would
be a problem if the IOMMU maps a page in smaller chunks.

So I tried to patch the code above with s/PAGE_MASK/1UL/ and everything
seems to run fine. But maybe we're now lacking some sanity checks?


Alex


* Re: VFIO iommu page size masking
  2015-02-13  2:41 VFIO iommu page size masking Alexander Graf
@ 2015-02-13 16:31 ` Alex Williamson
  0 siblings, 0 replies; 2+ messages in thread
From: Alex Williamson @ 2015-02-13 16:31 UTC (permalink / raw)
  To: Alexander Graf; +Cc: KVM, Varun Sethi, Will Deacon

On Fri, 2015-02-13 at 03:41 +0100, Alexander Graf wrote:
> Hi Alex,
> 
> While trying to get VFIO-PCI working on AArch64 (with 64k page size), I
> stumbled over the following piece of code:
> 
> > static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu)
> > {
> >         struct vfio_domain *domain;
> >         unsigned long bitmap = PAGE_MASK;
> > 
> >         mutex_lock(&iommu->lock);
> >         list_for_each_entry(domain, &iommu->domain_list, next)
> >                 bitmap &= domain->domain->ops->pgsize_bitmap;
> >         mutex_unlock(&iommu->lock);
> > 
> >         return bitmap;
> > }
> 
> The SMMU page mask is
> 
> [    3.054302] arm-smmu e0a00000.smmu: 	Supported page sizes: 0x40201000
> 
> but after this function, we end up supporting only 2MB pages and above.
> The reason for that is simple: You restrict the bitmap to PAGE_MASK and
> above.
> 
> Now the big question is why you're doing that. I don't see why it would
> be a problem if the IOMMU maps a page in smaller chunks.
> 
> So I tried to patch the code above with s/PAGE_MASK/1UL/ and everything
> seems to run fine. But maybe we're now lacking some sanity checks?

Hey Alex,

Yeah, we may need to double check that we prevent sub-PAGE_SIZE mappings
elsewhere in the DMA mapping path, but that's probably the right thing
to do.  On x86 we have AMD-Vi, which actually supports just about any
power-of-two mapping and therefore effectively exposes PAGE_MASK, and
VT-d, which only natively supports a few page sizes but breaks down
mappings itself and therefore muddies the interface by also exposing
PAGE_MASK.  So the IOMMU API ends up not really being a way to expose
native IOMMU page sizes anyway.

BTW, I'm on holiday until late next week, so I apologize to all the vfio
threads that won't be getting any attention until then.  Thanks,

Alex


