* [PATCH] KVM: IOMMU: hva align mapping page size
@ 2013-11-01 16:08 Greg Edwards
  2013-11-02  1:14 ` Marcelo Tosatti
  0 siblings, 1 reply; 5+ messages in thread
From: Greg Edwards @ 2013-11-01 16:08 UTC
  To: kvm-u79uwXL29TY76Z2rM5mHXA
  Cc: iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA

When determining the page size we could use to map with the IOMMU, the
page size should be aligned with the hva, not the gfn.  The gfn may not
reflect the real alignment within the hugetlbfs file.

Most of the time, this works fine.  However, if the hugetlbfs file is
backed by non-contiguous huge pages, a multi-huge-page memslot starts at
an unaligned offset within the hugetlbfs file, and the gfn happens to be
aligned with respect to the huge page size, then kvm_host_page_size()
will return the huge page size and we will use that to map with the
IOMMU.
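
For illustration (hypothetical 2 MB huge pages, made-up addresses):
consider a memslot with base_gfn 0x200 (gpa 0x200000, 2 MB aligned)
whose userspace_addr is 0x2aaaaab00000, i.e. 1 MB into a huge page.
The gfn-based check accepts a 2 MB mapping for gfn 0x200, but the
corresponding hva is only 1 MB aligned, so the 2 MB range straddles
two huge pages that need not be physically contiguous.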

When we later unpin that same memslot, the IOMMU reports the unmapped
size as the huge page size, and we happily release that many pfns in
monotonically increasing order, not realizing we are spanning
non-contiguous huge pages, so we partially unpin the wrong huge page.
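
For reference, the unpin side releases the pfns with a simple increment
loop, roughly like the following (abbreviated sketch of kvm_unpin_pages()
in virt/kvm/iommu.c, not the literal code):

	static void kvm_unpin_pages(struct kvm *kvm, pfn_t pfn,
				    unsigned long npages)
	{
		unsigned long i;

		/* assumes pfn .. pfn + npages - 1 are one contiguous range */
		for (i = 0; i < npages; ++i)
			kvm_release_pfn_clean(pfn + i);
	}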

Instead, ensure the IOMMU mapping page size is aligned with the hva
corresponding to the gfn, which does reflect the alignment within the
hugetlbfs file.
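
Since __gfn_to_hva_memslot() is essentially slot->userspace_addr plus the
byte offset of the gfn within the slot, the new check boils down to testing
the alignment of the userspace address itself, roughly:

	/* sketch of what the hva-based check computes */
	hva = slot->userspace_addr + (gfn - slot->base_gfn) * PAGE_SIZE;
	while (hva & (page_size - 1))
		page_size >>= 1;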

Signed-off-by: Greg Edwards <gedwards-LfVdkaOWEx8@public.gmane.org>
Cc: stable-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
---
This resolves the bug previously reported (and misdiagnosed) here:

 http://www.spinics.net/lists/kvm/msg97599.html

 virt/kvm/iommu.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/virt/kvm/iommu.c b/virt/kvm/iommu.c
index 72a130b..0e2ff32 100644
--- a/virt/kvm/iommu.c
+++ b/virt/kvm/iommu.c
@@ -99,8 +99,8 @@ int kvm_iommu_map_pages(struct kvm *kvm, struct kvm_memory_slot *slot)
 		while ((gfn + (page_size >> PAGE_SHIFT)) > end_gfn)
 			page_size >>= 1;
 
-		/* Make sure gfn is aligned to the page size we want to map */
-		while ((gfn << PAGE_SHIFT) & (page_size - 1))
+		/* Make sure hva is aligned to the page size we want to map */
+		while (__gfn_to_hva_memslot(slot, gfn) & (page_size - 1))
 			page_size >>= 1;
 
 		/*
-- 
1.8.3.2


Thread overview: 5+ messages
2013-11-01 16:08 [PATCH] KVM: IOMMU: hva align mapping page size Greg Edwards
2013-11-02  1:14 ` Marcelo Tosatti
     [not found]   ` <20131102011433.GA30381-I4X2Mt4zSy4@public.gmane.org>
2013-11-04 16:08     ` [PATCH v2] " Greg Edwards
2013-11-04 20:14       ` Marcelo Tosatti
2013-11-05  7:56       ` Gleb Natapov
