iommu.lists.linux-foundation.org archive mirror
* intel-iommu: iova_to_phys: fill in bits from iova when large pte
@ 2013-11-02  1:45 Marcelo Tosatti
       [not found] ` <20131102014511.GA29838-I4X2Mt4zSy4@public.gmane.org>
  0 siblings, 1 reply; 3+ messages in thread
From: Marcelo Tosatti @ 2013-11-02  1:45 UTC (permalink / raw)
  To: iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	kvm-u79uwXL29TY76Z2rM5mHXA


intel_iommu_iova_to_phys returns an incorrect physical address
when the iova is translated by a large pte.

Fill in the offset bits from the iova when constructing the physical address.

Signed-off-by: Marcelo Tosatti <mtosatti-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 15e9b57..f8f2988 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -774,7 +774,8 @@ out:
 }
 
 static struct dma_pte *pfn_to_dma_pte(struct dmar_domain *domain,
-				      unsigned long pfn, int target_level)
+				      unsigned long pfn, int target_level,
+				      int *large_page)
 {
 	int addr_width = agaw_to_width(domain->agaw) - VTD_PAGE_SHIFT;
 	struct dma_pte *parent, *pte = NULL;
@@ -790,8 +791,15 @@ static struct dma_pte *pfn_to_dma_pte(struct dmar_domain *domain,
 
 		offset = pfn_level_offset(pfn, level);
 		pte = &parent[offset];
-		if (!target_level && (dma_pte_superpage(pte) || !dma_pte_present(pte)))
-			break;
+		if (!target_level) {
+			if (!dma_pte_present(pte))
+				break;
+			if (dma_pte_superpage(pte)) {
+				if (large_page)
+					*large_page = level;
+				break;
+			}
+		}
 		if (level == target_level)
 			break;
 
@@ -1824,7 +1832,8 @@ static int __domain_mapping(struct dmar_domain *domain, unsigned long iov_pfn,
 		if (!pte) {
 			largepage_lvl = hardware_largepage_caps(domain, iov_pfn, phys_pfn, sg_res);
 
-			first_pte = pte = pfn_to_dma_pte(domain, iov_pfn, largepage_lvl);
+			first_pte = pte = pfn_to_dma_pte(domain, iov_pfn,
+							 largepage_lvl, NULL);
 			if (!pte)
 				return -ENOMEM;
 			/* It is large page*/
@@ -4129,11 +4138,17 @@ static phys_addr_t intel_iommu_iova_to_phys(struct iommu_domain *domain,
 {
 	struct dmar_domain *dmar_domain = domain->priv;
 	struct dma_pte *pte;
-	u64 phys = 0;
+	u64 phys = 0, iova_mask;
+	int large_page = 1;
 
-	pte = pfn_to_dma_pte(dmar_domain, iova >> VTD_PAGE_SHIFT, 0);
-	if (pte)
+	pte = pfn_to_dma_pte(dmar_domain, iova >> VTD_PAGE_SHIFT, 0,
+			     &large_page);
+	if (pte) {
 		phys = dma_pte_addr(pte);
+		large_page--;
+		iova_mask = (1ULL << (VTD_PAGE_SHIFT+(large_page*9)))-1;
+		phys |= iova & iova_mask;
+	}
 
 	return phys;
 }
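
For illustration only (not part of the patch; the helper name and the sample
values below are made up), a minimal user-space sketch of the offset
arithmetic the fix performs.  VTD_PAGE_SHIFT is 12 and each additional
page-table level covers 9 more address bits, so a level-N superpage maps
1 << (12 + (N - 1) * 9) bytes, and that many low iova bits have to be copied
into the returned physical address:

#include <stdint.h>
#include <stdio.h>

#define VTD_PAGE_SHIFT 12

/* Combine the frame address from the pte with the offset bits taken
 * from the iova, as intel_iommu_iova_to_phys does after this patch. */
static uint64_t phys_from_large_pte(uint64_t pte_addr, uint64_t iova,
				    int large_page_level)
{
	uint64_t iova_mask =
		(1ULL << (VTD_PAGE_SHIFT + (large_page_level - 1) * 9)) - 1;

	return (pte_addr & ~iova_mask) | (iova & iova_mask);
}

int main(void)
{
	/* Level-2 (2MiB) superpage: the low 21 bits come from the iova. */
	uint64_t pte_addr = 0x40000000ULL;	/* frame the pte points at */
	uint64_t iova     = 0x80123456ULL;	/* offset 0x123456 inside it */

	/* Prints 0x40123456; without the fix the caller only ever sees
	 * the 2MiB-aligned 0x40000000. */
	printf("0x%llx\n",
	       (unsigned long long)phys_from_large_pte(pte_addr, iova, 2));
	return 0;
}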


* Re: intel-iommu: iova_to_phys: fill in bits from iova when large pte
       [not found] ` <20131102014511.GA29838-I4X2Mt4zSy4@public.gmane.org>
@ 2013-11-04 17:07   ` Greg Edwards
  2013-11-04 20:15     ` Marcelo Tosatti
  0 siblings, 1 reply; 3+ messages in thread
From: Greg Edwards @ 2013-11-04 17:07 UTC (permalink / raw)
  To: Marcelo Tosatti
  Cc: iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA@public.gmane.org,
	kvm-u79uwXL29TY76Z2rM5mHXA@public.gmane.org

On Fri, Nov 01, 2013 at 06:45:12PM -0700, Marcelo Tosatti wrote:
>
> intel_iommu_iova_to_phys returns an incorrect physical address
> when the iova is translated by a large pte.
>
> Fill in the offset bits from the iova when constructing the physical address.

Marcelo, for what it's worth, this patch alone didn't fix the BUG when
using KVM PCI assignment with huge pages.  I still needed the hva
alignment patch as well.

Greg


* Re: intel-iommu: iova_to_phys: fill in bits from iova when large pte
  2013-11-04 17:07   ` Greg Edwards
@ 2013-11-04 20:15     ` Marcelo Tosatti
  0 siblings, 0 replies; 3+ messages in thread
From: Marcelo Tosatti @ 2013-11-04 20:15 UTC (permalink / raw)
  To: Greg Edwards
  Cc: iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA@public.gmane.org,
	David Woodhouse, kvm-u79uwXL29TY76Z2rM5mHXA@public.gmane.org

On Mon, Nov 04, 2013 at 10:07:54AM -0700, Greg Edwards wrote:
> On Fri, Nov 01, 2013 at 06:45:12PM -0700, Marcelo Tosatti wrote:
> >
> > intel_iommu_iova_to_phys returns an incorrect physical address
> > when the iova is translated by a large pte.
> >
> > Fill in the offset bits from the iova when constructing the physical address.
> 
> Marcelo, for what it's worth, this patch alone didn't fix the BUG when
> using KVM PCI assignment with huge pages.  I still needed the hva
> alignment patch as well.
> 
> Greg

Yep, still a bugfix though.

Good catch, Greg.

