public inbox for linux-kernel@vger.kernel.org
* [patch 0/4] dmar: queued invalidation patches
From: Suresh Siddha @ 2008-10-16 23:31 UTC (permalink / raw)
  To: mingo, dwmw2, jbarnes; +Cc: youquan.song, suresh.b.siddha, linux-kernel

Reposting the patchset posted earlier on Oct 8th, this time against Linus's git tree.

As this patchset removes a temporary quirk (disabling DMA-remapping
when interrupt-remapping is enabled), we would like to see it go into Linus's
tree in this merge window. I am not sure if David is ready with his iommu
git tree setup. As Ingo pushed the original pieces, I am ok with Ingo
picking this up and pushing it to Linus. Either way, I am fine and would
like to see it go into some subsystem tree and on to Linus before the merge
window closes. Thanks.
---

This patchset enables queued invalidation for DMA-remapping and, as such,
removes the quirk from the x2apic/interrupt-remapping patchset which
disables DMA-remapping while enabling interrupt-remapping.

Patches are on top of the -tip tree because of their interaction with the
x2apic/interrupt-remapping patches in tip.

Signed-off-by: Youquan Song <youquan.song@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
-- 



* [patch 1/4] dmar: use spin_lock_irqsave() in qi_submit_sync()
From: Suresh Siddha @ 2008-10-16 23:31 UTC (permalink / raw)
  To: mingo, dwmw2, jbarnes; +Cc: youquan.song, suresh.b.siddha, linux-kernel

[-- Attachment #1: fix_spin_lock_qi_submit_sync.patch --]
[-- Type: text/plain, Size: 2037 bytes --]

From: Suresh Siddha <suresh.b.siddha@intel.com>
Subject: use spin_lock_irqsave() in qi_submit_sync()

The next patch in the series will use the queued invalidation interface
qi_submit_sync() for DMA-remapping as well, and that path can be called from
interrupt context.

So use spin_lock_irqsave() instead of spin_lock() in qi_submit_sync().

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Youquan Song <youquan.song@intel.com>
---

Index: tip/drivers/pci/dmar.c
===================================================================
--- tip.orig/drivers/pci/dmar.c	2008-10-07 15:42:47.000000000 -0700
+++ tip/drivers/pci/dmar.c	2008-10-07 16:27:47.000000000 -0700
@@ -592,11 +592,11 @@
 
 	hw = qi->desc;
 
-	spin_lock(&qi->q_lock);
+	spin_lock_irqsave(&qi->q_lock, flags);
 	while (qi->free_cnt < 3) {
-		spin_unlock(&qi->q_lock);
+		spin_unlock_irqrestore(&qi->q_lock, flags);
 		cpu_relax();
-		spin_lock(&qi->q_lock);
+		spin_lock_irqsave(&qi->q_lock, flags);
 	}
 
 	index = qi->free_head;
@@ -617,15 +617,22 @@
 	qi->free_head = (qi->free_head + 2) % QI_LENGTH;
 	qi->free_cnt -= 2;
 
-	spin_lock_irqsave(&iommu->register_lock, flags);
+	spin_lock(&iommu->register_lock);
 	/*
 	 * update the HW tail register indicating the presence of
 	 * new descriptors.
 	 */
 	writel(qi->free_head << 4, iommu->reg + DMAR_IQT_REG);
-	spin_unlock_irqrestore(&iommu->register_lock, flags);
+	spin_unlock(&iommu->register_lock);
 
 	while (qi->desc_status[wait_index] != QI_DONE) {
+		/*
+		 * We will leave the interrupts disabled, to prevent the
+		 * interrupt context from queueing another cmd while a cmd
+		 * is already submitted and waiting for completion on this
+		 * cpu. This avoids a deadlock where the interrupt context
+		 * could wait indefinitely for free slots in the queue.
+		 */
 		spin_unlock(&qi->q_lock);
 		cpu_relax();
 		spin_lock(&qi->q_lock);
@@ -634,7 +641,7 @@
 	qi->desc_status[index] = QI_DONE;
 
 	reclaim_free_desc(qi);
-	spin_unlock(&qi->q_lock);
+	spin_unlock_irqrestore(&qi->q_lock, flags);
 }
 
 /*

-- 



* [patch 2/4] dmar: context cache and IOTLB invalidation using queued invalidation
From: Suresh Siddha @ 2008-10-16 23:31 UTC (permalink / raw)
  To: mingo, dwmw2, jbarnes; +Cc: youquan.song, suresh.b.siddha, linux-kernel

[-- Attachment #1: qi_support_for_context_iotlb_invalidation.patch --]
[-- Type: text/plain, Size: 3994 bytes --]

From: Youquan Song <youquan.song@intel.com>
Subject: context cache and IOTLB invalidation using queued invalidation

Implement context cache and IOTLB invalidation using the
queued invalidation interface. This interface will be used by
DMA-remapping when queued invalidation is supported.

Signed-off-by: Youquan Song <youquan.song@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
---

Index: linux-2.6.git/drivers/pci/dmar.c
===================================================================
--- linux-2.6.git.orig/drivers/pci/dmar.c	2008-10-16 15:52:06.000000000 -0700
+++ linux-2.6.git/drivers/pci/dmar.c	2008-10-16 15:53:31.000000000 -0700
@@ -645,6 +645,62 @@
 	qi_submit_sync(&desc, iommu);
 }
 
+int qi_flush_context(struct intel_iommu *iommu, u16 did, u16 sid, u8 fm,
+		     u64 type, int non_present_entry_flush)
+{
+
+	struct qi_desc desc;
+
+	if (non_present_entry_flush) {
+		if (!cap_caching_mode(iommu->cap))
+			return 1;
+		else
+			did = 0;
+	}
+
+	desc.low = QI_CC_FM(fm) | QI_CC_SID(sid) | QI_CC_DID(did)
+			| QI_CC_GRAN(type) | QI_CC_TYPE;
+	desc.high = 0;
+
+	qi_submit_sync(&desc, iommu);
+
+	return 0;
+
+}
+
+int qi_flush_iotlb(struct intel_iommu *iommu, u16 did, u64 addr,
+		   unsigned int size_order, u64 type,
+		   int non_present_entry_flush)
+{
+	u8 dw = 0, dr = 0;
+
+	struct qi_desc desc;
+	int ih = 0;
+
+	if (non_present_entry_flush) {
+		if (!cap_caching_mode(iommu->cap))
+			return 1;
+		else
+			did = 0;
+	}
+
+	if (cap_write_drain(iommu->cap))
+		dw = 1;
+
+	if (cap_read_drain(iommu->cap))
+		dr = 1;
+
+	desc.low = QI_IOTLB_DID(did) | QI_IOTLB_DR(dr) | QI_IOTLB_DW(dw)
+		| QI_IOTLB_GRAN(type) | QI_IOTLB_TYPE;
+	desc.high = QI_IOTLB_ADDR(addr) | QI_IOTLB_IH(ih)
+		| QI_IOTLB_AM(size_order);
+
+	qi_submit_sync(&desc, iommu);
+
+	return 0;
+
+}
+
 /*
  * Enable Queued Invalidation interface. This is a must to support
  * interrupt-remapping. Also used by DMA-remapping, which replaces
Index: linux-2.6.git/include/linux/intel-iommu.h
===================================================================
--- linux-2.6.git.orig/include/linux/intel-iommu.h	2008-10-16 15:48:59.000000000 -0700
+++ linux-2.6.git/include/linux/intel-iommu.h	2008-10-16 15:53:31.000000000 -0700
@@ -127,6 +127,7 @@
 
 
 /* IOTLB_REG */
+#define DMA_TLB_FLUSH_GRANU_OFFSET  60
 #define DMA_TLB_GLOBAL_FLUSH (((u64)1) << 60)
 #define DMA_TLB_DSI_FLUSH (((u64)2) << 60)
 #define DMA_TLB_PSI_FLUSH (((u64)3) << 60)
@@ -140,6 +141,7 @@
 #define DMA_TLB_MAX_SIZE (0x3f)
 
 /* INVALID_DESC */
+#define DMA_CCMD_INVL_GRANU_OFFSET  61
 #define DMA_ID_TLB_GLOBAL_FLUSH	(((u64)1) << 3)
 #define DMA_ID_TLB_DSI_FLUSH	(((u64)2) << 3)
 #define DMA_ID_TLB_PSI_FLUSH	(((u64)3) << 3)
@@ -238,6 +240,19 @@
 #define QI_IWD_STATUS_DATA(d)	(((u64)d) << 32)
 #define QI_IWD_STATUS_WRITE	(((u64)1) << 5)
 
+#define QI_IOTLB_DID(did) 	(((u64)did) << 16)
+#define QI_IOTLB_DR(dr) 	(((u64)dr) << 7)
+#define QI_IOTLB_DW(dw) 	(((u64)dw) << 6)
+#define QI_IOTLB_GRAN(gran) 	(((u64)gran) >> (DMA_TLB_FLUSH_GRANU_OFFSET-4))
+#define QI_IOTLB_ADDR(addr)	(((u64)addr) & PAGE_MASK_4K)
+#define QI_IOTLB_IH(ih)		(((u64)ih) << 6)
+#define QI_IOTLB_AM(am)		(((u8)am))
+
+#define QI_CC_FM(fm)		(((u64)fm) << 48)
+#define QI_CC_SID(sid)		(((u64)sid) << 32)
+#define QI_CC_DID(did)		(((u64)did) << 16)
+#define QI_CC_GRAN(gran)	(((u64)gran) >> (DMA_CCMD_INVL_GRANU_OFFSET-4))
+
 struct qi_desc {
 	u64 low, high;
 };
@@ -303,6 +318,12 @@
 extern int dmar_enable_qi(struct intel_iommu *iommu);
 extern void qi_global_iec(struct intel_iommu *iommu);
 
+extern int qi_flush_context(struct intel_iommu *iommu, u16 did, u16 sid,
+			        u8 fm, u64 type, int non_present_entry_flush);
+extern int qi_flush_iotlb(struct intel_iommu *iommu, u16 did, u64 addr,
+			  unsigned int size_order, u64 type,
+			  int non_present_entry_flush);
+
 extern void qi_submit_sync(struct qi_desc *desc, struct intel_iommu *iommu);
 
 void intel_iommu_domain_exit(struct dmar_domain *domain);

-- 



* [patch 3/4] dmar: Use queued invalidation interface for IOTLB and context invalidation.
From: Suresh Siddha @ 2008-10-16 23:31 UTC (permalink / raw)
  To: mingo, dwmw2, jbarnes; +Cc: youquan.song, suresh.b.siddha, linux-kernel

[-- Attachment #1: unify_iommu_cache_invalidate.patch --]
[-- Type: text/plain, Size: 6935 bytes --]

From: Youquan Song <youquan.song@intel.com>
Subject: Use queued invalidation interface for IOTLB and context invalidation

If the queued invalidation interface is available and enabled, it will be
used instead of the register-based interface.

According to the VT-d specification, when queued invalidation is enabled,
invalidation commands may be submitted only through the invalidation queue,
not through the command register interface.

Signed-off-by: Youquan Song <youquan.song@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
---

Index: linux-2.6.git/drivers/pci/intel-iommu.c
===================================================================
--- linux-2.6.git.orig/drivers/pci/intel-iommu.c	2008-10-16 15:48:59.000000000 -0700
+++ linux-2.6.git/drivers/pci/intel-iommu.c	2008-10-16 15:54:09.000000000 -0700
@@ -567,27 +567,6 @@
 	return 0;
 }
 
-static int inline iommu_flush_context_global(struct intel_iommu *iommu,
-	int non_present_entry_flush)
-{
-	return __iommu_flush_context(iommu, 0, 0, 0, DMA_CCMD_GLOBAL_INVL,
-		non_present_entry_flush);
-}
-
-static int inline iommu_flush_context_domain(struct intel_iommu *iommu, u16 did,
-	int non_present_entry_flush)
-{
-	return __iommu_flush_context(iommu, did, 0, 0, DMA_CCMD_DOMAIN_INVL,
-		non_present_entry_flush);
-}
-
-static int inline iommu_flush_context_device(struct intel_iommu *iommu,
-	u16 did, u16 source_id, u8 function_mask, int non_present_entry_flush)
-{
-	return __iommu_flush_context(iommu, did, source_id, function_mask,
-		DMA_CCMD_DEVICE_INVL, non_present_entry_flush);
-}
-
 /* return value determine if we need a write buffer flush */
 static int __iommu_flush_iotlb(struct intel_iommu *iommu, u16 did,
 	u64 addr, unsigned int size_order, u64 type,
@@ -660,20 +639,6 @@
 	return 0;
 }
 
-static int inline iommu_flush_iotlb_global(struct intel_iommu *iommu,
-	int non_present_entry_flush)
-{
-	return __iommu_flush_iotlb(iommu, 0, 0, 0, DMA_TLB_GLOBAL_FLUSH,
-		non_present_entry_flush);
-}
-
-static int inline iommu_flush_iotlb_dsi(struct intel_iommu *iommu, u16 did,
-	int non_present_entry_flush)
-{
-	return __iommu_flush_iotlb(iommu, did, 0, 0, DMA_TLB_DSI_FLUSH,
-		non_present_entry_flush);
-}
-
 static int iommu_flush_iotlb_psi(struct intel_iommu *iommu, u16 did,
 	u64 addr, unsigned int pages, int non_present_entry_flush)
 {
@@ -684,8 +649,9 @@
 
 	/* Fallback to domain selective flush if no PSI support */
 	if (!cap_pgsel_inv(iommu->cap))
-		return iommu_flush_iotlb_dsi(iommu, did,
-			non_present_entry_flush);
+		return iommu->flush.flush_iotlb(iommu, did, 0, 0,
+						DMA_TLB_DSI_FLUSH,
+						non_present_entry_flush);
 
 	/*
 	 * PSI requires page size to be 2 ^ x, and the base address is naturally
@@ -694,11 +660,12 @@
 	mask = ilog2(__roundup_pow_of_two(pages));
 	/* Fallback to domain selective flush if size is too big */
 	if (mask > cap_max_amask_val(iommu->cap))
-		return iommu_flush_iotlb_dsi(iommu, did,
-			non_present_entry_flush);
+		return iommu->flush.flush_iotlb(iommu, did, 0, 0,
+			DMA_TLB_DSI_FLUSH, non_present_entry_flush);
 
-	return __iommu_flush_iotlb(iommu, did, addr, mask,
-		DMA_TLB_PSI_FLUSH, non_present_entry_flush);
+	return iommu->flush.flush_iotlb(iommu, did, addr, mask,
+					DMA_TLB_PSI_FLUSH,
+					non_present_entry_flush);
 }
 
 static void iommu_disable_protect_mem_regions(struct intel_iommu *iommu)
@@ -1204,11 +1171,13 @@
 	__iommu_flush_cache(iommu, context, sizeof(*context));
 
 	/* it's a non-present to present mapping */
-	if (iommu_flush_context_device(iommu, domain->id,
-			(((u16)bus) << 8) | devfn, DMA_CCMD_MASK_NOBIT, 1))
+	if (iommu->flush.flush_context(iommu, domain->id,
+		(((u16)bus) << 8) | devfn, DMA_CCMD_MASK_NOBIT,
+		DMA_CCMD_DEVICE_INVL, 1))
 		iommu_flush_write_buffer(iommu);
 	else
-		iommu_flush_iotlb_dsi(iommu, 0, 0);
+		iommu->flush.flush_iotlb(iommu, 0, 0, 0, DMA_TLB_DSI_FLUSH, 0);
+
 	spin_unlock_irqrestore(&iommu->lock, flags);
 	return 0;
 }
@@ -1310,8 +1279,10 @@
 static void detach_domain_for_dev(struct dmar_domain *domain, u8 bus, u8 devfn)
 {
 	clear_context_table(domain->iommu, bus, devfn);
-	iommu_flush_context_global(domain->iommu, 0);
-	iommu_flush_iotlb_global(domain->iommu, 0);
+	domain->iommu->flush.flush_context(domain->iommu, 0, 0, 0,
+					   DMA_CCMD_GLOBAL_INVL, 0);
+	domain->iommu->flush.flush_iotlb(domain->iommu, 0, 0, 0,
+					 DMA_TLB_GLOBAL_FLUSH, 0);
 }
 
 static void domain_remove_dev_info(struct dmar_domain *domain)
@@ -1662,6 +1633,28 @@
 		}
 	}
 
+	for_each_drhd_unit(drhd) {
+		if (drhd->ignored)
+			continue;
+
+		iommu = drhd->iommu;
+		if (dmar_enable_qi(iommu)) {
+			/*
+			 * Queued Invalidate not enabled, use Register Based
+			 * Invalidate
+			 */
+			iommu->flush.flush_context = __iommu_flush_context;
+			iommu->flush.flush_iotlb = __iommu_flush_iotlb;
+			printk(KERN_INFO "IOMMU 0x%Lx: using Register based "
+			       "invalidation\n", drhd->reg_base_addr);
+		} else {
+			iommu->flush.flush_context = qi_flush_context;
+		iommu->flush.flush_iotlb = qi_flush_iotlb;
+			printk(KERN_INFO "IOMMU 0x%Lx: using Queued "
+			       "invalidation\n", drhd->reg_base_addr);
+		}
+	}
+
 	/*
 	 * For each rmrr
 	 *   for each dev attached to rmrr
@@ -1714,9 +1707,10 @@
 
 		iommu_set_root_entry(iommu);
 
-		iommu_flush_context_global(iommu, 0);
-		iommu_flush_iotlb_global(iommu, 0);
-
+		iommu->flush.flush_context(iommu, 0, 0, 0, DMA_CCMD_GLOBAL_INVL,
+					   0);
+		iommu->flush.flush_iotlb(iommu, 0, 0, 0, DMA_TLB_GLOBAL_FLUSH,
+					 0);
 		iommu_disable_protect_mem_regions(iommu);
 
 		ret = iommu_enable_translation(iommu);
@@ -1891,7 +1885,8 @@
 			struct intel_iommu *iommu =
 				deferred_flush[i].domain[0]->iommu;
 
-			iommu_flush_iotlb_global(iommu, 0);
+			iommu->flush.flush_iotlb(iommu, 0, 0, 0,
+						 DMA_TLB_GLOBAL_FLUSH, 0);
 			for (j = 0; j < deferred_flush[i].next; j++) {
 				__free_iova(&deferred_flush[i].domain[j]->iovad,
 						deferred_flush[i].iova[j]);
Index: linux-2.6.git/include/linux/intel-iommu.h
===================================================================
--- linux-2.6.git.orig/include/linux/intel-iommu.h	2008-10-16 15:53:31.000000000 -0700
+++ linux-2.6.git/include/linux/intel-iommu.h	2008-10-16 15:54:09.000000000 -0700
@@ -278,6 +278,13 @@
 };
 #endif
 
+struct iommu_flush {
+	int (*flush_context)(struct intel_iommu *iommu, u16 did, u16 sid, u8 fm,
+		u64 type, int non_present_entry_flush);
+	int (*flush_iotlb)(struct intel_iommu *iommu, u16 did, u64 addr,
+		unsigned int size_order, u64 type, int non_present_entry_flush);
+};
+
 struct intel_iommu {
 	void __iomem	*reg; /* Pointer to hardware regs, virtual addr */
 	u64		cap;
@@ -297,6 +304,7 @@
 	unsigned char name[7];    /* Device Name */
 	struct msi_msg saved_msg;
 	struct sys_device sysdev;
+	struct iommu_flush flush;
 #endif
 	struct q_inval  *qi;            /* Queued invalidation info */
 #ifdef CONFIG_INTR_REMAP

-- 



* [patch 4/4] dmar: remove the quirk which disables dma-remapping when intr-remapping enabled
From: Suresh Siddha @ 2008-10-16 23:31 UTC (permalink / raw)
  To: mingo, dwmw2, jbarnes; +Cc: youquan.song, suresh.b.siddha, linux-kernel

[-- Attachment #1: enable_dmar.patch --]
[-- Type: text/plain, Size: 1737 bytes --]

From: Youquan Song <youquan.song@intel.com>
Subject: remove the quirk which disables dma-remapping when intr-remapping enabled

Now that we have DMA-remapping support for queued invalidation, we
can enable both DMA-remapping and interrupt-remapping at the same time.

Signed-off-by: Youquan Song <youquan.song@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
---

Index: linux-2.6.git/drivers/pci/dmar.c
===================================================================
--- linux-2.6.git.orig/drivers/pci/dmar.c	2008-10-16 15:53:31.000000000 -0700
+++ linux-2.6.git/drivers/pci/dmar.c	2008-10-16 15:55:44.000000000 -0700
@@ -455,8 +455,8 @@
 
 	ret = early_dmar_detect();
 
-#ifdef CONFIG_DMAR
 	{
+#ifdef CONFIG_INTR_REMAP
 		struct acpi_table_dmar *dmar;
 		/*
 		 * for now we will disable dma-remapping when interrupt
@@ -465,28 +465,18 @@
 		 * is added, we will not need this any more.
 		 */
 		dmar = (struct acpi_table_dmar *) dmar_tbl;
-		if (ret && cpu_has_x2apic && dmar->flags & 0x1) {
+		if (ret && cpu_has_x2apic && dmar->flags & 0x1)
 			printk(KERN_INFO
 			       "Queued invalidation will be enabled to support "
 			       "x2apic and Intr-remapping.\n");
-			printk(KERN_INFO
-			       "Disabling IOMMU detection, because of missing "
-			       "queued invalidation support for IOTLB "
-			       "invalidation\n");
-			printk(KERN_INFO
-			       "Use \"nox2apic\", if you want to use Intel "
-			       " IOMMU for DMA-remapping and don't care about "
-			       " x2apic support\n");
-
-			dmar_disabled = 1;
-			return;
-		}
+#endif
 
+#ifdef CONFIG_DMAR
 		if (ret && !no_iommu && !iommu_detected && !swiotlb &&
 		    !dmar_disabled)
 			iommu_detected = 1;
-	}
 #endif
+	}
 }
 
 

-- 

