public inbox for linux-kernel@vger.kernel.org
 help / color / mirror / Atom feed
* [Intel-IOMMU 00/10] Intel IOMMU support
@ 2007-06-04 21:02 anil.s.keshavamurthy
  2007-06-04 21:02 ` [Intel-IOMMU 01/10] DMAR detection and parsing logic anil.s.keshavamurthy
                   ` (9 more replies)
  0 siblings, 10 replies; 23+ messages in thread
From: anil.s.keshavamurthy @ 2007-06-04 21:02 UTC (permalink / raw)
  To: linux-kernel
  Cc: akpm, ak, gregkh, muli, asit.k.mallick, suresh.b.siddha,
	anil.s.keshavamurthy, arjan, ashok.raj, shaohua.li, davem

Hi,
	We are pleased to announce the revised version of
the Intel IOMMU driver. This driver incorporates several
feedback received from Anid Kleen, David Miller and 
several others.

Most notable changes from previous postings (apart from
general code cleanup) are

1) Replaced linear linked list with RB tree to manage IOVA's.

2) IOVA address is now being allocated from the cards MAX DMA address capability or 
DMA32bit limit which ever is lower. This allowed us to get rid of having to
preserve certain address range when multiple cards of different DMA address
capabilities share the same domain.

3)Implements generic pre-allocated pools a.k.a. resource pool to allocate
memory for IOVA's and for vt-d page tables. This resource pools grows
automagically in the background (work queued to keventd) based
on the demand.

4) Did some tuning in terms of locking for iova allocation and freeing.

5) Changed command line options for isa and gfx workaround to CONFIG options,
so that when we have all the components adhere to PCI-DMA api's we can
easily yank this workarounds.

With all the above changes, the performance greatly improved and
the results showed that performance with IOMMU was comparable to 
without IOMMU configured.


Once again, thanks for providing valuable feedback, please
apply this set of patches to MM if you have no further objections.


Cheers,
-Anil Keshavmaurthy
e-mail: anil.s.keshavamurthy@intel.com
Open Source Technology Center
Intel Corp.
-- 

^ permalink raw reply	[flat|nested] 23+ messages in thread

* [Intel-IOMMU 01/10] DMAR detection and parsing logic
  2007-06-04 21:02 [Intel-IOMMU 00/10] Intel IOMMU support anil.s.keshavamurthy
@ 2007-06-04 21:02 ` anil.s.keshavamurthy
  2007-06-04 22:54   ` Jeff Garzik
  2007-06-04 23:03   ` Jeff Garzik
  2007-06-04 21:02 ` [Intel-IOMMU 02/10] Library routines for handling pre-allocated pool of objects anil.s.keshavamurthy
                   ` (8 subsequent siblings)
  9 siblings, 2 replies; 23+ messages in thread
From: anil.s.keshavamurthy @ 2007-06-04 21:02 UTC (permalink / raw)
  To: linux-kernel
  Cc: akpm, ak, gregkh, muli, asit.k.mallick, suresh.b.siddha,
	anil.s.keshavamurthy, arjan, ashok.raj, shaohua.li, davem

[-- Attachment #1: dmar_detection.patch --]
[-- Type: text/plain, Size: 14490 bytes --]

This patch adds support for early detection and parsing of DMAR's
reported to OS via ACPI tables.

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
---
 arch/x86_64/Kconfig   |   11 +
 drivers/pci/Makefile  |    3 
 drivers/pci/dmar.c    |  318 ++++++++++++++++++++++++++++++++++++++++++++++++++
 include/acpi/actbl1.h |   27 +++-
 include/linux/dmar.h  |   52 ++++++++
 5 files changed, 404 insertions(+), 7 deletions(-)

Index: linux-2.6.22-rc3/arch/x86_64/Kconfig
===================================================================
--- linux-2.6.22-rc3.orig/arch/x86_64/Kconfig	2007-06-04 12:28:13.000000000 -0700
+++ linux-2.6.22-rc3/arch/x86_64/Kconfig	2007-06-04 12:33:15.000000000 -0700
@@ -716,6 +716,17 @@
 	bool "Support mmconfig PCI config space access"
 	depends on PCI && ACPI
 
+config DMAR
+	bool "Support for DMA Remapping Devices (EXPERIMENTAL)"
+	depends on PCI_MSI && ACPI && EXPERIMENTAL
+	default y
+	help
+	  DMA remapping(DMAR) devices support enables independent address
+	  translations for Direct Memory Access(DMA) from Devices.
+	  These DMA remapping devices are reported via ACPI tables
+	  and includes pci device scope covered by these DMA
+	  remapping device.
+
 source "drivers/pci/pcie/Kconfig"
 
 source "drivers/pci/Kconfig"
Index: linux-2.6.22-rc3/drivers/pci/Makefile
===================================================================
--- linux-2.6.22-rc3.orig/drivers/pci/Makefile	2007-06-04 12:28:13.000000000 -0700
+++ linux-2.6.22-rc3/drivers/pci/Makefile	2007-06-04 12:33:15.000000000 -0700
@@ -20,6 +20,9 @@
 # Build the Hypertransport interrupt support
 obj-$(CONFIG_HT_IRQ) += htirq.o
 
+# Build Intel IOMMU support
+obj-$(CONFIG_DMAR) += dmar.o
+
 #
 # Some architectures use the generic PCI setup functions
 #
Index: linux-2.6.22-rc3/drivers/pci/dmar.c
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ linux-2.6.22-rc3/drivers/pci/dmar.c	2007-06-04 12:33:15.000000000 -0700
@@ -0,0 +1,318 @@
+/*
+ * Copyright (c) 2006, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ *
+ * 	Copyright (C) Ashok Raj <ashok.raj@intel.com>
+ *	Copyright (C) Shaohua Li <shaohua.li@intel.com>
+ *
+ * 	This file implements early detection/parsing of DMA Remapping Devices
+ * reported to OS through BIOS via DMA remapping reporting (DMAR) ACPI
+ * tables.
+ */
+
+#include <linux/pci.h>
+#include <linux/dmar.h>
+
+#undef PREFIX
+#define PREFIX "DMAR:"
+
+/* No locks are needed as DMA remapping hardware unit
+ * list is constructed at boot time and hotplug of
+ * these units are not supported by the architecture.
+ */
+LIST_HEAD(dmar_drhd_units);
+LIST_HEAD(dmar_rmrr_units);
+
+static struct acpi_table_header * __initdata dmar_tbl;
+
+static void __init dmar_register_drhd_unit(struct dmar_drhd_unit *drhd)
+{
+	/*
+	 * add INCLUDE_ALL at the tail, so scan the list will find it at
+	 * the very end.
+	 */
+	if (drhd->include_all)
+		list_add_tail(&drhd->list, &dmar_drhd_units);
+	else
+		list_add(&drhd->list, &dmar_drhd_units);
+}
+
+static void __init dmar_register_rmrr_unit(struct dmar_rmrr_unit *rmrr)
+{
+	list_add(&rmrr->list, &dmar_rmrr_units);
+}
+
+static int __init dmar_parse_one_dev_scope(struct acpi_dmar_device_scope *scope,
+					   struct pci_dev **dev, u16 segment)
+{
+	struct pci_bus *bus;
+	struct pci_dev *pdev = NULL;
+	struct acpi_dmar_pci_path *path;
+	int count;
+
+	bus = pci_find_bus(segment, scope->bus);
+	path = (struct acpi_dmar_pci_path *)(scope + 1);
+	count = (scope->length - sizeof(struct acpi_dmar_device_scope))
+		/sizeof(struct acpi_dmar_pci_path);
+
+	while (count) {
+		if (pdev)
+			pci_dev_put(pdev);
+		/*
+		 * Some BIOSes list non-exist devices in DMAR table, just
+		 * ignore it
+		 */
+		if (!bus) {
+			printk(KERN_WARNING
+			PREFIX "Device scope bus [%d] not found\n",
+			scope->bus);
+			break;
+		}
+		pdev = pci_get_slot(bus, PCI_DEVFN(path->dev, path->fn));
+		if (!pdev) {
+			printk(KERN_WARNING PREFIX
+			"Device scope device [%04x:%02x:%02x.%02x] not found\n",
+				segment, bus->number, path->dev, path->fn);
+			break;
+		}
+		path ++;
+		count --;
+		bus = pdev->subordinate;
+	}
+	if (!pdev) {
+		printk(KERN_WARNING PREFIX
+		"Device scope device [%04x:%02x:%02x.%02x] not found\n",
+		segment, scope->bus, path->dev, path->fn);
+		*dev = NULL;
+		return 0;
+	}
+	if ((scope->entry_type == ACPI_DMAR_SCOPE_TYPE_ENDPOINT && pdev->subordinate)
+	   || (scope->entry_type == ACPI_DMAR_SCOPE_TYPE_BRIDGE && !pdev->subordinate)) {
+		pci_dev_put(pdev);
+		printk(KERN_WARNING PREFIX "Device scope type does not match for %s\n", pci_name(pdev));
+		return -EINVAL;
+	}
+	*dev = pdev;
+	return 0;
+}
+
+static int __init dmar_parse_dev_scope(void *start, void *end, int *cnt,
+				       struct pci_dev ***devices, u16 segment)
+{
+	struct acpi_dmar_device_scope *scope;
+	void * tmp = start;
+	int index;
+	int ret;
+
+	*cnt = 0;
+	while (start < end) {
+		scope = start;
+		if (scope->entry_type == ACPI_DMAR_SCOPE_TYPE_ENDPOINT ||
+		    scope->entry_type == ACPI_DMAR_SCOPE_TYPE_BRIDGE)
+			(*cnt)++;
+		else
+			printk(KERN_WARNING PREFIX "Unsupported device scope\n");
+		start += scope->length;
+	}
+	if (*cnt == 0)
+		return 0;
+
+	*devices = kcalloc(*cnt, sizeof(struct pci_dev *), GFP_KERNEL);
+	if (!*devices)
+		return -ENOMEM;
+
+	start = tmp;
+	index = 0;
+	while (start < end) {
+		scope = start;
+		if (scope->entry_type == ACPI_DMAR_SCOPE_TYPE_ENDPOINT ||
+		    scope->entry_type == ACPI_DMAR_SCOPE_TYPE_BRIDGE) {
+			ret = dmar_parse_one_dev_scope(scope,
+				&(*devices)[index], segment);
+			if (ret) {
+				kfree(*devices);
+				return ret;
+			}
+			index ++;
+		}
+		start += scope->length;
+	}
+
+	return 0;
+}
+
+/**
+ * dmar_parse_one_drhd - parses exactly one DMA remapping hardware definition
+ * structure which uniquely represent one DMA remapping hardware unit
+ * present in the platform
+ */
+static int __init
+dmar_parse_one_drhd(struct acpi_dmar_header *header)
+{
+	struct acpi_dmar_hardware_unit * drhd = (struct acpi_dmar_hardware_unit *)header;
+	struct dmar_drhd_unit *dmaru;
+	int ret = 0;
+	static int include_all;
+
+	dmaru = kzalloc(sizeof(*dmaru), GFP_KERNEL);
+	if (!dmaru)
+		return -ENOMEM;
+
+	dmaru->reg_base_addr = drhd->address;
+	dmaru->include_all = drhd->flags & 0x1; /* BIT0: INCLUDE_ALL */
+
+	if (!dmaru->include_all)
+		ret = dmar_parse_dev_scope((void *)(drhd + 1),
+				((void *)drhd) + header->length,
+				&dmaru->devices_cnt, &dmaru->devices,
+				drhd->segment);
+	else {
+		/* Only allow one INCLUDE_ALL */
+		if (include_all) {
+			printk(KERN_WARNING PREFIX "Only one INCLUDE_ALL "
+				"device scope is allowed\n");
+			ret = -EINVAL;
+		}
+		include_all = 1;
+	}
+
+	if (ret || (dmaru->devices_cnt == 0 && !dmaru->include_all))
+		kfree(dmaru);
+	else
+		dmar_register_drhd_unit(dmaru);
+	return ret;
+}
+
+static int __init
+dmar_parse_one_rmrr(struct acpi_dmar_header *header)
+{
+	struct acpi_dmar_reserved_memory *rmrr = (struct acpi_dmar_reserved_memory *)header;
+	struct dmar_rmrr_unit *rmrru;
+	int ret = 0;
+
+	rmrru = kzalloc(sizeof(*rmrru), GFP_KERNEL);
+	if (!rmrru)
+		return -ENOMEM;
+
+	rmrru->base_address = rmrr->base_address;
+	rmrru->end_address = rmrr->end_address;
+	ret = dmar_parse_dev_scope((void *)(rmrr + 1),
+		((void*)rmrr) + header->length,
+		&rmrru->devices_cnt, &rmrru->devices, rmrr->segment);
+
+	if (ret || (rmrru->devices_cnt == 0))
+		kfree(rmrru);
+	else
+		dmar_register_rmrr_unit(rmrru);
+	return ret;
+}
+
+static void __init
+dmar_table_print_dmar_entry(struct acpi_dmar_header *header)
+{
+	struct acpi_dmar_hardware_unit *drhd;
+	struct acpi_dmar_reserved_memory *rmrr;
+
+	switch (header->type) {
+	case ACPI_DMAR_TYPE_HARDWARE_UNIT:
+		drhd = (struct acpi_dmar_hardware_unit *)header;
+		printk (KERN_INFO PREFIX
+			"DRHD (flags: 0x%08x)base: 0x%016Lx\n",
+			drhd->flags, drhd->address);
+		break;
+	case ACPI_DMAR_TYPE_RESERVED_MEMORY:
+		rmrr = (struct acpi_dmar_reserved_memory *)header;
+
+		printk (KERN_INFO PREFIX
+			"RMRR base: 0x%016Lx end: 0x%016Lx\n",
+			rmrr->base_address, rmrr->end_address);
+		break;
+	}
+}
+
+/**
+ * parse_dmar_table - parses the DMA reporting table
+ */
+static int __init
+parse_dmar_table(void)
+{
+	struct acpi_table_dmar *dmar;
+	struct acpi_dmar_header *entry_header;
+	int ret = 0;
+
+	dmar = (struct acpi_table_dmar *)dmar_tbl;
+
+	if (!dmar->width) {
+		printk (KERN_WARNING PREFIX "Zero: Invalid DMAR haw\n");
+		return -EINVAL;
+	}
+
+	printk (KERN_INFO PREFIX "Host address width %d\n",
+		dmar->width + 1);
+
+	entry_header = (struct acpi_dmar_header *)(dmar + 1);
+	while (((unsigned long)entry_header) < (((unsigned long)dmar) + dmar_tbl->length)) {
+		dmar_table_print_dmar_entry(entry_header);
+
+		switch (entry_header->type) {
+		case ACPI_DMAR_TYPE_HARDWARE_UNIT:
+			ret = dmar_parse_one_drhd(entry_header);
+			break;
+		case ACPI_DMAR_TYPE_RESERVED_MEMORY:
+			ret = dmar_parse_one_rmrr(entry_header);
+			break;
+		default:
+			printk(KERN_WARNING PREFIX "Unknown DMAR structure type\n");
+			ret = 0; /* for forward compatibility */
+			break;
+		}
+		if (ret)
+			break;
+
+		entry_header = ((void *)entry_header + entry_header->length);
+	}
+	return ret;
+}
+
+
+int __init dmar_table_init(void)
+{
+
+	parse_dmar_table();
+	if (list_empty(&dmar_drhd_units)) {
+		printk(KERN_ERR PREFIX "No DMAR devices found\n");
+		return -ENODEV;
+	}
+	return 0;
+}
+
+/**
+ * early_dmar_detect - checks to see if the platform supports DMAR devices
+ */
+int __init early_dmar_detect(void)
+{
+	acpi_status status = AE_OK;
+
+	/* if we could find DMAR table, then there are DMAR devices */
+	status = acpi_get_table(ACPI_SIG_DMAR, 0,
+				(struct acpi_table_header **)&dmar_tbl);
+
+	if (ACPI_SUCCESS(status) && !dmar_tbl) {
+		printk (KERN_WARNING PREFIX "Unable to map DMAR\n");
+		status = AE_NOT_FOUND;
+	}
+
+	return (ACPI_SUCCESS(status) ? 1 : 0);
+}
Index: linux-2.6.22-rc3/include/acpi/actbl1.h
===================================================================
--- linux-2.6.22-rc3.orig/include/acpi/actbl1.h	2007-06-04 12:28:13.000000000 -0700
+++ linux-2.6.22-rc3/include/acpi/actbl1.h	2007-06-04 12:33:15.000000000 -0700
@@ -257,7 +257,8 @@
 struct acpi_table_dmar {
 	struct acpi_table_header header;	/* Common ACPI table header */
 	u8 width;		/* Host Address Width */
-	u8 reserved[11];
+	u8 flags;
+	u8 reserved[10];
 };
 
 /* DMAR subtable header */
@@ -265,8 +266,6 @@
 struct acpi_dmar_header {
 	u16 type;
 	u16 length;
-	u8 flags;
-	u8 reserved[3];
 };
 
 /* Values for subtable type in struct acpi_dmar_header */
@@ -274,13 +273,15 @@
 enum acpi_dmar_type {
 	ACPI_DMAR_TYPE_HARDWARE_UNIT = 0,
 	ACPI_DMAR_TYPE_RESERVED_MEMORY = 1,
-	ACPI_DMAR_TYPE_RESERVED = 2	/* 2 and greater are reserved */
+	ACPI_DMAR_TYPE_ATSR = 2,
+	ACPI_DMAR_TYPE_RESERVED = 3	/* 3 and greater are reserved */
 };
 
 struct acpi_dmar_device_scope {
 	u8 entry_type;
 	u8 length;
-	u8 segment;
+	u16 reserved;
+	u8 enumeration_id;
 	u8 bus;
 };
 
@@ -290,7 +291,14 @@
 	ACPI_DMAR_SCOPE_TYPE_NOT_USED = 0,
 	ACPI_DMAR_SCOPE_TYPE_ENDPOINT = 1,
 	ACPI_DMAR_SCOPE_TYPE_BRIDGE = 2,
-	ACPI_DMAR_SCOPE_TYPE_RESERVED = 3	/* 3 and greater are reserved */
+	ACPI_DMAR_SCOPE_TYPE_IOAPIC = 3,
+	ACPI_DMAR_SCOPE_TYPE_HPET = 4,
+	ACPI_DMAR_SCOPE_TYPE_RESERVED = 5	/* 5 and greater are reserved */
+};
+
+struct acpi_dmar_pci_path {
+	u8 dev;
+	u8 fn;
 };
 
 /*
@@ -301,6 +309,9 @@
 
 struct acpi_dmar_hardware_unit {
 	struct acpi_dmar_header header;
+	u8 flags;
+	u8 reserved;
+	u16 segment;
 	u64 address;		/* Register Base Address */
 };
 
@@ -312,7 +323,9 @@
 
 struct acpi_dmar_reserved_memory {
 	struct acpi_dmar_header header;
-	u64 address;		/* 4_k aligned base address */
+	u16 reserved;
+	u16 segment;
+	u64 base_address;		/* 4_k aligned base address */
 	u64 end_address;	/* 4_k aligned limit address */
 };
 
Index: linux-2.6.22-rc3/include/linux/dmar.h
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ linux-2.6.22-rc3/include/linux/dmar.h	2007-06-04 12:33:15.000000000 -0700
@@ -0,0 +1,52 @@
+/*
+ * Copyright (c) 2006, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ *
+ * Copyright (C) Ashok Raj <ashok.raj@intel.com>
+ * Copyright (C) Shaohua Li <shaohua.li@intel.com>
+ */
+
+#ifndef __DMAR_H__
+#define __DMAR_H__
+
+#include <linux/acpi.h>
+#include <linux/types.h>
+
+
+extern int dmar_table_init(void);
+extern int early_dmar_detect(void);
+
+extern struct list_head dmar_drhd_units;
+extern struct list_head dmar_rmrr_units;
+
+struct dmar_drhd_unit {
+	struct list_head list;		/* list of drhd units	*/
+	u64	reg_base_addr;		/* register base address*/
+	struct	pci_dev **devices; 	/* target device array	*/
+	int	devices_cnt;		/* target device count	*/
+	u8	ignored:1; 		/* ignore drhd		*/
+	u8	include_all:1;
+	struct intel_iommu *iommu;
+};
+
+struct dmar_rmrr_unit {
+	struct list_head list;		/* list of rmrr units	*/
+	u64	base_address;		/* reserved base address*/
+	u64	end_address;		/* reserved end address */
+	struct pci_dev **devices;	/* target devices */
+	int	devices_cnt;		/* target device count */
+};
+
+#endif /* __DMAR_H__ */

-- 

^ permalink raw reply	[flat|nested] 23+ messages in thread

* [Intel-IOMMU 02/10] Library routines for handling pre-allocated pool of objects
  2007-06-04 21:02 [Intel-IOMMU 00/10] Intel IOMMU support anil.s.keshavamurthy
  2007-06-04 21:02 ` [Intel-IOMMU 01/10] DMAR detection and parsing logic anil.s.keshavamurthy
@ 2007-06-04 21:02 ` anil.s.keshavamurthy
  2007-06-04 22:57   ` Jeff Garzik
  2007-06-04 21:02 ` [Intel-IOMMU 03/10] PCI generic helper function anil.s.keshavamurthy
                   ` (7 subsequent siblings)
  9 siblings, 1 reply; 23+ messages in thread
From: anil.s.keshavamurthy @ 2007-06-04 21:02 UTC (permalink / raw)
  To: linux-kernel
  Cc: akpm, ak, gregkh, muli, asit.k.mallick, suresh.b.siddha,
	anil.s.keshavamurthy, arjan, ashok.raj, shaohua.li, davem

[-- Attachment #1: resource_pool.patch --]
[-- Type: text/plain, Size: 7617 bytes --]

	This patch provides a common interface for pre allocating and 
managing pool of objects.

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
---
 include/linux/respool.h |   43 +++++++++++
 lib/Makefile            |    1 
 lib/respool.c           |  176 ++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 220 insertions(+)

Index: linux-2.6.22-rc3/include/linux/respool.h
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ linux-2.6.22-rc3/include/linux/respool.h	2007-06-04 12:36:17.000000000 -0700
@@ -0,0 +1,43 @@
+/*
+ * respool.c - library routines for handling generic pre-allocated pool of objects
+ *
+ * Copyright (c) 2006, Intel Corporation.
+ *
+ * This file is released under the GPLv2.
+ *
+ * Copyright (C) 2006 Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
+ *
+ */
+
+#ifndef _RESPOOL_H_
+#define _RESPOOL_H_
+
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include <linux/workqueue.h>
+
+typedef void *(*rpool_alloc_t)(unsigned int, gfp_t);
+typedef void (*rpool_free_t)(void *, unsigned int);
+
+struct resource_pool {
+	struct work_struct work;
+	spinlock_t	pool_lock;	/* pool lock to walk the pool_head */
+	struct list_head pool_head;	/* pool objects list head	*/
+	unsigned int	min_count;	/* min count to maintain	*/
+	unsigned int	grow_count;	/* grow by count when time to grow */
+	unsigned int	curr_count;	/* count of current free objects */
+	unsigned int	alloc_size;	/* objects size			*/
+	rpool_alloc_t 	alloc_mem;	/* pool mem alloc function pointer */
+	rpool_free_t 	free_mem;	/* pool mem free function pointer */
+};
+
+void *get_resource_pool_obj(struct resource_pool *ppool);
+void put_resource_pool_obj(void * vaddr, struct resource_pool *ppool);
+void destroy_resource_pool(struct resource_pool *ppool);
+int init_resource_pool(struct resource_pool *res,
+	unsigned int min_count, unsigned int alloc_size,
+	unsigned int grow_count, rpool_alloc_t alloc_fn,
+	rpool_free_t free_fn);
+
+#endif
Index: linux-2.6.22-rc3/lib/Makefile
===================================================================
--- linux-2.6.22-rc3.orig/lib/Makefile	2007-06-04 12:28:10.000000000 -0700
+++ linux-2.6.22-rc3/lib/Makefile	2007-06-04 12:36:17.000000000 -0700
@@ -58,6 +58,7 @@
 obj-$(CONFIG_AUDIT_GENERIC) += audit.o
 
 obj-$(CONFIG_SWIOTLB) += swiotlb.o
+obj-$(CONFIG_DMAR) += respool.o
 obj-$(CONFIG_FAULT_INJECTION) += fault-inject.o
 
 lib-$(CONFIG_GENERIC_BUG) += bug.o
Index: linux-2.6.22-rc3/lib/respool.c
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ linux-2.6.22-rc3/lib/respool.c	2007-06-04 12:36:17.000000000 -0700
@@ -0,0 +1,176 @@
+/*
+ * respool.c - library routines for handling generic pre-allocated pool of objects
+ *
+ * Copyright (c) 2006, Intel Corporation.
+ *
+ * This file is released under the GPLv2.
+ *
+ * Copyright (C) 2006 Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
+ */
+
+#include <linux/respool.h>
+
+/**
+ * get_resource_pool_obj - gets an object from the pool
+ * @ppool - resource pool in question
+ * This function gets an object from the pool and
+ * if the pool count drops below min_count, this
+ * function schedules work to grow the pool. If
+ * no elements are fount in the pool then this function
+ * tries to get memory from kernel.
+ */
+void * get_resource_pool_obj(struct resource_pool *ppool)
+{
+	unsigned long	flags;
+	struct list_head *plist;
+	bool queue_work = 0;
+
+	spin_lock_irqsave(&ppool->pool_lock, flags);
+	if (!list_empty(&ppool->pool_head)) {
+		plist = ppool->pool_head.next;
+		list_del(plist);
+		ppool->curr_count--;
+	} else {
+		/*Making sure that curr_count is 0 when list is empty */
+		plist = NULL;
+		BUG_ON(ppool->curr_count != 0);
+	}
+
+	/* Check if pool needs to grow */
+	if (ppool->curr_count <= ppool->min_count)
+		queue_work = 1;
+	spin_unlock_irqrestore(&ppool->pool_lock, flags);
+
+	if (queue_work)
+		schedule_work(&ppool->work); /* queue work to grow the pool */
+
+
+	if (plist) {
+		memset(plist, 0, ppool->alloc_size); /* Zero out memory */
+		return plist;
+	}
+
+	/* Out of luck, try to get memory from kernel */
+	plist = (struct list_head *)ppool->alloc_mem(ppool->alloc_size,
+			GFP_ATOMIC);
+
+	return plist;
+}
+
+/**
+ * put_resource_pool_obj - puts an object back to the pool
+ * @vaddr - object's address
+ * @ppool - resource pool in question.
+ * This function puts an object back to the pool.
+ */
+void put_resource_pool_obj(void * vaddr, struct resource_pool *ppool)
+{
+	unsigned long	flags;
+	struct list_head *plist = (struct list_head *)vaddr;
+
+	BUG_ON(!vaddr);
+	BUG_ON(!ppool);
+
+	spin_lock_irqsave(&ppool->pool_lock, flags);
+	list_add(plist, &ppool->pool_head);
+	ppool->curr_count++;
+	spin_unlock_irqrestore(&ppool->pool_lock, flags);
+}
+
+/**
+ * grow_resource_pool - grows the given resource pool
+ * @work - work struct
+ * This functions gets the resource pool pointer from the
+ * work struct and grows the resource pool by grow_count.
+ */
+static void
+grow_resource_pool(struct work_struct * work)
+{
+	struct resource_pool *ppool;
+	struct list_head *plist;
+	unsigned int min_count, grow_count = 0;
+	unsigned long	flags;
+
+	ppool = container_of(work, struct resource_pool, work);
+
+	/* compute the minimum count to grow */
+	spin_lock_irqsave(&ppool->pool_lock, flags);
+	min_count = ppool->min_count + ppool->grow_count;
+	if (ppool->curr_count < min_count)
+		grow_count = min_count - ppool->curr_count;
+	spin_unlock_irqrestore(&ppool->pool_lock, flags);
+
+	while(grow_count) {
+		plist = (struct list_head *)ppool->alloc_mem(ppool->alloc_size,
+			GFP_KERNEL);
+
+		if (!plist)
+			break;
+
+		/* Add the element to the list */
+		spin_lock_irqsave(&ppool->pool_lock, flags);
+		list_add(plist, &ppool->pool_head);
+		ppool->curr_count++;
+		spin_unlock_irqrestore(&ppool->pool_lock, flags);
+		grow_count--;
+	}
+}
+
+/**
+ * destroy_resource_pool - destroys the given resource pool
+ * @ppool - resource pool in question.
+ * This function walks throuhg its list and frees up the
+ * preallocated objects.
+ */
+void
+destroy_resource_pool(struct resource_pool *ppool)
+{
+	unsigned long	flags;
+	struct list_head *plist;
+
+	spin_lock_irqsave(&ppool->pool_lock, flags);
+	while (!list_empty(&ppool->pool_head)) {
+		plist = &ppool->pool_head;
+		list_del(plist);
+
+		ppool->free_mem(plist, ppool->alloc_size);
+
+	}
+	ppool->curr_count = 0;
+	spin_unlock_irqrestore(&ppool->pool_lock, flags);
+}
+
+/**
+ * init_resource_pool - initializes the resource pool
+ * @res: resource pool in question.
+ * @min_count: count of objectes to pre-allocate
+ * @alloc_size: size of each objects
+ * @grow_count: count of objects to grow when required
+ * @alloc_fn: function which allocates memory
+ * @free_fn: function which frees memory
+ *
+ * This function initializes the given resource pool and
+ * populates the min_count of objects to begin with.
+ */
+int
+init_resource_pool(struct resource_pool *res,
+	unsigned int min_count, unsigned int alloc_size,
+	unsigned int grow_count, rpool_alloc_t alloc_fn,
+	rpool_free_t free_fn)
+{
+	res->min_count = min_count;
+	res->alloc_size = alloc_size;
+	res->grow_count = grow_count;
+	res->curr_count = 0;
+	res->alloc_mem = alloc_fn;
+	res->free_mem = free_fn;
+	spin_lock_init(&res->pool_lock);
+	INIT_LIST_HEAD(&res->pool_head);
+	INIT_WORK(&res->work, grow_resource_pool);
+
+	/* grow the pool */
+	grow_resource_pool(&res->work);
+
+	return (res->curr_count == 0);
+}
+

-- 

^ permalink raw reply	[flat|nested] 23+ messages in thread

* [Intel-IOMMU 03/10] PCI generic helper function
  2007-06-04 21:02 [Intel-IOMMU 00/10] Intel IOMMU support anil.s.keshavamurthy
  2007-06-04 21:02 ` [Intel-IOMMU 01/10] DMAR detection and parsing logic anil.s.keshavamurthy
  2007-06-04 21:02 ` [Intel-IOMMU 02/10] Library routines for handling pre-allocated pool of objects anil.s.keshavamurthy
@ 2007-06-04 21:02 ` anil.s.keshavamurthy
  2007-06-04 21:02 ` [Intel-IOMMU 04/10] clflush_cache_range now takes size param anil.s.keshavamurthy
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 23+ messages in thread
From: anil.s.keshavamurthy @ 2007-06-04 21:02 UTC (permalink / raw)
  To: linux-kernel
  Cc: akpm, ak, gregkh, muli, asit.k.mallick, suresh.b.siddha,
	anil.s.keshavamurthy, arjan, ashok.raj, shaohua.li, davem

[-- Attachment #1: pcie_port_type.patch --]
[-- Type: text/plain, Size: 4090 bytes --]

When devices are under a p2p bridge, upstream
transactions get replaced by the device id of the bridge as it owns the
PCIE transaction. Hence its necessary to setup translations on behalf of the
bridge as well. Due to this limitation all devices under a p2p share the same
domain in a DMAR.

We just cache the type of device, if its a native PCIe device
or not for later use.

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
---
 drivers/pci/pci.h    |    1 +
 drivers/pci/probe.c  |   14 ++++++++++++++
 drivers/pci/search.c |   30 ++++++++++++++++++++++++++++++
 include/linux/pci.h  |    2 ++
 4 files changed, 47 insertions(+)

Index: linux-2.6.22-rc3/drivers/pci/pci.h
===================================================================
--- linux-2.6.22-rc3.orig/drivers/pci/pci.h	2007-06-04 12:27:34.000000000 -0700
+++ linux-2.6.22-rc3/drivers/pci/pci.h	2007-06-04 12:36:31.000000000 -0700
@@ -92,3 +92,4 @@
 	return NULL;
 }
 
+struct pci_dev *pci_find_upstream_pcie_bridge(struct pci_dev *pdev);
Index: linux-2.6.22-rc3/drivers/pci/probe.c
===================================================================
--- linux-2.6.22-rc3.orig/drivers/pci/probe.c	2007-06-04 12:27:34.000000000 -0700
+++ linux-2.6.22-rc3/drivers/pci/probe.c	2007-06-04 12:36:31.000000000 -0700
@@ -803,6 +803,19 @@
 	kfree(pci_dev);
 }
 
+static void set_pcie_port_type(struct pci_dev *pdev)
+{
+	int pos;
+	u16 reg16;
+
+	pos = pci_find_capability(pdev, PCI_CAP_ID_EXP);
+	if (!pos)
+		return;
+	pdev->is_pcie = 1;
+	pci_read_config_word(pdev, pos + PCI_EXP_FLAGS, &reg16);
+	pdev->pcie_type = (reg16 & PCI_EXP_FLAGS_TYPE) >> 4;
+}
+
 /**
  * pci_cfg_space_size - get the configuration space size of the PCI device.
  * @dev: PCI device
@@ -917,6 +930,7 @@
 	dev->device = (l >> 16) & 0xffff;
 	dev->cfg_size = pci_cfg_space_size(dev);
 	dev->error_state = pci_channel_io_normal;
+	set_pcie_port_type(dev);
 
 	/* Assume 32-bit PCI; let 64-bit PCI cards (which are far rarer)
 	   set this higher, assuming the system even supports it.  */
Index: linux-2.6.22-rc3/drivers/pci/search.c
===================================================================
--- linux-2.6.22-rc3.orig/drivers/pci/search.c	2007-06-04 12:27:34.000000000 -0700
+++ linux-2.6.22-rc3/drivers/pci/search.c	2007-06-04 12:36:31.000000000 -0700
@@ -14,6 +14,36 @@
 #include "pci.h"
 
 DECLARE_RWSEM(pci_bus_sem);
+/*
+ * find the upstream PCIE-to-PCI bridge of a PCI device
+ * if the device is PCIE, return NULL
+ * if the device isn't connected to a PCIE bridge (that is its parent is a
+ * legacy PCI bridge and the bridge is directly connected to bus 0), return its
+ * parent
+ */
+struct pci_dev *
+pci_find_upstream_pcie_bridge(struct pci_dev *pdev)
+{
+	struct pci_dev *tmp = NULL;
+
+	if (pdev->is_pcie)
+		return NULL;
+	while (1) {
+		if (!pdev->bus->self)
+			break;
+		pdev = pdev->bus->self;
+		/* a p2p bridge */
+		if (!pdev->is_pcie) {
+			tmp = pdev;
+			continue;
+		}
+		/* PCI device should connect to a PCIE bridge */
+		BUG_ON(pdev->pcie_type != PCI_EXP_TYPE_PCI_BRIDGE);
+		return pdev;
+	}
+
+	return tmp;
+}
 
 static struct pci_bus *pci_do_find_bus(struct pci_bus *bus, unsigned char busnr)
 {
Index: linux-2.6.22-rc3/include/linux/pci.h
===================================================================
--- linux-2.6.22-rc3.orig/include/linux/pci.h	2007-06-04 12:27:34.000000000 -0700
+++ linux-2.6.22-rc3/include/linux/pci.h	2007-06-04 12:36:31.000000000 -0700
@@ -139,6 +139,7 @@
 	unsigned short	subsystem_device;
 	unsigned int	class;		/* 3 bytes: (base,sub,prog-if) */
 	u8		hdr_type;	/* PCI header type (`multi' flag masked out) */
+	u8		pcie_type;	/* PCI-E device/port type */
 	u8		rom_base_reg;	/* which config register controls the ROM */
 	u8		pin;  		/* which interrupt pin this device uses */
 
@@ -181,6 +182,7 @@
 	unsigned int 	msi_enabled:1;
 	unsigned int	msix_enabled:1;
 	unsigned int	is_managed:1;
+	unsigned int	is_pcie:1;
 	atomic_t	enable_cnt;	/* pci_enable_device has been called */
 
 	u32		saved_config_space[16]; /* config space saved at suspend time */

-- 

^ permalink raw reply	[flat|nested] 23+ messages in thread

* [Intel-IOMMU 04/10] clflush_cache_range now takes size param
  2007-06-04 21:02 [Intel-IOMMU 00/10] Intel IOMMU support anil.s.keshavamurthy
                   ` (2 preceding siblings ...)
  2007-06-04 21:02 ` [Intel-IOMMU 03/10] PCI generic helper function anil.s.keshavamurthy
@ 2007-06-04 21:02 ` anil.s.keshavamurthy
  2007-06-04 21:02 ` [Intel-IOMMU 05/10] IOVA allocation and management routines anil.s.keshavamurthy
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 23+ messages in thread
From: anil.s.keshavamurthy @ 2007-06-04 21:02 UTC (permalink / raw)
  To: linux-kernel
  Cc: akpm, ak, gregkh, muli, asit.k.mallick, suresh.b.siddha,
	anil.s.keshavamurthy, arjan, ashok.raj, shaohua.li, davem

[-- Attachment #1: clflush_cache_range.patch --]
[-- Type: text/plain, Size: 1703 bytes --]

	Introduce the size param for clflush_cache_range().

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
---
 arch/x86_64/mm/pageattr.c       |    6 +++---
 include/asm-x86_64/cacheflush.h |    1 +
 2 files changed, 4 insertions(+), 3 deletions(-)

Index: linux-2.6.22-rc3/arch/x86_64/mm/pageattr.c
===================================================================
--- linux-2.6.22-rc3.orig/arch/x86_64/mm/pageattr.c	2007-06-04 12:27:33.000000000 -0700
+++ linux-2.6.22-rc3/arch/x86_64/mm/pageattr.c	2007-06-04 12:37:30.000000000 -0700
@@ -61,10 +61,10 @@
 	return base;
 } 
 
-static void cache_flush_page(void *adr)
+void clflush_cache_range(void *adr, int size)
 {
 	int i;
-	for (i = 0; i < PAGE_SIZE; i += boot_cpu_data.x86_clflush_size)
+	for (i = 0; i < size; i += boot_cpu_data.x86_clflush_size)
 		asm volatile("clflush (%0)" :: "r" (adr + i));
 }
 
@@ -80,7 +80,7 @@
 	list_for_each_entry(pg, l, lru) {
 		void *adr = page_address(pg);
 		if (cpu_has_clflush)
-			cache_flush_page(adr);
+ 			clflush_cache_range(adr, PAGE_SIZE);
 	}
 	__flush_tlb_all();
 }
Index: linux-2.6.22-rc3/include/asm-x86_64/cacheflush.h
===================================================================
--- linux-2.6.22-rc3.orig/include/asm-x86_64/cacheflush.h	2007-06-04 12:27:33.000000000 -0700
+++ linux-2.6.22-rc3/include/asm-x86_64/cacheflush.h	2007-06-04 12:37:30.000000000 -0700
@@ -27,6 +27,7 @@
 void global_flush_tlb(void); 
 int change_page_attr(struct page *page, int numpages, pgprot_t prot);
 int change_page_attr_addr(unsigned long addr, int numpages, pgprot_t prot);
+void clflush_cache_range(void *addr, int size);
 
 #ifdef CONFIG_DEBUG_RODATA
 void mark_rodata_ro(void);

-- 

^ permalink raw reply	[flat|nested] 23+ messages in thread

* [Intel-IOMMU 05/10] IOVA allocation and management routines
  2007-06-04 21:02 [Intel-IOMMU 00/10] Intel IOMMU support anil.s.keshavamurthy
                   ` (3 preceding siblings ...)
  2007-06-04 21:02 ` [Intel-IOMMU 04/10] clflush_cache_range now takes size param anil.s.keshavamurthy
@ 2007-06-04 21:02 ` anil.s.keshavamurthy
  2007-06-04 21:02 ` [Intel-IOMMU 06/10] Intel IOMMU driver anil.s.keshavamurthy
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 23+ messages in thread
From: anil.s.keshavamurthy @ 2007-06-04 21:02 UTC (permalink / raw)
  To: linux-kernel
  Cc: akpm, ak, gregkh, muli, asit.k.mallick, suresh.b.siddha,
	anil.s.keshavamurthy, arjan, ashok.raj, shaohua.li, davem

[-- Attachment #1: generic_iova.patch --]
[-- Type: text/plain, Size: 12849 bytes --]

	This code implements a generic IOVA allocation and 
management. As per Dave's suggestion we are now allocating
IO virtual address from Higher DMA limit address rather
than lower end address and this eliminated the need to preserve
the IO virtual address for multiple devices sharing the same
domain virtual address.

Also this code uses red black trees to store the allocated and
reserved iova nodes. This showed a good performance improvements
over previous linear linked list.

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
---
 drivers/pci/iova.c |  344 +++++++++++++++++++++++++++++++++++++++++++++++++++++
 drivers/pci/iova.h |   57 ++++++++
 2 files changed, 401 insertions(+)

Index: linux-2.6.22-rc3/drivers/pci/iova.c
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ linux-2.6.22-rc3/drivers/pci/iova.c	2007-06-04 12:40:20.000000000 -0700
@@ -0,0 +1,344 @@
+/*
+ * Copyright (c) 2006, Intel Corporation.
+ *
+ * This file is released under the GPLv2.
+ *
+ * Copyright (C) 2006 Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
+ */
+
+#include "iova.h"
+
+void
+init_iova_domain(struct iova_domain *iovad)
+{
+	spin_lock_init(&iovad->iova_alloc_lock);
+	spin_lock_init(&iovad->iova_rbtree_lock);
+	iovad->rbroot = RB_ROOT;
+	iovad->cached32_node = NULL;
+
+}
+
+static struct rb_node *
+__get_cached_rbnode(struct iova_domain *iovad, unsigned long *limit_pfn)
+{
+	if ((*limit_pfn != DMA_32BIT_PFN) ||
+		(iovad->cached32_node == NULL))
+		return rb_last(&iovad->rbroot);
+	else {
+		struct rb_node *prev_node = rb_prev(iovad->cached32_node);
+		struct iova *curr_iova =
+			container_of(iovad->cached32_node, struct iova, node);
+		*limit_pfn = curr_iova->pfn_lo - 1;
+		return prev_node;
+	}
+}
+
+static inline void
+__cached_rbnode_insert_update(struct iova_domain *iovad,
+	unsigned long limit_pfn, struct iova *new)
+{
+	if (limit_pfn != DMA_32BIT_PFN)
+		return;
+	iovad->cached32_node = &new->node;
+}
+
+static inline void
+__cached_rbnode_delete_update(struct iova_domain *iovad, struct iova *free)
+{
+	struct iova *cached_iova;
+	struct rb_node *curr;
+
+	if (!iovad->cached32_node)
+		return;
+	curr = iovad->cached32_node;
+	cached_iova = container_of(curr, struct iova, node);
+
+	if (free->pfn_lo >= cached_iova->pfn_lo)
+		iovad->cached32_node = rb_next(&free->node);
+}
+
+static inline int __alloc_iova_range(struct iova_domain *iovad,
+	unsigned long size, unsigned long limit_pfn, struct iova *new)
+{
+	struct rb_node *curr = NULL;
+	unsigned long flags;
+	unsigned long saved_pfn;
+
+	/* Walk the tree backwards */
+	spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
+	saved_pfn = limit_pfn;
+	curr = __get_cached_rbnode(iovad, &limit_pfn);
+	while (curr) {
+		struct iova *curr_iova = container_of(curr, struct iova, node);
+		if (limit_pfn < curr_iova->pfn_lo)
+			goto move_left;
+		if (limit_pfn < curr_iova->pfn_hi)
+			goto adjust_limit_pfn;
+		if ((curr_iova->pfn_hi + size) <= limit_pfn)
+			break;	/* found a free slot */
+adjust_limit_pfn:
+		limit_pfn = curr_iova->pfn_lo - 1;
+move_left:
+		curr = rb_prev(curr);
+	}
+
+	if ((!curr) && !(IOVA_START_PFN + size <= limit_pfn)) {
+		spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
+		return -ENOMEM;
+	}
+	new->pfn_hi = limit_pfn;
+	new->pfn_lo = limit_pfn - size + 1;
+
+	spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
+	return 0;
+}
+
+static void
+iova_insert_rbtree(struct rb_root *root, struct iova *iova)
+{
+	struct rb_node **new = &(root->rb_node), *parent = NULL;
+	/* Figure out where to put new node */
+	while (*new) {
+		struct iova *this = container_of(*new, struct iova, node);
+		parent = *new;
+
+		if (iova->pfn_lo < this->pfn_lo)
+			new = &((*new)->rb_left);
+		else if (iova->pfn_lo > this->pfn_lo)
+			new = &((*new)->rb_right);
+		else
+			BUG(); /* this should not happen */
+	}
+	/* Add new node and rebalance tree. */
+	rb_link_node(&iova->node, parent, new);
+	rb_insert_color(&iova->node, root);
+}
+
+/**
+ * alloc_iova - allocates an iova
+ * @iovad - iova domain in question
+ * @size - size of page frames to allocate
+ * @limit_pfn - max limit address
+ * This function allocates an iova in the range limit_pfn to IOVA_START_PFN
+ * looking from limit_pfn instead from IOVA_START_PFN.
+ */
+
+struct iova *
+alloc_iova(struct iova_domain *iovad, unsigned long size, unsigned long limit_pfn)
+{
+	unsigned long flags,flags1;
+	struct iova *new_iova;
+	int ret;
+
+	new_iova = alloc_iova_mem();
+	if (!new_iova)
+		return NULL;
+
+	spin_lock_irqsave(&iovad->iova_alloc_lock, flags1);
+	ret = __alloc_iova_range(iovad, size, limit_pfn, new_iova);
+
+	if (ret) {
+		spin_unlock_irqrestore(&iovad->iova_alloc_lock, flags1);
+		free_iova_mem(new_iova);
+		return NULL;
+	}
+
+	/* Insert the new_iova into domain rbtree by holding writer lock */
+	spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
+	iova_insert_rbtree(&iovad->rbroot, new_iova);
+	__cached_rbnode_insert_update(iovad, limit_pfn, new_iova);
+	spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
+
+	spin_unlock_irqrestore(&iovad->iova_alloc_lock, flags1);
+
+	return new_iova;
+}
+
+/**
+ * find_iova - find's an iova for a given pfn
+ * @iovad - iova domain in question.
+ * pfn - page frame number
+ * This function finds and returns an iova belonging to the
+ * given doamin which matches the given pfn.
+ */
+struct iova *find_iova(struct iova_domain *iovad, unsigned long pfn)
+{
+	unsigned long flags;
+	struct rb_node *node;
+
+	spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
+	node = iovad->rbroot.rb_node;
+	while (node) {
+		struct iova *iova = container_of(node, struct iova, node);
+
+		/* If pfn falls within iova's range, return iova */
+		if ((pfn >= iova->pfn_lo) && (pfn <= iova->pfn_hi)) {
+			spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
+			return iova;
+		}
+
+		if (pfn < iova->pfn_lo)
+			node = node->rb_left;
+		else if (pfn > iova->pfn_lo)
+			node = node->rb_right;
+	}
+
+	spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
+	return NULL;
+}
+
+/**
+ * __free_iova - frees the given iova
+ * @iovad: iova domain in question.
+ * @iova: iova in question.
+ * Frees the given iova belonging to the giving domain
+ */
+void
+__free_iova(struct iova_domain *iovad, struct iova *iova)
+{
+	unsigned long flags;
+
+	if (iova) {
+		spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
+		__cached_rbnode_delete_update(iovad, iova);
+		rb_erase(&iova->node, &iovad->rbroot);
+		spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
+		free_iova_mem(iova);
+	}
+}
+/**
+ * free_iova - finds and frees the iova for a given pfn
+ * @iovad: - iova domain in question.
+ * @pfn: - pfn that is allocated previously
+ * This functions finds an iova for a given pfn and then
+ * frees the iova from that domain.
+ */
+
+void
+free_iova(struct iova_domain *iovad, unsigned long pfn)
+{
+	struct iova *iova = find_iova(iovad, pfn);
+	__free_iova(iovad, iova);
+
+}
+
+/**
+ * put_iova_domain - destroys the iova doamin
+ * @iovad: - iova domain in question.
+ * All the iova's in that domain are destroyed.
+ */
+void put_iova_domain(struct iova_domain *iovad)
+{
+	struct rb_node *node;
+	unsigned long flags;
+
+	spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
+	while ((node = rb_first(&iovad->rbroot))) {
+		struct iova *iova = container_of(node, struct iova, node);
+		rb_erase(node, &iovad->rbroot);
+		free_iova_mem(iova);
+	}
+	spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
+}
+
+static inline int
+__is_range_overlap(struct rb_node *node, unsigned long pfn_lo, unsigned long pfn_hi)
+{
+	struct iova * iova = container_of(node, struct iova, node);
+
+	if ((pfn_lo <= iova->pfn_hi) && (pfn_hi >= iova->pfn_lo))
+		return 1;
+	return 0;
+}
+
+static inline struct iova *
+__insert_new_range(struct iova_domain *iovad, unsigned long pfn_lo, unsigned long pfn_hi)
+{
+	struct iova *iova;
+
+	iova = alloc_iova_mem();
+	if (!iova)
+		return iova;
+
+	iova->pfn_hi = pfn_hi;
+	iova->pfn_lo = pfn_lo;
+	iova_insert_rbtree(&iovad->rbroot,iova);
+	return iova;
+}
+
+static inline void
+__adjust_overlap_range(struct iova *iova, unsigned long *pfn_lo, unsigned long *pfn_hi)
+{
+	if (*pfn_lo < iova->pfn_lo)
+		iova->pfn_lo = *pfn_lo;
+	if (*pfn_hi > iova->pfn_hi)
+		*pfn_lo = iova->pfn_hi + 1;
+}
+
+/**
+ * reserve_iova - reserves an iova in the given range
+ * @iovad: - iova domain pointer
+ * @pfn_lo: - lower page frame address
+ * @pfn_hi:- higher pfn adderss
+ * This function allocates reserves the address range from pfn_lo to pfn_hi so
+ * that this address is not dished out as part of alloc_iova.
+ */
+struct iova *
+reserve_iova(struct iova_domain *iovad, unsigned long pfn_lo, unsigned long pfn_hi)
+{
+	struct rb_node *node;
+	unsigned long flags, flags1;
+	struct iova *iova;
+	unsigned int overlap = 0;
+
+	spin_lock_irqsave(&iovad->iova_alloc_lock, flags);
+	spin_lock_irqsave(&iovad->iova_rbtree_lock, flags1);
+	for (node = rb_first(&iovad->rbroot); node; node = rb_next(node)) {
+		if (__is_range_overlap(node, pfn_lo, pfn_hi)) {
+			iova = container_of(node, struct iova, node);
+			__adjust_overlap_range(iova, &pfn_lo, &pfn_hi);
+			if ((pfn_lo >= iova->pfn_lo) &&
+				(pfn_hi <= iova->pfn_hi))
+				goto finish;
+			overlap = 1;
+
+		} else if (overlap)
+				break;
+	}
+
+	/* We are here either becasue this is the first reserver node
+	 * or need to insert remaining non overlap addr range
+	 */
+	iova = __insert_new_range(iovad, pfn_lo, pfn_hi);
+finish:
+
+	spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags1);
+	spin_unlock_irqrestore(&iovad->iova_alloc_lock, flags);
+	return iova;
+}
+
+/**
+ * copy_reserved_iova - copies the reserved between domains
+ * @from: - source doamin from where to copy
+ * @to: - destination domin where to copy
+ * This function copies reserved iova's from one doamin to
+ * other.
+ */
+void
+copy_reserved_iova(struct iova_domain *from, struct iova_domain *to)
+{
+	unsigned long flags, flags1;
+	struct rb_node *node;
+	spin_lock_irqsave(&from->iova_alloc_lock, flags);
+	spin_lock_irqsave(&from->iova_rbtree_lock, flags1);
+	for (node = rb_first(&from->rbroot); node; node = rb_next(node)) {
+		struct iova *iova = container_of(node, struct iova, node);
+		struct iova *new_iova;
+		new_iova = reserve_iova(to, iova->pfn_lo, iova->pfn_hi);
+		if (!new_iova)
+			printk(KERN_ERR "Reserve iova range %lx@%lx failed\n",
+				iova->pfn_lo, iova->pfn_lo);
+	}
+	spin_unlock_irqrestore(&from->iova_rbtree_lock, flags1);
+	spin_unlock_irqrestore(&from->iova_alloc_lock, flags);
+}
Index: linux-2.6.22-rc3/drivers/pci/iova.h
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ linux-2.6.22-rc3/drivers/pci/iova.h	2007-06-04 12:40:20.000000000 -0700
@@ -0,0 +1,57 @@
+/*
+ * Copyright (c) 2006, Intel Corporation.
+ *
+ * This file is released under the GPLv2.
+ *
+ * Copyright (C) 2006 Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
+ *
+ */
+
+#ifndef _IOVA_H_
+#define _IOVA_H_
+
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/rbtree.h>
+#include <linux/dma-mapping.h>
+
+
+#define PAGE_SHIFT_4K		(12)
+#define PAGE_SIZE_4K		(1UL << PAGE_SHIFT_4K)
+#define PAGE_MASK_4K		(((u64)-1) << PAGE_SHIFT_4K)
+#define PAGE_ALIGN_4K(addr)	(((addr) + PAGE_SIZE_4K - 1) & PAGE_MASK_4K)
+
+#define IOVA_START_ADDR		(0x1000)
+#define IOVA_START_PFN		(IOVA_START_ADDR >> PAGE_SHIFT_4K)
+
+#define IOVA_PFN(addr)		((addr) >> PAGE_SHIFT_4K)
+#define DMA_32BIT_PFN	IOVA_PFN(DMA_32BIT_MASK)
+#define DMA_64BIT_PFN	IOVA_PFN(DMA_64BIT_MASK)
+
+/* iova structure */
+struct iova {
+	struct rb_node	node;
+	unsigned long	pfn_hi; /* IOMMU dish out addr hi */
+	unsigned long	pfn_lo; /* IOMMU dish out addr lo */
+};
+
+/* holds all the iova translations for a domain */
+struct iova_domain {
+	spinlock_t	iova_alloc_lock;/* Lock to protect iova  allocation */
+	spinlock_t	iova_rbtree_lock; /* Lock to protect update of rbtree */
+	struct rb_root	rbroot;		/* iova domain rbtree root */
+	struct rb_node	*cached32_node; /* Save last alloced node to optimize alloc */
+};
+
+struct iova *alloc_iova_mem(void);
+void free_iova_mem(struct iova *iova);
+void free_iova(struct iova_domain *iovad, unsigned long pfn);
+void __free_iova(struct iova_domain *iovad, struct iova *iova);
+struct iova * alloc_iova(struct iova_domain *iovad, unsigned long size, unsigned long limit_pfn);
+struct iova * reserve_iova(struct iova_domain *iovad, unsigned long pfn_lo, unsigned long pfn_hi);
+void copy_reserved_iova(struct iova_domain *from, struct iova_domain *to);
+void init_iova_domain(struct iova_domain *iovad);
+struct iova * find_iova(struct iova_domain *iovad, unsigned long pfn);
+void put_iova_domain(struct iova_domain *iovad);
+
+#endif

-- 

^ permalink raw reply	[flat|nested] 23+ messages in thread

* [Intel-IOMMU 06/10] Intel IOMMU driver
  2007-06-04 21:02 [Intel-IOMMU 00/10] Intel IOMMU support anil.s.keshavamurthy
                   ` (4 preceding siblings ...)
  2007-06-04 21:02 ` [Intel-IOMMU 05/10] IOVA allocation and management routines anil.s.keshavamurthy
@ 2007-06-04 21:02 ` anil.s.keshavamurthy
  2007-06-04 21:02 ` [Intel-IOMMU 07/10] Intel iommu cmdline option - forcedac anil.s.keshavamurthy
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 23+ messages in thread
From: anil.s.keshavamurthy @ 2007-06-04 21:02 UTC (permalink / raw)
  To: linux-kernel
  Cc: akpm, ak, gregkh, muli, asit.k.mallick, suresh.b.siddha,
	anil.s.keshavamurthy, arjan, ashok.raj, shaohua.li, davem

[-- Attachment #1: intel_iommu.patch --]
[-- Type: text/plain, Size: 68641 bytes --]

	Actual intel IOMMU driver. Hardware spec can be found at:
http://www.intel.com/technology/virtualization

This driver sets X86_64 'dma_ops', so hook into standard DMA APIs. In this way,
PCI driver will get virtual DMA address. This change is transparent to PCI
drivers.

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
---
 Documentation/Intel-IOMMU.txt       |   93 +
 Documentation/kernel-parameters.txt |   10 
 arch/x86_64/kernel/pci-dma.c        |    9 
 drivers/pci/Makefile                |    5 
 drivers/pci/intel-iommu.c           | 1918 ++++++++++++++++++++++++++++++++++++
 drivers/pci/intel-iommu.h           |  296 +++++
 include/linux/dmar.h                |   23 
 7 files changed, 2353 insertions(+), 1 deletion(-)

Index: linux-2.6.22-rc3/Documentation/Intel-IOMMU.txt
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ linux-2.6.22-rc3/Documentation/Intel-IOMMU.txt	2007-06-04 12:40:29.000000000 -0700
@@ -0,0 +1,93 @@
+Linux IOMMU Support
+===================
+
+The architecture spec can be obtained from the below location.
+
+http://www.intel.com/technology/virtualization/
+
+This guide gives a quick cheat sheet for some basic understanding.
+
+Some Keywords
+
+DMAR - DMA remapping
+DRHD - DMA Engine Reporting Structure
+RMRR - Reserved memory Region Reporting Structure
+ZLR  - Zero length reads from PCI devices
+IOVA - IO Virtual address.
+
+Basic stuff
+-----------
+
+ACPI enumerates and lists the different DMA engines in the platform, and
+device scope relationships between PCI devices and which DMA engine  controls
+them.
+
+What is RMRR?
+-------------
+
+There are some devices the BIOS controls, for e.g USB devices to perform
+PS2 emulation. The regions of memory used for these devices are marked
+reserved in the e820 map. When we turn on DMA translation, DMA to those
+regions will fail. Hence BIOS uses RMRR to specify these regions along with
+devices that need to access these regions. OS is expected to setup
+unity mappings for these regions for these devices to access these regions.
+
+How is IOVA generated?
+---------------------
+
+Well behaved drivers call pci_map_*() calls before sending command to device
+that needs to perform DMA. Once DMA is completed and mapping is no longer
+required, device performs a pci_unmap_*() calls to unmap the region.
+
+The Intel IOMMU driver allocates a virtual address per domain. Each PCIE
+device has its own domain (hence protection). Devices under p2p bridges
+share the virtual address with all devices under the p2p bridge due to
+transaction id aliasing for p2p bridges.
+
+IOVA generation is pretty generic. We used the same technique as vmalloc()
+but these are not global address spaces, but separate for each domain.
+Different DMA engines may support different number of domains.
+
+We also allocate gaurd pages with each mapping, so we can attempt to catch
+any overflow that might happen.
+
+
+Graphics Problems?
+------------------
+If you encounter issues with graphics devices, you can try adding
+option intel_iommu=igfx_off to turn off the integrated graphics engine.
+
+Some exceptions to IOVA
+-----------------------
+Interrupt ranges are not address translated, (0xfee00000 - 0xfeefffff).
+The same is true for peer to peer transactions. Hence we reserve the
+address from PCI MMIO ranges so they are not allocated for IOVA addresses.
+
+Boot Message Sample
+-------------------
+
+Something like this gets printed indicating presence of DMAR tables
+in ACPI.
+
+ACPI: DMAR (v001 A M I  OEMDMAR  0x00000001 MSFT 0x00000097) @ 0x000000007f5b5ef0
+
+When DMAR is being processed and initialized by ACPI, prints DMAR locations
+and any RMRR's processed.
+
+ACPI DMAR:Host address width 36
+ACPI DMAR:DRHD (flags: 0x00000000)base: 0x00000000fed90000
+ACPI DMAR:DRHD (flags: 0x00000000)base: 0x00000000fed91000
+ACPI DMAR:DRHD (flags: 0x00000001)base: 0x00000000fed93000
+ACPI DMAR:RMRR base: 0x00000000000ed000 end: 0x00000000000effff
+ACPI DMAR:RMRR base: 0x000000007f600000 end: 0x000000007fffffff
+
+When DMAR is enabled for use, you will notice..
+
+PCI-DMA: Using DMAR IOMMU
+
+TBD
+----
+
+- For compatibility testing, could use unity map domain for all devices, just
+  provide a 1-1 for all useful memory under a single domain for all devices.
+- API for paravirt ops for abstracting functionlity for VMM folks.
Index: linux-2.6.22-rc3/Documentation/kernel-parameters.txt
===================================================================
--- linux-2.6.22-rc3.orig/Documentation/kernel-parameters.txt	2007-06-04 12:27:32.000000000 -0700
+++ linux-2.6.22-rc3/Documentation/kernel-parameters.txt	2007-06-04 12:40:29.000000000 -0700
@@ -776,6 +776,16 @@
 
 	inttest=	[IA64]
 
+	intel_iommu=	[DMAR] Intel IOMMU driver (DMAR) option
+		off
+			Disable intel iommu driver.
+		igfx_off [Default Off]
+			By default, gfx is mapped as normal device. If a gfx
+			device has a dedicated DMAR unit, the DMAR unit is
+			bypassed by not enabling DMAR with this option. In
+			this case, gfx device will use physical address for
+			DMA.
+
 	io7=		[HW] IO7 for Marvel based alpha systems
 			See comment before marvel_specify_io7 in
 			arch/alpha/kernel/core_marvel.c.
Index: linux-2.6.22-rc3/arch/x86_64/kernel/pci-dma.c
===================================================================
--- linux-2.6.22-rc3.orig/arch/x86_64/kernel/pci-dma.c	2007-06-04 12:27:32.000000000 -0700
+++ linux-2.6.22-rc3/arch/x86_64/kernel/pci-dma.c	2007-06-04 12:40:29.000000000 -0700
@@ -7,6 +7,7 @@
 #include <linux/string.h>
 #include <linux/pci.h>
 #include <linux/module.h>
+#include <linux/dmar.h>
 #include <asm/io.h>
 #include <asm/proto.h>
 #include <asm/calgary.h>
@@ -303,6 +304,10 @@
 	detect_calgary();
 #endif
 
+#ifdef CONFIG_DMAR
+	detect_intel_iommu();
+#endif
+
 #ifdef CONFIG_SWIOTLB
 	pci_swiotlb_init();
 #endif
@@ -314,6 +319,10 @@
 	calgary_iommu_init();
 #endif
 
+#ifdef CONFIG_DMAR
+	intel_iommu_init();
+#endif
+
 #ifdef CONFIG_IOMMU
 	gart_iommu_init();
 #endif
Index: linux-2.6.22-rc3/drivers/pci/Makefile
===================================================================
--- linux-2.6.22-rc3.orig/drivers/pci/Makefile	2007-06-04 12:35:19.000000000 -0700
+++ linux-2.6.22-rc3/drivers/pci/Makefile	2007-06-04 12:40:29.000000000 -0700
@@ -21,7 +21,10 @@
 obj-$(CONFIG_HT_IRQ) += htirq.o
 
 # Build Intel IOMMU support
-obj-$(CONFIG_DMAR) += dmar.o
+obj-$(CONFIG_DMAR) += dmar.o iova.o intel-iommu.o
+
+#Build Intel-IOMMU support
+obj-$(CONFIG_DMAR) += iova.o dmar.o intel-iommu.o
 
 #
 # Some architectures use the generic PCI setup functions
Index: linux-2.6.22-rc3/drivers/pci/intel-iommu.c
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ linux-2.6.22-rc3/drivers/pci/intel-iommu.c	2007-06-04 12:40:29.000000000 -0700
@@ -0,0 +1,1918 @@
+/*
+ * Copyright (c) 2006, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ *
+ * Copyright (C) Ashok Raj <ashok.raj@intel.com>
+ * Copyright (C) Shaohua Li <shaohua.li@intel.com>
+ */
+
+#include <linux/init.h>
+#include <linux/bitmap.h>
+#include <linux/slab.h>
+#include <linux/irq.h>
+#include <linux/interrupt.h>
+#include <linux/sysdev.h>
+#include <linux/spinlock.h>
+#include <linux/pci.h>
+#include <linux/dmar.h>
+#include <linux/dma-mapping.h>
+#include <linux/mempool.h>
+#include <linux/respool.h>
+#include "iova.h"
+#include "intel-iommu.h"
+#include <asm/proto.h> /* force_iommu in this header in x86-64*/
+#include <asm/cacheflush.h>
+#include "pci.h"
+
+#define IS_GFX_DEVICE(pdev) ((pdev->class >> 16) == PCI_BASE_CLASS_DISPLAY)
+#define IS_ISA_DEVICE(pdev) ((pdev->class >> 8) == PCI_CLASS_BRIDGE_ISA)
+
+#define IOAPIC_RANGE_START	(0xfee00000)
+#define IOAPIC_RANGE_END	(0xfeefffff)
+#define IOVA_START_ADDR		(0x1000)
+
+#define DEFAULT_DOMAIN_ADDRESS_WIDTH 48
+
+#define DMAR_OPERATION_TIMEOUT (HZ*60) /* 1m */
+
+#define DOMAIN_MAX_ADDR(gaw) ((((u64)1) << gaw) - 1)
+
+static void domain_remove_dev_info(struct domain *domain);
+
+static int dmar_disabled;
+static int __initdata dmar_map_gfx = 1;
+
+#define DUMMY_DEVICE_DOMAIN_INFO ((struct device_domain_info *)(-1))
+static DEFINE_SPINLOCK(device_domain_lock);
+static LIST_HEAD(device_domain_list);
+
+static int __init intel_iommu_setup(char *str)
+{
+	if (!str)
+		return -EINVAL;
+	while (*str) {
+		if (!strncmp(str, "off", 3)) {
+			dmar_disabled = 1;
+			printk(KERN_INFO"Intel-IOMMU: disabled\n");
+		} else if (!strncmp(str, "igfx_off", 8)) {
+			dmar_map_gfx = 0;
+			printk(KERN_INFO"Intel-IOMMU: disable GFX device mapping\n");
+		}
+
+		str += strcspn(str, ",");
+		while (*str == ',')
+			str++;
+	}
+	return 0;
+}
+__setup("intel_iommu=", intel_iommu_setup);
+
+#define MIN_PGTABLE_PAGES	(10)
+#define GROW_PGTABLE_PAGES	(6)
+
+#define MIN_DOMAIN_REQ		(10)
+#define GROW_DOMAIN_REQ		(4)
+
+#define MIN_DEVINFO_REQ		(10)
+#define GROW_DEVINFO_REQ	(4)
+
+#define MIN_IOVA_REQ		(1024)
+#define GROW_IOVA_REQ		(64)
+
+static struct resource_pool iommu_pgtable_pool;
+static struct resource_pool iommu_domain_pool;
+static struct resource_pool iommu_devinfo_pool;
+static struct resource_pool iommu_iova_pool;
+
+static inline void *alloc_pgtable_page(void)
+{
+	return get_resource_pool_obj(&iommu_pgtable_pool);
+}
+
+static inline void free_pgtable_page(void *vaddr)
+{
+	return put_resource_pool_obj(vaddr, &iommu_pgtable_pool);
+}
+
+static inline void *alloc_domain_mem(void)
+{
+	return get_resource_pool_obj(&iommu_domain_pool);
+}
+
+static inline void free_domain_mem(void *vaddr)
+{
+	return put_resource_pool_obj(vaddr, &iommu_domain_pool);
+}
+
+static inline void * alloc_devinfo_mem(void)
+{
+	return get_resource_pool_obj(&iommu_devinfo_pool);
+}
+
+static inline void free_devinfo_mem(void *vaddr)
+{
+	return put_resource_pool_obj(vaddr, &iommu_devinfo_pool);
+}
+
+struct iova *alloc_iova_mem(void)
+{
+	return get_resource_pool_obj(&iommu_iova_pool);
+}
+
+void free_iova_mem(struct iova *iova)
+{
+	put_resource_pool_obj(iova, &iommu_iova_pool);
+}
+
+static inline void __iommu_flush_cache(struct intel_iommu *iommu, void *addr, int size)
+{
+	if (!ecap_coherent(iommu->ecap))
+		clflush_cache_range(addr, size);
+}
+
+/* context entry handling */
+static struct context_entry * device_to_context_entry(struct intel_iommu *iommu,
+		u8 bus, u8 devfn)
+{
+	struct root_entry *root;
+	struct context_entry *context;
+	unsigned long phy_addr;
+	unsigned long flags;
+
+	spin_lock_irqsave(&iommu->lock, flags);
+	root = &iommu->root_entry[bus];
+	if (!(context = get_context_addr_from_root(*root))) {
+		context = (struct context_entry *)alloc_pgtable_page();
+		if (!context) {
+			spin_unlock_irqrestore(&iommu->lock, flags);
+			return NULL;
+		}
+		__iommu_flush_cache(iommu, (void *)context, PAGE_SIZE_4K);
+		phy_addr = virt_to_phys((void *)context);
+		set_root_value(*root, phy_addr);
+		set_root_present(*root);
+		__iommu_flush_cache(iommu, root, sizeof(*root));
+	}
+	spin_unlock_irqrestore(&iommu->lock, flags);
+	return &context[devfn];
+}
+
+static int device_context_mapped(struct intel_iommu *iommu, u8 bus, u8 devfn)
+{
+	struct root_entry *root;
+	struct context_entry *context;
+	int ret;
+	unsigned long flags;
+
+	spin_lock_irqsave(&iommu->lock, flags);
+	root = &iommu->root_entry[bus];
+	if (!(context = get_context_addr_from_root(*root))) {
+		ret = 0;
+		goto out;
+	}
+	ret = context_present(context[devfn]);
+out:
+	spin_unlock_irqrestore(&iommu->lock, flags);
+	return ret;
+}
+
+static void clear_context_table(struct intel_iommu *iommu, u8 bus, u8 devfn)
+{
+	struct root_entry *root;
+	struct context_entry *context;
+	unsigned long flags;
+
+	spin_lock_irqsave(&iommu->lock, flags);
+	root = &iommu->root_entry[bus];
+	if ((context = get_context_addr_from_root(*root))) {
+		context_clear_entry(context[devfn]);
+		__iommu_flush_cache(iommu, &context[devfn], \
+			sizeof(*context));
+	}
+	spin_unlock_irqrestore(&iommu->lock, flags);
+}
+
+static void free_context_table(struct intel_iommu *iommu)
+{
+	struct root_entry *root;
+	int i;
+	unsigned long flags;
+	struct context_entry *context;
+
+	spin_lock_irqsave(&iommu->lock, flags);
+	if (!iommu->root_entry) {
+		goto out;
+	}
+	for (i = 0; i < ROOT_ENTRY_NR; i++) {
+		root = &iommu->root_entry[i];
+		if ((context = get_context_addr_from_root(*root)))
+			free_pgtable_page(context);
+	}
+	free_pgtable_page(iommu->root_entry);
+	iommu->root_entry = NULL;
+out:
+	spin_unlock_irqrestore(&iommu->lock, flags);
+}
+
+/* page table handling */
+#define LEVEL_STRIDE		(9)
+#define LEVEL_MASK		(((u64)1 << LEVEL_STRIDE) - 1)
+#define agaw_to_level(val) ((val) + 2)
+#define agaw_to_width(val) (30 + val * LEVEL_STRIDE)
+#define width_to_agaw(w)  ((w - 30)/LEVEL_STRIDE)
+#define level_to_offset_bits(l) (12 + (l - 1) * LEVEL_STRIDE)
+#define address_level_offset(addr, level) \
+	((addr >> level_to_offset_bits(level)) & LEVEL_MASK)
+#define level_mask(l) (((u64)(-1)) << level_to_offset_bits(l))
+#define level_size(l) ((u64)1 << level_to_offset_bits(l))
+#define align_to_level(addr, l) ((addr + level_size(l) - 1) & level_mask(l))
+static struct dma_pte * addr_to_dma_pte(struct domain *domain, u64 addr)
+{
+	int addr_width = agaw_to_width(domain->agaw);
+	struct dma_pte *parent, *pte = NULL;
+	int level = agaw_to_level(domain->agaw);
+	int offset;
+	unsigned long flags;
+
+	BUG_ON(!domain->pgd);
+
+	addr &= (((u64)1) << addr_width) - 1;
+	parent = domain->pgd;
+
+	spin_lock_irqsave(&domain->mapping_lock, flags);
+	while (level > 0) {
+		void *tmp_page;
+
+		offset = address_level_offset(addr, level);
+		pte = &parent[offset];
+		if (level == 1)
+			break;
+
+		if (!dma_pte_present(*pte)) {
+			tmp_page = alloc_pgtable_page();
+
+			if (!tmp_page) {
+				spin_unlock_irqrestore(&domain->mapping_lock, flags);
+				return NULL;
+			}
+			__iommu_flush_cache(domain->iommu, tmp_page, PAGE_SIZE_4K);
+			dma_set_pte_addr(*pte, virt_to_phys(tmp_page));
+			/*
+			 * high level table always sets r/w, last level page
+			 * table control read/write
+			 */
+			dma_set_pte_readable(*pte);
+			dma_set_pte_writable(*pte);
+			__iommu_flush_cache(domain->iommu, pte, sizeof(*pte));
+		}
+		parent = phys_to_virt(dma_pte_addr(*pte));
+		level--;
+	}
+
+	spin_unlock_irqrestore(&domain->mapping_lock, flags);
+	return pte;
+}
+
+/* return address's pte at specific level */
+static struct dma_pte *dma_addr_level_pte(struct domain *domain, u64 addr,
+		int level)
+{
+	struct dma_pte *parent, *pte = NULL;
+	int total = agaw_to_level(domain->agaw);
+	int offset;
+
+	parent = domain->pgd;
+	while (level <= total) {
+		offset = address_level_offset(addr, total);
+		pte = &parent[offset];
+		if (level == total)
+			return pte;
+
+		if (!dma_pte_present(*pte))
+			break;
+		parent = phys_to_virt(dma_pte_addr(*pte));
+		total--;
+	}
+	return NULL;
+}
+
+/* clear one page's page table */
+static void dma_pte_clear_one(struct domain *domain, u64 addr)
+{
+	struct dma_pte *pte = NULL;
+
+	/* get last level pte */
+	pte = dma_addr_level_pte(domain, addr, 1);
+
+	if (pte) {
+		dma_clear_pte(*pte);
+		__iommu_flush_cache(domain->iommu, pte, sizeof(*pte));
+	}
+}
+
+/* clear last level pte, a tlb flush should be followed */
+static void dma_pte_clear_range(struct domain *domain, u64 start, u64 end)
+{
+	int addr_width = agaw_to_width(domain->agaw);
+
+	start &= (((u64)1) << addr_width) - 1;
+	end &= (((u64)1) << addr_width) - 1;
+	/* in case it's partial page */
+	start = PAGE_ALIGN_4K(start);
+	end &= PAGE_MASK_4K;
+
+	/* we don't need lock here, nobody else touches the iova range */
+	while (start < end) {
+		dma_pte_clear_one(domain, start);
+		start += PAGE_SIZE_4K;
+	}
+}
+
+/* free page table pages. last level pte should already be cleared */
+static void dma_pte_free_pagetable(struct domain *domain, u64 start, u64 end)
+{
+	int addr_width = agaw_to_width(domain->agaw);
+	struct dma_pte *pte;
+	int total = agaw_to_level(domain->agaw);
+	int level;
+	u64 tmp;
+
+	start &= (((u64)1) << addr_width) - 1;
+	end &= (((u64)1) << addr_width) - 1;
+
+	/* we don't need lock here, nobody else touches the iova range */
+	level = 2;
+	while (level <= total) {
+		tmp = align_to_level(start, level);
+		if (tmp >= end || (tmp + level_size(level) > end))
+			return;
+
+		while (tmp < end) {
+			pte = dma_addr_level_pte(domain, tmp, level);
+			if (pte) {
+				free_pgtable_page(
+					phys_to_virt(dma_pte_addr(*pte)));
+				dma_clear_pte(*pte);
+				__iommu_flush_cache(domain->iommu, pte, sizeof(*pte));
+			}
+			tmp += level_size(level);
+		}
+		level++;
+	}
+	/* free pgd */
+	if (start == 0 && end >= ((((u64)1) << addr_width) - 1)) {
+		free_pgtable_page(domain->pgd);
+		domain->pgd = NULL;
+	}
+}
+
+/* iommu handling */
+static int iommu_alloc_root_entry(struct intel_iommu *iommu)
+{
+	struct root_entry *root;
+	unsigned long flags;
+
+	root = (struct root_entry *)alloc_pgtable_page();
+	if (!root)
+		return -ENOMEM;
+
+	__iommu_flush_cache(iommu, root, PAGE_SIZE_4K);
+
+	spin_lock_irqsave(&iommu->lock, flags);
+	iommu->root_entry = root;
+	spin_unlock_irqrestore(&iommu->lock, flags);
+
+	return 0;
+}
+
+#define IOMMU_WAIT_OP(iommu, offset, op, cond, sts) \
+{\
+	unsigned long start_time = jiffies;\
+	while (1) {\
+		sts = op (iommu->reg, offset);\
+		if (cond)\
+			break;\
+		if (time_after(jiffies, start_time + DMAR_OPERATION_TIMEOUT))\
+			panic("DMAR hardware is malfunctional, please disable IOMMU\n");\
+		cpu_relax();\
+	}\
+}
+
+static void iommu_set_root_entry(struct intel_iommu *iommu)
+{
+	void *addr;
+	u32 cmd, sts;
+	unsigned long flag;
+
+	addr = iommu->root_entry;
+
+	spin_lock_irqsave(&iommu->register_lock, flag);
+	dmar_writeq(iommu->reg, DMAR_RTADDR_REG, virt_to_phys(addr));
+
+	cmd = iommu->gcmd | DMA_GCMD_SRTP;
+	dmar_writel(iommu->reg, DMAR_GCMD_REG, cmd);
+
+	/* Make sure hardware complete it */
+	IOMMU_WAIT_OP(iommu, DMAR_GSTS_REG, dmar_readl, (sts & DMA_GSTS_RTPS), sts);
+
+	spin_unlock_irqrestore(&iommu->register_lock, flag);
+}
+
+static void iommu_flush_write_buffer(struct intel_iommu *iommu)
+{
+	u32 val;
+	unsigned long flag;
+
+	if (!cap_rwbf(iommu->cap))
+		return;
+	val = iommu->gcmd | DMA_GCMD_WBF;
+
+	spin_lock_irqsave(&iommu->register_lock, flag);
+	dmar_writel(iommu->reg, DMAR_GCMD_REG, val);
+
+	/* Make sure hardware complete it */
+	IOMMU_WAIT_OP(iommu, DMAR_GSTS_REG, dmar_readl, (!(val & DMA_GSTS_WBFS)), val);
+
+	spin_unlock_irqrestore(&iommu->register_lock, flag);
+}
+
+/* return value determine if we need a write buffer flush */
+static int __iommu_flush_context(struct intel_iommu *iommu,
+	u16 did, u16 source_id, u8 function_mask, u64 type,
+	int non_present_entry_flush)
+{
+	u64 val = 0;
+	unsigned long flag;
+
+	/*
+	 * In the non-present entry flush case, if hardware doesn't cache
+	 * non-present entry we do nothing and if hardware cache non-present
+	 * entry, we flush entries of domain 0 (the domain id is used to cache
+	 * any non-present entries)
+	 */
+	if (non_present_entry_flush) {
+		if (!cap_caching_mode(iommu->cap))
+			return 1;
+		else
+			did = 0;
+	}
+
+	switch (type)
+	{
+	case DMA_CCMD_GLOBAL_INVL:
+		val = DMA_CCMD_GLOBAL_INVL;
+		break;
+	case DMA_CCMD_DOMAIN_INVL:
+		val = DMA_CCMD_DOMAIN_INVL|DMA_CCMD_DID(did);
+		break;
+	case DMA_CCMD_DEVICE_INVL:
+		val = DMA_CCMD_DEVICE_INVL|DMA_CCMD_DID(did)
+			|DMA_CCMD_SID(source_id)|DMA_CCMD_FM(function_mask);
+		break;
+	default:
+		BUG();
+	}
+	val |= DMA_CCMD_ICC;
+
+	spin_lock_irqsave(&iommu->register_lock, flag);
+	dmar_writeq(iommu->reg, DMAR_CCMD_REG, val);
+
+	/* Make sure hardware complete it */
+	IOMMU_WAIT_OP(iommu, DMAR_CCMD_REG, dmar_readq, (!(val & DMA_CCMD_ICC)), val);
+
+	spin_unlock_irqrestore(&iommu->register_lock, flag);
+
+	/* flush context entry will implictly flush write buffer */
+	return 0;
+}
+
+static int inline iommu_flush_context_global(struct intel_iommu *iommu,
+	int non_present_entry_flush)
+{
+	return __iommu_flush_context(iommu, 0, 0, 0, DMA_CCMD_GLOBAL_INVL,
+		non_present_entry_flush);
+}
+
+static int inline iommu_flush_context_domain(struct intel_iommu *iommu, u16 did,
+	int non_present_entry_flush)
+{
+	return __iommu_flush_context(iommu, did, 0, 0, DMA_CCMD_DOMAIN_INVL,
+		non_present_entry_flush);
+}
+
+static int inline iommu_flush_context_device(struct intel_iommu *iommu,
+	u16 did, u16 source_id, u8 function_mask, int non_present_entry_flush)
+{
+	return __iommu_flush_context(iommu, did, source_id, function_mask,
+		DMA_CCMD_DEVICE_INVL, non_present_entry_flush);
+}
+
+/* return value determine if we need a write buffer flush */
+static int __iommu_flush_iotlb(struct intel_iommu *iommu, u16 did,
+	u64 addr, unsigned int size_order, u64 type,
+	int non_present_entry_flush)
+{
+	int tlb_offset = ecap_iotlb_offset(iommu->ecap);
+	u64 val = 0, val_iva = 0;
+	unsigned long flag;
+
+	/*
+	 * In the non-present entry flush case, if hardware doesn't cache
+	 * non-present entry we do nothing and if hardware cache non-present
+	 * entry, we flush entries of domain 0 (the domain id is used to cache
+	 * any non-present entries)
+	 */
+	if (non_present_entry_flush) {
+		if (!cap_caching_mode(iommu->cap))
+			return 1;
+		else
+			did = 0;
+	}
+
+	switch (type) {
+	case DMA_TLB_GLOBAL_FLUSH:
+		/* global flush doesn't need set IVA_REG */
+		val = DMA_TLB_GLOBAL_FLUSH|DMA_TLB_IVT;
+		break;
+	case DMA_TLB_DSI_FLUSH:
+		val = DMA_TLB_DSI_FLUSH|DMA_TLB_IVT|DMA_TLB_DID(did);
+		break;
+	case DMA_TLB_PSI_FLUSH:
+		val = DMA_TLB_PSI_FLUSH|DMA_TLB_IVT|DMA_TLB_DID(did);
+		/* Note: always flush non-leaf currently */
+		val_iva = size_order | addr;
+		break;
+	default:
+		BUG();
+	}
+	/* Note: set drain read/write */
+#if 0
+	/*
+	 * This is probably to be super secure.. Looks like we can
+	 * ignore it without any impact.
+	 */
+	if (cap_read_drain(iommu->cap))
+		val |= DMA_TLB_READ_DRAIN;
+#endif
+	if (cap_write_drain(iommu->cap))
+		val |= DMA_TLB_WRITE_DRAIN;
+
+	spin_lock_irqsave(&iommu->register_lock, flag);
+	/* Note: Only uses first TLB reg currently */
+	if (val_iva)
+		dmar_writeq(iommu->reg, tlb_offset, val_iva);
+	dmar_writeq(iommu->reg, tlb_offset + 8, val);
+
+	/* Make sure hardware complete it */
+	IOMMU_WAIT_OP(iommu, tlb_offset + 8, dmar_readq, (!(val & DMA_TLB_IVT)), val);
+
+	spin_unlock_irqrestore(&iommu->register_lock, flag);
+
+	/* check IOTLB invalidation granularity */
+	if (DMA_TLB_IAIG(val) == 0)
+		printk(KERN_ERR"IOMMU: flush IOTLB failed\n");
+	if (DMA_TLB_IAIG(val) != DMA_TLB_IIRG(type))
+		pr_debug("IOMMU: tlb flush request %Lx, actual %Lx\n",
+			DMA_TLB_IIRG(type), DMA_TLB_IAIG(val));
+	/* flush context entry will implictly flush write buffer */
+	return 0;
+}
+
+static int inline iommu_flush_iotlb_global(struct intel_iommu *iommu,
+	int non_present_entry_flush)
+{
+	return __iommu_flush_iotlb(iommu, 0, 0, 0, DMA_TLB_GLOBAL_FLUSH,
+		non_present_entry_flush);
+}
+
+static int inline iommu_flush_iotlb_dsi(struct intel_iommu *iommu, u16 did,
+	int non_present_entry_flush)
+{
+	return __iommu_flush_iotlb(iommu, did, 0, 0, DMA_TLB_DSI_FLUSH,
+		non_present_entry_flush);
+}
+
+static int inline get_alignment(u64 base, unsigned int size)
+{
+	int t = 0;
+	u64 end;
+
+	end = base + size - 1;
+	while (base != end) {
+		t++;
+		base >>= 1;
+		end >>= 1;
+	}
+	return t;
+}
+
+static int inline iommu_flush_iotlb_psi(struct intel_iommu *iommu, u16 did,
+	u64 addr, unsigned int pages, int non_present_entry_flush)
+{
+	unsigned int align;
+
+	BUG_ON(addr & (~PAGE_MASK_4K));
+	BUG_ON(pages == 0);
+
+	/* Fallback to domain selective flush if no PSI support */
+	if (!cap_pgsel_inv(iommu->cap))
+		return iommu_flush_iotlb_dsi(iommu, did,
+			non_present_entry_flush);
+
+	/*
+	 * PSI requires page size is 2 ^ x, and the base address is naturally
+	 * aligned to the size
+	 */
+	align = get_alignment(addr >> PAGE_SHIFT_4K, pages);
+	/* Fallback to domain selective flush if size is too big */
+	if (align > cap_max_amask_val(iommu->cap))
+		return iommu_flush_iotlb_dsi(iommu, did,
+			non_present_entry_flush);
+
+	addr >>= PAGE_SHIFT_4K + align;
+	addr <<= PAGE_SHIFT_4K + align;
+
+	return __iommu_flush_iotlb(iommu, did, addr, align,
+		DMA_TLB_PSI_FLUSH, non_present_entry_flush);
+}
+
+static int iommu_enable_translation(struct intel_iommu *iommu)
+{
+	u32 sts;
+	unsigned long flag;
+
+	spin_lock_irqsave(&iommu->register_lock, flag);
+	dmar_writel(iommu->reg, DMAR_GCMD_REG, iommu->gcmd|DMA_GCMD_TE);
+
+	/* Make sure hardware complete it */
+	IOMMU_WAIT_OP(iommu, DMAR_GSTS_REG, dmar_readl, (sts & DMA_GSTS_TES), sts);
+
+	iommu->gcmd |= DMA_GCMD_TE;
+	spin_unlock_irqrestore(&iommu->register_lock, flag);
+	return 0;
+}
+
+static int iommu_disable_translation(struct intel_iommu *iommu)
+{
+	u32 sts;
+	unsigned long flag;
+
+	spin_lock_irqsave(&iommu->register_lock, flag);
+	iommu->gcmd &= ~ DMA_GCMD_TE;
+	dmar_writel(iommu->reg, DMAR_GCMD_REG, iommu->gcmd);
+
+	/* Make sure hardware complete it */
+	IOMMU_WAIT_OP(iommu, DMAR_GSTS_REG, dmar_readl, (!(sts & DMA_GSTS_TES)), sts);
+
+	spin_unlock_irqrestore(&iommu->register_lock, flag);
+	return 0;
+}
+
+static int iommu_init_domains(struct intel_iommu *iommu)
+{
+	unsigned long ndomains;
+	unsigned long nlongs;
+
+	ndomains = cap_ndoms(iommu->cap);
+	pr_debug("Number of Domains supportd <%ld>\n", ndomains);
+	nlongs = BITS_TO_LONGS(ndomains);
+
+	/* TBD: there might be 64K domains, consider other allocation for future chip */
+	iommu->domain_ids = kcalloc(nlongs, sizeof(unsigned long), GFP_KERNEL);
+	if (!iommu->domain_ids) {
+		printk(KERN_ERR "Allocating domain id array failed\n");
+		return -ENOMEM;
+	}
+	iommu->domains = kcalloc(ndomains, sizeof(struct domain *), GFP_KERNEL);
+	if (!iommu->domains) {
+		printk(KERN_ERR "Allocating domain array failed\n");
+		kfree(iommu->domain_ids);
+		return -ENOMEM;
+	}
+
+	/*
+	 * if Caching mode is set, then invalid translations are tagged
+	 * with domainid 0. Hence we need to pre-allocate it.
+	 */
+	if (cap_caching_mode(iommu->cap))
+		set_bit(0, iommu->domain_ids);
+	return 0;
+}
+
+static struct intel_iommu *alloc_iommu(struct dmar_drhd_unit *drhd)
+{
+	struct intel_iommu *iommu;
+	int ret;
+	int map_size;
+	u32 ver;
+
+	iommu = kzalloc(sizeof(*iommu), GFP_KERNEL);
+	if (!iommu)
+		return NULL;
+	iommu->reg = ioremap(drhd->reg_base_addr, PAGE_SIZE_4K);
+	if (!iommu->reg) {
+		printk(KERN_ERR "IOMMU: can't map the region\n");
+		goto error;
+	}
+	iommu->cap = dmar_readq(iommu->reg, DMAR_CAP_REG);
+	iommu->ecap = dmar_readq(iommu->reg, DMAR_ECAP_REG);
+
+	/* the registers might be more than one page */
+	map_size = max_t(int, ecap_max_iotlb_offset(iommu->ecap),
+		cap_max_fault_reg_offset(iommu->cap));
+	map_size = PAGE_ALIGN_4K(map_size);
+	if (map_size > PAGE_SIZE_4K) {
+		iounmap(iommu->reg);
+		iommu->reg = ioremap(drhd->reg_base_addr, map_size);
+		if (!iommu->reg) {
+			printk(KERN_ERR "IOMMU: can't map the region\n");
+			goto error;
+		}
+	}
+
+	ver = dmar_readl(iommu->reg, DMAR_VER_REG);
+	pr_debug("IOMMU %llx: ver %d:%d cap %llx ecap %llx\n",
+		drhd->reg_base_addr, VER_MAJOR(ver), VER_MINOR(ver),
+		iommu->cap, iommu->ecap);
+	ret = iommu_init_domains(iommu);
+	if (ret)
+		goto error_unmap;
+	spin_lock_init(&iommu->lock);
+	spin_lock_init(&iommu->register_lock);
+
+	drhd->iommu = iommu;
+	return iommu;
+error_unmap:
+	iounmap(iommu->reg);
+	iommu->reg = 0;
+error:
+	kfree(iommu);
+	return NULL;
+}
+
+#define iommu_for_each_domain_id(iommu, i) \
+for (i = find_first_bit(iommu->domain_ids, cap_ndoms(iommu->cap)); \
+	i < cap_ndoms(iommu->cap); \
+	i = find_next_bit(iommu->domain_ids, cap_ndoms(iommu->cap), i+1))
+static void domain_exit(struct domain *domain);
+static void free_iommu(struct intel_iommu *iommu)
+{
+	struct domain *domain;
+	int i;
+
+	if (!iommu)
+		return;
+
+	iommu_for_each_domain_id(iommu, i) {
+		domain = iommu->domains[i];
+		clear_bit(i, iommu->domain_ids);
+		domain_exit(domain);
+	}
+
+	if (iommu->gcmd & DMA_GCMD_TE)
+		iommu_disable_translation(iommu);
+
+	if (iommu->irq) {
+		set_irq_data(iommu->irq, NULL);
+		/* This will mask the irq */
+		free_irq(iommu->irq, iommu);
+		destroy_irq(iommu->irq);
+	}
+
+	kfree(iommu->domains);
+	kfree(iommu->domain_ids);
+
+	/* free context mapping */
+	free_context_table(iommu);
+
+	if (iommu->reg)
+		iounmap(iommu->reg);
+	kfree(iommu);
+}
+
+static struct domain * iommu_alloc_domain(struct intel_iommu *iommu)
+{
+	unsigned long num;
+	unsigned long ndomains;
+	struct domain *domain;
+	unsigned long flags;
+
+	domain = alloc_domain_mem();
+	if (!domain)
+		return NULL;
+
+	ndomains = cap_ndoms(iommu->cap);
+
+	spin_lock_irqsave(&iommu->lock, flags);
+	num = find_first_zero_bit(iommu->domain_ids, ndomains);
+	if (num >= ndomains) {
+		spin_unlock_irqrestore(&iommu->lock, flags);
+		free_domain_mem(domain);
+		printk(KERN_ERR "IOMMU: no free domain ids\n");
+		return NULL;
+	}
+
+	set_bit(num, iommu->domain_ids);
+	domain->id = num;
+	domain->iommu = iommu;
+	iommu->domains[num] = domain;
+	spin_unlock_irqrestore(&iommu->lock, flags);
+
+	return domain;
+}
+
+static void iommu_free_domain(struct domain *domain)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&domain->iommu->lock, flags);
+	clear_bit(domain->id, domain->iommu->domain_ids);
+	spin_unlock_irqrestore(&domain->iommu->lock, flags);
+}
+
+static struct iova_domain reserved_iova_list;
+#ifdef DEBUG
+static void print_iova_list(struct iova_domain *head)
+{
+	struct rb_node *node = rb_first(&head->rbroot);
+	struct iova *iova;
+
+	while (node) {
+		iova = container_of(node, struct iova, node);
+
+		pr_debug("Start %lx, end %lx\n",
+			iova->pfn_lo, iova->pfn_hi);
+		node = rb_next(node);
+	}
+}
+#endif
+
+static void dmar_init_reserved_ranges(void)
+{
+	struct pci_dev *pdev = NULL;
+	struct iova *iova;
+	int i;
+	u64 addr, size;
+
+	init_iova_domain(&reserved_iova_list);
+
+	/* IOAPIC ranges shouldn't be accessed by DMA */
+	iova = reserve_iova(&reserved_iova_list, IOVA_PFN(IOAPIC_RANGE_START),
+		IOVA_PFN(IOAPIC_RANGE_END));
+	if (!iova)
+		printk(KERN_ERR "Reserve IOAPIC range failed\n");
+
+	/* Reserve all PCI MMIO to avoid peer-to-peer access */
+	for_each_pci_dev(pdev) {
+		struct resource *r;
+
+		for (i = 0; i < PCI_NUM_RESOURCES; i++) {
+			r = &pdev->resource[i];
+			if (!r->flags || !(r->flags & IORESOURCE_MEM))
+				continue;
+			addr = r->start;
+			addr &= PAGE_MASK_4K;
+			size = r->end - addr;
+			size = PAGE_ALIGN_4K(size);
+			iova = reserve_iova(&reserved_iova_list, IOVA_PFN(addr),
+				IOVA_PFN(size + addr) - 1);
+			if (!iova)
+				printk(KERN_ERR "Reserve iova failed\n");
+		}
+	}
+
+#ifdef DEBUG
+	pr_debug("System reserved iova ranges:\n");
+	print_iova_list(&reserved_iova_list);
+#endif
+}
+
+static void domain_reserve_special_ranges(struct domain *domain)
+{
+	copy_reserved_iova(&reserved_iova_list, &domain->iovad);
+}
+
+static inline int guestwidth_to_adjustwidth(int gaw)
+{
+	int agaw;
+	int r = (gaw - 12) % 9;
+
+	if (r == 0)
+		agaw = gaw;
+	else
+		agaw = gaw + 9 - r;
+	if (agaw > 64)
+		agaw = 64;
+	return agaw;
+}
+
+static int domain_init(struct domain *domain, int guest_width)
+{
+	struct intel_iommu *iommu;
+	int adjust_width, agaw;
+	unsigned long sagaw;
+
+	init_iova_domain(&domain->iovad);
+	spin_lock_init(&domain->mapping_lock);
+
+	domain_reserve_special_ranges(domain);
+
+	/* calculate AGAW */
+	iommu = domain->iommu;
+	if (guest_width > cap_mgaw(iommu->cap))
+		guest_width = cap_mgaw(iommu->cap);
+	domain->gaw = guest_width;
+	adjust_width = guestwidth_to_adjustwidth(guest_width);
+	agaw = width_to_agaw(adjust_width);
+	sagaw = cap_sagaw(iommu->cap);
+	if (!test_bit(agaw, &sagaw)) {
+		/* hardware doesn't support it, choose a bigger one */
+		pr_debug("IOMMU: hardware doesn't support agaw %d\n", agaw);
+		agaw = find_next_bit(&sagaw, 5, agaw);
+		if (agaw >= 5)
+			return -ENODEV;
+	}
+	domain->agaw = agaw;
+	INIT_LIST_HEAD(&domain->devices);
+
+	/* always allocate the top pgd */
+	domain->pgd = (struct dma_pte *)alloc_pgtable_page();
+	if (!domain->pgd)
+		return -ENOMEM;
+	__iommu_flush_cache(iommu, domain->pgd, PAGE_SIZE_4K);
+	return 0;
+}
+
+
+static void domain_exit(struct domain *domain)
+{
+	u64 end;
+
+	/* Domain 0 is reserved, so dont process it */
+	if (!domain)
+		return;
+
+	domain_remove_dev_info(domain);
+	/* destroy iovas */
+	put_iova_domain(&domain->iovad);
+	end = DOMAIN_MAX_ADDR(domain->gaw);
+	end = end & (~PAGE_MASK_4K);
+
+	/* clear ptes */
+	dma_pte_clear_range(domain, 0, end);
+
+	/* free page tables */
+	dma_pte_free_pagetable(domain, 0, end);
+
+	iommu_free_domain(domain);
+	free_domain_mem(domain);
+}
+
+static int domain_context_mapping_one(struct domain *domain, u8 bus, u8 devfn)
+{
+	struct context_entry *context;
+	struct intel_iommu *iommu = domain->iommu;
+	unsigned long flags;
+
+	pr_debug("Set context mapping for %02x:%02x.%d\n", bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
+	BUG_ON(!domain->pgd);
+	context = device_to_context_entry(iommu, bus, devfn);
+	if (!context)
+		return -ENOMEM;
+	spin_lock_irqsave(&iommu->lock, flags);
+	if (context_present(*context)) {
+		spin_unlock_irqrestore(&iommu->lock, flags);
+		return 0;
+	}
+
+	context_set_domain_id(*context, domain->id);
+	context_set_address_width(*context, domain->agaw);
+	context_set_address_root(*context, virt_to_phys(domain->pgd));
+	context_set_translation_type(*context, CONTEXT_TT_MULTI_LEVEL);
+	context_set_fault_enable(*context);
+	context_set_present(*context);
+	__iommu_flush_cache(iommu, context, sizeof(*context));
+
+	/* it's a non-present to present mapping */
+	if (iommu_flush_context_device(iommu, domain->id,
+			(((u16)bus) << 8) | devfn, DMA_CCMD_MASK_NOBIT, 1))
+		iommu_flush_write_buffer(iommu);
+	else
+		iommu_flush_iotlb_dsi(iommu, 0, 0);
+	spin_unlock_irqrestore(&iommu->lock, flags);
+	return 0;
+}
+
+static int
+domain_context_mapping(struct domain *domain, struct pci_dev *pdev)
+{
+	int ret;
+	struct pci_dev *tmp, *parent;
+
+	ret = domain_context_mapping_one(domain, pdev->bus->number,
+		pdev->devfn);
+	if (ret)
+		return ret;
+
+	/* dependent device mapping */
+	tmp = pci_find_upstream_pcie_bridge(pdev);
+	if (!tmp)
+		return 0;
+	/* Secondary interface's bus number and devfn 0 */
+	parent = pdev->bus->self;
+	while (parent != tmp) {
+		ret = domain_context_mapping_one(domain, parent->bus->number,
+			parent->devfn);
+		if (ret)
+			return ret;
+		parent = parent->bus->self;
+	}
+	if (tmp->is_pcie) /* this is a PCIE-to-PCI bridge */
+		return domain_context_mapping_one(domain,
+			tmp->subordinate->number, 0);
+	else /* this is a legacy PCI bridge */
+		return domain_context_mapping_one(domain,
+			tmp->bus->number, tmp->devfn);
+}
+
+static int domain_context_mapped(struct domain *domain, struct pci_dev *pdev)
+{
+	int ret;
+	struct pci_dev *tmp, *parent;
+
+	ret = device_context_mapped(domain->iommu, pdev->bus->number, pdev->devfn);
+	if (!ret)
+		return ret;
+	/* dependent device mapping */
+	tmp = pci_find_upstream_pcie_bridge(pdev);
+	if (!tmp)
+		return ret;
+	/* Secondary interface's bus number and devfn 0 */
+	parent = pdev->bus->self;
+	while (parent != tmp) {
+		ret = device_context_mapped(domain->iommu, parent->bus->number,
+			parent->devfn);
+		if (!ret)
+			return ret;
+		parent = parent->bus->self;
+	}
+	if (tmp->is_pcie)
+		return device_context_mapped(domain->iommu,
+			tmp->subordinate->number, 0);
+	else
+		return device_context_mapped(domain->iommu,
+			tmp->bus->number, tmp->devfn);
+}
+
+static int
+domain_page_mapping(struct domain *domain, dma_addr_t iova,
+			u64 hpa, size_t size, int prot)
+{
+	u64 start_pfn, end_pfn;
+	struct dma_pte *pte;
+	int index;
+
+	if ((prot & (DMA_PTE_READ|DMA_PTE_WRITE)) == 0)
+		return -EINVAL;
+	iova &= PAGE_MASK_4K;
+	start_pfn = ((u64)hpa) >> PAGE_SHIFT_4K;
+	end_pfn = (PAGE_ALIGN_4K(((u64)hpa) + size)) >> PAGE_SHIFT_4K;
+	index = 0;
+	while (start_pfn < end_pfn) {
+		pte = addr_to_dma_pte(domain, iova + PAGE_SIZE_4K * index);
+		if (!pte)
+			return -ENOMEM;
+		/* we don't need lock here, nobody else touches the iova range */
+		BUG_ON(dma_pte_addr(*pte));
+		dma_set_pte_addr(*pte, start_pfn << PAGE_SHIFT_4K);
+		dma_set_pte_prot(*pte, prot);
+		__iommu_flush_cache(domain->iommu, pte, sizeof(*pte));
+		start_pfn++;
+		index++;
+	}
+	return 0;
+}
+
+
+static void detach_domain_for_dev(struct domain *domain, u8 bus, u8 devfn)
+{
+	clear_context_table(domain->iommu, bus, devfn);
+	iommu_flush_context_global(domain->iommu, 0);
+	iommu_flush_iotlb_global(domain->iommu, 0);
+}
+
+static void domain_remove_dev_info(struct domain *domain)
+{
+	struct device_domain_info *info;
+	unsigned long flags;
+
+	spin_lock_irqsave(&device_domain_lock, flags);
+	while (!list_empty(&domain->devices)) {
+		info = list_entry(domain->devices.next,
+			struct device_domain_info, link);
+		list_del(&info->link);
+		list_del(&info->global);
+		if (info->dev)
+			info->dev->sysdata = NULL;
+		spin_unlock_irqrestore(&device_domain_lock, flags);
+
+		detach_domain_for_dev(info->domain, info->bus, info->devfn);
+		free_devinfo_mem(info);
+
+		spin_lock_irqsave(&device_domain_lock, flags);
+	}
+	spin_unlock_irqrestore(&device_domain_lock, flags);
+}
+
+/*
+ * find_domain
+ * Note: we use struct pci_dev->sysdata stores the info
+ */
+struct domain *
+find_domain(struct pci_dev *pdev)
+{
+	struct device_domain_info *info;
+
+	/* No lock here, assumes no domain exit in normal case */
+	info = (struct device_domain_info *)pdev->sysdata;
+	if (info)
+		return info->domain;
+	return NULL;
+}
+
+static int dmar_pci_device_match(struct pci_dev *devices[], int cnt,
+			     struct pci_dev *dev)
+{
+	int index;
+
+	while (dev) {
+		for (index = 0; index < cnt; index ++)
+			if (dev == devices[index])
+				return 1;
+
+		/* Check our parent */
+		dev = dev->bus->self;
+	}
+
+	return 0;
+}
+
+static struct dmar_drhd_unit *
+dmar_find_matched_drhd_unit(struct pci_dev *dev)
+{
+	struct dmar_drhd_unit *drhd = NULL;
+
+	list_for_each_entry(drhd, &dmar_drhd_units, list) {
+		if (drhd->include_all || dmar_pci_device_match(drhd->devices,
+						drhd->devices_cnt, dev))
+			return drhd;
+	}
+
+	return NULL;
+}
+
+
+/* domain is initialized */
+static struct domain *get_domain_for_dev(struct pci_dev *pdev, int gaw)
+{
+	struct domain *domain, *found = NULL;
+	struct intel_iommu *iommu;
+	struct dmar_drhd_unit *drhd;
+	struct device_domain_info *info, *tmp;
+	struct pci_dev *dev_tmp;
+	unsigned long flags;
+	int bus = 0, devfn = 0;
+
+	domain = find_domain(pdev);
+	if (domain)
+		return domain;
+
+	dev_tmp = pci_find_upstream_pcie_bridge(pdev);
+	if (dev_tmp) {
+		if (dev_tmp->is_pcie) {
+			bus = dev_tmp->subordinate->number;
+			devfn = 0;
+		} else {
+			bus = dev_tmp->bus->number;
+			devfn = dev_tmp->devfn;
+		}
+		spin_lock_irqsave(&device_domain_lock, flags);
+		list_for_each_entry(info, &device_domain_list, global) {
+			if (info->bus == bus && info->devfn == devfn) {
+				found = info->domain;
+				break;
+			}
+		}
+		spin_unlock_irqrestore(&device_domain_lock, flags);
+		/* pcie-pci bridge already has a domain, uses it */
+		if (found) {
+			domain = found;
+			goto found_domain;
+		}
+	}
+
+	/* Allocate new domain for the device */
+	drhd = dmar_find_matched_drhd_unit(pdev);
+	if (!drhd) {
+		printk(KERN_ERR "IOMMU: can't find DMAR for device %s\n",
+			pci_name(pdev));
+		return NULL;
+	}
+	iommu = drhd->iommu;
+
+	domain = iommu_alloc_domain(iommu);
+	if (!domain)
+		goto error;
+
+	if (domain_init(domain, gaw)) {
+		domain_exit(domain);
+		goto error;
+	}
+
+	/* register pcie-to-pci device */
+	if (dev_tmp) {
+		info = alloc_devinfo_mem();
+		if (!info) {
+			domain_exit(domain);
+			goto error;
+		}
+		info->bus = bus;
+		info->devfn = devfn;
+		info->dev = NULL;
+		info->domain = domain;
+		/* This domain is shared by devices under p2p bridge */
+		domain->flags |= DOMAIN_FLAG_MULTIPLE_DEVICES;
+
+		/* pcie-to-pci bridge already has a domain, uses it */
+		found = NULL;
+		spin_lock_irqsave(&device_domain_lock, flags);
+		list_for_each_entry(tmp, &device_domain_list, global) {
+			if (tmp->bus == bus && tmp->devfn == devfn) {
+				found = tmp->domain;
+				break;
+			}
+		}
+		if (found) {
+			free_devinfo_mem(info);
+			domain_exit(domain);
+			domain = found;
+		} else {
+			list_add(&info->link, &domain->devices);
+			list_add(&info->global, &device_domain_list);
+		}
+		spin_unlock_irqrestore(&device_domain_lock, flags);
+	}
+
+found_domain:
+	info = alloc_devinfo_mem();
+	if (!info)
+		goto error;
+	info->bus = pdev->bus->number;
+	info->devfn = pdev->devfn;
+	info->dev = pdev;
+	info->domain = domain;
+	spin_lock_irqsave(&device_domain_lock, flags);
+	/* somebody is fast */
+	if ((found = find_domain(pdev)) != NULL) {
+		spin_unlock_irqrestore(&device_domain_lock, flags);
+		if (found != domain) {
+			domain_exit(domain);
+			domain = found;
+		}
+		free_devinfo_mem(info);
+		return domain;
+	}
+	list_add(&info->link, &domain->devices);
+	list_add(&info->global, &device_domain_list);
+	pdev->sysdata = info;
+	spin_unlock_irqrestore(&device_domain_lock, flags);
+	return domain;
+error:
+	/* recheck it here, maybe others set it */
+	return find_domain(pdev);
+}
+
+static int iommu_prepare_identity_map(struct pci_dev *pdev, u64 start, u64 end)
+{
+	struct domain *domain;
+	unsigned long size;
+	u64 base;
+	int ret;
+
+	printk(KERN_INFO
+		"IOMMU: Setting identity map for device %s [0x%Lx - 0x%Lx]\n",
+		pci_name(pdev), start, end);
+	/* page table init */
+	domain = get_domain_for_dev(pdev, DEFAULT_DOMAIN_ADDRESS_WIDTH);
+	if (!domain)
+		return -ENOMEM;
+
+	/* The address might not be aligned */
+	base = start & PAGE_MASK_4K;
+	size = end - base;
+	size = PAGE_ALIGN_4K(size);
+	if (!reserve_iova(&domain->iovad, IOVA_PFN(base),
+			IOVA_PFN(base + size) - 1)) {
+		printk(KERN_ERR "IOMMU: reserve iova failed\n");
+		ret = -ENOMEM;
+		goto error;
+	}
+
+	pr_debug("Mapping reserved region %lx@%llx for %s\n",
+		size, base, pci_name(pdev));
+	/*
+	 * RMRR range might have overlap with physical memory range,
+	 * clear it first
+	 */
+	dma_pte_clear_range(domain, base, base + size);
+
+	ret = domain_page_mapping(domain, base, base, size,
+		DMA_PTE_READ|DMA_PTE_WRITE);
+	if (ret)
+		goto error;
+
+	/* context entry init */
+	ret = domain_context_mapping(domain, pdev);
+	if (!ret)
+		return 0;
+error:
+	domain_exit(domain);
+	return ret;
+
+}
+
+static inline int iommu_prepare_rmrr_dev(struct dmar_rmrr_unit *rmrr,
+	struct pci_dev *pdev)
+{
+	if (pdev->sysdata == DUMMY_DEVICE_DOMAIN_INFO)
+		return 0;
+	return iommu_prepare_identity_map(pdev, rmrr->base_address,
+		rmrr->end_address + 1);
+}
+
+int __init init_dmars(void)
+{
+	struct dmar_drhd_unit *drhd;
+	struct dmar_rmrr_unit *rmrr;
+	struct pci_dev *pdev;
+	struct intel_iommu *iommu;
+	int ret, unit = 0;
+
+	/*
+	 * for each drhd
+	 *    allocate root
+	 *    initialize and program root entry to not present
+	 * endfor
+	 */
+	for_each_drhd_unit(drhd) {
+		if (drhd->ignored)
+			continue;
+		iommu = alloc_iommu(drhd);
+		if (!iommu) {
+			ret = -ENOMEM;
+			goto error;
+		}
+
+		/*
+		 * TBD:
+		 * we could share the same root & context tables
+		 * amoung all IOMMU's. Need to Split it later.
+		 */
+		ret = iommu_alloc_root_entry(iommu);
+		if (ret) {
+			printk(KERN_ERR "IOMMU: allocate root entry failed\n");
+			goto error;
+		}
+	}
+
+	/*
+	 * For each rmrr
+	 *   for each dev attached to rmrr
+	 *   do
+	 *     locate drhd for dev, alloc domain for dev
+	 *     allocate free domain
+	 *     allocate page table entries for rmrr
+	 *     if context not allocated for bus
+	 *           allocate and init context
+	 *           set present in root table for this bus
+	 *     init context with domain, translation etc
+	 *    endfor
+	 * endfor
+	 */
+	begin_for_each_rmrr_device(rmrr, pdev)
+		ret = iommu_prepare_rmrr_dev(rmrr, pdev);
+		if (ret)
+			printk(KERN_ERR "IOMMU: mapping reserved region failed\n");
+	end_for_each_rmrr_device(rmrr, pdev)
+
+	/*
+	 * for each drhd
+	 *   enable fault log
+	 *   global invalidate context cache
+	 *   global invalidate iotlb
+	 *   enable translation
+	 */
+	for_each_drhd_unit(drhd) {
+		if (drhd->ignored)
+			continue;
+		iommu = drhd->iommu;
+		sprintf (iommu->name, "dmar%d", unit++);
+
+		iommu_flush_write_buffer(iommu);
+
+		iommu_set_root_entry(iommu);
+
+		iommu_flush_context_global(iommu, 0);
+		iommu_flush_iotlb_global(iommu, 0);
+
+		ret = iommu_enable_translation(iommu);
+		if (ret)
+			goto error;
+	}
+
+	return 0;
+error:
+	for_each_drhd_unit(drhd) {
+		if (drhd->ignored)
+			continue;
+		iommu = drhd->iommu;
+		free_iommu(iommu);
+	}
+	return ret;
+}
+
+#define aligned_size(host_addr, size) \
+	PAGE_ALIGN_4K((host_addr & (~PAGE_MASK_4K)) + size)
+struct iova *
+iommu_alloc_iova(struct domain *domain, void *host_addr, size_t size,
+		u64 start, u64 end)
+{
+	u64 start_addr;
+	struct iova *piova;
+
+	/* Make sure it's in range */
+	if ((start > DOMAIN_MAX_ADDR(domain->gaw)) || end < start)
+		return NULL;
+
+	end = min_t(u64, DOMAIN_MAX_ADDR(domain->gaw), end);
+	start_addr = PAGE_ALIGN_4K(start);
+	size = aligned_size((u64)host_addr, size);
+	if (!size || (start_addr + size > end))
+		return NULL;
+
+	piova = alloc_iova(&domain->iovad, size >> PAGE_SHIFT_4K, IOVA_PFN(end));
+
+	return piova;
+}
+
+
+/* iotlb */
+static dma_addr_t __intel_map_single(struct device *dev, void *addr,
+	size_t size, int dir, u64 *flush_addr, unsigned int *flush_size)
+{
+	struct domain *domain;
+	struct pci_dev *pdev = to_pci_dev(dev);
+	int ret;
+	int prot = 0;
+	struct iova *iova = NULL;
+	u64 start_addr;
+
+	addr = (void *)virt_to_phys(addr);
+
+	domain = get_domain_for_dev(pdev,
+			DEFAULT_DOMAIN_ADDRESS_WIDTH);
+	if (!domain) {
+		printk(KERN_ERR"Allocating domain for %s failed", pci_name(pdev));
+		return 0;
+	}
+
+	start_addr = IOVA_START_ADDR;
+
+	if (pdev->dma_mask <= DMA_32BIT_MASK) {
+		iova = iommu_alloc_iova(domain, addr, size, start_addr,
+			pdev->dma_mask);
+	} else  {
+		/*
+		 * First try to allocate an io virtual address in
+		 * DMA_32BIT_MASK and if that fails then try allocating
+		 * from higer range
+		 */
+		iova = iommu_alloc_iova(domain, addr, size, start_addr,
+			DMA_32BIT_MASK);
+		if (!iova)
+			iova = iommu_alloc_iova(domain, addr, size, start_addr,
+			pdev->dma_mask);
+	}
+
+	if (!iova) {
+		printk(KERN_ERR"Allocating iova for %s failed", pci_name(pdev));
+		return 0;
+	}
+
+	/* make sure context mapping is ok */
+	if (unlikely(!domain_context_mapped(domain, pdev))) {
+		ret = domain_context_mapping(domain, pdev);
+		if (ret)
+			goto error;
+	}
+
+	/*
+	 * Check if DMAR supports zero-length reads on write only
+	 * mappings..
+	 */
+	if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL || \
+			!cap_zlr(domain->iommu->cap))
+		prot |= DMA_PTE_READ;
+	if (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL)
+		prot |= DMA_PTE_WRITE;
+	/*
+	 * addr - (addr + size) might be partial page, we should map the whole
+	 * page.  Note: if two part of one page are separately mapped, we
+	 * might have two guest_addr mapping to the same host addr, but this
+	 * is not a big problem
+	 */
+	ret = domain_page_mapping(domain, iova->pfn_lo << PAGE_SHIFT_4K,
+		((u64)addr) & PAGE_MASK_4K,
+		(iova->pfn_hi - iova->pfn_lo + 1) << PAGE_SHIFT_4K, prot);
+	if (ret)
+		goto error;
+
+	pr_debug("Device %s request: %lx@%llx mapping: %lx@%llx, dir %d\n",
+		pci_name(pdev), size, (u64)addr,
+		(iova->pfn_hi - iova->pfn_lo + 1) << PAGE_SHIFT_4K,
+		(u64)(iova->pfn_lo << PAGE_SHIFT_4K), dir);
+
+	*flush_addr = iova->pfn_lo << PAGE_SHIFT_4K;
+	*flush_size = (iova->pfn_hi - iova->pfn_lo + 1) << PAGE_SHIFT_4K;
+	return (iova->pfn_lo << PAGE_SHIFT_4K) + ((u64)addr & (~PAGE_MASK_4K));
+error:
+	__free_iova(&domain->iovad, iova);
+	printk(KERN_ERR"Device %s request: %lx@%llx dir %d --- failed\n",
+		pci_name(pdev), size, (u64)addr, dir);
+	return 0;
+}
+
+static dma_addr_t intel_map_single(struct device *hwdev, void *addr,
+	size_t size, int dir)
+{
+	struct pci_dev *pdev = to_pci_dev(hwdev);
+	dma_addr_t ret;
+	struct domain *domain;
+	u64 flush_addr;
+	unsigned int flush_size;
+
+	BUG_ON(dir == DMA_NONE);
+	if (pdev->sysdata == DUMMY_DEVICE_DOMAIN_INFO)
+		return virt_to_bus(addr);
+
+	ret = __intel_map_single(hwdev, addr, size, dir, &flush_addr, &flush_size);
+	if (ret) {
+		domain = find_domain(pdev);
+		/* it's a non-present to present mapping */
+		if (iommu_flush_iotlb_psi(domain->iommu, domain->id,
+				flush_addr, flush_size >> PAGE_SHIFT_4K, 1))
+			iommu_flush_write_buffer(domain->iommu);
+	}
+	return ret;
+}
+
+static void __intel_unmap_single(struct device *dev, dma_addr_t dev_addr,
+	size_t size, int dir, u64 *flush_addr, unsigned int *flush_size)
+{
+	struct domain *domain;
+	struct pci_dev *pdev = to_pci_dev(dev);
+	struct iova *iova;
+
+	domain = find_domain(pdev);
+	BUG_ON(!domain);
+
+	iova = find_iova(&domain->iovad, IOVA_PFN(dev_addr));
+	if (!iova) {
+		*flush_size = 0;
+		return;
+	}
+	pr_debug("Device %s unmapping: %lx@%llx\n",
+		pci_name(pdev), (iova->pfn_hi - iova->pfn_lo + 1) << PAGE_SHIFT_4K,
+		(u64)(iova->pfn_lo << PAGE_SHIFT_4K));
+
+	*flush_addr = iova->pfn_lo << PAGE_SHIFT_4K;
+	*flush_size = (iova->pfn_hi - iova->pfn_lo + 1) << PAGE_SHIFT_4K;
+	/*  clear the whole page, not just dev_addr - (dev_addr + size) */
+	dma_pte_clear_range(domain, *flush_addr, *flush_addr + *flush_size);
+	/* free page tables */
+	dma_pte_free_pagetable(domain, *flush_addr, *flush_addr + *flush_size);
+	/* free iova */
+	__free_iova(&domain->iovad, iova);
+}
+
+static void intel_unmap_single(struct device *dev, dma_addr_t dev_addr,
+	size_t size, int dir)
+{
+	struct pci_dev *pdev = to_pci_dev(dev);
+	struct domain *domain;
+	u64 flush_addr;
+	unsigned int flush_size;
+
+	if (pdev->sysdata == DUMMY_DEVICE_DOMAIN_INFO)
+		return;
+
+	domain = find_domain(pdev);
+	__intel_unmap_single(dev, dev_addr, size, dir, &flush_addr, &flush_size);
+	if (flush_size == 0)
+		return;
+	if (iommu_flush_iotlb_psi(domain->iommu, domain->id, flush_addr,
+			flush_size >> PAGE_SHIFT_4K, 0))
+		iommu_flush_write_buffer(domain->iommu);
+}
+
+static void * intel_alloc_coherent(struct device *hwdev, size_t size,
+		       dma_addr_t *dma_handle, gfp_t flags)
+{
+	void *vaddr;
+	int order;
+
+	size = PAGE_ALIGN_4K(size);
+	order = get_order(size);
+	flags &= ~(GFP_DMA | GFP_DMA32);
+
+	vaddr = (void *)__get_free_pages(flags, order);
+	if (!vaddr)
+		return NULL;
+	memset(vaddr, 0, size);
+
+	*dma_handle = intel_map_single(hwdev, vaddr, size, DMA_BIDIRECTIONAL);
+	if (*dma_handle)
+		return vaddr;
+	free_pages((unsigned long)vaddr, order);
+	return NULL;
+}
+
+static void intel_free_coherent(struct device *hwdev, size_t size,
+	void *vaddr, dma_addr_t dma_handle)
+{
+	int order;
+
+	size = PAGE_ALIGN_4K(size);
+	order = get_order(size);
+
+	intel_unmap_single(hwdev, dma_handle, size, DMA_BIDIRECTIONAL);
+	free_pages((unsigned long)vaddr, order);
+}
+
+static void intel_unmap_sg(struct device *hwdev, struct scatterlist *sg,
+	int nelems, int dir)
+{
+	int i;
+	struct pci_dev *pdev = to_pci_dev(hwdev);
+	struct domain *domain;
+	u64 flush_addr;
+	unsigned int flush_size;
+
+	if (pdev->sysdata == DUMMY_DEVICE_DOMAIN_INFO)
+		return;
+
+	domain = find_domain(pdev);
+	for (i = 0; i < nelems; i++, sg++)
+		__intel_unmap_single(hwdev, sg->dma_address,
+			sg->dma_length, dir, &flush_addr, &flush_size);
+
+	if (iommu_flush_iotlb_dsi(domain->iommu, domain->id, 0))
+		iommu_flush_write_buffer(domain->iommu);
+}
+
+#define SG_ENT_VIRT_ADDRESS(sg)	(page_address((sg)->page) + (sg)->offset)
+static int intel_nontranslate_map_sg(struct device* hddev,
+	struct scatterlist *sg, int nelems, int dir)
+{
+	int i;
+
+ 	for (i = 0; i < nelems; i++) {
+		struct scatterlist *s = &sg[i];
+		BUG_ON(!s->page);
+		s->dma_address = virt_to_bus(SG_ENT_VIRT_ADDRESS(s));
+		s->dma_length = s->length;
+	}
+	return nelems;
+}
+
+static int intel_map_sg(struct device *hwdev, struct scatterlist *sg,
+	int nelems, int dir)
+{
+	void *addr;
+	int i;
+	dma_addr_t dma_handle;
+	struct pci_dev *pdev = to_pci_dev(hwdev);
+	struct domain *domain;
+	u64 flush_addr;
+	unsigned int flush_size;
+
+	BUG_ON(dir == DMA_NONE);
+	if (pdev->sysdata == DUMMY_DEVICE_DOMAIN_INFO)
+		return intel_nontranslate_map_sg(hwdev, sg, nelems, dir);
+
+	for (i = 0; i < nelems; i++, sg++) {
+		addr = SG_ENT_VIRT_ADDRESS(sg);
+		dma_handle = __intel_map_single(hwdev, addr,
+				sg->length, dir, &flush_addr, &flush_size);
+		if (!dma_handle) {
+			intel_unmap_sg(hwdev, sg - i, i, dir);
+			sg[0].dma_length = 0;
+			return 0;
+		}
+		sg->dma_address = dma_handle;
+		sg->dma_length = sg->length;
+	}
+
+	domain = find_domain(pdev);
+
+	/* it's a non-present to present mapping */
+	if (iommu_flush_iotlb_dsi(domain->iommu, domain->id, 1))
+		iommu_flush_write_buffer(domain->iommu);
+	return nelems;
+}
+
+struct dma_mapping_ops intel_dma_ops = {
+	.alloc_coherent = intel_alloc_coherent,
+	.free_coherent = intel_free_coherent,
+	.map_single = intel_map_single,
+	.unmap_single = intel_unmap_single,
+	.map_sg = intel_map_sg,
+	.unmap_sg = intel_unmap_sg,
+};
+
+void *iommu_rpool_alloc(unsigned int size, gfp_t flag)
+{
+	if (size == PAGE_SIZE_4K)
+		return(void *)get_zeroed_page(flag);
+	else
+		return kzalloc(size, flag);
+}
+
+void iommu_rpool_free(void *pobj, unsigned int size)
+{
+	if (size == PAGE_SIZE_4K)
+		free_page((unsigned long)pobj);
+	else
+		kfree(pobj);
+}
+
+static inline int
+iommu_pgtable_pool_init(void)
+{
+
+	return init_resource_pool(&iommu_pgtable_pool, MIN_PGTABLE_PAGES,
+		PAGE_SIZE_4K, GROW_PGTABLE_PAGES, iommu_rpool_alloc,
+		iommu_rpool_free);
+}
+
+static inline int
+iommu_domain_pool_init(void)
+{
+	return init_resource_pool(&iommu_domain_pool, MIN_DOMAIN_REQ,
+		sizeof(struct domain), GROW_DOMAIN_REQ, iommu_rpool_alloc,
+		iommu_rpool_free);
+}
+
+static inline int
+iommu_devinfo_pool_init(void)
+{
+	return init_resource_pool(&iommu_devinfo_pool, MIN_DEVINFO_REQ,
+		sizeof(struct device_domain_info),
+		GROW_DEVINFO_REQ, iommu_rpool_alloc,
+		iommu_rpool_free);
+}
+
+static inline int
+iommu_iova_pool_init(void)
+{
+	return init_resource_pool(&iommu_iova_pool, MIN_IOVA_REQ,
+		sizeof(struct iova),
+		GROW_IOVA_REQ, iommu_rpool_alloc, iommu_rpool_free);
+}
+
+static int iommu_init_mempool(void)
+{
+	int ret;
+	ret = iommu_iova_pool_init();
+	if (ret)
+		return ret;
+
+	ret = iommu_pgtable_pool_init();
+	if (ret)
+		goto pgtable_error;
+
+	ret = iommu_domain_pool_init();
+	if (ret)
+		goto domain_error;
+
+	ret = iommu_devinfo_pool_init();
+	if (!ret)
+		return ret;
+
+	destroy_resource_pool(&iommu_domain_pool);
+domain_error:
+	destroy_resource_pool(&iommu_pgtable_pool);
+pgtable_error:
+	destroy_resource_pool(&iommu_iova_pool);
+
+	return -ENOMEM;
+}
+
+static void iommu_exit_mempool(void)
+{
+	destroy_resource_pool(&iommu_devinfo_pool);
+	destroy_resource_pool(&iommu_domain_pool);
+	destroy_resource_pool(&iommu_pgtable_pool);
+	destroy_resource_pool(&iommu_iova_pool);
+}
+
+void __init detect_intel_iommu(void)
+{
+	if (swiotlb || no_iommu || iommu_detected || dmar_disabled)
+		return;
+	if (early_dmar_detect()) {
+		iommu_detected = 1;
+	}
+}
+
+static void __init init_no_remapping_devices(void)
+{
+	struct dmar_drhd_unit *drhd;
+
+	for_each_drhd_unit(drhd)
+		if (!drhd->include_all) {
+			int i;
+			for (i=0; i < drhd->devices_cnt; i++)
+				if (drhd->devices[i] != NULL)
+					break;
+			/* ignore DMAR unit if no pci devices exist */
+			if (i == drhd->devices_cnt)
+				drhd->ignored = 1;
+		}
+
+	if (dmar_map_gfx)
+		return;
+
+	for_each_drhd_unit(drhd) {
+		int i;
+		if (drhd->ignored || drhd->include_all)
+			continue;
+
+		for (i = 0; i < drhd->devices_cnt; i++)
+			if (drhd->devices[i] && !IS_GFX_DEVICE(drhd->devices[i]))
+				break;
+
+		if (i < drhd->devices_cnt)
+			continue;
+
+		/* bypass IOMMU if it is just for gfx devices */
+		drhd->ignored = 1;
+		for (i = 0; i < drhd->devices_cnt; i++) {
+			if (!drhd->devices[i])
+				continue;
+			drhd->devices[i]->sysdata = DUMMY_DEVICE_DOMAIN_INFO;
+		}
+	}
+}
+
+int __init intel_iommu_init(void)
+{
+	int ret = 0;
+
+	if (no_iommu || swiotlb || dmar_disabled)
+		return -ENODEV;
+
+	if (dmar_table_init())
+		return 	-ENODEV;
+
+	iommu_init_mempool();
+	dmar_init_reserved_ranges();
+
+	init_no_remapping_devices();
+
+	ret = init_dmars();
+	if (ret) {
+		printk(KERN_ERR "IOMMU: dmar init failed\n");
+		put_iova_domain(&reserved_iova_list);
+		iommu_exit_mempool();
+		return ret;
+	}
+	printk(KERN_INFO
+		"PCI-DMA: Intel(R) Virtualization Technology for Directed I/O\n");
+
+	force_iommu = 1;
+	dma_ops = &intel_dma_ops;
+	return 0;
+}
Index: linux-2.6.22-rc3/drivers/pci/intel-iommu.h
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ linux-2.6.22-rc3/drivers/pci/intel-iommu.h	2007-06-04 12:40:29.000000000 -0700
@@ -0,0 +1,296 @@
+/*
+ * Copyright (c) 2006, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ *
+ * Copyright (C) Ashok Raj <ashok.raj@intel.com>
+ */
+
+#ifndef _INTEL_IOMMU_H_
+#define _INTEL_IOMMU_H_
+
+#include <linux/types.h>
+#include <linux/msi.h>
+#include "iova.h"
+#include <asm/io.h>
+
+/*
+ * Intel IOMMU register specification per version 1.0 public spec.
+ */
+
+#define	DMAR_VER_REG	0x0	/* Arch version supported by this IOMMU */
+#define	DMAR_CAP_REG	0x8	/* Hardware supported capabilities */
+#define	DMAR_ECAP_REG	0x10	/* Extended capabilities supported */
+#define	DMAR_GCMD_REG	0x18	/* Global command register */
+#define	DMAR_GSTS_REG	0x1c	/* Global status register */
+#define	DMAR_RTADDR_REG	0x20	/* Root entry table */
+#define	DMAR_CCMD_REG	0x28	/* Context command reg */
+#define	DMAR_FSTS_REG	0x34	/* Fault Status register */
+#define	DMAR_FECTL_REG	0x38	/* Fault control register */
+#define	DMAR_FEDATA_REG	0x3c	/* Fault event interrupt data register */
+#define	DMAR_FEADDR_REG	0x40	/* Fault event interrupt addr register */
+#define	DMAR_FEUADDR_REG 0x44	/* Upper address register */
+#define	DMAR_AFLOG_REG	0x58	/* Advanced Fault control */
+#define	DMAR_PMEN_REG	0x64	/* Enable Protected Memory Region */
+#define	DMAR_PLMBASE_REG 0x68	/* PMRR Low addr */
+#define	DMAR_PLMLIMIT_REG 0x6c	/* PMRR low limit */
+#define	DMAR_PHMBASE_REG 0x70	/* pmrr high base addr */
+#define	DMAR_PHMLIMIT_REG 0x78	/* pmrr high limit */
+
+#define OFFSET_STRIDE		(9)
+#define dmar_readl(dmar, reg) readl(dmar + reg)
+#define dmar_writel(dmar, reg, val) writel((val), dmar + reg)
+#define dmar_readq(dmar, reg) ({ \
+		u32 lo, hi; \
+		lo = dmar_readl(dmar, reg); \
+		hi = dmar_readl(dmar, reg + 4); \
+		(((u64) hi) << 32) + lo; })
+#define dmar_writeq(dmar, reg, val) do {\
+		dmar_writel(dmar, reg, (u32)(val)); \
+		dmar_writel(dmar, reg + 4, (u32)((val) >> 32)); \
+	} while (0)
+
+#define VER_MAJOR(v)		(((v) & 0xf0) >> 4)
+#define VER_MINOR(v)		((v) & 0x0f)
+
+/*
+ * Decoding Capability Register
+ */
+#define cap_read_drain(c)	(((c) >> 55) & 1)
+#define cap_write_drain(c)	(((c) >> 54) & 1)
+#define cap_max_amask_val(c)	(((c) >> 48) & 0x3f)
+#define cap_num_fault_regs(c)	((((c) >> 40) & 0xff) + 1)
+#define cap_pgsel_inv(c)	(((c) >> 39) & 1)
+
+#define cap_super_page_val(c)	(((c) >> 34) & 0xf)
+#define cap_super_offset(c)	(((find_first_bit(&cap_super_page_val(c), 4)) \
+					* OFFSET_STRIDE) + 21)
+
+#define cap_fault_reg_offset(c)	((((c) >> 24) & 0x3ff) * 16)
+#define cap_max_fault_reg_offset(c) \
+	(cap_fault_reg_offset(c) + cap_num_fault_regs(c) * 16)
+
+#define cap_zlr(c)		(((c) >> 22) & 1)
+#define cap_isoch(c)		(((c) >> 23) & 1)
+#define cap_mgaw(c)		((((c) >> 16) & 0x3f) + 1)
+#define cap_sagaw(c)		(((c) >> 8) & 0x1f)
+#define cap_caching_mode(c)	(((c) >> 7) & 1)
+#define cap_phmr(c)		(((c) >> 6) & 1)
+#define cap_plmr(c)		(((c) >> 5) & 1)
+#define cap_rwbf(c)		(((c) >> 4) & 1)
+#define cap_afl(c)		(((c) >> 3) & 1)
+#define cap_ndoms(c)		(((unsigned long)1) << (4 + 2 * ((c) & 0x7)))
+/*
+ * Extended Capability Register
+ */
+
+#define ecap_niotlb_iunits(e)	((((e) >> 24) & 0xff) + 1)
+#define ecap_iotlb_offset(e) 	((((e) >> 8) & 0x3ff) * 16)
+#define ecap_max_iotlb_offset(e) \
+	(ecap_iotlb_offset(e) + ecap_niotlb_iunits(e) * 16)
+#define ecap_coherent(e)	((e) & 0x1)
+
+
+/* IOTLB_REG */
+#define DMA_TLB_GLOBAL_FLUSH (((u64)1) << 60)
+#define DMA_TLB_DSI_FLUSH (((u64)2) << 60)
+#define DMA_TLB_PSI_FLUSH (((u64)3) << 60)
+#define DMA_TLB_IIRG(type) ((type >> 60) & 7)
+#define DMA_TLB_IAIG(val) (((val) >> 57) & 7)
+#define DMA_TLB_READ_DRAIN (((u64)1) << 49)
+#define DMA_TLB_WRITE_DRAIN (((u64)1) << 48)
+#define DMA_TLB_DID(id)	(((u64)((id) & 0xffff)) << 32)
+#define DMA_TLB_IVT (((u64)1) << 63)
+#define DMA_TLB_IH_NONLEAF (((u64)1) << 6)
+#define DMA_TLB_MAX_SIZE (0x3f)
+
+/* GCMD_REG */
+#define DMA_GCMD_TE (((u32)1) << 31)
+#define DMA_GCMD_SRTP (((u32)1) << 30)
+#define DMA_GCMD_SFL (((u32)1) << 29)
+#define DMA_GCMD_EAFL (((u32)1) << 28)
+#define DMA_GCMD_WBF (((u32)1) << 27)
+
+/* GSTS_REG */
+#define DMA_GSTS_TES (((u32)1) << 31)
+#define DMA_GSTS_RTPS (((u32)1) << 30)
+#define DMA_GSTS_FLS (((u32)1) << 29)
+#define DMA_GSTS_AFLS (((u32)1) << 28)
+#define DMA_GSTS_WBFS (((u32)1) << 27)
+
+/* CCMD_REG */
+#define DMA_CCMD_ICC (((u64)1) << 63)
+#define DMA_CCMD_GLOBAL_INVL (((u64)1) << 61)
+#define DMA_CCMD_DOMAIN_INVL (((u64)2) << 61)
+#define DMA_CCMD_DEVICE_INVL (((u64)3) << 61)
+#define DMA_CCMD_FM(m) (((u64)((m) & 0x3)) << 32)
+#define DMA_CCMD_MASK_NOBIT 0
+#define DMA_CCMD_MASK_1BIT 1
+#define DMA_CCMD_MASK_2BIT 2
+#define DMA_CCMD_MASK_3BIT 3
+#define DMA_CCMD_SID(s) (((u64)((s) & 0xffff)) << 16)
+#define DMA_CCMD_DID(d) ((u64)((d) & 0xffff))
+
+/* FECTL_REG */
+#define DMA_FECTL_IM (((u32)1) << 31)
+
+/* FSTS_REG */
+#define DMA_FSTS_PPF ((u32)2)
+#define DMA_FSTS_PFO ((u32)1)
+#define dma_fsts_fault_record_index(s) (((s) >> 8) & 0xff)
+
+/* FRCD_REG, 32 bits access */
+#define DMA_FRCD_F (((u32)1) << 31)
+#define dma_frcd_type(d) ((d >> 30) & 1)
+#define dma_frcd_fault_reason(c) (c & 0xff)
+#define dma_frcd_source_id(c) (c & 0xffff)
+#define dma_frcd_page_addr(d) (d & (((u64)-1) << 12)) /* low 64 bit */
+
+/*
+ * 0: Present
+ * 1-11: Reserved
+ * 12-63: Context Ptr (12 - (haw-1))
+ * 64-127: Reserved
+ */
+struct root_entry {
+	u64	val;
+	u64	rsvd1;
+};
+#define ROOT_ENTRY_NR (PAGE_SIZE_4K/sizeof(struct root_entry))
+#define root_present(root)	((root).val & 1)
+#define set_root_present(root) do {(root).val |= 1;} while(0)
+
+struct context_entry;
+static inline struct context_entry *
+get_context_addr_from_root(struct root_entry root)
+{
+	return (struct context_entry *) (root_present(root)?phys_to_virt( 	\
+		(root).val & PAGE_MASK_4K):					\
+		NULL);								\
+}
+
+#define set_root_value(root, value) \
+	do {(root).val |= ((value) & PAGE_MASK_4K);} while(0)
+
+/*
+ * low 64 bits:
+ * 0: present
+ * 1: fault processing disable
+ * 2-3: translation type
+ * 12-63: address space root
+ * high 64 bits:
+ * 0-2: address width
+ * 3-6: aval
+ * 8-23: domain id
+ */
+struct context_entry {
+	u64 lo;
+	u64 hi;
+};
+#define context_present(c) ((c).lo & 1)
+#define context_fault_disable(c) (((c).lo >> 1) & 1)
+#define context_translation_type(c) (((c).lo >> 2) & 3)
+#define context_address_root(c) ((c).lo & PAGE_MASK_4K)
+#define context_address_width(c) ((c).hi &  7)
+#define context_domain_id(c) (((c).hi >> 8) & ((1 << 16) - 1))
+
+#define context_set_present(c) do {(c).lo |= 1;} while(0)
+#define context_set_fault_enable(c) \
+	do {(c).lo &= (((u64)-1) << 2) | 1;} while(0)
+#define context_set_translation_type(c, val) do { \
+		(c).lo &= (((u64)-1) << 4) | 3; \
+		(c).lo |= ((val) & 3) << 2; \
+	} while(0)
+#define CONTEXT_TT_MULTI_LEVEL 0
+#define context_set_address_root(c, val) \
+	do {(c).lo |= (val) & PAGE_MASK_4K;} while(0)
+#define context_set_address_width(c, val) do {(c).hi |= (val) & 7;} while(0)
+#define context_set_domain_id(c, val) \
+	do {(c).hi |= ((val) & ((1 << 16) - 1)) << 8;} while(0)
+#define context_clear_entry(c) do {(c).lo = 0; (c).hi = 0;} while(0)
+
+/*
+ * 0: readable
+ * 1: writable
+ * 2-6: reserved
+ * 7: super page
+ * 8-11: available
+ * 12-63: Host physcial address
+ */
+struct dma_pte {
+	u64 val;
+};
+#define dma_clear_pte(p)	do {(p).val = 0;} while(0)
+
+#define DMA_PTE_READ (1)
+#define DMA_PTE_WRITE (2)
+
+#define dma_set_pte_readable(p) do {(p).val |= DMA_PTE_READ;} while(0)
+#define dma_set_pte_writable(p) do {(p).val |= DMA_PTE_WRITE;} while(0)
+#define dma_set_pte_prot(p, prot) do {\
+	(p).val = ((p).val & ~3) | ((prot) & 3); } while(0)
+#define dma_pte_addr(p) ((p).val & PAGE_MASK_4K)
+#define dma_set_pte_addr(p, addr) do {\
+		(p).val |= ((addr) & PAGE_MASK_4K); } while(0)
+#define dma_pte_present(p) (((p).val & 3) != 0)
+
+struct intel_iommu;
+
+struct domain {
+	int	id;			/* domain id */
+	struct intel_iommu *iommu;	/* back pointer to owning iommu */
+
+	struct list_head devices; 	/* all devices' list */
+	struct iova_domain iovad;	/* iova's that belong to this domain */
+
+	struct dma_pte	*pgd;		/* virtual address */
+	spinlock_t	mapping_lock;	/* page table lock */
+	int		gaw;		/* max guest address width */
+	int		agaw;		/* adjusted guest address width, 0 is level 2 30-bit */
+
+#define DOMAIN_FLAG_MULTIPLE_DEVICES 1
+	int		flags;
+};
+
+/* PCI domain-device relationship */
+struct device_domain_info {
+	struct list_head link;	/* link to domain siblings */
+	struct list_head global; /* link to global list */
+	u8 bus;			/* PCI bus numer */
+	u8 devfn;		/* PCI devfn number */
+	struct pci_dev *dev; /* it's NULL for PCIE-to-PCI bridge */
+	struct domain *domain; /* pointer to domain */
+};
+
+extern int init_dmars(void);
+
+struct intel_iommu {
+	void __iomem	*reg; /* Pointer to hardware regs, virtual addr */
+	u64		cap;
+	u64		ecap;
+	unsigned long 	*domain_ids; /* bitmap of domains */
+	struct domain **domains; /* ptr to domains */
+	int		seg;
+	u32		gcmd; /* Holds TE, EAFL. Don't need SRTP, SFL, WBF */
+	spinlock_t	lock; /* protect context, domain ids */
+	spinlock_t	register_lock; /* protect register handling */
+	struct root_entry *root_entry; /* virtual address */
+
+	unsigned int irq;
+	unsigned char name[7];    /* Device Name */
+	struct msi_msg saved_msg;
+	struct sys_device sysdev;
+};
+
+#endif
Index: linux-2.6.22-rc3/include/linux/dmar.h
===================================================================
--- linux-2.6.22-rc3.orig/include/linux/dmar.h	2007-06-04 12:35:19.000000000 -0700
+++ linux-2.6.22-rc3/include/linux/dmar.h	2007-06-04 12:40:29.000000000 -0700
@@ -23,8 +23,15 @@
 
 #include <linux/acpi.h>
 #include <linux/types.h>
+#include <linux/msi.h>
 
 
+struct intel_iommu;
+
+/* Intel IOMMU detection and initialization functions */
+extern void detect_intel_iommu(void);
+extern int intel_iommu_init(void);
+
 extern int dmar_table_init(void);
 extern int early_dmar_detect(void);
 
@@ -49,4 +56,20 @@
 	int	devices_cnt;		/* target device count */
 };
 
+#define for_each_drhd_unit(drhd) \
+	list_for_each_entry(drhd, &dmar_drhd_units, list)
+#define for_each_rmrr_units(rmrr) \
+	list_for_each_entry(rmrr, &dmar_rmrr_units, list)
+#define begin_for_each_rmrr_device(rmrr, pdev) \
+	for_each_rmrr_units(rmrr) { \
+		int _i; \
+		for (_i = 0; _i < rmrr->devices_cnt; _i++) { \
+			pdev = rmrr->devices[_i]; \
+			/* some BIOS lists non-exist devices in DMAR table */\
+			if (!pdev) \
+				continue;
+#define end_for_each_rmrr_device(rmrr, pdev) \
+		} \
+	}
+
 #endif /* __DMAR_H__ */

-- 

^ permalink raw reply	[flat|nested] 23+ messages in thread

* [Intel-IOMMU 07/10] Intel iommu cmdline option - forcedac
  2007-06-04 21:02 [Intel-IOMMU 00/10] Intel IOMMU support anil.s.keshavamurthy
                   ` (5 preceding siblings ...)
  2007-06-04 21:02 ` [Intel-IOMMU 06/10] Intel IOMMU driver anil.s.keshavamurthy
@ 2007-06-04 21:02 ` anil.s.keshavamurthy
  2007-06-04 21:02 ` [Intel-IOMMU 08/10] DMAR fault handling support anil.s.keshavamurthy
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 23+ messages in thread
From: anil.s.keshavamurthy @ 2007-06-04 21:02 UTC (permalink / raw)
  To: linux-kernel
  Cc: akpm, ak, gregkh, muli, asit.k.mallick, suresh.b.siddha,
	anil.s.keshavamurthy, arjan, ashok.raj, shaohua.li, davem

[-- Attachment #1: dmar_forcedac.patch --]
[-- Type: text/plain, Size: 2392 bytes --]

	Introduce intel_iommu=forcedac commandline option.
This option is helpful to verify the pci device capability
of handling physical dma'able address greater than 4G.

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
---
 Documentation/kernel-parameters.txt |    7 +++++++
 drivers/pci/intel-iommu.c           |    6 +++++-
 2 files changed, 12 insertions(+), 1 deletion(-)

Index: linux-2.6.22-rc3/Documentation/kernel-parameters.txt
===================================================================
--- linux-2.6.22-rc3.orig/Documentation/kernel-parameters.txt	2007-06-04 12:40:29.000000000 -0700
+++ linux-2.6.22-rc3/Documentation/kernel-parameters.txt	2007-06-04 12:40:41.000000000 -0700
@@ -785,6 +785,13 @@
 			bypassed by not enabling DMAR with this option. In
 			this case, gfx device will use physical address for
 			DMA.
+		forcedac
+			With this option iommu will not optimize to look
+			for io virtual address below 32 bit forcing dual
+			address cycle on pci bus for cards supporting greater
+			than 32 bit addressing. The default is to look
+			for translation below 32 bit and if not available
+			then look in the higher range.
 
 	io7=		[HW] IO7 for Marvel based alpha systems
 			See comment before marvel_specify_io7 in
Index: linux-2.6.22-rc3/drivers/pci/intel-iommu.c
===================================================================
--- linux-2.6.22-rc3.orig/drivers/pci/intel-iommu.c	2007-06-04 12:40:29.000000000 -0700
+++ linux-2.6.22-rc3/drivers/pci/intel-iommu.c	2007-06-04 12:40:41.000000000 -0700
@@ -53,6 +53,7 @@
 
 static int dmar_disabled;
 static int __initdata dmar_map_gfx = 1;
+static int dmar_forcedac = 0;
 
 #define DUMMY_DEVICE_DOMAIN_INFO ((struct device_domain_info *)(-1))
 static DEFINE_SPINLOCK(device_domain_lock);
@@ -69,6 +70,9 @@
 		} else if (!strncmp(str, "igfx_off", 8)) {
 			dmar_map_gfx = 0;
 			printk(KERN_INFO"Intel-IOMMU: disable GFX device mapping\n");
+ 		} else if (!strncmp(str, "forcedac", 8)) {
+			printk (KERN_INFO"Intel-IOMMU: Enabling DAC for PCI supporting > 32Bit DMA\n");
+			dmar_forcedac = 1;
 		}
 
 		str += strcspn(str, ",");
@@ -1500,7 +1504,7 @@
 
 	start_addr = IOVA_START_ADDR;
 
-	if (pdev->dma_mask <= DMA_32BIT_MASK) {
+	if ((pdev->dma_mask <= DMA_32BIT_MASK) || (dmar_forcedac)) {
 		iova = iommu_alloc_iova(domain, addr, size, start_addr,
 			pdev->dma_mask);
 	} else  {

-- 

^ permalink raw reply	[flat|nested] 23+ messages in thread

* [Intel-IOMMU 08/10] DMAR fault handling support
  2007-06-04 21:02 [Intel-IOMMU 00/10] Intel IOMMU support anil.s.keshavamurthy
                   ` (6 preceding siblings ...)
  2007-06-04 21:02 ` [Intel-IOMMU 07/10] Intel iommu cmdline option - forcedac anil.s.keshavamurthy
@ 2007-06-04 21:02 ` anil.s.keshavamurthy
  2007-06-04 21:02 ` [Intel-IOMMU 09/10] Iommu Gfx workaround anil.s.keshavamurthy
  2007-06-04 21:02 ` [Intel-IOMMU 10/10] Iommu floppy workaround anil.s.keshavamurthy
  9 siblings, 0 replies; 23+ messages in thread
From: anil.s.keshavamurthy @ 2007-06-04 21:02 UTC (permalink / raw)
  To: linux-kernel
  Cc: akpm, ak, gregkh, muli, asit.k.mallick, suresh.b.siddha,
	anil.s.keshavamurthy, arjan, ashok.raj, shaohua.li, davem

[-- Attachment #1: dmar_fault_handling_support.patch --]
[-- Type: text/plain, Size: 10194 bytes --]

	MSI interrupt handler registrations and fault handling support
for Intel-IOMMU hadrware.

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
---
 Documentation/Intel-IOMMU.txt |   17 +++
 arch/x86_64/kernel/io_apic.c  |   59 ++++++++++++
 drivers/pci/intel-iommu.c     |  194 ++++++++++++++++++++++++++++++++++++++++++
 include/linux/dmar.h          |   12 ++
 4 files changed, 281 insertions(+), 1 deletion(-)

Index: linux-2.6.22-rc3/Documentation/Intel-IOMMU.txt
===================================================================
--- linux-2.6.22-rc3.orig/Documentation/Intel-IOMMU.txt	2007-06-04 12:40:29.000000000 -0700
+++ linux-2.6.22-rc3/Documentation/Intel-IOMMU.txt	2007-06-04 12:40:58.000000000 -0700
@@ -63,6 +63,15 @@
 The same is true for peer to peer transactions. Hence we reserve the
 address from PCI MMIO ranges so they are not allocated for IOVA addresses.
 
+
+Fault reporting
+---------------
+When errors are reported, the DMA engine signals via an interrupt. The fault
+reason and device that caused it with fault reason is printed on console.
+
+See below for sample.
+
+
 Boot Message Sample
 -------------------
 
@@ -85,6 +94,14 @@
 
 PCI-DMA: Using DMAR IOMMU
 
+Fault reporting
+---------------
+
+DMAR:[DMA Write] Request device [00:02.0] fault addr 6df084000
+DMAR:[fault reason 05] PTE Write access is not set
+DMAR:[DMA Write] Request device [00:02.0] fault addr 6df084000
+DMAR:[fault reason 05] PTE Write access is not set
+
 TBD
 ----
 
Index: linux-2.6.22-rc3/arch/x86_64/kernel/io_apic.c
===================================================================
--- linux-2.6.22-rc3.orig/arch/x86_64/kernel/io_apic.c	2007-06-04 12:19:13.000000000 -0700
+++ linux-2.6.22-rc3/arch/x86_64/kernel/io_apic.c	2007-06-04 12:40:58.000000000 -0700
@@ -31,6 +31,7 @@
 #include <linux/sysdev.h>
 #include <linux/msi.h>
 #include <linux/htirq.h>
+#include <linux/dmar.h>
 #ifdef CONFIG_ACPI
 #include <acpi/acpi_bus.h>
 #endif
@@ -1972,8 +1973,64 @@
 	destroy_irq(irq);
 }
 
-#endif /* CONFIG_PCI_MSI */
+#ifdef CONFIG_DMAR
+#ifdef CONFIG_SMP
+static void dmar_msi_set_affinity(unsigned int irq, cpumask_t mask)
+{
+	struct irq_cfg *cfg = irq_cfg + irq;
+	struct msi_msg msg;
+	unsigned int dest;
+	cpumask_t tmp;
+
+	cpus_and(tmp, mask, cpu_online_map);
+	if (cpus_empty(tmp))
+		return;
+
+	if (assign_irq_vector(irq, mask))
+		return;
+
+	cpus_and(tmp, cfg->domain, mask);
+	dest = cpu_mask_to_apicid(tmp);
+
+	dmar_msi_read(irq, &msg);
+
+	msg.data &= ~MSI_DATA_VECTOR_MASK;
+	msg.data |= MSI_DATA_VECTOR(cfg->vector);
+	msg.address_lo &= ~MSI_ADDR_DEST_ID_MASK;
+	msg.address_lo |= MSI_ADDR_DEST_ID(dest);
+
+	dmar_msi_write(irq, &msg);
+	irq_desc[irq].affinity = mask;
+}
+#endif /* CONFIG_SMP */
+
+struct irq_chip dmar_msi_type = {
+	.name = "DMAR_MSI",
+	.unmask = dmar_msi_unmask,
+	.mask = dmar_msi_mask,
+	.ack = ack_apic_edge,
+#ifdef CONFIG_SMP
+	.set_affinity = dmar_msi_set_affinity,
+#endif
+	.retrigger = ioapic_retrigger_irq,
+};
+
+int arch_setup_dmar_msi(unsigned int irq)
+{
+	int ret;
+	struct msi_msg msg;
+
+	ret = msi_compose_msg(NULL, irq, &msg);
+	if (ret < 0)
+		return ret;
+	dmar_msi_write(irq, &msg);
+	set_irq_chip_and_handler_name(irq, &dmar_msi_type, handle_edge_irq,
+		"edge");
+	return 0;
+}
+#endif
 
+#endif /* CONFIG_PCI_MSI */
 /*
  * Hypertransport interrupt support
  */
Index: linux-2.6.22-rc3/drivers/pci/intel-iommu.c
===================================================================
--- linux-2.6.22-rc3.orig/drivers/pci/intel-iommu.c	2007-06-04 12:40:41.000000000 -0700
+++ linux-2.6.22-rc3/drivers/pci/intel-iommu.c	2007-06-04 12:40:58.000000000 -0700
@@ -684,6 +684,196 @@
 	return 0;
 }
 
+/* iommu interrupt handling. Most stuff are MSI-like. */
+
+static char *fault_reason_strings[] =
+{
+	"Software",
+	"Present bit in root entry is clear",
+	"Present bit in context entry is clear",
+	"Invalid context entry",
+	"Access beyond MGAW",
+	"PTE Write access is not set",
+	"PTE Read access is not set",
+	"Next page table ptr is invalid",
+	"Root table address invalid",
+	"Context table ptr is invalid",
+	"non-zero reserved fields in RTP",
+	"non-zero reserved fields in CTP",
+	"non-zero reserved fields in PTE",
+	"Unknown"
+};
+#define MAX_FAULT_REASON_IDX 	ARRAY_SIZE(fault_reason_strings)
+
+char *dmar_get_fault_reason(u8 fault_reason)
+{
+	if (fault_reason > MAX_FAULT_REASON_IDX)
+		return fault_reason_strings[MAX_FAULT_REASON_IDX];
+	else
+		return fault_reason_strings[fault_reason];
+}
+
+void dmar_msi_unmask(unsigned int irq)
+{
+	struct intel_iommu *iommu = get_irq_data(irq);
+	unsigned long flag;
+
+	/* unmask it */
+	spin_lock_irqsave(&iommu->register_lock, flag);
+	dmar_writel(iommu->reg, DMAR_FECTL_REG, 0);
+	/* Read a reg to force flush the post write */
+	dmar_readl(iommu->reg, DMAR_FECTL_REG);
+	spin_unlock_irqrestore(&iommu->register_lock, flag);
+}
+
+void dmar_msi_mask(unsigned int irq)
+{
+	unsigned long flag;
+	struct intel_iommu *iommu = get_irq_data(irq);
+
+	/* mask it */
+	spin_lock_irqsave(&iommu->register_lock, flag);
+	dmar_writel(iommu->reg, DMAR_FECTL_REG, DMA_FECTL_IM);
+	/* Read a reg to force flush the post write */
+	dmar_readl(iommu->reg, DMAR_FECTL_REG);
+	spin_unlock_irqrestore(&iommu->register_lock, flag);
+}
+
+void dmar_msi_write(int irq, struct msi_msg *msg)
+{
+	struct intel_iommu *iommu = get_irq_data(irq);
+	unsigned long flag;
+
+	spin_lock_irqsave(&iommu->register_lock, flag);
+	dmar_writel(iommu->reg, DMAR_FEDATA_REG, msg->data);
+	dmar_writel(iommu->reg, DMAR_FEADDR_REG, msg->address_lo);
+	dmar_writel(iommu->reg, DMAR_FEUADDR_REG, msg->address_hi);
+	spin_unlock_irqrestore(&iommu->register_lock, flag);
+}
+
+void dmar_msi_read(int irq, struct msi_msg *msg)
+{
+	struct intel_iommu *iommu = get_irq_data(irq);
+	unsigned long flag;
+
+	spin_lock_irqsave(&iommu->register_lock, flag);
+	msg->data = dmar_readl(iommu->reg, DMAR_FEDATA_REG);
+	msg->address_lo = dmar_readl(iommu->reg, DMAR_FEADDR_REG);
+	msg->address_hi = dmar_readl(iommu->reg, DMAR_FEUADDR_REG);
+	spin_unlock_irqrestore(&iommu->register_lock, flag);
+}
+
+static int iommu_page_fault_do_one(struct intel_iommu *iommu, int type,
+		u8 fault_reason, u16 source_id, u64 addr)
+{
+	char *reason;
+
+	reason = dmar_get_fault_reason(fault_reason);
+
+	printk(KERN_ERR
+		"DMAR:[%s] Request device [%02x:%02x.%d] "
+		"fault addr %llx \n"
+		"DMAR:[fault reason %02d] %s\n",
+		(type ? "DMA Read" : "DMA Write"),
+		(source_id >> 8), PCI_SLOT(source_id & 0xFF),
+		PCI_FUNC(source_id & 0xFF), addr, fault_reason, reason);
+	return 0;
+}
+
+#define PRIMARY_FAULT_REG_LEN (16)
+static irqreturn_t iommu_page_fault(int irq, void *dev_id)
+{
+	struct intel_iommu *iommu = dev_id;
+	int reg, fault_index;
+	u32 fault_status;
+	unsigned long flag;
+
+	spin_lock_irqsave(&iommu->register_lock, flag);
+	fault_status = dmar_readl(iommu->reg, DMAR_FSTS_REG);
+
+	/* TBD: ignore advanced fault log currently */
+	if (!(fault_status & DMA_FSTS_PPF))
+		goto clear_overflow;
+
+	fault_index = dma_fsts_fault_record_index(fault_status);
+	reg = cap_fault_reg_offset(iommu->cap);
+	while (1) {
+		u8 fault_reason;
+		u16 source_id;
+		u64 guest_addr;
+		int type;
+		u32 data;
+
+		/* highest 32 bits */
+		data = dmar_readl(iommu->reg, reg +
+				fault_index * PRIMARY_FAULT_REG_LEN + 12);
+		if (!(data & DMA_FRCD_F))
+			break;
+
+		fault_reason = dma_frcd_fault_reason(data);
+		type = dma_frcd_type(data);
+
+		data = dmar_readl(iommu->reg, reg +
+				fault_index * PRIMARY_FAULT_REG_LEN + 8);
+		source_id = dma_frcd_source_id(data);
+
+		guest_addr = dmar_readq(iommu->reg, reg +
+				fault_index * PRIMARY_FAULT_REG_LEN);
+		guest_addr = dma_frcd_page_addr(guest_addr);
+		/* clear the fault */
+		dmar_writel(iommu->reg, reg +
+			fault_index * PRIMARY_FAULT_REG_LEN + 12, DMA_FRCD_F);
+
+		spin_unlock_irqrestore(&iommu->register_lock, flag);
+
+		iommu_page_fault_do_one(iommu, type, fault_reason,
+				source_id, guest_addr);
+
+		fault_index++;
+		if (fault_index > cap_num_fault_regs(iommu->cap))
+			fault_index = 0;
+		spin_lock_irqsave(&iommu->register_lock, flag);
+	}
+clear_overflow:
+	/* clear primary fault overflow */
+	fault_status = dmar_readl(iommu->reg, DMAR_FSTS_REG);
+	if (fault_status & DMA_FSTS_PFO)
+		dmar_writel(iommu->reg, DMAR_FSTS_REG, DMA_FSTS_PFO);
+
+	spin_unlock_irqrestore(&iommu->register_lock, flag);
+	return IRQ_HANDLED;
+}
+
+int dmar_set_interrupt(struct intel_iommu *iommu)
+{
+	int irq, ret;
+
+	irq = create_irq();
+	if (!irq) {
+		printk(KERN_ERR "IOMMU: no free vectors\n");
+		return -EINVAL;
+	}
+
+	set_irq_data(irq, iommu);
+	iommu->irq = irq;
+
+	ret = arch_setup_dmar_msi(irq);
+	if (ret) {
+		set_irq_data(irq, NULL);
+		iommu->irq = 0;
+		destroy_irq(irq);
+		return 0;
+	}
+
+	/* Force fault register is cleared */
+	iommu_page_fault(irq, iommu);
+
+	ret = request_irq(irq, iommu_page_fault, 0, iommu->name, iommu);
+	if (ret)
+		printk(KERN_ERR "IOMMU: can't request irq\n");
+	return ret;
+}
+
 static int iommu_init_domains(struct intel_iommu *iommu)
 {
 	unsigned long ndomains;
@@ -1436,6 +1626,10 @@
 
 		iommu_flush_write_buffer(iommu);
 
+		ret = dmar_set_interrupt(iommu);
+		if (ret)
+			goto error;
+
 		iommu_set_root_entry(iommu);
 
 		iommu_flush_context_global(iommu, 0);
Index: linux-2.6.22-rc3/include/linux/dmar.h
===================================================================
--- linux-2.6.22-rc3.orig/include/linux/dmar.h	2007-06-04 12:40:29.000000000 -0700
+++ linux-2.6.22-rc3/include/linux/dmar.h	2007-06-04 12:40:58.000000000 -0700
@@ -28,6 +28,18 @@
 
 struct intel_iommu;
 
+extern char *dmar_get_fault_reason(u8 fault_reason);
+
+/* Can't use the common MSI interrupt functions
+ * since DMAR is not a pci device
+ */
+extern void dmar_msi_unmask(unsigned int irq);
+extern void dmar_msi_mask(unsigned int irq);
+extern void dmar_msi_read(int irq, struct msi_msg *msg);
+extern void dmar_msi_write(int irq, struct msi_msg *msg);
+extern int dmar_set_interrupt(struct intel_iommu *iommu);
+extern int arch_setup_dmar_msi(unsigned int irq);
+
 /* Intel IOMMU detection and initialization functions */
 extern void detect_intel_iommu(void);
 extern int intel_iommu_init(void);

-- 

^ permalink raw reply	[flat|nested] 23+ messages in thread

* [Intel-IOMMU 09/10] Iommu Gfx workaround
  2007-06-04 21:02 [Intel-IOMMU 00/10] Intel IOMMU support anil.s.keshavamurthy
                   ` (7 preceding siblings ...)
  2007-06-04 21:02 ` [Intel-IOMMU 08/10] DMAR fault handling support anil.s.keshavamurthy
@ 2007-06-04 21:02 ` anil.s.keshavamurthy
  2007-06-04 21:02 ` [Intel-IOMMU 10/10] Iommu floppy workaround anil.s.keshavamurthy
  9 siblings, 0 replies; 23+ messages in thread
From: anil.s.keshavamurthy @ 2007-06-04 21:02 UTC (permalink / raw)
  To: linux-kernel
  Cc: akpm, ak, gregkh, muli, asit.k.mallick, suresh.b.siddha,
	anil.s.keshavamurthy, arjan, ashok.raj, shaohua.li, davem

[-- Attachment #1: gfx_wrkaround.patch --]
[-- Type: text/plain, Size: 4348 bytes --]

When we fix all the opensource gfx drivers to use the DMA api's,
at that time we can yank this config options out.

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
---
 Documentation/Intel-IOMMU.txt |    5 +++++
 arch/x86_64/Kconfig           |   11 +++++++++++
 arch/x86_64/kernel/e820.c     |   19 +++++++++++++++++++
 drivers/pci/intel-iommu.c     |   32 ++++++++++++++++++++++++++++++++
 4 files changed, 67 insertions(+)

Index: linux-2.6.22-rc3/Documentation/Intel-IOMMU.txt
===================================================================
--- linux-2.6.22-rc3.orig/Documentation/Intel-IOMMU.txt	2007-06-04 12:40:58.000000000 -0700
+++ linux-2.6.22-rc3/Documentation/Intel-IOMMU.txt	2007-06-04 12:41:11.000000000 -0700
@@ -57,6 +57,11 @@
 If you encounter issues with graphics devices, you can try adding
 option intel_iommu=igfx_off to turn off the integrated graphics engine.
 
+If it happens to be a PCI device included in the INCLUDE_ALL Engine,
+then try enabling CONFIG_DMAR_GFX_WA to setup a 1-1 map. We hear
+graphics drivers may be in process of using DMA api's in the near
+future and at that time this option can be yanked out.
+
 Some exceptions to IOVA
 -----------------------
 Interrupt ranges are not address translated, (0xfee00000 - 0xfeefffff).
Index: linux-2.6.22-rc3/arch/x86_64/Kconfig
===================================================================
--- linux-2.6.22-rc3.orig/arch/x86_64/Kconfig	2007-06-04 12:35:19.000000000 -0700
+++ linux-2.6.22-rc3/arch/x86_64/Kconfig	2007-06-04 12:41:11.000000000 -0700
@@ -727,6 +727,17 @@
 	  and includes pci device scope covered by these DMA
 	  remapping device.
 
+config DMAR_GFX_WA
+	bool "Support for Graphics workaround"
+	depends on DMAR
+	default y
+	help
+	 Current Graphics drivers tend to use physical address
+	 for DMA and avoid using DMA api's. Setting this config
+	 option permits the IOMMU driver to set a unity map for
+	 all the OS visible memory. Hence the driver can continue
+	 to use physical addresses for DMA.
+
 source "drivers/pci/pcie/Kconfig"
 
 source "drivers/pci/Kconfig"
Index: linux-2.6.22-rc3/arch/x86_64/kernel/e820.c
===================================================================
--- linux-2.6.22-rc3.orig/arch/x86_64/kernel/e820.c	2007-06-04 12:19:13.000000000 -0700
+++ linux-2.6.22-rc3/arch/x86_64/kernel/e820.c	2007-06-04 12:41:11.000000000 -0700
@@ -717,3 +717,22 @@
 	printk(KERN_INFO "Allocating PCI resources starting at %lx (gap: %lx:%lx)\n",
 		pci_mem_start, gapstart, gapsize);
 }
+
+int __init arch_get_ram_range(int slot, u64 *addr, u64 *size)
+{
+	int i;
+
+	if (slot < 0 || slot >= e820.nr_map)
+		return -1;
+	for (i = slot; i < e820.nr_map; i++) {
+		if(e820.map[i].type != E820_RAM)
+			continue;
+		break;
+	}
+	if (i == e820.nr_map || e820.map[i].addr > (max_pfn << PAGE_SHIFT))
+		return -1;
+	*addr = e820.map[i].addr;
+	*size = min_t(u64, e820.map[i].size + e820.map[i].addr,
+		max_pfn << PAGE_SHIFT) - *addr;
+	return i + 1;
+}
Index: linux-2.6.22-rc3/drivers/pci/intel-iommu.c
===================================================================
--- linux-2.6.22-rc3.orig/drivers/pci/intel-iommu.c	2007-06-04 12:40:58.000000000 -0700
+++ linux-2.6.22-rc3/drivers/pci/intel-iommu.c	2007-06-04 12:41:11.000000000 -0700
@@ -1556,6 +1556,34 @@
 		rmrr->end_address + 1);
 }
 
+#ifdef CONFIG_DMAR_GFX_WA
+extern int arch_get_ram_range(int slot, u64 *addr, u64 *size);
+static void __init iommu_prepare_gfx_mapping(void)
+{
+	struct pci_dev *pdev = NULL;
+	u64 base, size;
+	int slot;
+	int ret;
+
+	for_each_pci_dev(pdev) {
+		if (pdev->sysdata == DUMMY_DEVICE_DOMAIN_INFO ||
+				!IS_GFX_DEVICE(pdev))
+			continue;
+		printk(KERN_INFO "IOMMU: gfx device %s 1-1 mapping\n",
+			pci_name(pdev));
+		slot = 0;
+		while ((slot = arch_get_ram_range(slot, &base, &size)) >= 0) {
+			ret = iommu_prepare_identity_map(pdev, base, base + size);
+			if (ret)
+				goto error;
+		}
+		continue;
+error:
+		printk(KERN_ERR "IOMMU: mapping reserved region failed\n");
+	}
+}
+#endif
+
 int __init init_dmars(void)
 {
 	struct dmar_drhd_unit *drhd;
@@ -1611,6 +1639,10 @@
 			printk(KERN_ERR "IOMMU: mapping reserved region failed\n");
 	end_for_each_rmrr_device(rmrr, pdev)
 
+#ifdef CONFIG_DMAR_GFX_WA
+	iommu_prepare_gfx_mapping();
+#endif
+
 	/*
 	 * for each drhd
 	 *   enable fault log

-- 

^ permalink raw reply	[flat|nested] 23+ messages in thread

* [Intel-IOMMU 10/10] Iommu floppy workaround
  2007-06-04 21:02 [Intel-IOMMU 00/10] Intel IOMMU support anil.s.keshavamurthy
                   ` (8 preceding siblings ...)
  2007-06-04 21:02 ` [Intel-IOMMU 09/10] Iommu Gfx workaround anil.s.keshavamurthy
@ 2007-06-04 21:02 ` anil.s.keshavamurthy
  9 siblings, 0 replies; 23+ messages in thread
From: anil.s.keshavamurthy @ 2007-06-04 21:02 UTC (permalink / raw)
  To: linux-kernel
  Cc: akpm, ak, gregkh, muli, asit.k.mallick, suresh.b.siddha,
	anil.s.keshavamurthy, arjan, ashok.raj, shaohua.li, davem

[-- Attachment #1: floppy_disk_wrkaround.patch --]
[-- Type: text/plain, Size: 2143 bytes --]

	This config option (DMAR_FLPY_WA) sets up 1:1 mapping for the
floppy device so that the floppy device which does not use
DMA api's will continue to work.

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
---
 arch/x86_64/Kconfig       |   10 ++++++++++
 drivers/pci/intel-iommu.c |   23 +++++++++++++++++++++++
 2 files changed, 33 insertions(+)

Index: linux-2.6.22-rc3/arch/x86_64/Kconfig
===================================================================
--- linux-2.6.22-rc3.orig/arch/x86_64/Kconfig	2007-06-04 12:41:11.000000000 -0700
+++ linux-2.6.22-rc3/arch/x86_64/Kconfig	2007-06-04 12:41:23.000000000 -0700
@@ -738,6 +738,16 @@
 	 all the OS visible memory. Hence the driver can continue
 	 to use physical addresses for DMA.
 
+config DMAR_FLPY_WA
+	bool "Support for Floppy disk workaround"
+	depends on DMAR
+	default y
+	help
+	 Floppy disk drivers are know to by pass dma api calls
+	 their by failing to work when IOMMU is enabled. This
+	 work around will setup a 1 to 1 mappings for the first
+	 16M to make floppy(isa device) work.
+
 source "drivers/pci/pcie/Kconfig"
 
 source "drivers/pci/Kconfig"
Index: linux-2.6.22-rc3/drivers/pci/intel-iommu.c
===================================================================
--- linux-2.6.22-rc3.orig/drivers/pci/intel-iommu.c	2007-06-04 12:41:11.000000000 -0700
+++ linux-2.6.22-rc3/drivers/pci/intel-iommu.c	2007-06-04 12:41:23.000000000 -0700
@@ -1584,6 +1584,25 @@
 }
 #endif
 
+#ifdef CONFIG_DMAR_FLPY_WA
+static void iommu_prepare_isa(void)
+{
+	struct pci_dev *pdev = NULL;
+	int ret;
+
+	pdev = pci_get_class (PCI_CLASS_BRIDGE_ISA << 8, NULL);
+	if (!pdev)
+		return;
+
+	printk (KERN_INFO "IOMMU: Prepare 0-16M unity mapping for LPC\n");
+	ret = iommu_prepare_identity_map(pdev, 0, 16*1024*1024);
+
+	if (ret)
+		printk ("IOMMU: Failed to create 0-64M identity map, Floppy might not work\n");
+
+}
+#endif
+
 int __init init_dmars(void)
 {
 	struct dmar_drhd_unit *drhd;
@@ -1643,6 +1662,10 @@
 	iommu_prepare_gfx_mapping();
 #endif
 
+#ifdef CONFIG_DMAR_FLPY_WA
+	iommu_prepare_isa();
+#endif
+
 	/*
 	 * for each drhd
 	 *   enable fault log

-- 

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Intel-IOMMU 01/10] DMAR detection and parsing logic
  2007-06-04 21:02 ` [Intel-IOMMU 01/10] DMAR detection and parsing logic anil.s.keshavamurthy
@ 2007-06-04 22:54   ` Jeff Garzik
  2007-06-04 22:58     ` Keshavamurthy, Anil S
  2007-06-04 23:03   ` Jeff Garzik
  1 sibling, 1 reply; 23+ messages in thread
From: Jeff Garzik @ 2007-06-04 22:54 UTC (permalink / raw)
  To: anil.s.keshavamurthy
  Cc: linux-kernel, akpm, ak, gregkh, muli, asit.k.mallick,
	suresh.b.siddha, arjan, ashok.raj, shaohua.li, davem

On Mon, Jun 04, 2007 at 02:02:43PM -0700, anil.s.keshavamurthy@intel.com wrote:
> --- linux-2.6.22-rc3.orig/drivers/pci/Makefile	2007-06-04 12:28:13.000000000 -0700
> +++ linux-2.6.22-rc3/drivers/pci/Makefile	2007-06-04 12:33:15.000000000 -0700
> @@ -20,6 +20,9 @@
>  # Build the Hypertransport interrupt support
>  obj-$(CONFIG_HT_IRQ) += htirq.o
>  
> +# Build Intel IOMMU support
> +obj-$(CONFIG_DMAR) += dmar.o

It's not Intel IOMMU support though, is it?

It's x86 PCI IOMMU support (as opposed to x86 GART IOMMU).

I just want to avoid Intel branding on something that will not be
specific to Intel-manufacured products in the long term.



> Index: linux-2.6.22-rc3/drivers/pci/dmar.c
> ===================================================================
> --- /dev/null	1970-01-01 00:00:00.000000000 +0000
> +++ linux-2.6.22-rc3/drivers/pci/dmar.c	2007-06-04 12:33:15.000000000 -0700
> @@ -0,0 +1,318 @@
> +/*
> + * Copyright (c) 2006, Intel Corporation.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
> + * Place - Suite 330, Boston, MA 02111-1307 USA.
> + *
> + * 	Copyright (C) Ashok Raj <ashok.raj@intel.com>
> + *	Copyright (C) Shaohua Li <shaohua.li@intel.com>
> + *
> + * 	This file implements early detection/parsing of DMA Remapping Devices
> + * reported to OS through BIOS via DMA remapping reporting (DMAR) ACPI
> + * tables.
> + */
> +
> +#include <linux/pci.h>
> +#include <linux/dmar.h>
> +
> +#undef PREFIX
> +#define PREFIX "DMAR:"
> +
> +/* No locks are needed as DMA remapping hardware unit
> + * list is constructed at boot time and hotplug of
> + * these units are not supported by the architecture.
> + */
> +LIST_HEAD(dmar_drhd_units);
> +LIST_HEAD(dmar_rmrr_units);
> +
> +static struct acpi_table_header * __initdata dmar_tbl;
> +
> +static void __init dmar_register_drhd_unit(struct dmar_drhd_unit *drhd)
> +{
> +	/*
> +	 * add INCLUDE_ALL at the tail, so scan the list will find it at
> +	 * the very end.
> +	 */
> +	if (drhd->include_all)
> +		list_add_tail(&drhd->list, &dmar_drhd_units);
> +	else
> +		list_add(&drhd->list, &dmar_drhd_units);
> +}
> +
> +static void __init dmar_register_rmrr_unit(struct dmar_rmrr_unit *rmrr)
> +{
> +	list_add(&rmrr->list, &dmar_rmrr_units);
> +}
> +
> +static int __init dmar_parse_one_dev_scope(struct acpi_dmar_device_scope *scope,
> +					   struct pci_dev **dev, u16 segment)
> +{
> +	struct pci_bus *bus;
> +	struct pci_dev *pdev = NULL;
> +	struct acpi_dmar_pci_path *path;
> +	int count;
> +
> +	bus = pci_find_bus(segment, scope->bus);
> +	path = (struct acpi_dmar_pci_path *)(scope + 1);
> +	count = (scope->length - sizeof(struct acpi_dmar_device_scope))
> +		/sizeof(struct acpi_dmar_pci_path);

add a space.

But overall this casting and typed-struct pointer addition is a bit
fragile.  No comment (I haven't read enough yet) whether it is needed or
not.




> +
> +	while (count) {
> +		if (pdev)
> +			pci_dev_put(pdev);
> +		/*
> +		 * Some BIOSes list non-exist devices in DMAR table, just
> +		 * ignore it
> +		 */
> +		if (!bus) {
> +			printk(KERN_WARNING
> +			PREFIX "Device scope bus [%d] not found\n",
> +			scope->bus);
> +			break;
> +		}
> +		pdev = pci_get_slot(bus, PCI_DEVFN(path->dev, path->fn));
> +		if (!pdev) {
> +			printk(KERN_WARNING PREFIX
> +			"Device scope device [%04x:%02x:%02x.%02x] not found\n",
> +				segment, bus->number, path->dev, path->fn);
> +			break;
> +		}
> +		path ++;
> +		count --;
> +		bus = pdev->subordinate;
> +	}
> +	if (!pdev) {
> +		printk(KERN_WARNING PREFIX
> +		"Device scope device [%04x:%02x:%02x.%02x] not found\n",
> +		segment, scope->bus, path->dev, path->fn);
> +		*dev = NULL;
> +		return 0;
> +	}
> +	if ((scope->entry_type == ACPI_DMAR_SCOPE_TYPE_ENDPOINT && pdev->subordinate)
> +	   || (scope->entry_type == ACPI_DMAR_SCOPE_TYPE_BRIDGE && !pdev->subordinate)) {
> +		pci_dev_put(pdev);
> +		printk(KERN_WARNING PREFIX "Device scope type does not match for %s\n", pci_name(pdev));
> +		return -EINVAL;
> +	}
> +	*dev = pdev;
> +	return 0;
> +}
> +
> +static int __init dmar_parse_dev_scope(void *start, void *end, int *cnt,
> +				       struct pci_dev ***devices, u16 segment)
> +{
> +	struct acpi_dmar_device_scope *scope;
> +	void * tmp = start;
> +	int index;
> +	int ret;
> +
> +	*cnt = 0;
> +	while (start < end) {
> +		scope = start;
> +		if (scope->entry_type == ACPI_DMAR_SCOPE_TYPE_ENDPOINT ||
> +		    scope->entry_type == ACPI_DMAR_SCOPE_TYPE_BRIDGE)
> +			(*cnt)++;
> +		else
> +			printk(KERN_WARNING PREFIX "Unsupported device scope\n");
> +		start += scope->length;
> +	}
> +	if (*cnt == 0)
> +		return 0;
> +
> +	*devices = kcalloc(*cnt, sizeof(struct pci_dev *), GFP_KERNEL);
> +	if (!*devices)
> +		return -ENOMEM;
> +
> +	start = tmp;
> +	index = 0;
> +	while (start < end) {
> +		scope = start;
> +		if (scope->entry_type == ACPI_DMAR_SCOPE_TYPE_ENDPOINT ||
> +		    scope->entry_type == ACPI_DMAR_SCOPE_TYPE_BRIDGE) {
> +			ret = dmar_parse_one_dev_scope(scope,
> +				&(*devices)[index], segment);
> +			if (ret) {
> +				kfree(*devices);
> +				return ret;
> +			}
> +			index ++;
> +		}
> +		start += scope->length;
> +	}
> +
> +	return 0;
> +}
> +
> +/**
> + * dmar_parse_one_drhd - parses exactly one DMA remapping hardware definition
> + * structure which uniquely represent one DMA remapping hardware unit
> + * present in the platform
> + */
> +static int __init
> +dmar_parse_one_drhd(struct acpi_dmar_header *header)
> +{
> +	struct acpi_dmar_hardware_unit * drhd = (struct acpi_dmar_hardware_unit *)header;
> +	struct dmar_drhd_unit *dmaru;
> +	int ret = 0;
> +	static int include_all;
> +
> +	dmaru = kzalloc(sizeof(*dmaru), GFP_KERNEL);
> +	if (!dmaru)
> +		return -ENOMEM;
> +
> +	dmaru->reg_base_addr = drhd->address;
> +	dmaru->include_all = drhd->flags & 0x1; /* BIT0: INCLUDE_ALL */
> +
> +	if (!dmaru->include_all)
> +		ret = dmar_parse_dev_scope((void *)(drhd + 1),
> +				((void *)drhd) + header->length,
> +				&dmaru->devices_cnt, &dmaru->devices,
> +				drhd->segment);
> +	else {
> +		/* Only allow one INCLUDE_ALL */
> +		if (include_all) {
> +			printk(KERN_WARNING PREFIX "Only one INCLUDE_ALL "
> +				"device scope is allowed\n");
> +			ret = -EINVAL;
> +		}
> +		include_all = 1;
> +	}
> +
> +	if (ret || (dmaru->devices_cnt == 0 && !dmaru->include_all))
> +		kfree(dmaru);
> +	else
> +		dmar_register_drhd_unit(dmaru);
> +	return ret;
> +}
> +
> +static int __init
> +dmar_parse_one_rmrr(struct acpi_dmar_header *header)
> +{
> +	struct acpi_dmar_reserved_memory *rmrr = (struct acpi_dmar_reserved_memory *)header;
> +	struct dmar_rmrr_unit *rmrru;
> +	int ret = 0;
> +
> +	rmrru = kzalloc(sizeof(*rmrru), GFP_KERNEL);
> +	if (!rmrru)
> +		return -ENOMEM;
> +
> +	rmrru->base_address = rmrr->base_address;
> +	rmrru->end_address = rmrr->end_address;
> +	ret = dmar_parse_dev_scope((void *)(rmrr + 1),
> +		((void*)rmrr) + header->length,
> +		&rmrru->devices_cnt, &rmrru->devices, rmrr->segment);
> +
> +	if (ret || (rmrru->devices_cnt == 0))
> +		kfree(rmrru);
> +	else
> +		dmar_register_rmrr_unit(rmrru);
> +	return ret;
> +}
> +
> +static void __init
> +dmar_table_print_dmar_entry(struct acpi_dmar_header *header)
> +{
> +	struct acpi_dmar_hardware_unit *drhd;
> +	struct acpi_dmar_reserved_memory *rmrr;
> +
> +	switch (header->type) {
> +	case ACPI_DMAR_TYPE_HARDWARE_UNIT:
> +		drhd = (struct acpi_dmar_hardware_unit *)header;
> +		printk (KERN_INFO PREFIX
> +			"DRHD (flags: 0x%08x)base: 0x%016Lx\n",
> +			drhd->flags, drhd->address);
> +		break;
> +	case ACPI_DMAR_TYPE_RESERVED_MEMORY:
> +		rmrr = (struct acpi_dmar_reserved_memory *)header;
> +
> +		printk (KERN_INFO PREFIX
> +			"RMRR base: 0x%016Lx end: 0x%016Lx\n",
> +			rmrr->base_address, rmrr->end_address);
> +		break;
> +	}
> +}
> +
> +/**
> + * parse_dmar_table - parses the DMA reporting table
> + */
> +static int __init
> +parse_dmar_table(void)
> +{
> +	struct acpi_table_dmar *dmar;
> +	struct acpi_dmar_header *entry_header;
> +	int ret = 0;
> +
> +	dmar = (struct acpi_table_dmar *)dmar_tbl;
> +
> +	if (!dmar->width) {
> +		printk (KERN_WARNING PREFIX "Zero: Invalid DMAR haw\n");
> +		return -EINVAL;
> +	}
> +
> +	printk (KERN_INFO PREFIX "Host address width %d\n",
> +		dmar->width + 1);
> +
> +	entry_header = (struct acpi_dmar_header *)(dmar + 1);
> +	while (((unsigned long)entry_header) < (((unsigned long)dmar) + dmar_tbl->length)) {
> +		dmar_table_print_dmar_entry(entry_header);
> +
> +		switch (entry_header->type) {
> +		case ACPI_DMAR_TYPE_HARDWARE_UNIT:
> +			ret = dmar_parse_one_drhd(entry_header);
> +			break;
> +		case ACPI_DMAR_TYPE_RESERVED_MEMORY:
> +			ret = dmar_parse_one_rmrr(entry_header);
> +			break;
> +		default:
> +			printk(KERN_WARNING PREFIX "Unknown DMAR structure type\n");
> +			ret = 0; /* for forward compatibility */
> +			break;
> +		}
> +		if (ret)
> +			break;
> +
> +		entry_header = ((void *)entry_header + entry_header->length);
> +	}
> +	return ret;
> +}
> +
> +
> +int __init dmar_table_init(void)
> +{
> +
> +	parse_dmar_table();
> +	if (list_empty(&dmar_drhd_units)) {
> +		printk(KERN_ERR PREFIX "No DMAR devices found\n");
> +		return -ENODEV;
> +	}
> +	return 0;
> +}
> +
> +/**
> + * early_dmar_detect - checks to see if the platform supports DMAR devices
> + */
> +int __init early_dmar_detect(void)
> +{
> +	acpi_status status = AE_OK;
> +
> +	/* if we could find DMAR table, then there are DMAR devices */
> +	status = acpi_get_table(ACPI_SIG_DMAR, 0,
> +				(struct acpi_table_header **)&dmar_tbl);
> +
> +	if (ACPI_SUCCESS(status) && !dmar_tbl) {
> +		printk (KERN_WARNING PREFIX "Unable to map DMAR\n");
> +		status = AE_NOT_FOUND;
> +	}
> +
> +	return (ACPI_SUCCESS(status) ? 1 : 0);
> +}
> Index: linux-2.6.22-rc3/include/acpi/actbl1.h
> ===================================================================
> --- linux-2.6.22-rc3.orig/include/acpi/actbl1.h	2007-06-04 12:28:13.000000000 -0700
> +++ linux-2.6.22-rc3/include/acpi/actbl1.h	2007-06-04 12:33:15.000000000 -0700
> @@ -257,7 +257,8 @@
>  struct acpi_table_dmar {
>  	struct acpi_table_header header;	/* Common ACPI table header */
>  	u8 width;		/* Host Address Width */
> -	u8 reserved[11];
> +	u8 flags;
> +	u8 reserved[10];
>  };
>  
>  /* DMAR subtable header */
> @@ -265,8 +266,6 @@
>  struct acpi_dmar_header {
>  	u16 type;
>  	u16 length;
> -	u8 flags;
> -	u8 reserved[3];
>  };
>  
>  /* Values for subtable type in struct acpi_dmar_header */
> @@ -274,13 +273,15 @@
>  enum acpi_dmar_type {
>  	ACPI_DMAR_TYPE_HARDWARE_UNIT = 0,
>  	ACPI_DMAR_TYPE_RESERVED_MEMORY = 1,
> -	ACPI_DMAR_TYPE_RESERVED = 2	/* 2 and greater are reserved */
> +	ACPI_DMAR_TYPE_ATSR = 2,
> +	ACPI_DMAR_TYPE_RESERVED = 3	/* 3 and greater are reserved */
>  };
>  
>  struct acpi_dmar_device_scope {
>  	u8 entry_type;
>  	u8 length;
> -	u8 segment;
> +	u16 reserved;
> +	u8 enumeration_id;
>  	u8 bus;
>  };
>  
> @@ -290,7 +291,14 @@
>  	ACPI_DMAR_SCOPE_TYPE_NOT_USED = 0,
>  	ACPI_DMAR_SCOPE_TYPE_ENDPOINT = 1,
>  	ACPI_DMAR_SCOPE_TYPE_BRIDGE = 2,
> -	ACPI_DMAR_SCOPE_TYPE_RESERVED = 3	/* 3 and greater are reserved */
> +	ACPI_DMAR_SCOPE_TYPE_IOAPIC = 3,
> +	ACPI_DMAR_SCOPE_TYPE_HPET = 4,
> +	ACPI_DMAR_SCOPE_TYPE_RESERVED = 5	/* 5 and greater are reserved */
> +};
> +
> +struct acpi_dmar_pci_path {
> +	u8 dev;
> +	u8 fn;
>  };
>  
>  /*
> @@ -301,6 +309,9 @@
>  
>  struct acpi_dmar_hardware_unit {
>  	struct acpi_dmar_header header;
> +	u8 flags;
> +	u8 reserved;
> +	u16 segment;
>  	u64 address;		/* Register Base Address */
>  };
>  
> @@ -312,7 +323,9 @@
>  
>  struct acpi_dmar_reserved_memory {
>  	struct acpi_dmar_header header;
> -	u64 address;		/* 4_k aligned base address */
> +	u16 reserved;
> +	u16 segment;
> +	u64 base_address;		/* 4_k aligned base address */
>  	u64 end_address;	/* 4_k aligned limit address */
>  };
>  
> Index: linux-2.6.22-rc3/include/linux/dmar.h
> ===================================================================
> --- /dev/null	1970-01-01 00:00:00.000000000 +0000
> +++ linux-2.6.22-rc3/include/linux/dmar.h	2007-06-04 12:33:15.000000000 -0700
> @@ -0,0 +1,52 @@
> +/*
> + * Copyright (c) 2006, Intel Corporation.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
> + * Place - Suite 330, Boston, MA 02111-1307 USA.
> + *
> + * Copyright (C) Ashok Raj <ashok.raj@intel.com>
> + * Copyright (C) Shaohua Li <shaohua.li@intel.com>
> + */
> +
> +#ifndef __DMAR_H__
> +#define __DMAR_H__
> +
> +#include <linux/acpi.h>
> +#include <linux/types.h>
> +
> +
> +extern int dmar_table_init(void);
> +extern int early_dmar_detect(void);
> +
> +extern struct list_head dmar_drhd_units;
> +extern struct list_head dmar_rmrr_units;
> +
> +struct dmar_drhd_unit {
> +	struct list_head list;		/* list of drhd units	*/
> +	u64	reg_base_addr;		/* register base address*/
> +	struct	pci_dev **devices; 	/* target device array	*/
> +	int	devices_cnt;		/* target device count	*/
> +	u8	ignored:1; 		/* ignore drhd		*/
> +	u8	include_all:1;
> +	struct intel_iommu *iommu;
> +};
> +
> +struct dmar_rmrr_unit {
> +	struct list_head list;		/* list of rmrr units	*/
> +	u64	base_address;		/* reserved base address*/
> +	u64	end_address;		/* reserved end address */
> +	struct pci_dev **devices;	/* target devices */
> +	int	devices_cnt;		/* target device count */
> +};
> +
> +#endif /* __DMAR_H__ */
> 
> -- 
> -
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routines for handling pre-allocated pool of objects
  2007-06-04 21:02 ` [Intel-IOMMU 02/10] Library routines for handling pre-allocated pool of objects anil.s.keshavamurthy
@ 2007-06-04 22:57   ` Jeff Garzik
  2007-06-04 23:06     ` Keshavamurthy, Anil S
  0 siblings, 1 reply; 23+ messages in thread
From: Jeff Garzik @ 2007-06-04 22:57 UTC (permalink / raw)
  To: anil.s.keshavamurthy
  Cc: linux-kernel, akpm, ak, gregkh, muli, asit.k.mallick,
	suresh.b.siddha, arjan, ashok.raj, shaohua.li, davem

On Mon, Jun 04, 2007 at 02:02:44PM -0700, anil.s.keshavamurthy@intel.com wrote:
> 	This patch provides a common interface for pre allocating and 
> managing pool of objects.
> 
> Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
> ---
>  include/linux/respool.h |   43 +++++++++++
>  lib/Makefile            |    1 
>  lib/respool.c           |  176 ++++++++++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 220 insertions(+)
> 
> Index: linux-2.6.22-rc3/include/linux/respool.h
> ===================================================================
> --- /dev/null	1970-01-01 00:00:00.000000000 +0000
> +++ linux-2.6.22-rc3/include/linux/respool.h	2007-06-04 12:36:17.000000000 -0700
> @@ -0,0 +1,43 @@
> +/*
> + * respool.c - library routines for handling generic pre-allocated pool of objects
> + *
> + * Copyright (c) 2006, Intel Corporation.
> + *
> + * This file is released under the GPLv2.
> + *
> + * Copyright (C) 2006 Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
> + *
> + */
> +
> +#ifndef _RESPOOL_H_
> +#define _RESPOOL_H_
> +
> +#include <linux/types.h>
> +#include <linux/kernel.h>
> +#include <linux/slab.h>
> +#include <linux/workqueue.h>
> +
> +typedef void *(*rpool_alloc_t)(unsigned int, gfp_t);
> +typedef void (*rpool_free_t)(void *, unsigned int);
> +
> +struct resource_pool {
> +	struct work_struct work;
> +	spinlock_t	pool_lock;	/* pool lock to walk the pool_head */
> +	struct list_head pool_head;	/* pool objects list head	*/
> +	unsigned int	min_count;	/* min count to maintain	*/
> +	unsigned int	grow_count;	/* grow by count when time to grow */
> +	unsigned int	curr_count;	/* count of current free objects */
> +	unsigned int	alloc_size;	/* objects size			*/
> +	rpool_alloc_t 	alloc_mem;	/* pool mem alloc function pointer */
> +	rpool_free_t 	free_mem;	/* pool mem free function pointer */
> +};
> +
> +void *get_resource_pool_obj(struct resource_pool *ppool);
> +void put_resource_pool_obj(void * vaddr, struct resource_pool *ppool);
> +void destroy_resource_pool(struct resource_pool *ppool);
> +int init_resource_pool(struct resource_pool *res,
> +	unsigned int min_count, unsigned int alloc_size,
> +	unsigned int grow_count, rpool_alloc_t alloc_fn,
> +	rpool_free_t free_fn);
> +
> +#endif
> Index: linux-2.6.22-rc3/lib/Makefile
> ===================================================================
> --- linux-2.6.22-rc3.orig/lib/Makefile	2007-06-04 12:28:10.000000000 -0700
> +++ linux-2.6.22-rc3/lib/Makefile	2007-06-04 12:36:17.000000000 -0700
> @@ -58,6 +58,7 @@
>  obj-$(CONFIG_AUDIT_GENERIC) += audit.o
>  
>  obj-$(CONFIG_SWIOTLB) += swiotlb.o
> +obj-$(CONFIG_DMAR) += respool.o
>  obj-$(CONFIG_FAULT_INJECTION) += fault-inject.o
>  
>  lib-$(CONFIG_GENERIC_BUG) += bug.o
> Index: linux-2.6.22-rc3/lib/respool.c
> ===================================================================
> --- /dev/null	1970-01-01 00:00:00.000000000 +0000
> +++ linux-2.6.22-rc3/lib/respool.c	2007-06-04 12:36:17.000000000 -0700
> @@ -0,0 +1,176 @@
> +/*
> + * respool.c - library routines for handling generic pre-allocated pool of objects
> + *
> + * Copyright (c) 2006, Intel Corporation.
> + *
> + * This file is released under the GPLv2.
> + *
> + * Copyright (C) 2006 Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
> + */
> +
> +#include <linux/respool.h>
> +
> +/**
> + * get_resource_pool_obj - gets an object from the pool
> + * @ppool - resource pool in question
> + * This function gets an object from the pool and
> + * if the pool count drops below min_count, this
> + * function schedules work to grow the pool. If
> + * no elements are fount in the pool then this function
> + * tries to get memory from kernel.
> + */
> +void * get_resource_pool_obj(struct resource_pool *ppool)
> +{
> +	unsigned long	flags;
> +	struct list_head *plist;
> +	bool queue_work = 0;
> +
> +	spin_lock_irqsave(&ppool->pool_lock, flags);
> +	if (!list_empty(&ppool->pool_head)) {
> +		plist = ppool->pool_head.next;
> +		list_del(plist);
> +		ppool->curr_count--;
> +	} else {
> +		/*Making sure that curr_count is 0 when list is empty */
> +		plist = NULL;
> +		BUG_ON(ppool->curr_count != 0);
> +	}
> +
> +	/* Check if pool needs to grow */
> +	if (ppool->curr_count <= ppool->min_count)
> +		queue_work = 1;
> +	spin_unlock_irqrestore(&ppool->pool_lock, flags);
> +
> +	if (queue_work)
> +		schedule_work(&ppool->work); /* queue work to grow the pool */
> +
> +
> +	if (plist) {
> +		memset(plist, 0, ppool->alloc_size); /* Zero out memory */
> +		return plist;
> +	}
> +
> +	/* Out of luck, try to get memory from kernel */
> +	plist = (struct list_head *)ppool->alloc_mem(ppool->alloc_size,
> +			GFP_ATOMIC);

Since you are outside the lock, you should pass in gfp_flags, to enhance
the chance of being able to use GFP_KERNEL in certain contexts.


> +	return plist;
> +}
> +
> +/**
> + * put_resource_pool_obj - puts an object back to the pool
> + * @vaddr - object's address
> + * @ppool - resource pool in question.
> + * This function puts an object back to the pool.
> + */
> +void put_resource_pool_obj(void * vaddr, struct resource_pool *ppool)
> +{
> +	unsigned long	flags;
> +	struct list_head *plist = (struct list_head *)vaddr;
> +
> +	BUG_ON(!vaddr);
> +	BUG_ON(!ppool);
> +
> +	spin_lock_irqsave(&ppool->pool_lock, flags);
> +	list_add(plist, &ppool->pool_head);
> +	ppool->curr_count++;
> +	spin_unlock_irqrestore(&ppool->pool_lock, flags);

you should add logic to free resources here (or queue_work to free the
resources), if the pool grows beyond a certain size.



> +/**
> + * grow_resource_pool - grows the given resource pool
> + * @work - work struct
> + * This functions gets the resource pool pointer from the
> + * work struct and grows the resource pool by grow_count.
> + */
> +static void
> +grow_resource_pool(struct work_struct * work)
> +{
> +	struct resource_pool *ppool;
> +	struct list_head *plist;
> +	unsigned int min_count, grow_count = 0;
> +	unsigned long	flags;
> +
> +	ppool = container_of(work, struct resource_pool, work);
> +
> +	/* compute the minimum count to grow */
> +	spin_lock_irqsave(&ppool->pool_lock, flags);
> +	min_count = ppool->min_count + ppool->grow_count;
> +	if (ppool->curr_count < min_count)
> +		grow_count = min_count - ppool->curr_count;
> +	spin_unlock_irqrestore(&ppool->pool_lock, flags);
> +
> +	while(grow_count) {
> +		plist = (struct list_head *)ppool->alloc_mem(ppool->alloc_size,
> +			GFP_KERNEL);
> +
> +		if (!plist)
> +			break;
> +
> +		/* Add the element to the list */
> +		spin_lock_irqsave(&ppool->pool_lock, flags);
> +		list_add(plist, &ppool->pool_head);
> +		ppool->curr_count++;
> +		spin_unlock_irqrestore(&ppool->pool_lock, flags);
> +		grow_count--;
> +	}
> +}
> +
> +/**
> + * destroy_resource_pool - destroys the given resource pool
> + * @ppool - resource pool in question.
> + * This function walks throuhg its list and frees up the
> + * preallocated objects.
> + */
> +void
> +destroy_resource_pool(struct resource_pool *ppool)
> +{
> +	unsigned long	flags;
> +	struct list_head *plist;
> +
> +	spin_lock_irqsave(&ppool->pool_lock, flags);
> +	while (!list_empty(&ppool->pool_head)) {
> +		plist = &ppool->pool_head;
> +		list_del(plist);
> +
> +		ppool->free_mem(plist, ppool->alloc_size);
> +
> +	}
> +	ppool->curr_count = 0;
> +	spin_unlock_irqrestore(&ppool->pool_lock, flags);
> +}
> +
> +/**
> + * init_resource_pool - initializes the resource pool
> + * @res: resource pool in question.
> + * @min_count: count of objectes to pre-allocate
> + * @alloc_size: size of each objects
> + * @grow_count: count of objects to grow when required
> + * @alloc_fn: function which allocates memory
> + * @free_fn: function which frees memory
> + *
> + * This function initializes the given resource pool and
> + * populates the min_count of objects to begin with.
> + */
> +int
> +init_resource_pool(struct resource_pool *res,
> +	unsigned int min_count, unsigned int alloc_size,
> +	unsigned int grow_count, rpool_alloc_t alloc_fn,
> +	rpool_free_t free_fn)
> +{
> +	res->min_count = min_count;
> +	res->alloc_size = alloc_size;
> +	res->grow_count = grow_count;
> +	res->curr_count = 0;
> +	res->alloc_mem = alloc_fn;
> +	res->free_mem = free_fn;
> +	spin_lock_init(&res->pool_lock);
> +	INIT_LIST_HEAD(&res->pool_head);
> +	INIT_WORK(&res->work, grow_resource_pool);
> +
> +	/* grow the pool */
> +	grow_resource_pool(&res->work);
> +
> +	return (res->curr_count == 0);
> +}
> +
> 
> -- 
> -
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Intel-IOMMU 01/10] DMAR detection and parsing logic
  2007-06-04 22:54   ` Jeff Garzik
@ 2007-06-04 22:58     ` Keshavamurthy, Anil S
  0 siblings, 0 replies; 23+ messages in thread
From: Keshavamurthy, Anil S @ 2007-06-04 22:58 UTC (permalink / raw)
  To: Jeff Garzik
  Cc: anil.s.keshavamurthy, linux-kernel, akpm, ak, gregkh, muli,
	asit.k.mallick, suresh.b.siddha, arjan, ashok.raj, shaohua.li,
	davem

On Mon, Jun 04, 2007 at 06:54:21PM -0400, Jeff Garzik wrote:
> On Mon, Jun 04, 2007 at 02:02:43PM -0700, anil.s.keshavamurthy@intel.com wrote:
> > --- linux-2.6.22-rc3.orig/drivers/pci/Makefile	2007-06-04 12:28:13.000000000 -0700
> > +++ linux-2.6.22-rc3/drivers/pci/Makefile	2007-06-04 12:33:15.000000000 -0700
> > @@ -20,6 +20,9 @@
> >  # Build the Hypertransport interrupt support
> >  obj-$(CONFIG_HT_IRQ) += htirq.o
> >  
> > +# Build Intel IOMMU support
> > +obj-$(CONFIG_DMAR) += dmar.o
> 
> It's not Intel IOMMU support though, is it?
Yes, it is Intel IOMMU support.

> 
> It's x86 PCI IOMMU support (as opposed to x86 GART IOMMU).
> 
> I just want to avoid Intel branding on something that will not be
> specific to Intel-manufacured products in the long term.

This DMAR (DMA Remapping unit) is Intel specific and
its spec is available at www.intel.com/technology/virtualization

-Anil

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Intel-IOMMU 01/10] DMAR detection and parsing logic
  2007-06-04 21:02 ` [Intel-IOMMU 01/10] DMAR detection and parsing logic anil.s.keshavamurthy
  2007-06-04 22:54   ` Jeff Garzik
@ 2007-06-04 23:03   ` Jeff Garzik
  2007-06-04 23:17     ` Keshavamurthy, Anil S
  1 sibling, 1 reply; 23+ messages in thread
From: Jeff Garzik @ 2007-06-04 23:03 UTC (permalink / raw)
  To: anil.s.keshavamurthy
  Cc: linux-kernel, akpm, ak, gregkh, muli, asit.k.mallick,
	suresh.b.siddha, arjan, ashok.raj, shaohua.li, davem


Is there no way at all, other than ACPI, to find this stuff?

We would prefer to avoid hardware if the hardware enumeration is sane.

	Jeff




^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routines for handling pre-allocated pool of objects
  2007-06-04 22:57   ` Jeff Garzik
@ 2007-06-04 23:06     ` Keshavamurthy, Anil S
  2007-06-04 23:43       ` Jeff Garzik
  0 siblings, 1 reply; 23+ messages in thread
From: Keshavamurthy, Anil S @ 2007-06-04 23:06 UTC (permalink / raw)
  To: Jeff Garzik
  Cc: anil.s.keshavamurthy, linux-kernel, akpm, ak, gregkh, muli,
	asit.k.mallick, suresh.b.siddha, arjan, ashok.raj, shaohua.li,
	davem

On Mon, Jun 04, 2007 at 06:57:14PM -0400, Jeff Garzik wrote:
> On Mon, Jun 04, 2007 at 02:02:44PM -0700, anil.s.keshavamurthy@intel.com wrote:
> > 	This patch provides a common interface for pre allocating and 
> > managing pool of objects.
> > 
> > Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
> > ---
> >  include/linux/respool.h |   43 +++++++++++
> >  lib/Makefile            |    1 
> >  lib/respool.c           |  176 ++++++++++++++++++++++++++++++++++++++++++++++++
> >  3 files changed, 220 insertions(+)
> > 
> > Index: linux-2.6.22-rc3/include/linux/respool.h
> > ===================================================================
> > --- /dev/null	1970-01-01 00:00:00.000000000 +0000
> > +++ linux-2.6.22-rc3/include/linux/respool.h	2007-06-04 12:36:17.000000000 -0700
> > @@ -0,0 +1,43 @@
> > +/*
> > + * respool.c - library routines for handling generic pre-allocated pool of objects
> > + *
> > + * Copyright (c) 2006, Intel Corporation.
> > + *
> > + * This file is released under the GPLv2.
> > + *
> > + * Copyright (C) 2006 Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
> > + *
> > + */
> > +
> > +#ifndef _RESPOOL_H_
> > +#define _RESPOOL_H_
> > +
> > +#include <linux/types.h>
> > +#include <linux/kernel.h>
> > +#include <linux/slab.h>
> > +#include <linux/workqueue.h>
> > +
> > +typedef void *(*rpool_alloc_t)(unsigned int, gfp_t);
> > +typedef void (*rpool_free_t)(void *, unsigned int);
> > +
> > +struct resource_pool {
> > +	struct work_struct work;
> > +	spinlock_t	pool_lock;	/* pool lock to walk the pool_head */
> > +	struct list_head pool_head;	/* pool objects list head	*/
> > +	unsigned int	min_count;	/* min count to maintain	*/
> > +	unsigned int	grow_count;	/* grow by count when time to grow */
> > +	unsigned int	curr_count;	/* count of current free objects */
> > +	unsigned int	alloc_size;	/* objects size			*/
> > +	rpool_alloc_t 	alloc_mem;	/* pool mem alloc function pointer */
> > +	rpool_free_t 	free_mem;	/* pool mem free function pointer */
> > +};
> > +
> > +void *get_resource_pool_obj(struct resource_pool *ppool);
> > +void put_resource_pool_obj(void * vaddr, struct resource_pool *ppool);
> > +void destroy_resource_pool(struct resource_pool *ppool);
> > +int init_resource_pool(struct resource_pool *res,
> > +	unsigned int min_count, unsigned int alloc_size,
> > +	unsigned int grow_count, rpool_alloc_t alloc_fn,
> > +	rpool_free_t free_fn);
> > +
> > +#endif
> > Index: linux-2.6.22-rc3/lib/Makefile
> > ===================================================================
> > --- linux-2.6.22-rc3.orig/lib/Makefile	2007-06-04 12:28:10.000000000 -0700
> > +++ linux-2.6.22-rc3/lib/Makefile	2007-06-04 12:36:17.000000000 -0700
> > @@ -58,6 +58,7 @@
> >  obj-$(CONFIG_AUDIT_GENERIC) += audit.o
> >  
> >  obj-$(CONFIG_SWIOTLB) += swiotlb.o
> > +obj-$(CONFIG_DMAR) += respool.o
> >  obj-$(CONFIG_FAULT_INJECTION) += fault-inject.o
> >  
> >  lib-$(CONFIG_GENERIC_BUG) += bug.o
> > Index: linux-2.6.22-rc3/lib/respool.c
> > ===================================================================
> > --- /dev/null	1970-01-01 00:00:00.000000000 +0000
> > +++ linux-2.6.22-rc3/lib/respool.c	2007-06-04 12:36:17.000000000 -0700
> > @@ -0,0 +1,176 @@
> > +/*
> > + * respool.c - library routines for handling generic pre-allocated pool of objects
> > + *
> > + * Copyright (c) 2006, Intel Corporation.
> > + *
> > + * This file is released under the GPLv2.
> > + *
> > + * Copyright (C) 2006 Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
> > + */
> > +
> > +#include <linux/respool.h>
> > +
> > +/**
> > + * get_resource_pool_obj - gets an object from the pool
> > + * @ppool - resource pool in question
> > + * This function gets an object from the pool and
> > + * if the pool count drops below min_count, this
> > + * function schedules work to grow the pool. If
> > + * no elements are fount in the pool then this function
> > + * tries to get memory from kernel.
> > + */
> > +void * get_resource_pool_obj(struct resource_pool *ppool)
> > +{
> > +	unsigned long	flags;
> > +	struct list_head *plist;
> > +	bool queue_work = 0;
> > +
> > +	spin_lock_irqsave(&ppool->pool_lock, flags);
> > +	if (!list_empty(&ppool->pool_head)) {
> > +		plist = ppool->pool_head.next;
> > +		list_del(plist);
> > +		ppool->curr_count--;
> > +	} else {
> > +		/*Making sure that curr_count is 0 when list is empty */
> > +		plist = NULL;
> > +		BUG_ON(ppool->curr_count != 0);
> > +	}
> > +
> > +	/* Check if pool needs to grow */
> > +	if (ppool->curr_count <= ppool->min_count)
> > +		queue_work = 1;
> > +	spin_unlock_irqrestore(&ppool->pool_lock, flags);
> > +
> > +	if (queue_work)
> > +		schedule_work(&ppool->work); /* queue work to grow the pool */
> > +
> > +
> > +	if (plist) {
> > +		memset(plist, 0, ppool->alloc_size); /* Zero out memory */
> > +		return plist;
> > +	}
> > +
> > +	/* Out of luck, try to get memory from kernel */
> > +	plist = (struct list_head *)ppool->alloc_mem(ppool->alloc_size,
> > +			GFP_ATOMIC);
> 
> Since you are outside the lock, you should pass in gfp_flags, to enhance
> the chance of being able to use GFP_KERNEL in certain contexts.

No, you got it wrong. This above function get_resource_pool_obj() itself
is called from interrrupt time so the logic is to get an object
from the pre-allocated pool if available else get it from kernel
without blocking which is to pass GFP_ATOMIC flag.

> 
> 
> > +	return plist;
> > +}
> > +
> > +/**
> > + * put_resource_pool_obj - puts an object back to the pool
> > + * @vaddr - object's address
> > + * @ppool - resource pool in question.
> > + * This function puts an object back to the pool.
> > + */
> > +void put_resource_pool_obj(void * vaddr, struct resource_pool *ppool)
> > +{
> > +	unsigned long	flags;
> > +	struct list_head *plist = (struct list_head *)vaddr;
> > +
> > +	BUG_ON(!vaddr);
> > +	BUG_ON(!ppool);
> > +
> > +	spin_lock_irqsave(&ppool->pool_lock, flags);
> > +	list_add(plist, &ppool->pool_head);
> > +	ppool->curr_count++;
> > +	spin_unlock_irqrestore(&ppool->pool_lock, flags);
> 
> you should add logic to free resources here (or queue_work to free the
> resources), if the pool grows beyond a certain size.

Can be added as an add on, testing showed that pool 
grows to a certain size and will not grow beyond that 
as we tend to reuse the elements.

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Intel-IOMMU 01/10] DMAR detection and parsing logic
  2007-06-04 23:03   ` Jeff Garzik
@ 2007-06-04 23:17     ` Keshavamurthy, Anil S
  0 siblings, 0 replies; 23+ messages in thread
From: Keshavamurthy, Anil S @ 2007-06-04 23:17 UTC (permalink / raw)
  To: Jeff Garzik
  Cc: anil.s.keshavamurthy, linux-kernel, akpm, ak, gregkh, muli,
	asit.k.mallick, suresh.b.siddha, arjan, ashok.raj, shaohua.li,
	davem

On Mon, Jun 04, 2007 at 07:03:56PM -0400, Jeff Garzik wrote:
> 
> Is there no way at all, other than ACPI, to find this stuff?
This is as clean as possible without letting ACPI to parse this
DMA remapping unit. The only thing we might be using is the
acpi data struct.

-Anil

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routines for handling pre-allocated pool of objects
  2007-06-04 23:06     ` Keshavamurthy, Anil S
@ 2007-06-04 23:43       ` Jeff Garzik
  2007-06-04 23:51         ` Keshavamurthy, Anil S
  0 siblings, 1 reply; 23+ messages in thread
From: Jeff Garzik @ 2007-06-04 23:43 UTC (permalink / raw)
  To: Keshavamurthy, Anil S
  Cc: linux-kernel, akpm, ak, gregkh, muli, asit.k.mallick,
	suresh.b.siddha, arjan, ashok.raj, shaohua.li, davem

On Mon, Jun 04, 2007 at 04:06:49PM -0700, Keshavamurthy, Anil S wrote:
> On Mon, Jun 04, 2007 at 06:57:14PM -0400, Jeff Garzik wrote:
> > you should add logic to free resources here (or queue_work to free the
> > resources), if the pool grows beyond a certain size.

> Can be added as an add on, testing showed that pool 
> grows to a certain size and will not grow beyond that 
> as we tend to reuse the elements.

Yes, but is it possible?  If no, what part of the code guarantees the
pool is limited?

We should not merge code that allows the pool to grow without bound.
In-house testing certainly never covers all the cases seen in the
field, so I wouldn't make too many assumptions based on that.  Some
vendor will inevitably build a $BigNum system where the IOMMU is very
heavily used.

	Jeff




^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routines for handling pre-allocated pool of objects
  2007-06-04 23:43       ` Jeff Garzik
@ 2007-06-04 23:51         ` Keshavamurthy, Anil S
  2007-06-05 20:24           ` Keshavamurthy, Anil S
  0 siblings, 1 reply; 23+ messages in thread
From: Keshavamurthy, Anil S @ 2007-06-04 23:51 UTC (permalink / raw)
  To: Jeff Garzik
  Cc: Keshavamurthy, Anil S, linux-kernel, akpm, ak, gregkh, muli,
	asit.k.mallick, suresh.b.siddha, arjan, ashok.raj, shaohua.li,
	davem

On Mon, Jun 04, 2007 at 07:43:54PM -0400, Jeff Garzik wrote:
> On Mon, Jun 04, 2007 at 04:06:49PM -0700, Keshavamurthy, Anil S wrote:
> > On Mon, Jun 04, 2007 at 06:57:14PM -0400, Jeff Garzik wrote:
> > > you should add logic to free resources here (or queue_work to free the
> > > resources), if the pool grows beyond a certain size.
> 
> > Can be added as an add on, testing showed that pool 
> > grows to a certain size and will not grow beyond that 
> > as we tend to reuse the elements.
> 
> Yes, but is it possible?  If no, what part of the code guarantees the
> pool is limited?
> 
> We should not merge code that allows the pool to grow without bound.
> In-house testing certainly never covers all the cases seen in the
> field, so I wouldn't make too many assumptions based on that.  Some
> vendor will inevitably build a $BigNum system where the IOMMU is very
> heavily used.

No problem, I can add the code to free the pool element if
the curr_count ever goes greater than (min_count + 2 * grow_count)
then bring the curr_count to min_count + grow_count by freeing 
some pool objects.

A patch which will apply to this current patch will follow soon.

Thanks Jeff for making your case stronger :)

-Anil


> 
> 	Jeff
> 

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routines for handling pre-allocated pool of objects
  2007-06-04 23:51         ` Keshavamurthy, Anil S
@ 2007-06-05 20:24           ` Keshavamurthy, Anil S
  2007-06-05 20:30             ` Jeff Garzik
  0 siblings, 1 reply; 23+ messages in thread
From: Keshavamurthy, Anil S @ 2007-06-05 20:24 UTC (permalink / raw)
  To: Keshavamurthy, Anil S
  Cc: Jeff Garzik, linux-kernel, akpm, ak, gregkh, muli, asit.k.mallick,
	suresh.b.siddha, arjan, ashok.raj, shaohua.li, davem

On Mon, Jun 04, 2007 at 04:51:05PM -0700, Keshavamurthy, Anil S wrote:
> On Mon, Jun 04, 2007 at 07:43:54PM -0400, Jeff Garzik wrote:
> > On Mon, Jun 04, 2007 at 04:06:49PM -0700, Keshavamurthy, Anil S wrote:
> > > On Mon, Jun 04, 2007 at 06:57:14PM -0400, Jeff Garzik wrote:
> > > > you should add logic to free resources here (or queue_work to free the
> > > > resources), if the pool grows beyond a certain size.
> > 
> > > Can be added as an add on, testing showed that pool 
> > > grows to a certain size and will not grow beyond that 
> > > as we tend to reuse the elements.
> > 
> > Yes, but is it possible?  If no, what part of the code guarantees the
> > pool is limited?
> > 
> > We should not merge code that allows the pool to grow without bound.
> > In-house testing certainly never covers all the cases seen in the
> > field, so I wouldn't make too many assumptions based on that.  Some
> > vendor will inevitably build a $BigNum system where the IOMMU is very
> > heavily used.
> 
> No problem, I can add the code to free the pool element if
> the curr_count ever goes greater than (min_count + 2 * grow_count)
> then bring the curr_count to min_count + grow_count by freeing 
> some pool objects.
> 
> A patch which will apply to this current patch will follow soon.

Here goes the patch as promised...
------------------------------------

Now adding the logic to shrink the resource pool
and give back the excess pool object's memory space 
back to OS.

This patch add's on top of my previous posting.

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>

---
 lib/respool.c |   84 ++++++++++++++++++++++++++++++++++++++++++++--------------
 1 file changed, 65 insertions(+), 19 deletions(-)

Index: linux-2.6.22-rc3/lib/respool.c
===================================================================
--- linux-2.6.22-rc3.orig/lib/respool.c	2007-06-05 12:22:26.000000000 -0700
+++ linux-2.6.22-rc3/lib/respool.c	2007-06-05 12:58:01.000000000 -0700
@@ -67,6 +67,7 @@
 {
 	unsigned long	flags;
 	struct list_head *plist = (struct list_head *)vaddr;
+	bool queue_work = 0;
 
 	BUG_ON(!vaddr);
 	BUG_ON(!ppool);
@@ -74,21 +75,74 @@
 	spin_lock_irqsave(&ppool->pool_lock, flags);
 	list_add(plist, &ppool->pool_head);
 	ppool->curr_count++;
+	if (ppool->curr_count > (ppool->min_count +
+		ppool->grow_count * 2))
+		queue_work = 1;
 	spin_unlock_irqrestore(&ppool->pool_lock, flags);
+
+	if (queue_work)
+		schedule_work(&ppool->work); /* queue work to shrink the pool */
+}
+
+void
+__grow_resource_pool(struct resource_pool *ppool,
+	unsigned int grow_count)
+{
+	unsigned long	flags;
+	struct list_head *plist;
+
+	while (grow_count) {
+		plist = (struct list_head *)ppool->alloc_mem(ppool->alloc_size,
+			GFP_KERNEL);
+
+		if (!plist)
+			break;
+
+		/* Add the element to the list */
+		spin_lock_irqsave(&ppool->pool_lock, flags);
+		list_add(plist, &ppool->pool_head);
+		ppool->curr_count++;
+		spin_unlock_irqrestore(&ppool->pool_lock, flags);
+		grow_count--;
+	}
 }
 
+void
+__shrink_resource_pool(struct resource_pool *ppool,
+	unsigned int shrink_count)
+{
+	unsigned long	flags;
+	struct list_head *plist;
+
+	while (shrink_count) {
+		/* remove an object from the pool */
+		spin_lock_irqsave(&ppool->pool_lock, flags);
+		if (list_empty(&ppool->pool_head)) {
+			spin_unlock_irqrestore(&ppool->pool_lock, flags);
+			break;
+		}
+		plist = ppool->pool_head.next;
+		list_del(plist);
+		ppool->curr_count--;
+		spin_unlock_irqrestore(&ppool->pool_lock, flags);
+		ppool->free_mem(plist, ppool->alloc_size);
+		shrink_count--;
+	}
+}
+
+
 /**
- * grow_resource_pool - grows the given resource pool
+ * resize_resource_pool - resize the given resource pool
  * @work - work struct
  * This functions gets the resource pool pointer from the
  * work struct and grows the resource pool by grow_count.
  */
 static void
-grow_resource_pool(struct work_struct * work)
+resize_resource_pool(struct work_struct * work)
 {
 	struct resource_pool *ppool;
-	struct list_head *plist;
 	unsigned int min_count, grow_count = 0;
+	unsigned int shrink_count = 0;
 	unsigned long	flags;
 
 	ppool = container_of(work, struct resource_pool, work);
@@ -98,22 +152,14 @@
 	min_count = ppool->min_count + ppool->grow_count;
 	if (ppool->curr_count < min_count)
 		grow_count = min_count - ppool->curr_count;
+	else if (ppool->curr_count > min_count + ppool->grow_count)
+		shrink_count = ppool->curr_count - min_count;
 	spin_unlock_irqrestore(&ppool->pool_lock, flags);
 
-	while(grow_count) {
-		plist = (struct list_head *)ppool->alloc_mem(ppool->alloc_size,
-			GFP_KERNEL);
-
-		if (!plist)
-			break;
-
-		/* Add the element to the list */
-		spin_lock_irqsave(&ppool->pool_lock, flags);
-		list_add(plist, &ppool->pool_head);
-		ppool->curr_count++;
-		spin_unlock_irqrestore(&ppool->pool_lock, flags);
-		grow_count--;
-	}
+	if (grow_count)
+		__grow_resource_pool(ppool, grow_count);
+	else if (shrink_count)
+		__shrink_resource_pool(ppool, shrink_count);
 }
 
 /**
@@ -166,10 +212,10 @@
 	res->free_mem = free_fn;
 	spin_lock_init(&res->pool_lock);
 	INIT_LIST_HEAD(&res->pool_head);
-	INIT_WORK(&res->work, grow_resource_pool);
+	INIT_WORK(&res->work, resize_resource_pool);
 
 	/* grow the pool */
-	grow_resource_pool(&res->work);
+	resize_resource_pool(&res->work);
 
 	return (res->curr_count == 0);
 }

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routines for handling pre-allocated pool of objects
  2007-06-05 20:24           ` Keshavamurthy, Anil S
@ 2007-06-05 20:30             ` Jeff Garzik
  2007-06-05 20:48               ` Keshavamurthy, Anil S
  0 siblings, 1 reply; 23+ messages in thread
From: Jeff Garzik @ 2007-06-05 20:30 UTC (permalink / raw)
  To: Keshavamurthy, Anil S
  Cc: linux-kernel, akpm, ak, gregkh, muli, asit.k.mallick,
	suresh.b.siddha, arjan, ashok.raj, shaohua.li, davem

On Tue, Jun 05, 2007 at 01:24:33PM -0700, Keshavamurthy, Anil S wrote:
>  1 file changed, 65 insertions(+), 19 deletions(-)
> 
> Index: linux-2.6.22-rc3/lib/respool.c
> ===================================================================
> --- linux-2.6.22-rc3.orig/lib/respool.c	2007-06-05 12:22:26.000000000 -0700
> +++ linux-2.6.22-rc3/lib/respool.c	2007-06-05 12:58:01.000000000 -0700
> @@ -67,6 +67,7 @@
>  {
>  	unsigned long	flags;
>  	struct list_head *plist = (struct list_head *)vaddr;
> +	bool queue_work = 0;

Seems sane to me.

I only have a very minor nit to pick:  naming a variable (queue_work)
the same as a public function may cause confusion.

Also, if its a bool you should initialize it to 'true' or 'false'.

	Jeff




^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routines for handling pre-allocated pool of objects
  2007-06-05 20:30             ` Jeff Garzik
@ 2007-06-05 20:48               ` Keshavamurthy, Anil S
  0 siblings, 0 replies; 23+ messages in thread
From: Keshavamurthy, Anil S @ 2007-06-05 20:48 UTC (permalink / raw)
  To: Jeff Garzik
  Cc: Keshavamurthy, Anil S, linux-kernel, akpm, ak, gregkh, muli,
	asit.k.mallick, suresh.b.siddha, arjan, ashok.raj, shaohua.li,
	davem

On Tue, Jun 05, 2007 at 04:30:48PM -0400, Jeff Garzik wrote:
> On Tue, Jun 05, 2007 at 01:24:33PM -0700, Keshavamurthy, Anil S wrote:
> >  1 file changed, 65 insertions(+), 19 deletions(-)
> > 
> > Index: linux-2.6.22-rc3/lib/respool.c
> > ===================================================================
> > --- linux-2.6.22-rc3.orig/lib/respool.c	2007-06-05 12:22:26.000000000 -0700
> > +++ linux-2.6.22-rc3/lib/respool.c	2007-06-05 12:58:01.000000000 -0700
> > @@ -67,6 +67,7 @@
> >  {
> >  	unsigned long	flags;
> >  	struct list_head *plist = (struct list_head *)vaddr;
> > +	bool queue_work = 0;
> 
> Seems sane to me.
Thanks so much.

> 
> I only have a very minor nit to pick:  naming a variable (queue_work)
> the same as a public function may cause confusion.
> 
> Also, if its a bool you should initialize it to 'true' or 'false'.
I will address when this patch gets into Andrew's mm tree.

Thanks once again,
Anil


^ permalink raw reply	[flat|nested] 23+ messages in thread

* [Intel-IOMMU 01/10] DMAR detection and parsing logic
  2007-06-06 18:56 [Intel-IOMMU 00/10] Intel IOMMU Support anil.s.keshavamurthy
@ 2007-06-06 18:56 ` anil.s.keshavamurthy
  0 siblings, 0 replies; 23+ messages in thread
From: anil.s.keshavamurthy @ 2007-06-06 18:56 UTC (permalink / raw)
  To: linux-kernel
  Cc: akpm, ak, gregkh, muli, asit.k.mallick, suresh.b.siddha,
	anil.s.keshavamurthy, arjan, ashok.raj, shaohua.li, davem

[-- Attachment #1: dmar_detection.patch --]
[-- Type: text/plain, Size: 14490 bytes --]

This patch adds support for early detection and parsing of DMAR's
reported to OS via ACPI tables.

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
---
 arch/x86_64/Kconfig   |   11 +
 drivers/pci/Makefile  |    3 
 drivers/pci/dmar.c    |  318 ++++++++++++++++++++++++++++++++++++++++++++++++++
 include/acpi/actbl1.h |   27 +++-
 include/linux/dmar.h  |   52 ++++++++
 5 files changed, 404 insertions(+), 7 deletions(-)

Index: linux-2.6.22-rc3/arch/x86_64/Kconfig
===================================================================
--- linux-2.6.22-rc3.orig/arch/x86_64/Kconfig	2007-06-04 12:28:13.000000000 -0700
+++ linux-2.6.22-rc3/arch/x86_64/Kconfig	2007-06-04 12:33:15.000000000 -0700
@@ -716,6 +716,17 @@
 	bool "Support mmconfig PCI config space access"
 	depends on PCI && ACPI
 
+config DMAR
+	bool "Support for DMA Remapping Devices (EXPERIMENTAL)"
+	depends on PCI_MSI && ACPI && EXPERIMENTAL
+	default y
+	help
+	  DMA remapping(DMAR) devices support enables independent address
+	  translations for Direct Memory Access(DMA) from Devices.
+	  These DMA remapping devices are reported via ACPI tables
+	  and includes pci device scope covered by these DMA
+	  remapping device.
+
 source "drivers/pci/pcie/Kconfig"
 
 source "drivers/pci/Kconfig"
Index: linux-2.6.22-rc3/drivers/pci/Makefile
===================================================================
--- linux-2.6.22-rc3.orig/drivers/pci/Makefile	2007-06-04 12:28:13.000000000 -0700
+++ linux-2.6.22-rc3/drivers/pci/Makefile	2007-06-04 12:33:15.000000000 -0700
@@ -20,6 +20,9 @@
 # Build the Hypertransport interrupt support
 obj-$(CONFIG_HT_IRQ) += htirq.o
 
+# Build Intel IOMMU support
+obj-$(CONFIG_DMAR) += dmar.o
+
 #
 # Some architectures use the generic PCI setup functions
 #
Index: linux-2.6.22-rc3/drivers/pci/dmar.c
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ linux-2.6.22-rc3/drivers/pci/dmar.c	2007-06-04 12:33:15.000000000 -0700
@@ -0,0 +1,318 @@
+/*
+ * Copyright (c) 2006, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ *
+ * 	Copyright (C) Ashok Raj <ashok.raj@intel.com>
+ *	Copyright (C) Shaohua Li <shaohua.li@intel.com>
+ *
+ * 	This file implements early detection/parsing of DMA Remapping Devices
+ * reported to OS through BIOS via DMA remapping reporting (DMAR) ACPI
+ * tables.
+ */
+
+#include <linux/pci.h>
+#include <linux/dmar.h>
+
+#undef PREFIX
+#define PREFIX "DMAR:"
+
+/* No locks are needed as DMA remapping hardware unit
+ * list is constructed at boot time and hotplug of
+ * these units are not supported by the architecture.
+ */
+LIST_HEAD(dmar_drhd_units);
+LIST_HEAD(dmar_rmrr_units);
+
+static struct acpi_table_header * __initdata dmar_tbl;
+
+static void __init dmar_register_drhd_unit(struct dmar_drhd_unit *drhd)
+{
+	/*
+	 * add INCLUDE_ALL at the tail, so scan the list will find it at
+	 * the very end.
+	 */
+	if (drhd->include_all)
+		list_add_tail(&drhd->list, &dmar_drhd_units);
+	else
+		list_add(&drhd->list, &dmar_drhd_units);
+}
+
+static void __init dmar_register_rmrr_unit(struct dmar_rmrr_unit *rmrr)
+{
+	list_add(&rmrr->list, &dmar_rmrr_units);
+}
+
+static int __init dmar_parse_one_dev_scope(struct acpi_dmar_device_scope *scope,
+					   struct pci_dev **dev, u16 segment)
+{
+	struct pci_bus *bus;
+	struct pci_dev *pdev = NULL;
+	struct acpi_dmar_pci_path *path;
+	int count;
+
+	bus = pci_find_bus(segment, scope->bus);
+	path = (struct acpi_dmar_pci_path *)(scope + 1);
+	count = (scope->length - sizeof(struct acpi_dmar_device_scope))
+		/sizeof(struct acpi_dmar_pci_path);
+
+	while (count) {
+		if (pdev)
+			pci_dev_put(pdev);
+		/*
+		 * Some BIOSes list non-exist devices in DMAR table, just
+		 * ignore it
+		 */
+		if (!bus) {
+			printk(KERN_WARNING
+			PREFIX "Device scope bus [%d] not found\n",
+			scope->bus);
+			break;
+		}
+		pdev = pci_get_slot(bus, PCI_DEVFN(path->dev, path->fn));
+		if (!pdev) {
+			printk(KERN_WARNING PREFIX
+			"Device scope device [%04x:%02x:%02x.%02x] not found\n",
+				segment, bus->number, path->dev, path->fn);
+			break;
+		}
+		path ++;
+		count --;
+		bus = pdev->subordinate;
+	}
+	if (!pdev) {
+		printk(KERN_WARNING PREFIX
+		"Device scope device [%04x:%02x:%02x.%02x] not found\n",
+		segment, scope->bus, path->dev, path->fn);
+		*dev = NULL;
+		return 0;
+	}
+	if ((scope->entry_type == ACPI_DMAR_SCOPE_TYPE_ENDPOINT && pdev->subordinate)
+	   || (scope->entry_type == ACPI_DMAR_SCOPE_TYPE_BRIDGE && !pdev->subordinate)) {
+		pci_dev_put(pdev);
+		printk(KERN_WARNING PREFIX "Device scope type does not match for %s\n", pci_name(pdev));
+		return -EINVAL;
+	}
+	*dev = pdev;
+	return 0;
+}
+
+static int __init dmar_parse_dev_scope(void *start, void *end, int *cnt,
+				       struct pci_dev ***devices, u16 segment)
+{
+	struct acpi_dmar_device_scope *scope;
+	void * tmp = start;
+	int index;
+	int ret;
+
+	*cnt = 0;
+	while (start < end) {
+		scope = start;
+		if (scope->entry_type == ACPI_DMAR_SCOPE_TYPE_ENDPOINT ||
+		    scope->entry_type == ACPI_DMAR_SCOPE_TYPE_BRIDGE)
+			(*cnt)++;
+		else
+			printk(KERN_WARNING PREFIX "Unsupported device scope\n");
+		start += scope->length;
+	}
+	if (*cnt == 0)
+		return 0;
+
+	*devices = kcalloc(*cnt, sizeof(struct pci_dev *), GFP_KERNEL);
+	if (!*devices)
+		return -ENOMEM;
+
+	start = tmp;
+	index = 0;
+	while (start < end) {
+		scope = start;
+		if (scope->entry_type == ACPI_DMAR_SCOPE_TYPE_ENDPOINT ||
+		    scope->entry_type == ACPI_DMAR_SCOPE_TYPE_BRIDGE) {
+			ret = dmar_parse_one_dev_scope(scope,
+				&(*devices)[index], segment);
+			if (ret) {
+				kfree(*devices);
+				return ret;
+			}
+			index ++;
+		}
+		start += scope->length;
+	}
+
+	return 0;
+}
+
+/**
+ * dmar_parse_one_drhd - parses exactly one DMA remapping hardware definition
+ * structure which uniquely represent one DMA remapping hardware unit
+ * present in the platform
+ */
+static int __init
+dmar_parse_one_drhd(struct acpi_dmar_header *header)
+{
+	struct acpi_dmar_hardware_unit * drhd = (struct acpi_dmar_hardware_unit *)header;
+	struct dmar_drhd_unit *dmaru;
+	int ret = 0;
+	static int include_all;
+
+	dmaru = kzalloc(sizeof(*dmaru), GFP_KERNEL);
+	if (!dmaru)
+		return -ENOMEM;
+
+	dmaru->reg_base_addr = drhd->address;
+	dmaru->include_all = drhd->flags & 0x1; /* BIT0: INCLUDE_ALL */
+
+	if (!dmaru->include_all)
+		ret = dmar_parse_dev_scope((void *)(drhd + 1),
+				((void *)drhd) + header->length,
+				&dmaru->devices_cnt, &dmaru->devices,
+				drhd->segment);
+	else {
+		/* Only allow one INCLUDE_ALL */
+		if (include_all) {
+			printk(KERN_WARNING PREFIX "Only one INCLUDE_ALL "
+				"device scope is allowed\n");
+			ret = -EINVAL;
+		}
+		include_all = 1;
+	}
+
+	if (ret || (dmaru->devices_cnt == 0 && !dmaru->include_all))
+		kfree(dmaru);
+	else
+		dmar_register_drhd_unit(dmaru);
+	return ret;
+}
+
+static int __init
+dmar_parse_one_rmrr(struct acpi_dmar_header *header)
+{
+	struct acpi_dmar_reserved_memory *rmrr = (struct acpi_dmar_reserved_memory *)header;
+	struct dmar_rmrr_unit *rmrru;
+	int ret = 0;
+
+	rmrru = kzalloc(sizeof(*rmrru), GFP_KERNEL);
+	if (!rmrru)
+		return -ENOMEM;
+
+	rmrru->base_address = rmrr->base_address;
+	rmrru->end_address = rmrr->end_address;
+	ret = dmar_parse_dev_scope((void *)(rmrr + 1),
+		((void*)rmrr) + header->length,
+		&rmrru->devices_cnt, &rmrru->devices, rmrr->segment);
+
+	if (ret || (rmrru->devices_cnt == 0))
+		kfree(rmrru);
+	else
+		dmar_register_rmrr_unit(rmrru);
+	return ret;
+}
+
+static void __init
+dmar_table_print_dmar_entry(struct acpi_dmar_header *header)
+{
+	struct acpi_dmar_hardware_unit *drhd;
+	struct acpi_dmar_reserved_memory *rmrr;
+
+	switch (header->type) {
+	case ACPI_DMAR_TYPE_HARDWARE_UNIT:
+		drhd = (struct acpi_dmar_hardware_unit *)header;
+		printk (KERN_INFO PREFIX
+			"DRHD (flags: 0x%08x)base: 0x%016Lx\n",
+			drhd->flags, drhd->address);
+		break;
+	case ACPI_DMAR_TYPE_RESERVED_MEMORY:
+		rmrr = (struct acpi_dmar_reserved_memory *)header;
+
+		printk (KERN_INFO PREFIX
+			"RMRR base: 0x%016Lx end: 0x%016Lx\n",
+			rmrr->base_address, rmrr->end_address);
+		break;
+	}
+}
+
+/**
+ * parse_dmar_table - parses the DMA reporting table
+ */
+static int __init
+parse_dmar_table(void)
+{
+	struct acpi_table_dmar *dmar;
+	struct acpi_dmar_header *entry_header;
+	int ret = 0;
+
+	dmar = (struct acpi_table_dmar *)dmar_tbl;
+
+	if (!dmar->width) {
+		printk (KERN_WARNING PREFIX "Zero: Invalid DMAR haw\n");
+		return -EINVAL;
+	}
+
+	printk (KERN_INFO PREFIX "Host address width %d\n",
+		dmar->width + 1);
+
+	entry_header = (struct acpi_dmar_header *)(dmar + 1);
+	while (((unsigned long)entry_header) < (((unsigned long)dmar) + dmar_tbl->length)) {
+		dmar_table_print_dmar_entry(entry_header);
+
+		switch (entry_header->type) {
+		case ACPI_DMAR_TYPE_HARDWARE_UNIT:
+			ret = dmar_parse_one_drhd(entry_header);
+			break;
+		case ACPI_DMAR_TYPE_RESERVED_MEMORY:
+			ret = dmar_parse_one_rmrr(entry_header);
+			break;
+		default:
+			printk(KERN_WARNING PREFIX "Unknown DMAR structure type\n");
+			ret = 0; /* for forward compatibility */
+			break;
+		}
+		if (ret)
+			break;
+
+		entry_header = ((void *)entry_header + entry_header->length);
+	}
+	return ret;
+}
+
+
+int __init dmar_table_init(void)
+{
+
+	parse_dmar_table();
+	if (list_empty(&dmar_drhd_units)) {
+		printk(KERN_ERR PREFIX "No DMAR devices found\n");
+		return -ENODEV;
+	}
+	return 0;
+}
+
+/**
+ * early_dmar_detect - checks to see if the platform supports DMAR devices
+ */
+int __init early_dmar_detect(void)
+{
+	acpi_status status = AE_OK;
+
+	/* if we could find DMAR table, then there are DMAR devices */
+	status = acpi_get_table(ACPI_SIG_DMAR, 0,
+				(struct acpi_table_header **)&dmar_tbl);
+
+	if (ACPI_SUCCESS(status) && !dmar_tbl) {
+		printk (KERN_WARNING PREFIX "Unable to map DMAR\n");
+		status = AE_NOT_FOUND;
+	}
+
+	return (ACPI_SUCCESS(status) ? 1 : 0);
+}
Index: linux-2.6.22-rc3/include/acpi/actbl1.h
===================================================================
--- linux-2.6.22-rc3.orig/include/acpi/actbl1.h	2007-06-04 12:28:13.000000000 -0700
+++ linux-2.6.22-rc3/include/acpi/actbl1.h	2007-06-04 12:33:15.000000000 -0700
@@ -257,7 +257,8 @@
 struct acpi_table_dmar {
 	struct acpi_table_header header;	/* Common ACPI table header */
 	u8 width;		/* Host Address Width */
-	u8 reserved[11];
+	u8 flags;
+	u8 reserved[10];
 };
 
 /* DMAR subtable header */
@@ -265,8 +266,6 @@
 struct acpi_dmar_header {
 	u16 type;
 	u16 length;
-	u8 flags;
-	u8 reserved[3];
 };
 
 /* Values for subtable type in struct acpi_dmar_header */
@@ -274,13 +273,15 @@
 enum acpi_dmar_type {
 	ACPI_DMAR_TYPE_HARDWARE_UNIT = 0,
 	ACPI_DMAR_TYPE_RESERVED_MEMORY = 1,
-	ACPI_DMAR_TYPE_RESERVED = 2	/* 2 and greater are reserved */
+	ACPI_DMAR_TYPE_ATSR = 2,
+	ACPI_DMAR_TYPE_RESERVED = 3	/* 3 and greater are reserved */
 };
 
 struct acpi_dmar_device_scope {
 	u8 entry_type;
 	u8 length;
-	u8 segment;
+	u16 reserved;
+	u8 enumeration_id;
 	u8 bus;
 };
 
@@ -290,7 +291,14 @@
 	ACPI_DMAR_SCOPE_TYPE_NOT_USED = 0,
 	ACPI_DMAR_SCOPE_TYPE_ENDPOINT = 1,
 	ACPI_DMAR_SCOPE_TYPE_BRIDGE = 2,
-	ACPI_DMAR_SCOPE_TYPE_RESERVED = 3	/* 3 and greater are reserved */
+	ACPI_DMAR_SCOPE_TYPE_IOAPIC = 3,
+	ACPI_DMAR_SCOPE_TYPE_HPET = 4,
+	ACPI_DMAR_SCOPE_TYPE_RESERVED = 5	/* 5 and greater are reserved */
+};
+
+struct acpi_dmar_pci_path {
+	u8 dev;
+	u8 fn;
 };
 
 /*
@@ -301,6 +309,9 @@
 
 struct acpi_dmar_hardware_unit {
 	struct acpi_dmar_header header;
+	u8 flags;
+	u8 reserved;
+	u16 segment;
 	u64 address;		/* Register Base Address */
 };
 
@@ -312,7 +323,9 @@
 
 struct acpi_dmar_reserved_memory {
 	struct acpi_dmar_header header;
-	u64 address;		/* 4_k aligned base address */
+	u16 reserved;
+	u16 segment;
+	u64 base_address;		/* 4_k aligned base address */
 	u64 end_address;	/* 4_k aligned limit address */
 };
 
Index: linux-2.6.22-rc3/include/linux/dmar.h
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ linux-2.6.22-rc3/include/linux/dmar.h	2007-06-04 12:33:15.000000000 -0700
@@ -0,0 +1,52 @@
+/*
+ * Copyright (c) 2006, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ *
+ * Copyright (C) Ashok Raj <ashok.raj@intel.com>
+ * Copyright (C) Shaohua Li <shaohua.li@intel.com>
+ */
+
+#ifndef __DMAR_H__
+#define __DMAR_H__
+
+#include <linux/acpi.h>
+#include <linux/types.h>
+
+
+extern int dmar_table_init(void);
+extern int early_dmar_detect(void);
+
+extern struct list_head dmar_drhd_units;
+extern struct list_head dmar_rmrr_units;
+
+struct dmar_drhd_unit {
+	struct list_head list;		/* list of drhd units	*/
+	u64	reg_base_addr;		/* register base address*/
+	struct	pci_dev **devices; 	/* target device array	*/
+	int	devices_cnt;		/* target device count	*/
+	u8	ignored:1; 		/* ignore drhd		*/
+	u8	include_all:1;
+	struct intel_iommu *iommu;
+};
+
+struct dmar_rmrr_unit {
+	struct list_head list;		/* list of rmrr units	*/
+	u64	base_address;		/* reserved base address*/
+	u64	end_address;		/* reserved end address */
+	struct pci_dev **devices;	/* target devices */
+	int	devices_cnt;		/* target device count */
+};
+
+#endif /* __DMAR_H__ */

-- 

^ permalink raw reply	[flat|nested] 23+ messages in thread

end of thread, other threads:[~2007-06-06 19:17 UTC | newest]

Thread overview: 23+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2007-06-04 21:02 [Intel-IOMMU 00/10] Intel IOMMU support anil.s.keshavamurthy
2007-06-04 21:02 ` [Intel-IOMMU 01/10] DMAR detection and parsing logic anil.s.keshavamurthy
2007-06-04 22:54   ` Jeff Garzik
2007-06-04 22:58     ` Keshavamurthy, Anil S
2007-06-04 23:03   ` Jeff Garzik
2007-06-04 23:17     ` Keshavamurthy, Anil S
2007-06-04 21:02 ` [Intel-IOMMU 02/10] Library routines for handling pre-allocated pool of objects anil.s.keshavamurthy
2007-06-04 22:57   ` Jeff Garzik
2007-06-04 23:06     ` Keshavamurthy, Anil S
2007-06-04 23:43       ` Jeff Garzik
2007-06-04 23:51         ` Keshavamurthy, Anil S
2007-06-05 20:24           ` Keshavamurthy, Anil S
2007-06-05 20:30             ` Jeff Garzik
2007-06-05 20:48               ` Keshavamurthy, Anil S
2007-06-04 21:02 ` [Intel-IOMMU 03/10] PCI generic helper function anil.s.keshavamurthy
2007-06-04 21:02 ` [Intel-IOMMU 04/10] clflush_cache_range now takes size param anil.s.keshavamurthy
2007-06-04 21:02 ` [Intel-IOMMU 05/10] IOVA allocation and management routines anil.s.keshavamurthy
2007-06-04 21:02 ` [Intel-IOMMU 06/10] Intel IOMMU driver anil.s.keshavamurthy
2007-06-04 21:02 ` [Intel-IOMMU 07/10] Intel iommu cmdline option - forcedac anil.s.keshavamurthy
2007-06-04 21:02 ` [Intel-IOMMU 08/10] DMAR fault handling support anil.s.keshavamurthy
2007-06-04 21:02 ` [Intel-IOMMU 09/10] Iommu Gfx workaround anil.s.keshavamurthy
2007-06-04 21:02 ` [Intel-IOMMU 10/10] Iommu floppy workaround anil.s.keshavamurthy
  -- strict thread matches above, loose matches on Subject: below --
2007-06-06 18:56 [Intel-IOMMU 00/10] Intel IOMMU Support anil.s.keshavamurthy
2007-06-06 18:56 ` [Intel-IOMMU 01/10] DMAR detection and parsing logic anil.s.keshavamurthy

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox