* [PATCH v8 0/10] iommu/vt-d: Fix intel vt-d faults in kdump kernel
@ 2015-01-12 7:06 Li, Zhen-Hua
2015-01-12 7:06 ` [PATCH v8 01/10] iommu/vt-d: Update iommu_attach_domain() and its callers Li, Zhen-Hua
` (10 more replies)
0 siblings, 11 replies; 27+ messages in thread
From: Li, Zhen-Hua @ 2015-01-12 7:06 UTC (permalink / raw)
To: dwmw2, indou.takao, bhe, joro, vgoyal, dyoung
Cc: iommu, linux-kernel, linux-pci, kexec, alex.williamson, ddutile,
ishii.hironobu, bhelgaas, doug.hatch, jerry.hoemann, tom.vaden,
li.zhang6, lisa.mitchell, billsumnerlinux, zhen-hual, rwright
This patchset is an update of Bill Sumner's patchset; it implements a fix for the
following problem:
If a kernel boots with intel_iommu=on on a system that supports Intel VT-d and a
panic happens, the kdump kernel will boot with faults such as:
dmar: DRHD: handling fault status reg 102
dmar: DMAR:[DMA Read] Request device [01:00.0] fault addr fff80000
DMAR:[fault reason 01] Present bit in root entry is clear
dmar: DRHD: handling fault status reg 2
dmar: INTR-REMAP: Request device [[61:00.0] fault index 42
INTR-REMAP:[fault reason 34] Present field in the IRTE entry is clear
On some systems the interrupt-remapping fault can occur even when intel_iommu is
not set to on, because interrupt remapping is enabled whenever the system needs
x2apic.
The cause of the DMA fault is described in Bill's original version; the
INTR-REMAP fault has a similar cause. In short, re-initialization of the VT-d
driver in the kdump kernel causes in-flight DMA and interrupt requests to get
wrong responses.
To fix this problem, we modify the behavior of the Intel VT-d driver in the
crashdump kernel:
For DMA Remapping:
1. Accept the VT-d hardware in an active state.
2. Do not disable and re-enable translation; keep it enabled.
3. Use the old root entry table; do not rewrite the RTA register (a minimal
   sketch of this follows the list).
4. Allocate new context entry tables and page tables, copying the data from
   the old ones used by the old kernel.
5. Use portions of the IOVA address ranges for the device drivers in the
   crashdump kernel that differ from the IOVA ranges that were in use at the
   time of the panic.
6. After a device driver is loaded and issues its first dma_map call, free the
   dmar_domain structure for this device and generate a new one, so that the
   device is assigned a new, empty page table.
7. When a new context entry table is generated, also save its address into
   the old root entry table.
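As a rough illustration of items 1-3 above: the kdump kernel reads the RTA
register that the hardware is still using, maps the old kernel's root entry
table, and copies it into a local cache -- without ever writing the register
back, so in-flight DMA keeps using the old translations. The sketch below is
illustrative only: the function name is made up, locking and error paths are
omitted, and iommu->root_entry is assumed to be already allocated. The real
implementation is in patches 5 and 7 of this series.

/* Sketch only: adopt the panicked kernel's root entry table. */
static int sketch_adopt_old_root_table(struct intel_iommu *iommu)
{
	u64 rtaddr = dmar_readq(iommu->reg + DMAR_RTADDR_REG);
	unsigned long old_rta_phys = rtaddr & VTD_PAGE_MASK;
	void __iomem *old_rta_virt;

	if (!old_rta_phys)
		return -ENODEV;		/* translation was not left active */

	/* Map the old table; DMAR_RTADDR_REG is never written back. */
	old_rta_virt = ioremap_cache(old_rta_phys, VTD_PAGE_SIZE);
	if (!old_rta_virt)
		return -ENOMEM;

	/* Cache it locally; updates are copied back later (see patch 5). */
	memcpy(iommu->root_entry, old_rta_virt, VTD_PAGE_SIZE);
	__iommu_flush_cache(iommu, iommu->root_entry, VTD_PAGE_SIZE);

	return 0;
}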
For Interrupt Remapping:
1. Accept the VT-d hardware in an active state.
2. Do not disable and re-enable interrupt remapping; keep it enabled.
3. Use the old interrupt remapping table; do not rewrite the IRTA register.
4. When an IOAPIC entry is set up, the interrupt remapping table changes, and
   the updated data is written back to the old interrupt remapping table
   (a minimal sketch follows this list).
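A rough illustration of item 4 above: whenever the kdump kernel modifies an
IRTE (for example while setting up an IOAPIC entry), the change must also be
copied into the old kernel's interrupt remapping table, because the hardware
still points at that table through the IRTA register. The sketch below is
illustrative only: the function name is made up and 'irt_old_virt' stands for
a mapping of the old table that is assumed to already exist. The real
implementation is in patches 9 and 10 of this series.

/* Sketch only: mirror one updated IRTE into the old kernel's table. */
static void sketch_update_old_irte(struct intel_iommu *iommu,
				   void __iomem *irt_old_virt,
				   struct irte *irte, int index)
{
	void __iomem *dst = irt_old_virt + index * sizeof(struct irte);

	memcpy(dst, irte, sizeof(struct irte));
	__iommu_flush_cache(iommu, dst, sizeof(struct irte));
}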
Advantages of this approach:
1. All manipulation of the IO-device is done by the Linux device-driver
for that device.
2. This approach behaves in a manner very similar to operation without an
active iommu.
3. Any activity between the IO-device and its RMRR areas is handled by the
device-driver in the same manner as during a non-kdump boot.
4. If an IO-device has no driver in the kdump kernel, it is simply left alone.
This supports the practice of creating a special kdump kernel without
drivers for any devices that are not required for taking a crashdump.
5. Minimal code changes to the existing mainline Intel VT-d code.
Summary of changes in this patch set:
1. Added helper functions for the root entry table in intel-iommu.c.
2. Added new members to struct root_entry and struct irte.
3. Functions to load the old root entry table into iommu->root_entry from the
   old kernel's memory.
4. Functions to allocate new context entry tables and page tables and copy the
   data from the old ones into them (a sketch of the underlying
   copy-from-old-memory pattern follows this list).
5. Functions to enable DMA-remapping support in the kdump kernel.
6. Functions to load old IRTE data from the old kernel into the kdump kernel.
7. Other code changes supporting the behaviors listed above.
8. In the new functions, physical addresses are handled as "unsigned long"
   values, not pointers.
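The copy-from-old-memory pattern referenced in items 3, 4 and 8 treats an
old-kernel physical address as a plain "unsigned long": if the page is
ordinary RAM it is read through the direct mapping, otherwise it is mapped
with ioremap_cache. The sketch below is a simplified, illustrative version
(the function name is made up and the deferred-iounmap bookkeeping is left
out); the full version is __iommu_load_from_oldmem in patch 4.

/* Sketch: read 'size' bytes from old-kernel physical address 'from'. */
static int sketch_load_from_oldmem(void *to, unsigned long from,
				   unsigned long size)
{
	unsigned long pfn = from >> VTD_PAGE_SHIFT;
	unsigned long offset = from & ~VTD_PAGE_MASK;

	if (page_is_ram(pfn)) {
		memcpy(to, pfn_to_kaddr(pfn) + offset, size);
	} else {
		void __iomem *virt = ioremap_cache(from, size);

		if (!virt)
			return -ENOMEM;
		memcpy(to, virt, size);
		iounmap(virt);	/* the real code defers this; see patch 4 */
	}

	return size;
}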
Original version by Bill Sumner:
https://lkml.org/lkml/2014/1/10/518
https://lkml.org/lkml/2014/4/15/716
https://lkml.org/lkml/2014/4/24/836
Zhenhua's updates:
https://lkml.org/lkml/2014/10/21/134
https://lkml.org/lkml/2014/12/15/121
https://lkml.org/lkml/2014/12/22/53
https://lkml.org/lkml/2015/1/6/1166
Changelog[v8]:
1. Added a missing __iommu_flush_cache call in copy_page_table().
Changelog[v7]:
1. Use __iommu_flush_cache to flush the data to hardware.
Changelog[v6]:
1. Use "unsigned long" as type of physical address.
2. Use new function unmap_device_dma to unmap the old dma.
3. Some small incorrect bits order for aw shift.
Changelog[v5]:
1. Do not disable and re-enable translation and interrupt remapping.
2. Use the old root entry table.
3. Use the old interrupt remapping table.
4. New functions to copy data from the old kernel and to save data back to old
   kernel memory.
5. New functions to save the updated root entry table and IRTE table.
6. Use intel_unmap to unmap the old DMA.
7. Allocate new pages while a driver is being loaded.
Changelog[v4]:
1. Cut the patches that move some defines and functions to new files.
2. Reduced the number of patches to five, making the series easier to read.
3. Changed function names to be consistent with the current context get/set
   functions.
4. Added a change to function __iommu_attach_domain.
Changelog[v3]:
1. Commented-out "#define DEBUG 1" to eliminate debug messages.
2. Updated the comments about changes in each version.
3. Fixed: added one line to the Copy-Translations patch to initialize the iovad
   struct, as recommended by Baoquan He [bhe@redhat.com]:
init_iova_domain(&domain->iovad, DMA_32BIT_PFN);
Changelog[v2]:
The following series implements a fix for:
A long-standing kdump problem involving DMA. That is, when a kernel panics and
boots into the kdump kernel, DMA started by the panicked kernel is not stopped
before the kdump kernel boots, and the kdump kernel disables the IOMMU while
this DMA continues. This causes the IOMMU to stop translating the DMA addresses
as IOVAs and to begin treating them as physical memory addresses -- which
causes the DMA to either:
(1) generate DMAR errors,
(2) generate PCI SERR errors, or
(3) transfer data to or from incorrect areas of memory.
Often this causes the dump to fail.
Changelog[v1]:
The original version.
Changed in this version:
1. Do not disable and re-enable translation and interrupt remapping.
2. Use the old root entry table.
3. Use the old interrupt remapping table.
4. Use "unsigned long" for physical addresses.
5. Use intel_unmap to unmap the old DMA.
Baoquan He <bhe@redhat.com> helped test this patchset.
Takao Indoh <indou.takao@jp.fujitsu.com> provided valuable suggestions.
iommu/vt-d: Update iommu_attach_domain() and its callers
iommu/vt-d: Items required for kdump
iommu/vt-d: Add domain-id functions
iommu/vt-d: functions to copy data from old mem
iommu/vt-d: Add functions to load and save old re
iommu/vt-d: datatypes and functions used for kdump
iommu/vt-d: enable kdump support in iommu module
iommu/vt-d: assign new page table for dma_map
iommu/vt-d: Copy functions for irte
iommu/vt-d: Use old irte in kdump kernel
Signed-off-by: Bill Sumner <billsumnerlinux@gmail.com>
Signed-off-by: Li, Zhen-Hua <zhen-hual@hp.com>
Signed-off-by: Takao Indoh <indou.takao@jp.fujitsu.com>
Tested-by: Baoquan He <bhe@redhat.com>
---
drivers/iommu/intel-iommu.c | 1054 +++++++++++++++++++++++++++++++++--
drivers/iommu/intel_irq_remapping.c | 104 +++-
include/linux/intel-iommu.h | 18 +
3 files changed, 1134 insertions(+), 42 deletions(-)
--
2.0.0-rc0
* [PATCH v8 01/10] iommu/vt-d: Update iommu_attach_domain() and its callers
2015-01-12 7:06 [PATCH v8 0/10] iommu/vt-d: Fix intel vt-d faults in kdump kernel Li, Zhen-Hua
@ 2015-01-12 7:06 ` Li, Zhen-Hua
2015-01-12 15:18 ` Joerg Roedel
2015-01-12 7:06 ` [PATCH v8 02/10] iommu/vt-d: Items required for kdump Li, Zhen-Hua
` (9 subsequent siblings)
10 siblings, 1 reply; 27+ messages in thread
From: Li, Zhen-Hua @ 2015-01-12 7:06 UTC (permalink / raw)
To: dwmw2, indou.takao, bhe, joro, vgoyal, dyoung
Cc: iommu, linux-kernel, linux-pci, kexec, alex.williamson, ddutile,
ishii.hironobu, bhelgaas, doug.hatch, jerry.hoemann, tom.vaden,
li.zhang6, lisa.mitchell, billsumnerlinux, zhen-hual, rwright
Allow specification of the domain-id for a new domain.
This patch only adds the 'did' parameter to iommu_attach_domain()
and modifies all of its callers to pass the default value of -1,
which means "no did specified, allocate a new one".
There is no functional change from the current behaviour -- this just
enables a functional change to be made in a later patch.
Bill Sumner:
Original version.
Li, Zhenhua:
Minor change: added the change to function __iommu_attach_domain.
Signed-off-by: Bill Sumner <billsumnerlinux@gmail.com>
Signed-off-by: Li, Zhen-Hua <zhen-hual@hp.com>
---
drivers/iommu/intel-iommu.c | 34 ++++++++++++++++++++--------------
1 file changed, 20 insertions(+), 14 deletions(-)
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 40dfbc0..8d5c400 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -1534,31 +1534,36 @@ static struct dmar_domain *alloc_domain(int flags)
}
static int __iommu_attach_domain(struct dmar_domain *domain,
- struct intel_iommu *iommu)
+ struct intel_iommu *iommu,
+ int domain_number)
{
int num;
unsigned long ndomains;
ndomains = cap_ndoms(iommu->cap);
- num = find_first_zero_bit(iommu->domain_ids, ndomains);
- if (num < ndomains) {
- set_bit(num, iommu->domain_ids);
- iommu->domains[num] = domain;
- } else {
- num = -ENOSPC;
- }
+ if (domain_number < 0) {
+ num = find_first_zero_bit(iommu->domain_ids, ndomains);
+ if (num < ndomains) {
+ set_bit(num, iommu->domain_ids);
+ iommu->domains[num] = domain;
+ } else {
+ num = -ENOSPC;
+ }
+ } else
+ num = domain_number;
return num;
}
static int iommu_attach_domain(struct dmar_domain *domain,
- struct intel_iommu *iommu)
+ struct intel_iommu *iommu,
+ int domain_number)
{
int num;
unsigned long flags;
spin_lock_irqsave(&iommu->lock, flags);
- num = __iommu_attach_domain(domain, iommu);
+ num = __iommu_attach_domain(domain, iommu, domain_number);
spin_unlock_irqrestore(&iommu->lock, flags);
if (num < 0)
pr_err("IOMMU: no free domain ids\n");
@@ -1577,7 +1582,7 @@ static int iommu_attach_vm_domain(struct dmar_domain *domain,
if (iommu->domains[num] == domain)
return num;
- return __iommu_attach_domain(domain, iommu);
+ return __iommu_attach_domain(domain, iommu, -1);
}
static void iommu_detach_domain(struct dmar_domain *domain,
@@ -2231,6 +2236,7 @@ static struct dmar_domain *get_domain_for_dev(struct device *dev, int gaw)
u16 dma_alias;
unsigned long flags;
u8 bus, devfn;
+ int did = -1; /* Default to "no domain_id supplied" */
domain = find_domain(dev);
if (domain)
@@ -2264,7 +2270,7 @@ static struct dmar_domain *get_domain_for_dev(struct device *dev, int gaw)
domain = alloc_domain(0);
if (!domain)
return NULL;
- domain->id = iommu_attach_domain(domain, iommu);
+ domain->id = iommu_attach_domain(domain, iommu, did);
if (domain->id < 0) {
free_domain_mem(domain);
return NULL;
@@ -2442,7 +2448,7 @@ static int __init si_domain_init(int hw)
return -EFAULT;
for_each_active_iommu(iommu, drhd) {
- ret = iommu_attach_domain(si_domain, iommu);
+ ret = iommu_attach_domain(si_domain, iommu, -1);
if (ret < 0) {
domain_exit(si_domain);
return -EFAULT;
@@ -3866,7 +3872,7 @@ static int intel_iommu_add(struct dmar_drhd_unit *dmaru)
iommu_enable_translation(iommu);
if (si_domain) {
- ret = iommu_attach_domain(si_domain, iommu);
+ ret = iommu_attach_domain(si_domain, iommu, -1);
if (ret < 0 || si_domain->id != ret)
goto disable_iommu;
domain_attach_iommu(si_domain, iommu);
--
2.0.0-rc0
* [PATCH v8 02/10] iommu/vt-d: Items required for kdump
2015-01-12 7:06 [PATCH v8 0/10] iommu/vt-d: Fix intel vt-d faults in kdump kernel Li, Zhen-Hua
2015-01-12 7:06 ` [PATCH v8 01/10] iommu/vt-d: Update iommu_attach_domain() and its callers Li, Zhen-Hua
@ 2015-01-12 7:06 ` Li, Zhen-Hua
2015-01-12 15:22 ` Joerg Roedel
2015-01-12 7:06 ` [PATCH v8 03/10] iommu/vt-d: Add domain-id functions Li, Zhen-Hua
` (8 subsequent siblings)
10 siblings, 1 reply; 27+ messages in thread
From: Li, Zhen-Hua @ 2015-01-12 7:06 UTC (permalink / raw)
To: dwmw2, indou.takao, bhe, joro, vgoyal, dyoung
Cc: iommu, linux-kernel, linux-pci, kexec, alex.williamson, ddutile,
ishii.hironobu, bhelgaas, doug.hatch, jerry.hoemann, tom.vaden,
li.zhang6, lisa.mitchell, billsumnerlinux, zhen-hual, rwright
Add the structure type domain_values_entry used for kdump, and add the context
entry helper functions needed for kdump.
Bill Sumner:
Original version.
Li, Zhenhua:
Changed the names of the new functions to make them consistent with the
current context get/set functions.
Signed-off-by: Bill Sumner <billsumnerlinux@gmail.com>
Signed-off-by: Li, Zhen-Hua <zhen-hual@hp.com>
---
drivers/iommu/intel-iommu.c | 70 +++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 70 insertions(+)
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 8d5c400..a71de3f 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -40,6 +40,7 @@
#include <linux/pci-ats.h>
#include <linux/memblock.h>
#include <linux/dma-contiguous.h>
+#include <linux/crash_dump.h>
#include <asm/irq_remapping.h>
#include <asm/cacheflush.h>
#include <asm/iommu.h>
@@ -208,6 +209,12 @@ get_context_addr_from_root(struct root_entry *root)
NULL);
}
+static inline unsigned long
+get_context_phys_from_root(struct root_entry *root)
+{
+ return root_present(root) ? (root->val & VTD_PAGE_MASK) : 0;
+}
+
/*
* low 64 bits:
* 0: present
@@ -228,6 +235,32 @@ static inline bool context_present(struct context_entry *context)
{
return (context->lo & 1);
}
+
+static inline int context_fault_enable(struct context_entry *c)
+{
+ return((c->lo >> 1) & 0x1);
+}
+
+static inline int context_translation_type(struct context_entry *c)
+{
+ return((c->lo >> 2) & 0x3);
+}
+
+static inline u64 context_address_root(struct context_entry *c)
+{
+ return((c->lo >> VTD_PAGE_SHIFT));
+}
+
+static inline int context_address_width(struct context_entry *c)
+{
+ return((c->hi >> 0) & 0x7);
+}
+
+static inline int context_domain_id(struct context_entry *c)
+{
+ return((c->hi >> 8) & 0xffff);
+}
+
static inline void context_set_present(struct context_entry *context)
{
context->lo |= 1;
@@ -313,6 +346,43 @@ static inline int first_pte_in_page(struct dma_pte *pte)
return !((unsigned long)pte & ~VTD_PAGE_MASK);
}
+
+#ifdef CONFIG_CRASH_DUMP
+
+/*
+ * Fix Crashdump failure caused by leftover DMA through a hardware IOMMU
+ *
+ * Fixes the crashdump kernel to deal with an active iommu and legacy
+ * DMA from the (old) panicked kernel in a manner similar to how legacy
+ * DMA is handled when no hardware iommu was in use by the old kernel --
+ * allow the legacy DMA to continue into its current buffers.
+ *
+ * In the crashdump kernel, this code:
+ * 1. skips disabling the IOMMU's translating of IO Virtual Addresses (IOVA).
+ * 2. Do not re-enable IOMMU's translating.
+ * 3. In kdump kernel, use the old root entry table.
+ * 4. Leaves the current translations in-place so that legacy DMA will
+ * continue to use its current buffers.
+ * 5. Allocates to the device drivers in the crashdump kernel
+ * portions of the iova address ranges that are different
+ * from the iova address ranges that were being used by the old kernel
+ * at the time of the panic.
+ *
+ */
+
+struct domain_values_entry {
+ struct list_head link; /* link entries into a list */
+ struct iova_domain iovad; /* iova's that belong to this domain */
+ struct dma_pte *pgd; /* virtual address */
+ int did; /* domain id */
+ int gaw; /* max guest address width */
+ int iommu_superpage; /* Level of superpages supported:
+ 0 == 4KiB (no superpages), 1 == 2MiB,
+ 2 == 1GiB, 3 == 512GiB, 4 == 1TiB */
+};
+
+#endif /* CONFIG_CRASH_DUMP */
+
/*
* This domain is a statically identity mapping domain.
* 1. This domain creats a static 1:1 mapping to all usable memory.
--
2.0.0-rc0
* [PATCH v8 03/10] iommu/vt-d: Add domain-id functions
2015-01-12 7:06 [PATCH v8 0/10] iommu/vt-d: Fix intel vt-d faults in kdump kernel Li, Zhen-Hua
2015-01-12 7:06 ` [PATCH v8 01/10] iommu/vt-d: Update iommu_attach_domain() and its callers Li, Zhen-Hua
2015-01-12 7:06 ` [PATCH v8 02/10] iommu/vt-d: Items required for kdump Li, Zhen-Hua
@ 2015-01-12 7:06 ` Li, Zhen-Hua
2015-01-12 7:06 ` [PATCH v8 04/10] iommu/vt-d: functions to copy data from old mem Li, Zhen-Hua
` (7 subsequent siblings)
10 siblings, 0 replies; 27+ messages in thread
From: Li, Zhen-Hua @ 2015-01-12 7:06 UTC (permalink / raw)
To: dwmw2, indou.takao, bhe, joro, vgoyal, dyoung
Cc: iommu, linux-kernel, linux-pci, kexec, alex.williamson, ddutile,
ishii.hironobu, bhelgaas, doug.hatch, jerry.hoemann, tom.vaden,
li.zhang6, lisa.mitchell, billsumnerlinux, zhen-hual, rwright
Interfaces for when a new domain in the crashdump kernel needs some
values from the panicked kernel's context entries.
Signed-off-by: Bill Sumner <billsumnerlinux@gmail.com>
Signed-off-by: Li, Zhen-Hua <zhen-hual@hp.com>
---
drivers/iommu/intel-iommu.c | 62 +++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 62 insertions(+)
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index a71de3f..c594b2c 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -381,6 +381,13 @@ struct domain_values_entry {
2 == 1GiB, 3 == 512GiB, 4 == 1TiB */
};
+static struct domain_values_entry *intel_iommu_did_to_domain_values_entry(
+ int did, struct intel_iommu *iommu);
+
+static int intel_iommu_get_dids_from_old_kernel(struct intel_iommu *iommu);
+
+static int device_to_domain_id(struct intel_iommu *iommu, u8 bus, u8 devfn);
+
#endif /* CONFIG_CRASH_DUMP */
/*
@@ -4828,3 +4835,58 @@ static void __init check_tylersburg_isoch(void)
printk(KERN_WARNING "DMAR: Recommended TLB entries for ISOCH unit is 16; your BIOS set %d\n",
vtisochctrl);
}
+
+#ifdef CONFIG_CRASH_DUMP
+
+/*
+ * Interfaces for when a new domain in the crashdump kernel needs some
+ * values from the panicked kernel's context entries
+ *
+ */
+static struct domain_values_entry *intel_iommu_did_to_domain_values_entry(
+ int did, struct intel_iommu *iommu)
+{
+ struct domain_values_entry *dve; /* iterator */
+
+ list_for_each_entry(dve, &domain_values_list[iommu->seq_id], link)
+ if (dve->did == did)
+ return dve;
+ return NULL;
+}
+
+/* Mark domain-id's from old kernel as in-use on this iommu so that a new
+ * domain-id is allocated in the case where there is a device in the new kernel
+ * that was not in the old kernel -- and therefore a new domain-id is needed.
+ */
+static int intel_iommu_get_dids_from_old_kernel(struct intel_iommu *iommu)
+{
+ struct domain_values_entry *dve; /* iterator */
+
+ pr_info("IOMMU:%d Domain ids from panicked kernel:\n", iommu->seq_id);
+
+ list_for_each_entry(dve, &domain_values_list[iommu->seq_id], link) {
+ set_bit(dve->did, iommu->domain_ids);
+ pr_info("DID did:%d(0x%4.4x)\n", dve->did, dve->did);
+ }
+
+ pr_info("----------------------------------------\n");
+ return 0;
+}
+
+static int device_to_domain_id(struct intel_iommu *iommu, u8 bus, u8 devfn)
+{
+ int did = -1; /* domain-id returned */
+ struct root_entry *root;
+ struct context_entry *context;
+ unsigned long flags;
+
+ spin_lock_irqsave(&iommu->lock, flags);
+ root = &iommu->root_entry[bus];
+ context = get_context_addr_from_root(root);
+ if (context && context_present(context+devfn))
+ did = context_domain_id(context+devfn);
+ spin_unlock_irqrestore(&iommu->lock, flags);
+ return did;
+}
+
+#endif /* CONFIG_CRASH_DUMP */
--
2.0.0-rc0
* [PATCH v8 04/10] iommu/vt-d: functions to copy data from old mem
2015-01-12 7:06 [PATCH v8 0/10] iommu/vt-d: Fix intel vt-d faults in kdump kernel Li, Zhen-Hua
` (2 preceding siblings ...)
2015-01-12 7:06 ` [PATCH v8 03/10] iommu/vt-d: Add domain-id functions Li, Zhen-Hua
@ 2015-01-12 7:06 ` Li, Zhen-Hua
2015-01-12 7:06 ` [PATCH v8 05/10] iommu/vt-d: Add functions to load and save old re Li, Zhen-Hua
` (6 subsequent siblings)
10 siblings, 0 replies; 27+ messages in thread
From: Li, Zhen-Hua @ 2015-01-12 7:06 UTC (permalink / raw)
To: dwmw2, indou.takao, bhe, joro, vgoyal, dyoung
Cc: iommu, linux-kernel, linux-pci, kexec, alex.williamson, ddutile,
ishii.hironobu, bhelgaas, doug.hatch, jerry.hoemann, tom.vaden,
li.zhang6, lisa.mitchell, billsumnerlinux, zhen-hual, rwright
Add functions to copy data from the old kernel.
These functions are used to copy context tables and page tables.
To avoid calling iounmap between spin_lock_irqsave and spin_unlock_irqrestore,
keep the mapped pointers on a list and call iounmap on them later, outside the
locked region.
Li, Zhen-hua:
The functions and logic.
Takao Indoh:
Check whether the pfn is RAM:
if (page_is_ram(pfn))
Signed-off-by: Li, Zhen-Hua <zhen-hual@hp.com>
Signed-off-by: Takao Indoh <indou.takao@jp.fujitsu.com>
---
drivers/iommu/intel-iommu.c | 97 +++++++++++++++++++++++++++++++++++++++++++++
include/linux/intel-iommu.h | 9 +++++
2 files changed, 106 insertions(+)
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index c594b2c..2335831 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -388,6 +388,13 @@ static int intel_iommu_get_dids_from_old_kernel(struct intel_iommu *iommu);
static int device_to_domain_id(struct intel_iommu *iommu, u8 bus, u8 devfn);
+struct iommu_remapped_entry {
+ struct list_head list;
+ void __iomem *mem;
+};
+static LIST_HEAD(__iommu_remapped_mem);
+static DEFINE_MUTEX(__iommu_mem_list_lock);
+
#endif /* CONFIG_CRASH_DUMP */
/*
@@ -4839,6 +4846,96 @@ static void __init check_tylersburg_isoch(void)
#ifdef CONFIG_CRASH_DUMP
/*
+ * Copy memory from a physically-addressed area into a virtually-addressed area
+ */
+int __iommu_load_from_oldmem(void *to, unsigned long from, unsigned long size)
+{
+ unsigned long pfn; /* Page Frame Number */
+ size_t csize = (size_t)size; /* Num(bytes to copy) */
+ unsigned long offset; /* Lower 12 bits of to */
+ void __iomem *virt_mem;
+ struct iommu_remapped_entry *mapped;
+
+ pfn = from >> VTD_PAGE_SHIFT;
+ offset = from & (~VTD_PAGE_MASK);
+
+ if (page_is_ram(pfn)) {
+ memcpy(to, pfn_to_kaddr(pfn) + offset, csize);
+ } else{
+
+ mapped = kzalloc(sizeof(struct iommu_remapped_entry),
+ GFP_KERNEL);
+ if (!mapped)
+ return -ENOMEM;
+
+ virt_mem = ioremap_cache((unsigned long)from, size);
+ if (!virt_mem) {
+ kfree(mapped);
+ return -ENOMEM;
+ }
+ memcpy(to, virt_mem, size);
+
+ mutex_lock(&__iommu_mem_list_lock);
+ mapped->mem = virt_mem;
+ list_add_tail(&mapped->list, &__iommu_remapped_mem);
+ mutex_unlock(&__iommu_mem_list_lock);
+ }
+ return size;
+}
+
+/*
+ * Copy memory from a virtually-addressed area into a physically-addressed area
+ */
+int __iommu_save_to_oldmem(unsigned long to, void *from, unsigned long size)
+{
+ unsigned long pfn; /* Page Frame Number */
+ size_t csize = (size_t)size; /* Num(bytes to copy) */
+ unsigned long offset; /* Lower 12 bits of to */
+ void __iomem *virt_mem;
+ struct iommu_remapped_entry *mapped;
+
+ pfn = to >> VTD_PAGE_SHIFT;
+ offset = to & (~VTD_PAGE_MASK);
+
+ if (page_is_ram(pfn)) {
+ memcpy(pfn_to_kaddr(pfn) + offset, from, csize);
+ } else{
+ mapped = kzalloc(sizeof(struct iommu_remapped_entry),
+ GFP_KERNEL);
+ if (!mapped)
+ return -ENOMEM;
+
+ virt_mem = ioremap_cache((unsigned long)to, size);
+ if (!virt_mem) {
+ kfree(mapped);
+ return -ENOMEM;
+ }
+ memcpy(virt_mem, from, size);
+ mutex_lock(&__iommu_mem_list_lock);
+ mapped->mem = virt_mem;
+ list_add_tail(&mapped->list, &__iommu_remapped_mem);
+ mutex_unlock(&__iommu_mem_list_lock);
+ }
+ return size;
+}
+
+/*
+ * Free the mapped memory for ioremap;
+ */
+int __iommu_free_mapped_mem(void)
+{
+ struct iommu_remapped_entry *mem_entry, *tmp;
+
+ mutex_lock(&__iommu_mem_list_lock);
+ list_for_each_entry_safe(mem_entry, tmp, &__iommu_remapped_mem, list) {
+ iounmap(mem_entry->mem);
+ list_del(&mem_entry->list);
+ kfree(mem_entry);
+ }
+ mutex_unlock(&__iommu_mem_list_lock);
+ return 0;
+}
+/*
* Interfaces for when a new domain in the crashdump kernel needs some
* values from the panicked kernel's context entries
*
diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
index a65208a..8ffa523 100644
--- a/include/linux/intel-iommu.h
+++ b/include/linux/intel-iommu.h
@@ -26,6 +26,7 @@
#include <linux/iova.h>
#include <linux/io.h>
#include <linux/dma_remapping.h>
+#include <linux/crash_dump.h>
#include <asm/cacheflush.h>
#include <asm/iommu.h>
@@ -368,4 +369,12 @@ extern int dmar_ir_support(void);
extern const struct attribute_group *intel_iommu_groups[];
+#ifdef CONFIG_CRASH_DUMP
+extern int __iommu_load_from_oldmem(void *to, unsigned long from,
+ unsigned long size);
+extern int __iommu_save_to_oldmem(unsigned long to, void *from,
+ unsigned long size);
+extern int __iommu_free_mapped_mem(void);
+#endif /* CONFIG_CRASH_DUMP */
+
#endif
--
2.0.0-rc0
* [PATCH v8 05/10] iommu/vt-d: Add functions to load and save old re
2015-01-12 7:06 [PATCH v8 0/10] iommu/vt-d: Fix intel vt-d faults in kdump kernel Li, Zhen-Hua
` (3 preceding siblings ...)
2015-01-12 7:06 ` [PATCH v8 04/10] iommu/vt-d: functions to copy data from old mem Li, Zhen-Hua
@ 2015-01-12 7:06 ` Li, Zhen-Hua
2015-01-12 7:06 ` [PATCH v8 06/10] iommu/vt-d: datatypes and functions used for kdump Li, Zhen-Hua
` (5 subsequent siblings)
10 siblings, 0 replies; 27+ messages in thread
From: Li, Zhen-Hua @ 2015-01-12 7:06 UTC (permalink / raw)
To: dwmw2, indou.takao, bhe, joro, vgoyal, dyoung
Cc: iommu, linux-kernel, linux-pci, kexec, alex.williamson, ddutile,
ishii.hironobu, bhelgaas, doug.hatch, jerry.hoemann, tom.vaden,
li.zhang6, lisa.mitchell, billsumnerlinux, zhen-hual, rwright
Add functions to load the root entry table from the old kernel and to save the
updated root entry table.
Add two members to struct intel_iommu to store the old kernel's RTA and its
mapped virtual address.
The kdump kernel uses the old RTA; iommu->root_entry serves only as a cache,
so its physical address is not written to the RTA register. Whenever its data
changes, the new data is saved back to the old root entry table.
Li, Zhen-hua:
The functions and logic.
Takao Indoh:
Added __iommu_flush_cache.
Signed-off-by: Li, Zhen-Hua <zhen-hual@hp.com>
Signed-off-by: Takao Indoh <indou.takao@jp.fujitsu.com>
---
drivers/iommu/intel-iommu.c | 53 +++++++++++++++++++++++++++++++++++++++++++++
include/linux/intel-iommu.h | 5 +++++
2 files changed, 58 insertions(+)
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 2335831..5f11f43 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -388,6 +388,10 @@ static int intel_iommu_get_dids_from_old_kernel(struct intel_iommu *iommu);
static int device_to_domain_id(struct intel_iommu *iommu, u8 bus, u8 devfn);
+static void __iommu_load_old_root_entry(struct intel_iommu *iommu);
+
+static void __iommu_update_old_root_entry(struct intel_iommu *iommu, int index);
+
struct iommu_remapped_entry {
struct list_head list;
void __iomem *mem;
@@ -4986,4 +4990,53 @@ static int device_to_domain_id(struct intel_iommu *iommu, u8 bus, u8 devfn)
return did;
}
+/*
+ * Load the old root entry table to new root entry table.
+ */
+static void __iommu_load_old_root_entry(struct intel_iommu *iommu)
+{
+ if ((!iommu)
+ || (!iommu->root_entry)
+ || (!iommu->root_entry_old_virt)
+ || (!iommu->root_entry_old_phys))
+ return;
+ memcpy(iommu->root_entry, iommu->root_entry_old_virt, PAGE_SIZE);
+
+ __iommu_flush_cache(iommu, iommu->root_entry, PAGE_SIZE);
+}
+
+/*
+ * When the data in new root entry table is changed, this function
+ * must be called to save the updated data to old root entry table.
+ */
+static void __iommu_update_old_root_entry(struct intel_iommu *iommu, int index)
+{
+ u8 start;
+ unsigned long size;
+ void __iomem *to;
+ void *from;
+
+ if ((!iommu)
+ || (!iommu->root_entry)
+ || (!iommu->root_entry_old_virt)
+ || (!iommu->root_entry_old_phys))
+ return;
+
+ if (index < -1 || index >= ROOT_ENTRY_NR)
+ return;
+
+ if (index == -1) {
+ start = 0;
+ size = ROOT_ENTRY_NR * sizeof(struct root_entry);
+ } else {
+ start = index * sizeof(struct root_entry);
+ size = sizeof(struct root_entry);
+ }
+ to = iommu->root_entry_old_virt;
+ from = iommu->root_entry;
+ memcpy(to + start, from + start, size);
+
+ __iommu_flush_cache(iommu, to + start, size);
+}
+
#endif /* CONFIG_CRASH_DUMP */
diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
index 8ffa523..8e29b97 100644
--- a/include/linux/intel-iommu.h
+++ b/include/linux/intel-iommu.h
@@ -329,6 +329,11 @@ struct intel_iommu {
spinlock_t lock; /* protect context, domain ids */
struct root_entry *root_entry; /* virtual address */
+#ifdef CONFIG_CRASH_DUMP
+ void __iomem *root_entry_old_virt; /* mapped from old root entry */
+ unsigned long root_entry_old_phys; /* root entry in old kernel */
+#endif
+
struct iommu_flush flush;
#endif
struct q_inval *qi; /* Queued invalidation info */
--
2.0.0-rc0
* [PATCH v8 06/10] iommu/vt-d: datatypes and functions used for kdump
2015-01-12 7:06 [PATCH v8 0/10] iommu/vt-d: Fix intel vt-d faults in kdump kernel Li, Zhen-Hua
` (4 preceding siblings ...)
2015-01-12 7:06 ` [PATCH v8 05/10] iommu/vt-d: Add functions to load and save old re Li, Zhen-Hua
@ 2015-01-12 7:06 ` Li, Zhen-Hua
2015-01-15 3:28 ` Baoquan He
2015-01-12 7:06 ` [PATCH v8 07/10] iommu/vt-d: enable kdump support in iommu module Li, Zhen-Hua
` (4 subsequent siblings)
10 siblings, 1 reply; 27+ messages in thread
From: Li, Zhen-Hua @ 2015-01-12 7:06 UTC (permalink / raw)
To: dwmw2, indou.takao, bhe, joro, vgoyal, dyoung
Cc: iommu, linux-kernel, linux-pci, kexec, alex.williamson, ddutile,
ishii.hironobu, bhelgaas, doug.hatch, jerry.hoemann, tom.vaden,
li.zhang6, lisa.mitchell, billsumnerlinux, zhen-hual, rwright
Populate intel-iommu.c with support functions that copy IOMMU translation
tables from the panicked kernel into the kdump kernel in the event of a crash.
Functions:
Allocate a new context table and copy the old context table into it.
Allocate new page tables and copy the old page tables into them.
Bill Sumner:
Original version; creation of the data types and functions.
Li, Zhenhua:
Minor changes:
Updated the usage of context_get_* and context_put_*; replaced them with
context_* and context_set_*.
Updated the name of the function that copies the root entry table.
Used the new functions to copy old context entry tables and page tables.
Used "unsigned long" for physical addresses.
Fixed an incorrect aw_shift[4] value and a few comments in copy_context_entry().
Signed-off-by: Bill Sumner <billsumnerlinux@gmail.com>
Signed-off-by: Li, Zhen-Hua <zhen-hual@hp.com>
---
drivers/iommu/intel-iommu.c | 547 ++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 547 insertions(+)
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 5f11f43..277b294 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -399,6 +399,62 @@ struct iommu_remapped_entry {
static LIST_HEAD(__iommu_remapped_mem);
static DEFINE_MUTEX(__iommu_mem_list_lock);
+/* ========================================================================
+ * Copy iommu translation tables from old kernel into new kernel.
+ * Entry to this set of functions is: intel_iommu_load_translation_tables()
+ * ------------------------------------------------------------------------
+ */
+
+/*
+ * Lists of domain_values_entry to hold domain values found during the copy.
+ * One list for each iommu in g_number_of_iommus.
+ */
+static struct list_head *domain_values_list;
+
+
+#define RET_BADCOPY -1 /* Return-code: Cannot copy translate tables */
+
+/*
+ * Struct copy_page_addr_parms is used to allow copy_page_addr()
+ * to accumulate values across multiple calls and returns.
+ */
+struct copy_page_addr_parms {
+ u32 first; /* flag: first-time */
+ u32 last; /* flag: last-time */
+ u32 bus; /* last bus number we saw */
+ u32 devfn; /* last devfn we saw */
+ u32 shift; /* last shift we saw */
+ u64 pte; /* Page Table Entry */
+ u64 next_addr; /* next-expected page_addr */
+
+ u64 page_addr; /* page_addr accumulating size */
+ u64 page_size; /* page_size accumulated */
+
+ struct domain_values_entry *dve; /* to accumulate iova ranges */
+};
+
+enum returns_from_copy_context_entry {
+RET_CCE_NOT_PRESENT = 1,
+RET_CCE_NEW_PAGE_TABLES,
+RET_CCE_PASS_THROUGH_1,
+RET_CCE_PASS_THROUGH_2,
+RET_CCE_RESERVED_VALUE,
+RET_CCE_PREVIOUS_DID
+};
+
+static int copy_context_entry(struct intel_iommu *iommu, u32 bus, u32 devfn,
+ void *ppap, struct context_entry *ce);
+
+static int copy_context_entry_table(struct intel_iommu *iommu,
+ u32 bus, void *ppap,
+ unsigned long *context_new_p,
+ unsigned long context_old_phys);
+
+static int copy_root_entry_table(struct intel_iommu *iommu, void *ppap);
+
+static int intel_iommu_load_translation_tables(struct dmar_drhd_unit *drhd,
+ int g_num_of_iommus);
+
#endif /* CONFIG_CRASH_DUMP */
/*
@@ -5039,4 +5095,495 @@ static void __iommu_update_old_root_entry(struct intel_iommu *iommu, int index)
__iommu_flush_cache(iommu, to + start, size);
}
+/*
+ * constant for initializing instances of copy_page_addr_parms properly.
+ */
+static struct copy_page_addr_parms copy_page_addr_parms_init = {1, 0};
+
+
+
+/*
+ * Lowest-level function in the 'Copy Page Tables' set
+ * Called once for each page_addr present in an iommu page-address table.
+ *
+ * Because of the depth-first traversal of the page-tables by the
+ * higher-level functions that call 'copy_page_addr', all pages
+ * of a domain will be presented in ascending order of IO Virtual Address.
+ *
+ * This function accumulates each contiguous range of these IOVAs and
+ * reserves it within the proper domain in the crashdump kernel when a
+ * non-contiguous range is detected, as determined by any of the following:
+ * 1. a change in the bus or device owning the presented page
+ * 2. a change in the page-size of the presented page (parameter shift)
+ * 3. a change in the page-table entry of the presented page
+ * 4. a presented IOVA that does not match the expected next-page address
+ * 5. the 'last' flag is set, indicating that all IOVAs have been seen.
+ */
+static int copy_page_addr(u64 page_addr, u32 shift, u32 bus, u32 devfn,
+ u64 pte, struct domain_values_entry *dve,
+ void *parms)
+{
+ struct copy_page_addr_parms *ppap = parms;
+
+ u64 page_size = ((u64)1 << shift); /* page_size */
+ u64 pfn_lo; /* For reserving IOVA range */
+ u64 pfn_hi; /* For reserving IOVA range */
+ struct iova *iova_p; /* For reserving IOVA range */
+
+ if (!ppap) {
+ pr_err("ERROR: ppap is NULL: 0x%3.3x(%3.3d) DevFn: 0x%3.3x(%3.3d) Page: 0x%16.16llx Size: 0x%16.16llx(%lld)\n",
+ bus, bus, devfn, devfn, page_addr,
+ page_size, page_size);
+ return 0;
+ }
+
+ /* If (only extending current addr range) */
+ if (ppap->first == 0 &&
+ ppap->last == 0 &&
+ ppap->bus == bus &&
+ ppap->devfn == devfn &&
+ ppap->shift == shift &&
+ (ppap->pte & ~VTD_PAGE_MASK) == (pte & ~VTD_PAGE_MASK) &&
+ ppap->next_addr == page_addr) {
+
+ /* Update page size and next-expected address */
+ ppap->next_addr += page_size;
+ ppap->page_size += page_size;
+ return 0;
+ }
+
+ if (!ppap->first) {
+ /* Close-out the accumulated IOVA address range */
+
+ if (!ppap->dve) {
+ pr_err("%s ERROR: ppap->dve is NULL -- needed to reserve range for B:D:F=%2.2x:%2.2x:%1.1x\n",
+ __func__,
+ ppap->bus, ppap->devfn >> 3, ppap->devfn & 0x7);
+ return RET_BADCOPY;
+ }
+ pfn_lo = IOVA_PFN(ppap->page_addr);
+ pfn_hi = IOVA_PFN(ppap->page_addr + ppap->page_size);
+ iova_p = reserve_iova(&ppap->dve->iovad, pfn_lo, pfn_hi);
+ }
+
+ /* Prepare for a new IOVA address range */
+ ppap->first = 0; /* Not first-time anymore */
+ ppap->bus = bus;
+ ppap->devfn = devfn;
+ ppap->shift = shift;
+ ppap->pte = pte;
+ ppap->next_addr = page_addr + page_size; /* Next-expected page_addr */
+
+ ppap->page_addr = page_addr; /* Addr(new page) */
+ ppap->page_size = page_size; /* Size(new page) */
+
+ ppap->dve = dve; /* adr(device_values_entry for new range) */
+
+ return 0;
+}
+
+/*
+ * Recursive function to copy the tree of page tables (max 6 recursions)
+ * Parameter 'shift' controls the recursion
+ */
+static int copy_page_table(unsigned long *dma_pte_new_p,
+ unsigned long dma_pte_phys,
+ u32 shift, u64 page_addr,
+ struct intel_iommu *iommu,
+ u32 bus, u32 devfn,
+ struct domain_values_entry *dve, void *ppap)
+{
+ int ret; /* Integer return code */
+ struct dma_pte *p; /* Virtual adr(each entry) iterator */
+ struct dma_pte *pgt_new_virt; /* Adr(dma_pte in new kernel) */
+ unsigned long dma_pte_next; /* Adr(next table down) */
+ u64 u; /* index(each entry in page_table) */
+
+
+ /* If (already done all levels -- problem) */
+ if (shift < 12) {
+ pr_err("ERROR %s shift < 12 %lx\n", __func__, dma_pte_phys);
+ pr_err("shift %d, page_addr %16.16llu bus %3.3u devfn %3.3u\n",
+ shift, page_addr, bus, devfn);
+ return RET_BADCOPY;
+ }
+
+ /* allocate a page table in the new kernel
+ * copy contents from old kernel
+ * then update each entry in the table in the new kernel
+ */
+
+ pgt_new_virt = (struct dma_pte *)alloc_pgtable_page(iommu->node);
+ if (!pgt_new_virt)
+ return -ENOMEM;
+
+ ret = __iommu_load_from_oldmem(pgt_new_virt,
+ dma_pte_phys,
+ VTD_PAGE_SIZE);
+
+ if (ret <= 0)
+ return ret;
+
+ for (u = 0, p = pgt_new_virt; u < 512; u++, p++) {
+
+ if (((p->val & DMA_PTE_READ) == 0) &&
+ ((p->val & DMA_PTE_WRITE) == 0))
+ continue;
+
+ if (dma_pte_superpage(p) || (shift == 12)) {
+
+ ret = copy_page_addr(page_addr | (u << shift),
+ shift, bus, devfn, p->val, dve, ppap);
+ if (ret)
+ return ret;
+ continue;
+ }
+
+ ret = copy_page_table(&dma_pte_next,
+ (p->val & VTD_PAGE_MASK),
+ shift-9, page_addr | (u << shift),
+ iommu, bus, devfn, dve, ppap);
+
+ __iommu_flush_cache(iommu, phys_to_virt(dma_pte_next),
+ VTD_PAGE_SIZE);
+
+ if (ret)
+ return ret;
+
+ p->val &= ~VTD_PAGE_MASK; /* Clear old and set new pgd */
+ p->val |= ((u64)dma_pte_next & VTD_PAGE_MASK);
+ }
+
+ *dma_pte_new_p = virt_to_phys(pgt_new_virt);
+
+ return 0;
+}
+
+
+/*
+ * Called once for each context_entry found in a copied context_entry_table
+ * Each context_entry represents one PCIe device handled by the IOMMU.
+ *
+ * The 'domain_values_list' contains one 'domain_values_entry' for each
+ * unique domain-id found while copying the context entries for each iommu.
+ *
+ * The Intel-iommu spec. requires that every context_entry that contains
+ * the same domain-id point to the same set of page translation tables.
+ * The hardware uses this to improve the use of its translation cache.
+ * In order to insure that the copied translate tables abide by this
+ * requirement, this function keeps a list of domain-ids (dids) that
+ * have already been seen for this iommu. This function checks each entry
+ * already on the list for a domain-id that matches the domain-id in this
+ * context_entry. If found, this function places the address of the previous
+ * context's tree of page translation tables into this context_entry.
+ * If a matching previous entry is not found, a new 'domain_values_entry'
+ * structure is created for the domain-id in this context_entry and
+ * copy_page_table is called to duplicate its tree of page tables.
+ */
+static int copy_context_entry(struct intel_iommu *iommu, u32 bus, u32 devfn,
+ void *ppap, struct context_entry *ce)
+{
+ int ret = 0; /* Integer Return Code */
+ u32 shift = 0; /* bits to shift page_addr */
+ u64 page_addr = 0; /* Address of translated page */
+ unsigned long pgt_old_phys; /* Adr(page_table in the old kernel) */
+ unsigned long pgt_new_phys; /* Adr(page_table in the new kernel) */
+ u8 t; /* Translation-type from context */
+ u8 aw; /* Address-width from context */
+ u32 aw_shift[8] = {
+ 12+9+9, /* [000b] 30-bit AGAW (2-level page table) */
+ 12+9+9+9, /* [001b] 39-bit AGAW (3-level page table) */
+ 12+9+9+9+9, /* [010b] 48-bit AGAW (4-level page table) */
+ 12+9+9+9+9+9, /* [011b] 57-bit AGAW (5-level page table) */
+ 12+9+9+9+9+9+7, /* [100b] 64-bit AGAW (6-level page table) */
+ 0, /* [101b] Reserved */
+ 0, /* [110b] Reserved */
+ 0, /* [111b] Reserved */
+ };
+
+ struct domain_values_entry *dve = NULL;
+
+ if (!context_present(ce)) { /* If (context not present) */
+ ret = RET_CCE_NOT_PRESENT; /* Skip it */
+ goto exit;
+ }
+
+ t = context_translation_type(ce);
+ /* If we have seen this domain-id before on this iommu,
+ * give this context the same page-tables and we are done.
+ */
+ list_for_each_entry(dve, &domain_values_list[iommu->seq_id], link) {
+ if (dve->did == (int) context_domain_id(ce)) {
+ switch (t) {
+ case 0: /* page tables */
+ case 1: /* page tables */
+ context_set_address_root(ce,
+ virt_to_phys(dve->pgd));
+ ret = RET_CCE_PREVIOUS_DID;
+ break;
+
+ case 2: /* Pass through */
+ if (dve->pgd == NULL)
+ ret = RET_CCE_PASS_THROUGH_2;
+ else
+ ret = RET_BADCOPY;
+ break;
+
+ default: /* Bad value of 't'*/
+ ret = RET_BADCOPY;
+ break;
+ }
+ goto exit;
+ }
+ }
+
+ /* Since we now know that this is a new domain-id for this iommu,
+ * create a new entry, add it to the list, and handle its
+ * page tables.
+ */
+
+ dve = kcalloc(1, sizeof(struct domain_values_entry), GFP_KERNEL);
+ if (!dve) {
+ ret = -ENOMEM;
+ goto exit;
+ }
+
+ dve->did = (int) context_domain_id(ce);
+ dve->gaw = (int) agaw_to_width(context_address_width(ce));
+ dve->pgd = NULL;
+ init_iova_domain(&dve->iovad, DMA_32BIT_PFN);
+
+ list_add(&dve->link, &domain_values_list[iommu->seq_id]);
+
+
+ if (t == 0 || t == 1) { /* If (context has page tables) */
+ aw = context_address_width(ce);
+ shift = aw_shift[aw];
+
+ pgt_old_phys = context_address_root(ce) << VTD_PAGE_SHIFT;
+
+ ret = copy_page_table(&pgt_new_phys, pgt_old_phys,
+ shift-9, page_addr, iommu, bus, devfn, dve, ppap);
+
+ __iommu_flush_cache(iommu, phys_to_virt(pgt_new_phys),
+ VTD_PAGE_SIZE);
+
+ if (ret) /* if (problem) bail out */
+ goto exit;
+
+ context_set_address_root(ce, pgt_new_phys);
+ dve->pgd = phys_to_virt(pgt_new_phys);
+ ret = RET_CCE_NEW_PAGE_TABLES;
+ goto exit;
+ }
+
+ if (t == 2) { /* If (Identity mapped pass-through) */
+ ret = RET_CCE_PASS_THROUGH_1; /* REVISIT: Skip for now */
+ goto exit;
+ }
+
+ ret = RET_CCE_RESERVED_VALUE; /* Else ce->t is a Reserved value */
+ /* Note fall-through */
+
+exit: /* all returns come through here to insure good clean-up */
+ return ret;
+}
+
+
+/*
+ * Called once for each context_entry_table found in the root_entry_table
+ */
+static int copy_context_entry_table(struct intel_iommu *iommu,
+ u32 bus, void *ppap,
+ unsigned long *context_new_p,
+ unsigned long context_old_phys)
+{
+ int ret = 0; /* Integer return code */
+ struct context_entry *ce; /* Iterator */
+ unsigned long context_new_phys; /* adr(table in new kernel) */
+ struct context_entry *context_new_virt; /* adr(table in new kernel) */
+ u32 devfn = 0; /* PCI Device & function */
+
+ /* allocate a context-entry table in the new kernel
+ * copy contents from old kernel
+ * then update each entry in the table in the new kernel
+ */
+ context_new_virt =
+ (struct context_entry *)alloc_pgtable_page(iommu->node);
+ if (!context_new_virt)
+ return -ENOMEM;
+
+ context_new_phys = virt_to_phys(context_new_virt);
+
+ __iommu_load_from_oldmem(context_new_virt,
+ context_old_phys,
+ VTD_PAGE_SIZE);
+
+ for (devfn = 0, ce = context_new_virt; devfn < 256; devfn++, ce++) {
+
+ if (!context_present(ce)) /* If (context not present) */
+ continue; /* Skip it */
+
+ ret = copy_context_entry(iommu, bus, devfn, ppap, ce);
+ if (ret < 0) /* if (problem) */
+ return RET_BADCOPY;
+
+ switch (ret) {
+ case RET_CCE_NOT_PRESENT:
+ continue;
+ case RET_CCE_NEW_PAGE_TABLES:
+ continue;
+ case RET_CCE_PASS_THROUGH_1:
+ continue;
+ case RET_CCE_PASS_THROUGH_2:
+ continue;
+ case RET_CCE_RESERVED_VALUE:
+ return RET_BADCOPY;
+ case RET_CCE_PREVIOUS_DID:
+ continue;
+ default:
+ return RET_BADCOPY;
+ };
+ }
+
+ *context_new_p = context_new_phys;
+ return 0;
+}
+
+
+/*
+ * Highest-level function in the 'copy translation tables' set of functions
+ */
+static int copy_root_entry_table(struct intel_iommu *iommu, void *ppap)
+{
+ int ret = 0; /* Integer return code */
+ u32 bus; /* Index: root-entry-table */
+ struct root_entry *re; /* Virt(iterator: new table) */
+ unsigned long context_old_phys; /* Phys(context table entry) */
+ unsigned long context_new_phys; /* Phys(new context_entry) */
+
+ /*
+ * allocate a root-entry table in the new kernel
+ * copy contents from old kernel
+ * then update each entry in the table in the new kernel
+ */
+
+ if (!iommu->root_entry_old_phys)
+ return -ENOMEM;
+
+ for (bus = 0, re = iommu->root_entry; bus < 256; bus += 1, re += 1) {
+ if (!root_present(re))
+ continue;
+
+ context_old_phys = get_context_phys_from_root(re);
+
+ if (!context_old_phys)
+ continue;
+
+ context_new_phys = 0;
+ ret = copy_context_entry_table(iommu, bus, ppap,
+ &context_new_phys,
+ context_old_phys);
+ __iommu_flush_cache(iommu,
+ phys_to_virt(context_new_phys),
+ VTD_PAGE_SIZE);
+
+ if (ret)
+ return ret;
+
+ set_root_value(re, context_new_phys);
+ }
+
+ return 0;
+}
+/*
+ * Interface to the "copy translation tables" set of functions
+ * from mainline code.
+ */
+static int intel_iommu_load_translation_tables(struct dmar_drhd_unit *drhd,
+ int g_num_of_iommus)
+{
+ struct intel_iommu *iommu; /* Virt(iommu hardware registers) */
+ unsigned long long q; /* quadword scratch */
+ int ret = 0; /* Integer return code */
+ int i = 0; /* Loop index */
+ unsigned long flags;
+
+ /* Structure so copy_page_addr() can accumulate things
+ * over multiple calls and returns
+ */
+ struct copy_page_addr_parms ppa_parms = copy_page_addr_parms_init;
+ struct copy_page_addr_parms *ppap = &ppa_parms;
+
+
+ iommu = drhd->iommu;
+ q = dmar_readq(iommu->reg + DMAR_RTADDR_REG);
+ if (!q)
+ return -1;
+
+ /* If (list needs initializing) do it here */
+ if (!domain_values_list) {
+ domain_values_list =
+ kcalloc(g_num_of_iommus, sizeof(struct list_head),
+ GFP_KERNEL);
+
+ if (!domain_values_list) {
+ pr_err("Allocation failed for domain_values_list array\n");
+ return -ENOMEM;
+ }
+ for (i = 0; i < g_num_of_iommus; i++)
+ INIT_LIST_HEAD(&domain_values_list[i]);
+ }
+
+ spin_lock_irqsave(&iommu->lock, flags);
+
+ /* Load the root-entry table from the old kernel
+ * foreach context_entry_table in root_entry
+ * foreach context_entry in context_entry_table
+ * foreach level-1 page_table_entry in context_entry
+ * foreach level-2 page_table_entry in level 1 page_table_entry
+ * Above pattern continues up to 6 levels of page tables
+ * Sanity-check the entry
+ * Process the bus, devfn, page_address, page_size
+ */
+ if (!iommu->root_entry) {
+ iommu->root_entry =
+ (struct root_entry *)alloc_pgtable_page(iommu->node);
+ if (!iommu->root_entry) {
+ spin_unlock_irqrestore(&iommu->lock, flags);
+ return -ENOMEM;
+ }
+ }
+
+ iommu->root_entry_old_phys = q & VTD_PAGE_MASK;
+ if (!iommu->root_entry_old_phys) {
+ pr_err("Could not read old root entry address.");
+ return -1;
+ }
+
+ iommu->root_entry_old_virt = ioremap_cache(iommu->root_entry_old_phys,
+ VTD_PAGE_SIZE);
+ if (!iommu->root_entry_old_virt) {
+ pr_err("Could not map the old root entry.");
+ return -ENOMEM;
+ }
+
+ __iommu_load_old_root_entry(iommu);
+ ret = copy_root_entry_table(iommu, ppap);
+ __iommu_flush_cache(iommu, iommu->root_entry, PAGE_SIZE);
+ __iommu_update_old_root_entry(iommu, -1);
+
+ spin_unlock_irqrestore(&iommu->lock, flags);
+
+ __iommu_free_mapped_mem();
+
+ if (ret)
+ return ret;
+
+ ppa_parms.last = 1;
+ copy_page_addr(0, 0, 0, 0, 0, NULL, ppap);
+
+ return 0;
+}
+
#endif /* CONFIG_CRASH_DUMP */
--
2.0.0-rc0
* [PATCH v8 07/10] iommu/vt-d: enable kdump support in iommu module
2015-01-12 7:06 [PATCH v8 0/10] iommu/vt-d: Fix intel vt-d faults in kdump kernel Li, Zhen-Hua
` (5 preceding siblings ...)
2015-01-12 7:06 ` [PATCH v8 06/10] iommu/vt-d: datatypes and functions used for kdump Li, Zhen-Hua
@ 2015-01-12 7:06 ` Li, Zhen-Hua
2015-01-12 7:06 ` [PATCH v8 08/10] iommu/vt-d: assign new page table for dma_map Li, Zhen-Hua
` (3 subsequent siblings)
10 siblings, 0 replies; 27+ messages in thread
From: Li, Zhen-Hua @ 2015-01-12 7:06 UTC (permalink / raw)
To: dwmw2, indou.takao, bhe, joro, vgoyal, dyoung
Cc: iommu, linux-kernel, linux-pci, kexec, alex.williamson, ddutile,
ishii.hironobu, bhelgaas, doug.hatch, jerry.hoemann, tom.vaden,
li.zhang6, lisa.mitchell, billsumnerlinux, zhen-hual, rwright
Modify the operation of the following functions when called during a crash dump:
device_to_domain_id
get_domain_for_dev
init_dmars
intel_iommu_init
Bill Sumner:
Original version.
Zhenhua:
Minor changes:
The names of the newly called functions.
Do not disable and re-enable TE in the kdump kernel.
Signed-off-by: Bill Sumner <billsumnerlinux@gmail.com>
Signed-off-by: Li, Zhen-Hua <zhen-hual@hp.com>
---
drivers/iommu/intel-iommu.c | 135 +++++++++++++++++++++++++++++++++++++++-----
1 file changed, 120 insertions(+), 15 deletions(-)
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 277b294..324c504 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -907,6 +907,11 @@ static struct context_entry * device_to_context_entry(struct intel_iommu *iommu,
set_root_value(root, phy_addr);
set_root_present(root);
__iommu_flush_cache(iommu, root, sizeof(*root));
+
+#ifdef CONFIG_CRASH_DUMP
+ if (is_kdump_kernel())
+ __iommu_update_old_root_entry(iommu, bus);
+#endif
}
spin_unlock_irqrestore(&iommu->lock, flags);
return &context[devfn];
@@ -958,7 +963,8 @@ static void free_context_table(struct intel_iommu *iommu)
spin_lock_irqsave(&iommu->lock, flags);
if (!iommu->root_entry) {
- goto out;
+ spin_unlock_irqrestore(&iommu->lock, flags);
+ return;
}
for (i = 0; i < ROOT_ENTRY_NR; i++) {
root = &iommu->root_entry[i];
@@ -966,10 +972,23 @@ static void free_context_table(struct intel_iommu *iommu)
if (context)
free_pgtable_page(context);
}
+
+#ifdef CONFIG_CRASH_DUMP
+ if (is_kdump_kernel()) {
+ iommu->root_entry_old_phys = 0;
+ root = iommu->root_entry_old_virt;
+ iommu->root_entry_old_virt = NULL;
+ }
+#endif
free_pgtable_page(iommu->root_entry);
iommu->root_entry = NULL;
-out:
+
spin_unlock_irqrestore(&iommu->lock, flags);
+
+#ifdef CONFIG_CRASH_DUMP
+ if (is_kdump_kernel())
+ iounmap(root);
+#endif
}
static struct dma_pte *pfn_to_dma_pte(struct dmar_domain *domain,
@@ -2381,6 +2400,9 @@ static struct dmar_domain *get_domain_for_dev(struct device *dev, int gaw)
unsigned long flags;
u8 bus, devfn;
int did = -1; /* Default to "no domain_id supplied" */
+#ifdef CONFIG_CRASH_DUMP
+ struct domain_values_entry *dve = NULL;
+#endif /* CONFIG_CRASH_DUMP */
domain = find_domain(dev);
if (domain)
@@ -2414,6 +2436,24 @@ static struct dmar_domain *get_domain_for_dev(struct device *dev, int gaw)
domain = alloc_domain(0);
if (!domain)
return NULL;
+
+#ifdef CONFIG_CRASH_DUMP
+ if (is_kdump_kernel()) {
+ /*
+ * if this device had a did in the old kernel
+ * use its values instead of generating new ones
+ */
+ did = device_to_domain_id(iommu, bus, devfn);
+ if (did > 0 || (did == 0 && !cap_caching_mode(iommu->cap)))
+ dve = intel_iommu_did_to_domain_values_entry(did,
+ iommu);
+ if (dve)
+ gaw = dve->gaw;
+ else
+ did = -1;
+ }
+#endif /* CONFIG_CRASH_DUMP */
+
domain->id = iommu_attach_domain(domain, iommu, did);
if (domain->id < 0) {
free_domain_mem(domain);
@@ -2425,6 +2465,18 @@ static struct dmar_domain *get_domain_for_dev(struct device *dev, int gaw)
return NULL;
}
+#ifdef CONFIG_CRASH_DUMP
+ if (is_kdump_kernel() && dve) {
+
+ if (domain->pgd)
+ free_pgtable_page(domain->pgd);
+
+ domain->pgd = dve->pgd;
+
+ copy_reserved_iova(&dve->iovad, &domain->iovad);
+ }
+#endif /* CONFIG_CRASH_DUMP */
+
/* register PCI DMA alias device */
if (dev_is_pci(dev)) {
tmp = dmar_insert_dev_info(iommu, PCI_BUS_NUM(dma_alias),
@@ -2948,14 +3000,35 @@ static int __init init_dmars(void)
if (ret)
goto free_iommu;
- /*
- * TBD:
- * we could share the same root & context tables
- * among all IOMMU's. Need to Split it later.
- */
- ret = iommu_alloc_root_entry(iommu);
- if (ret)
- goto free_iommu;
+#ifdef CONFIG_CRASH_DUMP
+ if (is_kdump_kernel()) {
+ pr_info("IOMMU Copying translate tables from panicked kernel\n");
+ ret = intel_iommu_load_translation_tables(drhd,
+ g_num_of_iommus);
+ if (ret) {
+ pr_err("IOMMU: Copy translate tables failed\n");
+
+ /* Best to stop trying */
+ goto free_iommu;
+ }
+ pr_info("IOMMU: root_cache:0x%12.12llx phys:0x%12.12llx\n",
+ (u64)iommu->root_entry,
+ (u64)iommu->root_entry_old_phys);
+ intel_iommu_get_dids_from_old_kernel(iommu);
+ } else {
+#endif /* CONFIG_CRASH_DUMP */
+ /*
+ * TBD:
+ * we could share the same root & context tables
+ * among all IOMMU's. Need to Split it later.
+ */
+ ret = iommu_alloc_root_entry(iommu);
+ if (ret)
+ goto free_iommu;
+#ifdef CONFIG_CRASH_DUMP
+ }
+#endif
+
if (!ecap_pass_through(iommu->ecap))
hw_pass_through = 0;
}
@@ -2972,6 +3045,16 @@ static int __init init_dmars(void)
check_tylersburg_isoch();
+#ifdef CONFIG_CRASH_DUMP
+ /*
+ * In the crashdump kernel: Skip setting-up new domains for
+ * si, rmrr, and the isa bus on the expectation that these
+ * translations were copied from the old kernel.
+ */
+ if (is_kdump_kernel())
+ goto skip_new_domains_for_si_rmrr_isa;
+#endif /* CONFIG_CRASH_DUMP */
+
/*
* If pass through is not set or not enabled, setup context entries for
* identity mappings for rmrr, gfx, and isa and may fall back to static
@@ -3012,6 +3095,10 @@ static int __init init_dmars(void)
iommu_prepare_isa();
+#ifdef CONFIG_CRASH_DUMP
+skip_new_domains_for_si_rmrr_isa:;
+#endif /* CONFIG_CRASH_DUMP */
+
/*
* for each drhd
* enable fault log
@@ -3040,7 +3127,15 @@ static int __init init_dmars(void)
iommu->flush.flush_context(iommu, 0, 0, 0, DMA_CCMD_GLOBAL_INVL);
iommu->flush.flush_iotlb(iommu, 0, 0, 0, DMA_TLB_GLOBAL_FLUSH);
- iommu_enable_translation(iommu);
+
+#ifdef CONFIG_CRASH_DUMP
+ if (is_kdump_kernel()) {
+ if (!(iommu->gcmd & DMA_GCMD_TE))
+ iommu_enable_translation(iommu);
+ } else
+#endif
+ iommu_enable_translation(iommu);
+
iommu_disable_protect_mem_regions(iommu);
}
@@ -4343,12 +4438,22 @@ int __init intel_iommu_init(void)
goto out_free_dmar;
}
+#ifdef CONFIG_CRASH_DUMP
/*
- * Disable translation if already enabled prior to OS handover.
+ * If (This is the crash kernel)
+ * Set: copy iommu translate tables from old kernel
+ * Skip disabling the iommu hardware translations
*/
- for_each_active_iommu(iommu, drhd)
- if (iommu->gcmd & DMA_GCMD_TE)
- iommu_disable_translation(iommu);
+ if (is_kdump_kernel()) {
+ pr_info("IOMMU Skip disabling iommu hardware translations\n");
+ } else
+#endif /* CONFIG_CRASH_DUMP */
+ /*
+ * Disable translation if already enabled prior to OS handover.
+ */
+ for_each_active_iommu(iommu, drhd)
+ if (iommu->gcmd & DMA_GCMD_TE)
+ iommu_disable_translation(iommu);
if (dmar_dev_scope_init() < 0) {
if (force_on)
--
2.0.0-rc0
* [PATCH v8 08/10] iommu/vt-d: assign new page table for dma_map
2015-01-12 7:06 [PATCH v8 0/10] iommu/vt-d: Fix intel vt-d faults in kdump kernel Li, Zhen-Hua
` (6 preceding siblings ...)
2015-01-12 7:06 ` [PATCH v8 07/10] iommu/vt-d: enable kdump support in iommu module Li, Zhen-Hua
@ 2015-01-12 7:06 ` Li, Zhen-Hua
2015-01-12 7:06 ` [PATCH v8 09/10] iommu/vt-d: Copy functions for irte Li, Zhen-Hua
` (2 subsequent siblings)
10 siblings, 0 replies; 27+ messages in thread
From: Li, Zhen-Hua @ 2015-01-12 7:06 UTC (permalink / raw)
To: dwmw2, indou.takao, bhe, joro, vgoyal, dyoung
Cc: iommu, linux-kernel, linux-pci, kexec, alex.williamson, ddutile,
ishii.hironobu, bhelgaas, doug.hatch, jerry.hoemann, tom.vaden,
li.zhang6, lisa.mitchell, billsumnerlinux, zhen-hual, rwright
When a device driver issues the first dma_map command for a
device, we assign a new and empty page-table, thus removing all
mappings from the old kernel for the device.
Signed-off-by: Li, Zhen-Hua <zhen-hual@hp.com>
---
drivers/iommu/intel-iommu.c | 56 ++++++++++++++++++++++++++++++++++++++-------
1 file changed, 48 insertions(+), 8 deletions(-)
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 324c504..ccbad3f 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -44,6 +44,7 @@
#include <asm/irq_remapping.h>
#include <asm/cacheflush.h>
#include <asm/iommu.h>
+#include <linux/dma-mapping.h>
#include "irq_remapping.h"
@@ -455,6 +456,8 @@ static int copy_root_entry_table(struct intel_iommu *iommu, void *ppap);
static int intel_iommu_load_translation_tables(struct dmar_drhd_unit *drhd,
int g_num_of_iommus);
+static void unmap_device_dma(struct dmar_domain *domain, struct device *dev);
+
#endif /* CONFIG_CRASH_DUMP */
/*
@@ -3196,14 +3199,30 @@ static struct dmar_domain *__get_valid_domain_for_dev(struct device *dev)
return NULL;
}
- /* make sure context mapping is ok */
- if (unlikely(!domain_context_mapped(dev))) {
- ret = domain_context_mapping(domain, dev, CONTEXT_TT_MULTI_LEVEL);
- if (ret) {
- printk(KERN_ERR "Domain context map for %s failed",
- dev_name(dev));
- return NULL;
- }
+ /* If in the kdump kernel, we need to unmap the dma pages mapped by
+ * the old kernel; detach this device first.
+ */
+ if (likely(domain_context_mapped(dev))) {
+#ifdef CONFIG_CRASH_DUMP
+ if (is_kdump_kernel()) {
+ unmap_device_dma(domain, dev);
+ domain = get_domain_for_dev(dev,
+ DEFAULT_DOMAIN_ADDRESS_WIDTH);
+ if (!domain) {
+ pr_err("Allocating domain for %s failed",
+ dev_name(dev));
+ return NULL;
+ }
+ } else
+#endif
+ return domain;
+ }
+
+ ret = domain_context_mapping(domain, dev, CONTEXT_TT_MULTI_LEVEL);
+ if (ret) {
+ pr_err("Domain context map for %s failed",
+ dev_name(dev));
+ return NULL;
}
return domain;
@@ -5691,4 +5710,25 @@ static int intel_iommu_load_translation_tables(struct dmar_drhd_unit *drhd,
return 0;
}
+static void unmap_device_dma(struct dmar_domain *domain, struct device *dev)
+{
+ struct intel_iommu *iommu;
+ struct context_entry *ce;
+ struct iova *iova;
+ u8 bus, devfn;
+ phys_addr_t phys_addr;
+ dma_addr_t dev_addr;
+
+ iommu = device_to_iommu(dev, &bus, &devfn);
+ ce = device_to_context_entry(iommu, bus, devfn);
+ phys_addr = context_address_root(ce) << VTD_PAGE_SHIFT;
+ dev_addr = phys_to_dma(dev, phys_addr);
+
+ iova = find_iova(&domain->iovad, IOVA_PFN(dev_addr));
+ if (iova)
+ intel_unmap(dev, dev_addr);
+
+ domain_remove_one_dev_info(domain, dev);
+}
+
#endif /* CONFIG_CRASH_DUMP */
--
2.0.0-rc0
^ permalink raw reply related [flat|nested] 27+ messages in thread
* [PATCH v8 09/10] iommu/vt-d: Copy functions for irte
2015-01-12 7:06 [PATCH v8 0/10] iommu/vt-d: Fix intel vt-d faults in kdump kernel Li, Zhen-Hua
` (7 preceding siblings ...)
2015-01-12 7:06 ` [PATCH v8 08/10] iommu/vt-d: assign new page table for dma_map Li, Zhen-Hua
@ 2015-01-12 7:06 ` Li, Zhen-Hua
2015-01-12 7:06 ` [PATCH v8 10/10] iommu/vt-d: Use old irte in kdump kernel Li, Zhen-Hua
2015-01-12 8:00 ` [PATCH v8 0/10] iommu/vt-d: Fix intel vt-d faults " Li, ZhenHua
10 siblings, 0 replies; 27+ messages in thread
From: Li, Zhen-Hua @ 2015-01-12 7:06 UTC (permalink / raw)
To: dwmw2, indou.takao, bhe, joro, vgoyal, dyoung
Cc: iommu, linux-kernel, linux-pci, kexec, alex.williamson, ddutile,
ishii.hironobu, bhelgaas, doug.hatch, jerry.hoemann, tom.vaden,
li.zhang6, lisa.mitchell, billsumnerlinux, zhen-hual, rwright
Functions to copy the irte data from the old kernel into the kdump kernel.
Signed-off-by: Li, Zhen-Hua <zhen-hual@hp.com>
---
drivers/iommu/intel_irq_remapping.c | 62 +++++++++++++++++++++++++++++++++++++
include/linux/intel-iommu.h | 4 +++
2 files changed, 66 insertions(+)
diff --git a/drivers/iommu/intel_irq_remapping.c b/drivers/iommu/intel_irq_remapping.c
index a55b207..d37fd62 100644
--- a/drivers/iommu/intel_irq_remapping.c
+++ b/drivers/iommu/intel_irq_remapping.c
@@ -8,6 +8,7 @@
#include <linux/irq.h>
#include <linux/intel-iommu.h>
#include <linux/acpi.h>
+#include <linux/crash_dump.h>
#include <asm/io_apic.h>
#include <asm/smp.h>
#include <asm/cpu.h>
@@ -17,6 +18,11 @@
#include "irq_remapping.h"
+#ifdef CONFIG_CRASH_DUMP
+static int __iommu_load_old_irte(struct intel_iommu *iommu);
+static int __iommu_update_old_irte(struct intel_iommu *iommu, int index);
+#endif /* CONFIG_CRASH_DUMP */
+
struct ioapic_scope {
struct intel_iommu *iommu;
unsigned int id;
@@ -1296,3 +1302,59 @@ int dmar_ir_hotplug(struct dmar_drhd_unit *dmaru, bool insert)
return ret;
}
+
+#ifdef CONFIG_CRASH_DUMP
+
+static int __iommu_load_old_irte(struct intel_iommu *iommu)
+{
+ if ((!iommu)
+ || (!iommu->ir_table)
+ || (!iommu->ir_table->base)
+ || (!iommu->ir_table->base_old_phys)
+ || (!iommu->ir_table->base_old_virt))
+ return -1;
+
+ memcpy(iommu->ir_table->base,
+ iommu->ir_table->base_old_virt,
+ INTR_REMAP_TABLE_ENTRIES*sizeof(struct irte));
+
+ __iommu_flush_cache(iommu, iommu->ir_table->base,
+ INTR_REMAP_TABLE_ENTRIES*sizeof(struct irte));
+
+ return 0;
+}
+
+static int __iommu_update_old_irte(struct intel_iommu *iommu, int index)
+{
+ int start;
+ unsigned long size;
+ void __iomem *to;
+ void *from;
+
+ if ((!iommu)
+ || (!iommu->ir_table)
+ || (!iommu->ir_table->base)
+ || (!iommu->ir_table->base_old_phys)
+ || (!iommu->ir_table->base_old_virt))
+ return -1;
+
+ if (index < -1 || index >= INTR_REMAP_TABLE_ENTRIES)
+ return -1;
+
+ if (index == -1) {
+ start = 0;
+ size = INTR_REMAP_TABLE_ENTRIES * sizeof(struct irte);
+ } else {
+ start = index * sizeof(struct irte);
+ size = sizeof(struct irte);
+ }
+
+ to = iommu->ir_table->base_old_virt;
+ from = iommu->ir_table->base;
+ memcpy(to + start, from + start, size);
+
+ __iommu_flush_cache(iommu, to + start, size);
+
+ return 0;
+}
+#endif /* CONFIG_CRASH_DUMP */
diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
index 8e29b97..76c6ea5 100644
--- a/include/linux/intel-iommu.h
+++ b/include/linux/intel-iommu.h
@@ -290,6 +290,10 @@ struct q_inval {
struct ir_table {
struct irte *base;
unsigned long *bitmap;
+#ifdef CONFIG_CRASH_DUMP
+ void __iomem *base_old_virt;
+ unsigned long base_old_phys;
+#endif
};
#endif
--
2.0.0-rc0
^ permalink raw reply related [flat|nested] 27+ messages in thread
* [PATCH v8 10/10] iommu/vt-d: Use old irte in kdump kernel
2015-01-12 7:06 [PATCH v8 0/10] iommu/vt-d: Fix intel vt-d faults in kdump kernel Li, Zhen-Hua
` (8 preceding siblings ...)
2015-01-12 7:06 ` [PATCH v8 09/10] iommu/vt-d: Copy functions for irte Li, Zhen-Hua
@ 2015-01-12 7:06 ` Li, Zhen-Hua
2015-01-12 8:00 ` [PATCH v8 0/10] iommu/vt-d: Fix intel vt-d faults " Li, ZhenHua
10 siblings, 0 replies; 27+ messages in thread
From: Li, Zhen-Hua @ 2015-01-12 7:06 UTC (permalink / raw)
To: dwmw2, indou.takao, bhe, joro, vgoyal, dyoung
Cc: iommu, linux-kernel, linux-pci, kexec, alex.williamson, ddutile,
ishii.hironobu, bhelgaas, doug.hatch, jerry.hoemann, tom.vaden,
li.zhang6, lisa.mitchell, billsumnerlinux, zhen-hual, rwright
Fix the intr-remapping fault.
[1.594890] dmar: DRHD: handling fault status reg 2
[1.594894] dmar: INTR-REMAP: Request device [[41:00.0] fault index 4d
[1.594894] INTR-REMAP:[fault reason 34] Present field in the IRTE entry is clear
Use old irte in kdump kernel, do not disable and re-enable interrupt
remapping.
Signed-off-by: Li, Zhen-Hua <zhen-hual@hp.com>
---
drivers/iommu/intel_irq_remapping.c | 42 ++++++++++++++++++++++++++++++++-----
1 file changed, 37 insertions(+), 5 deletions(-)
diff --git a/drivers/iommu/intel_irq_remapping.c b/drivers/iommu/intel_irq_remapping.c
index d37fd62..58356cb 100644
--- a/drivers/iommu/intel_irq_remapping.c
+++ b/drivers/iommu/intel_irq_remapping.c
@@ -198,6 +198,11 @@ static int modify_irte(int irq, struct irte *irte_modified)
set_64bit(&irte->low, irte_modified->low);
set_64bit(&irte->high, irte_modified->high);
+
+#ifdef CONFIG_CRASH_DUMP
+ if (is_kdump_kernel())
+ __iommu_update_old_irte(iommu, index);
+#endif
__iommu_flush_cache(iommu, irte, sizeof(*irte));
rc = qi_flush_iec(iommu, index, 0);
@@ -259,6 +264,11 @@ static int clear_entries(struct irq_2_iommu *irq_iommu)
bitmap_release_region(iommu->ir_table->bitmap, index,
irq_iommu->irte_mask);
+#ifdef CONFIG_CRASH_DUMP
+ if (is_kdump_kernel())
+ __iommu_update_old_irte(iommu, -1);
+#endif
+
return qi_flush_iec(iommu, index, irq_iommu->irte_mask);
}
@@ -640,11 +650,20 @@ static int __init intel_enable_irq_remapping(void)
*/
dmar_fault(-1, iommu);
- /*
- * Disable intr remapping and queued invalidation, if already
- * enabled prior to OS handover.
- */
- iommu_disable_irq_remapping(iommu);
+#ifdef CONFIG_CRASH_DUMP
+ if (is_kdump_kernel()) {
+ /* Do not disable irq and then re-enable it again. */
+ } else {
+#endif
+ /*
+ * Disable intr remapping and queued invalidation,
+ * if already enabled prior to OS handover.
+ */
+ iommu_disable_irq_remapping(iommu);
+
+#ifdef CONFIG_CRASH_DUMP
+ }
+#endif
dmar_disable_qi(iommu);
}
@@ -687,7 +706,20 @@ static int __init intel_enable_irq_remapping(void)
if (intel_setup_irq_remapping(iommu))
goto error;
+#ifdef CONFIG_CRASH_DUMP
+ if (is_kdump_kernel()) {
+ unsigned long long q;
+
+ q = dmar_readq(iommu->reg + DMAR_IRTA_REG);
+ iommu->ir_table->base_old_phys = q & VTD_PAGE_MASK;
+ iommu->ir_table->base_old_virt = ioremap_cache(
+ iommu->ir_table->base_old_phys,
+ INTR_REMAP_TABLE_ENTRIES*sizeof(struct irte));
+ __iommu_load_old_irte(iommu);
+ } else
+#endif
iommu_set_irq_remapping(iommu, eim);
+
setup = 1;
}
--
2.0.0-rc0
^ permalink raw reply related [flat|nested] 27+ messages in thread
* Re: [PATCH v8 0/10] iommu/vt-d: Fix intel vt-d faults in kdump kernel
2015-01-12 7:06 [PATCH v8 0/10] iommu/vt-d: Fix intel vt-d faults in kdump kernel Li, Zhen-Hua
` (9 preceding siblings ...)
2015-01-12 7:06 ` [PATCH v8 10/10] iommu/vt-d: Use old irte in kdump kernel Li, Zhen-Hua
@ 2015-01-12 8:00 ` Li, ZhenHua
2015-01-12 9:07 ` Baoquan He
10 siblings, 1 reply; 27+ messages in thread
From: Li, ZhenHua @ 2015-01-12 8:00 UTC (permalink / raw)
To: Li, Zhen-Hua
Cc: dwmw2, indou.takao, bhe, joro, vgoyal, dyoung, iommu,
linux-kernel, linux-pci, kexec, alex.williamson, ddutile,
ishii.hironobu, bhelgaas, doug.hatch, jerry.hoemann, tom.vaden,
li.zhang6, lisa.mitchell, billsumnerlinux, rwright
Compared to v7, this version adds only a few lines of code:
In function copy_page_table,
+ __iommu_flush_cache(iommu, phys_to_virt(dma_pte_next),
+ VTD_PAGE_SIZE);
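For context, a rough sketch of where such a flush sits in a page-table copy loop (the function, loop and variable names here are illustrative, not the actual copy_page_table() from the patch):

/* Illustrative sketch only.  After a next-level table behind a present
 * PTE has been copied, its cache lines are flushed so a non-coherent
 * IOMMU reads the new data from memory rather than stale cache lines.
 */
static void flush_copied_next_levels(struct intel_iommu *iommu,
                                     struct dma_pte *new_pgtable)
{
        int i;

        for (i = 0; i < 512; i++) {             /* 512 entries per table */
                struct dma_pte *pte = &new_pgtable[i];
                unsigned long dma_pte_next;

                if (!dma_pte_present(pte))
                        continue;

                dma_pte_next = dma_pte_addr(pte); /* phys addr of next level */

                /* ... the next-level table was copied here ... */

                __iommu_flush_cache(iommu, phys_to_virt(dma_pte_next),
                                    VTD_PAGE_SIZE);
        }
}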
On 01/12/2015 03:06 PM, Li, Zhen-Hua wrote:
> This patchset is an update of Bill Sumner's patchset, implements a fix for:
> If a kernel boots with intel_iommu=on on a system that supports intel vt-d,
> when a panic happens, the kdump kernel will boot with these faults:
>
> dmar: DRHD: handling fault status reg 102
> dmar: DMAR:[DMA Read] Request device [01:00.0] fault addr fff80000
> DMAR:[fault reason 01] Present bit in root entry is clear
>
> dmar: DRHD: handling fault status reg 2
> dmar: INTR-REMAP: Request device [[61:00.0] fault index 42
> INTR-REMAP:[fault reason 34] Present field in the IRTE entry is clear
>
> On some system, the interrupt remapping fault will also happen even if the
> intel_iommu is not set to on, because the interrupt remapping will be enabled
> when x2apic is needed by the system.
>
> The cause of the DMA fault is described in Bill's original version, and the
> INTR-Remap fault is caused by a similar reason. In short, the initialization
> of vt-d drivers causes the in-flight DMA and interrupt requests get wrong
> response.
>
> To fix this problem, we modifies the behaviors of the intel vt-d in the
> crashdump kernel:
>
> For DMA Remapping:
> 1. To accept the vt-d hardware in an active state,
> 2. Do not disable and re-enable the translation, keep it enabled.
> 3. Use the old root entry table, do not rewrite the RTA register.
> 4. Malloc and use new context entry table and page table, copy data from the
> old ones that used by the old kernel.
> 5. to use different portions of the iova address ranges for the device drivers
> in the crashdump kernel than the iova ranges that were in-use at the time
> of the panic.
> 6. After device driver is loaded, when it issues the first dma_map command,
> free the dmar_domain structure for this device, and generate a new one, so
> that the device can be assigned a new and empty page table.
> 7. When a new context entry table is generated, we also save its address to
> the old root entry table.
>
> For Interrupt Remapping:
> 1. To accept the vt-d hardware in an active state,
> 2. Do not disable and re-enable the interrupt remapping, keep it enabled.
> 3. Use the old interrupt remapping table, do not rewrite the IRTA register.
> 4. When ioapic entry is setup, the interrupt remapping table is changed, and
> the updated data will be stored to the old interrupt remapping table.
>
> Advantages of this approach:
> 1. All manipulation of the IO-device is done by the Linux device-driver
> for that device.
> 2. This approach behaves in a manner very similar to operation without an
> active iommu.
> 3. Any activity between the IO-device and its RMRR areas is handled by the
> device-driver in the same manner as during a non-kdump boot.
> 4. If an IO-device has no driver in the kdump kernel, it is simply left alone.
> This supports the practice of creating a special kdump kernel without
> drivers for any devices that are not required for taking a crashdump.
> 5. Minimal code-changes among the existing mainline intel vt-d code.
>
> Summary of changes in this patch set:
> 1. Added some useful function for root entry table in code intel-iommu.c
> 2. Added new members to struct root_entry and struct irte;
> 3. Functions to load old root entry table to iommu->root_entry from the memory
> of old kernel.
> 4. Functions to malloc new context entry table and page table and copy the data
> from the old ones to the malloced new ones.
> 5. Functions to enable support for DMA remapping in kdump kernel.
> 6. Functions to load old irte data from the old kernel to the kdump kernel.
> 7. Some code changes that support other behaviours that have been listed.
> 8. In the new functions, use physical address as "unsigned long" type, not
> pointers.
>
> Original version by Bill Sumner:
> https://lkml.org/lkml/2014/1/10/518
> https://lkml.org/lkml/2014/4/15/716
> https://lkml.org/lkml/2014/4/24/836
>
> Zhenhua's updates:
> https://lkml.org/lkml/2014/10/21/134
> https://lkml.org/lkml/2014/12/15/121
> https://lkml.org/lkml/2014/12/22/53
> https://lkml.org/lkml/2015/1/6/1166
>
> Changelog[v8]:
> 1. Add a missing __iommu_flush_cache in function copy_page_table.
>
> Changelog[v7]:
> 1. Use __iommu_flush_cache to flush the data to hardware.
>
> Changelog[v6]:
> 1. Use "unsigned long" as type of physical address.
> 2. Use new function unmap_device_dma to unmap the old dma.
> 3. Some small incorrect bits order for aw shift.
>
> Changelog[v5]:
> 1. Do not disable and re-enable traslation and interrupt remapping.
> 2. Use old root entry table.
> 3. Use old interrupt remapping table.
> 4. New functions to copy data from old kernel, and save to old kernel mem.
> 5. New functions to save updated root entry table and irte table.
> 6. Use intel_unmap to unmap the old dma;
> 7. Allocate new pages while driver is being loaded.
>
> Changelog[v4]:
> 1. Cut off the patches that move some defines and functions to new files.
> 2. Reduce the numbers of patches to five, make it more easier to read.
> 3. Changed the name of functions, make them consistent with current context
> get/set functions.
> 4. Add change to function __iommu_attach_domain.
>
> Changelog[v3]:
> 1. Commented-out "#define DEBUG 1" to eliminate debug messages.
> 2. Updated the comments about changes in each version.
> 3. Fixed: one-line added to Copy-Translations patch to initialize the iovad
> struct as recommended by Baoquan He [bhe@redhat.com]
> init_iova_domain(&domain->iovad, DMA_32BIT_PFN);
>
> Changelog[v2]:
> The following series implements a fix for:
> A kdump problem about DMA that has been discussed for a long time. That is,
> when a kernel panics and boots into the kdump kernel, DMA started by the
> panicked kernel is not stopped before the kdump kernel is booted and the
> kdump kernel disables the IOMMU while this DMA continues. This causes the
> IOMMU to stop translating the DMA addresses as IOVAs and begin to treat
> them as physical memory addresses -- which causes the DMA to either:
> (1) generate DMAR errors or
> (2) generate PCI SERR errors or
> (3) transfer data to or from incorrect areas of memory. Often this
> causes the dump to fail.
>
> Changelog[v1]:
> The original version.
>
> Changed in this version:
> 1. Do not disable and re-enable traslation and interrupt remapping.
> 2. Use old root entry table.
> 3. Use old interrupt remapping table.
> 4. Use "unsigned long" as physical address.
> 5. Use intel_unmap to unmap the old dma;
>
> Baoquan He <bhe@redhat.com> helps testing this patchset.
> Takao Indoh <indou.takao@jp.fujitsu.com> gives valuable suggestions.
>
> iommu/vt-d: Update iommu_attach_domain() and its callers
> iommu/vt-d: Items required for kdump
> iommu/vt-d: Add domain-id functions
> iommu/vt-d: functions to copy data from old mem
> iommu/vt-d: Add functions to load and save old re
> iommu/vt-d: datatypes and functions used for kdump
> iommu/vt-d: enable kdump support in iommu module
> iommu/vt-d: assign new page table for dma_map
> iommu/vt-d: Copy functions for irte
> iommu/vt-d: Use old irte in kdump kernel
>
> Signed-off-by: Bill Sumner <billsumnerlinux@gmail.com>
> Signed-off-by: Li, Zhen-Hua <zhen-hual@hp.com>
> Signed-off-by: Takao Indoh <indou.takao@jp.fujitsu.com>
> Tested-by: Baoquan He <bhe@redhat.com>
> ---
> drivers/iommu/intel-iommu.c | 1054 +++++++++++++++++++++++++++++++++--
> drivers/iommu/intel_irq_remapping.c | 104 +++-
> include/linux/intel-iommu.h | 18 +
> 3 files changed, 1134 insertions(+), 42 deletions(-)
>
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v8 0/10] iommu/vt-d: Fix intel vt-d faults in kdump kernel
2015-01-12 8:00 ` [PATCH v8 0/10] iommu/vt-d: Fix intel vt-d faults " Li, ZhenHua
@ 2015-01-12 9:07 ` Baoquan He
2015-01-12 9:28 ` Li, ZhenHua
0 siblings, 1 reply; 27+ messages in thread
From: Baoquan He @ 2015-01-12 9:07 UTC (permalink / raw)
To: Li, ZhenHua
Cc: dwmw2, indou.takao, joro, vgoyal, dyoung, iommu, linux-kernel,
linux-pci, kexec, alex.williamson, ddutile, ishii.hironobu,
bhelgaas, doug.hatch, jerry.hoemann, tom.vaden, li.zhang6,
lisa.mitchell, billsumnerlinux, rwright
On 01/12/15 at 04:00pm, Li, ZhenHua wrote:
> Comparing to v7, this version adds only a few lines code:
>
> In function copy_page_table,
>
> + __iommu_flush_cache(iommu, phys_to_virt(dma_pte_next),
> + VTD_PAGE_SIZE);
So this addition fixes the reported dmar fault on Takao's system, right?
>
>
> On 01/12/2015 03:06 PM, Li, Zhen-Hua wrote:
> >This patchset is an update of Bill Sumner's patchset, implements a fix for:
> >If a kernel boots with intel_iommu=on on a system that supports intel vt-d,
> >when a panic happens, the kdump kernel will boot with these faults:
> >
> > dmar: DRHD: handling fault status reg 102
> > dmar: DMAR:[DMA Read] Request device [01:00.0] fault addr fff80000
> > DMAR:[fault reason 01] Present bit in root entry is clear
> >
> > dmar: DRHD: handling fault status reg 2
> > dmar: INTR-REMAP: Request device [[61:00.0] fault index 42
> > INTR-REMAP:[fault reason 34] Present field in the IRTE entry is clear
> >
> >On some system, the interrupt remapping fault will also happen even if the
> >intel_iommu is not set to on, because the interrupt remapping will be enabled
> >when x2apic is needed by the system.
> >
> >The cause of the DMA fault is described in Bill's original version, and the
> >INTR-Remap fault is caused by a similar reason. In short, the initialization
> >of vt-d drivers causes the in-flight DMA and interrupt requests get wrong
> >response.
> >
> >To fix this problem, we modifies the behaviors of the intel vt-d in the
> >crashdump kernel:
> >
> >For DMA Remapping:
> >1. To accept the vt-d hardware in an active state,
> >2. Do not disable and re-enable the translation, keep it enabled.
> >3. Use the old root entry table, do not rewrite the RTA register.
> >4. Malloc and use new context entry table and page table, copy data from the
> > old ones that used by the old kernel.
> >5. to use different portions of the iova address ranges for the device drivers
> > in the crashdump kernel than the iova ranges that were in-use at the time
> > of the panic.
> >6. After device driver is loaded, when it issues the first dma_map command,
> > free the dmar_domain structure for this device, and generate a new one, so
> > that the device can be assigned a new and empty page table.
> >7. When a new context entry table is generated, we also save its address to
> > the old root entry table.
> >
> >For Interrupt Remapping:
> >1. To accept the vt-d hardware in an active state,
> >2. Do not disable and re-enable the interrupt remapping, keep it enabled.
> >3. Use the old interrupt remapping table, do not rewrite the IRTA register.
> >4. When ioapic entry is setup, the interrupt remapping table is changed, and
> > the updated data will be stored to the old interrupt remapping table.
> >
> >Advantages of this approach:
> >1. All manipulation of the IO-device is done by the Linux device-driver
> > for that device.
> >2. This approach behaves in a manner very similar to operation without an
> > active iommu.
> >3. Any activity between the IO-device and its RMRR areas is handled by the
> > device-driver in the same manner as during a non-kdump boot.
> >4. If an IO-device has no driver in the kdump kernel, it is simply left alone.
> > This supports the practice of creating a special kdump kernel without
> > drivers for any devices that are not required for taking a crashdump.
> >5. Minimal code-changes among the existing mainline intel vt-d code.
> >
> >Summary of changes in this patch set:
> >1. Added some useful function for root entry table in code intel-iommu.c
> >2. Added new members to struct root_entry and struct irte;
> >3. Functions to load old root entry table to iommu->root_entry from the memory
> > of old kernel.
> >4. Functions to malloc new context entry table and page table and copy the data
> > from the old ones to the malloced new ones.
> >5. Functions to enable support for DMA remapping in kdump kernel.
> >6. Functions to load old irte data from the old kernel to the kdump kernel.
> >7. Some code changes that support other behaviours that have been listed.
> >8. In the new functions, use physical address as "unsigned long" type, not
> > pointers.
> >
> >Original version by Bill Sumner:
> > https://lkml.org/lkml/2014/1/10/518
> > https://lkml.org/lkml/2014/4/15/716
> > https://lkml.org/lkml/2014/4/24/836
> >
> >Zhenhua's updates:
> > https://lkml.org/lkml/2014/10/21/134
> > https://lkml.org/lkml/2014/12/15/121
> > https://lkml.org/lkml/2014/12/22/53
> > https://lkml.org/lkml/2015/1/6/1166
> >
> >Changelog[v8]:
> > 1. Add a missing __iommu_flush_cache in function copy_page_table.
> >
> >Changelog[v7]:
> > 1. Use __iommu_flush_cache to flush the data to hardware.
> >
> >Changelog[v6]:
> > 1. Use "unsigned long" as type of physical address.
> > 2. Use new function unmap_device_dma to unmap the old dma.
> > 3. Some small incorrect bits order for aw shift.
> >
> >Changelog[v5]:
> > 1. Do not disable and re-enable traslation and interrupt remapping.
> > 2. Use old root entry table.
> > 3. Use old interrupt remapping table.
> > 4. New functions to copy data from old kernel, and save to old kernel mem.
> > 5. New functions to save updated root entry table and irte table.
> > 6. Use intel_unmap to unmap the old dma;
> > 7. Allocate new pages while driver is being loaded.
> >
> >Changelog[v4]:
> > 1. Cut off the patches that move some defines and functions to new files.
> > 2. Reduce the numbers of patches to five, make it more easier to read.
> > 3. Changed the name of functions, make them consistent with current context
> > get/set functions.
> > 4. Add change to function __iommu_attach_domain.
> >
> >Changelog[v3]:
> > 1. Commented-out "#define DEBUG 1" to eliminate debug messages.
> > 2. Updated the comments about changes in each version.
> > 3. Fixed: one-line added to Copy-Translations patch to initialize the iovad
> > struct as recommended by Baoquan He [bhe@redhat.com]
> > init_iova_domain(&domain->iovad, DMA_32BIT_PFN);
> >
> >Changelog[v2]:
> > The following series implements a fix for:
> > A kdump problem about DMA that has been discussed for a long time. That is,
> > when a kernel panics and boots into the kdump kernel, DMA started by the
> > panicked kernel is not stopped before the kdump kernel is booted and the
> > kdump kernel disables the IOMMU while this DMA continues. This causes the
> > IOMMU to stop translating the DMA addresses as IOVAs and begin to treat
> > them as physical memory addresses -- which causes the DMA to either:
> > (1) generate DMAR errors or
> > (2) generate PCI SERR errors or
> > (3) transfer data to or from incorrect areas of memory. Often this
> > causes the dump to fail.
> >
> >Changelog[v1]:
> > The original version.
> >
> >Changed in this version:
> >1. Do not disable and re-enable traslation and interrupt remapping.
> >2. Use old root entry table.
> >3. Use old interrupt remapping table.
> >4. Use "unsigned long" as physical address.
> >5. Use intel_unmap to unmap the old dma;
> >
> >Baoquan He <bhe@redhat.com> helps testing this patchset.
> >Takao Indoh <indou.takao@jp.fujitsu.com> gives valuable suggestions.
> >
> > iommu/vt-d: Update iommu_attach_domain() and its callers
> > iommu/vt-d: Items required for kdump
> > iommu/vt-d: Add domain-id functions
> > iommu/vt-d: functions to copy data from old mem
> > iommu/vt-d: Add functions to load and save old re
> > iommu/vt-d: datatypes and functions used for kdump
> > iommu/vt-d: enable kdump support in iommu module
> > iommu/vt-d: assign new page table for dma_map
> > iommu/vt-d: Copy functions for irte
> > iommu/vt-d: Use old irte in kdump kernel
> >
> >Signed-off-by: Bill Sumner <billsumnerlinux@gmail.com>
> >Signed-off-by: Li, Zhen-Hua <zhen-hual@hp.com>
> >Signed-off-by: Takao Indoh <indou.takao@jp.fujitsu.com>
> >Tested-by: Baoquan He <bhe@redhat.com>
> >---
> > drivers/iommu/intel-iommu.c | 1054 +++++++++++++++++++++++++++++++++--
> > drivers/iommu/intel_irq_remapping.c | 104 +++-
> > include/linux/intel-iommu.h | 18 +
> > 3 files changed, 1134 insertions(+), 42 deletions(-)
> >
>
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v8 0/10] iommu/vt-d: Fix intel vt-d faults in kdump kernel
2015-01-12 9:07 ` Baoquan He
@ 2015-01-12 9:28 ` Li, ZhenHua
0 siblings, 0 replies; 27+ messages in thread
From: Li, ZhenHua @ 2015-01-12 9:28 UTC (permalink / raw)
To: Baoquan He
Cc: dwmw2, indou.takao, joro, vgoyal, dyoung, iommu, linux-kernel,
linux-pci, kexec, alex.williamson, ddutile, ishii.hironobu,
bhelgaas, doug.hatch, jerry.hoemann, tom.vaden, li.zhang6,
lisa.mitchell, billsumnerlinux, rwright, Li, ZhenHua
On 01/12/2015 05:07 PM, Baoquan He wrote:
> On 01/12/15 at 04:00pm, Li, ZhenHua wrote:
>> Comparing to v7, this version adds only a few lines code:
>>
>> In function copy_page_table,
>>
>> + __iommu_flush_cache(iommu, phys_to_virt(dma_pte_next),
>> + VTD_PAGE_SIZE);
>
> So this adding fixs the reported dmar fault on Takao's system, right?
I am not sure whether it can fix the dmar fault on Takao's system, but
I hope it does.
>
>>
>>
>> On 01/12/2015 03:06 PM, Li, Zhen-Hua wrote:
>>> This patchset is an update of Bill Sumner's patchset, implements a fix for:
>>> If a kernel boots with intel_iommu=on on a system that supports intel vt-d,
>>> when a panic happens, the kdump kernel will boot with these faults:
>>>
>>> dmar: DRHD: handling fault status reg 102
>>> dmar: DMAR:[DMA Read] Request device [01:00.0] fault addr fff80000
>>> DMAR:[fault reason 01] Present bit in root entry is clear
>>>
>>> dmar: DRHD: handling fault status reg 2
>>> dmar: INTR-REMAP: Request device [[61:00.0] fault index 42
>>> INTR-REMAP:[fault reason 34] Present field in the IRTE entry is clear
>>>
>>> On some system, the interrupt remapping fault will also happen even if the
>>> intel_iommu is not set to on, because the interrupt remapping will be enabled
>>> when x2apic is needed by the system.
>>>
>>> The cause of the DMA fault is described in Bill's original version, and the
>>> INTR-Remap fault is caused by a similar reason. In short, the initialization
>>> of vt-d drivers causes the in-flight DMA and interrupt requests get wrong
>>> response.
>>>
>>> To fix this problem, we modifies the behaviors of the intel vt-d in the
>>> crashdump kernel:
>>>
>>> For DMA Remapping:
>>> 1. To accept the vt-d hardware in an active state,
>>> 2. Do not disable and re-enable the translation, keep it enabled.
>>> 3. Use the old root entry table, do not rewrite the RTA register.
>>> 4. Malloc and use new context entry table and page table, copy data from the
>>> old ones that used by the old kernel.
>>> 5. to use different portions of the iova address ranges for the device drivers
>>> in the crashdump kernel than the iova ranges that were in-use at the time
>>> of the panic.
>>> 6. After device driver is loaded, when it issues the first dma_map command,
>>> free the dmar_domain structure for this device, and generate a new one, so
>>> that the device can be assigned a new and empty page table.
>>> 7. When a new context entry table is generated, we also save its address to
>>> the old root entry table.
>>>
>>> For Interrupt Remapping:
>>> 1. To accept the vt-d hardware in an active state,
>>> 2. Do not disable and re-enable the interrupt remapping, keep it enabled.
>>> 3. Use the old interrupt remapping table, do not rewrite the IRTA register.
>>> 4. When ioapic entry is setup, the interrupt remapping table is changed, and
>>> the updated data will be stored to the old interrupt remapping table.
>>>
>>> Advantages of this approach:
>>> 1. All manipulation of the IO-device is done by the Linux device-driver
>>> for that device.
>>> 2. This approach behaves in a manner very similar to operation without an
>>> active iommu.
>>> 3. Any activity between the IO-device and its RMRR areas is handled by the
>>> device-driver in the same manner as during a non-kdump boot.
>>> 4. If an IO-device has no driver in the kdump kernel, it is simply left alone.
>>> This supports the practice of creating a special kdump kernel without
>>> drivers for any devices that are not required for taking a crashdump.
>>> 5. Minimal code-changes among the existing mainline intel vt-d code.
>>>
>>> Summary of changes in this patch set:
>>> 1. Added some useful function for root entry table in code intel-iommu.c
>>> 2. Added new members to struct root_entry and struct irte;
>>> 3. Functions to load old root entry table to iommu->root_entry from the memory
>>> of old kernel.
>>> 4. Functions to malloc new context entry table and page table and copy the data
>>> from the old ones to the malloced new ones.
>>> 5. Functions to enable support for DMA remapping in kdump kernel.
>>> 6. Functions to load old irte data from the old kernel to the kdump kernel.
>>> 7. Some code changes that support other behaviours that have been listed.
>>> 8. In the new functions, use physical address as "unsigned long" type, not
>>> pointers.
>>>
>>> Original version by Bill Sumner:
>>> https://lkml.org/lkml/2014/1/10/518
>>> https://lkml.org/lkml/2014/4/15/716
>>> https://lkml.org/lkml/2014/4/24/836
>>>
>>> Zhenhua's updates:
>>> https://lkml.org/lkml/2014/10/21/134
>>> https://lkml.org/lkml/2014/12/15/121
>>> https://lkml.org/lkml/2014/12/22/53
>>> https://lkml.org/lkml/2015/1/6/1166
>>>
>>> Changelog[v8]:
>>> 1. Add a missing __iommu_flush_cache in function copy_page_table.
>>>
>>> Changelog[v7]:
>>> 1. Use __iommu_flush_cache to flush the data to hardware.
>>>
>>> Changelog[v6]:
>>> 1. Use "unsigned long" as type of physical address.
>>> 2. Use new function unmap_device_dma to unmap the old dma.
>>> 3. Some small incorrect bits order for aw shift.
>>>
>>> Changelog[v5]:
>>> 1. Do not disable and re-enable traslation and interrupt remapping.
>>> 2. Use old root entry table.
>>> 3. Use old interrupt remapping table.
>>> 4. New functions to copy data from old kernel, and save to old kernel mem.
>>> 5. New functions to save updated root entry table and irte table.
>>> 6. Use intel_unmap to unmap the old dma;
>>> 7. Allocate new pages while driver is being loaded.
>>>
>>> Changelog[v4]:
>>> 1. Cut off the patches that move some defines and functions to new files.
>>> 2. Reduce the numbers of patches to five, make it more easier to read.
>>> 3. Changed the name of functions, make them consistent with current context
>>> get/set functions.
>>> 4. Add change to function __iommu_attach_domain.
>>>
>>> Changelog[v3]:
>>> 1. Commented-out "#define DEBUG 1" to eliminate debug messages.
>>> 2. Updated the comments about changes in each version.
>>> 3. Fixed: one-line added to Copy-Translations patch to initialize the iovad
>>> struct as recommended by Baoquan He [bhe@redhat.com]
>>> init_iova_domain(&domain->iovad, DMA_32BIT_PFN);
>>>
>>> Changelog[v2]:
>>> The following series implements a fix for:
>>> A kdump problem about DMA that has been discussed for a long time. That is,
>>> when a kernel panics and boots into the kdump kernel, DMA started by the
>>> panicked kernel is not stopped before the kdump kernel is booted and the
>>> kdump kernel disables the IOMMU while this DMA continues. This causes the
>>> IOMMU to stop translating the DMA addresses as IOVAs and begin to treat
>>> them as physical memory addresses -- which causes the DMA to either:
>>> (1) generate DMAR errors or
>>> (2) generate PCI SERR errors or
>>> (3) transfer data to or from incorrect areas of memory. Often this
>>> causes the dump to fail.
>>>
>>> Changelog[v1]:
>>> The original version.
>>>
>>> Changed in this version:
>>> 1. Do not disable and re-enable traslation and interrupt remapping.
>>> 2. Use old root entry table.
>>> 3. Use old interrupt remapping table.
>>> 4. Use "unsigned long" as physical address.
>>> 5. Use intel_unmap to unmap the old dma;
>>>
>>> Baoquan He <bhe@redhat.com> helps testing this patchset.
>>> Takao Indoh <indou.takao@jp.fujitsu.com> gives valuable suggestions.
>>>
>>> iommu/vt-d: Update iommu_attach_domain() and its callers
>>> iommu/vt-d: Items required for kdump
>>> iommu/vt-d: Add domain-id functions
>>> iommu/vt-d: functions to copy data from old mem
>>> iommu/vt-d: Add functions to load and save old re
>>> iommu/vt-d: datatypes and functions used for kdump
>>> iommu/vt-d: enable kdump support in iommu module
>>> iommu/vt-d: assign new page table for dma_map
>>> iommu/vt-d: Copy functions for irte
>>> iommu/vt-d: Use old irte in kdump kernel
>>>
>>> Signed-off-by: Bill Sumner <billsumnerlinux@gmail.com>
>>> Signed-off-by: Li, Zhen-Hua <zhen-hual@hp.com>
>>> Signed-off-by: Takao Indoh <indou.takao@jp.fujitsu.com>
>>> Tested-by: Baoquan He <bhe@redhat.com>
>>> ---
>>> drivers/iommu/intel-iommu.c | 1054 +++++++++++++++++++++++++++++++++--
>>> drivers/iommu/intel_irq_remapping.c | 104 +++-
>>> include/linux/intel-iommu.h | 18 +
>>> 3 files changed, 1134 insertions(+), 42 deletions(-)
>>>
>>
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v8 01/10] iommu/vt-d: Update iommu_attach_domain() and its callers
2015-01-12 7:06 ` [PATCH v8 01/10] iommu/vt-d: Update iommu_attach_domain() and its callers Li, Zhen-Hua
@ 2015-01-12 15:18 ` Joerg Roedel
2015-01-13 1:28 ` Li, ZhenHua
0 siblings, 1 reply; 27+ messages in thread
From: Joerg Roedel @ 2015-01-12 15:18 UTC (permalink / raw)
To: Li, Zhen-Hua
Cc: dwmw2, indou.takao, bhe, vgoyal, dyoung, iommu, linux-kernel,
linux-pci, kexec, alex.williamson, ddutile, ishii.hironobu,
bhelgaas, doug.hatch, jerry.hoemann, tom.vaden, li.zhang6,
lisa.mitchell, billsumnerlinux, rwright
On Mon, Jan 12, 2015 at 03:06:19PM +0800, Li, Zhen-Hua wrote:
> Allow specification of the domain-id for the new domain.
> This patch only adds the 'did' parameter to iommu_attach_domain()
> and modifies all of its callers to specify the default value of -1
> which says "no did specified, allocate a new one".
I think it's better to keep the old iommu_attach_domain() interface in
place and introduce a new function (like iommu_attach_domain_with_id()
or something) which has the additional parameter. Then you can rewrite
iommu_attach_domain():
iommu_attach_domain(...)
{
return iommu_attach_domain_with_id(..., -1);
}
This way you don't have to update all the callers of
iommu_attach_domain() and the interface is more readable.
Joerg
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v8 02/10] iommu/vt-d: Items required for kdump
2015-01-12 7:06 ` [PATCH v8 02/10] iommu/vt-d: Items required for kdump Li, Zhen-Hua
@ 2015-01-12 15:22 ` Joerg Roedel
2015-01-12 15:29 ` Vivek Goyal
2015-01-13 8:12 ` Li, ZhenHua
0 siblings, 2 replies; 27+ messages in thread
From: Joerg Roedel @ 2015-01-12 15:22 UTC (permalink / raw)
To: Li, Zhen-Hua
Cc: dwmw2, indou.takao, bhe, vgoyal, dyoung, iommu, linux-kernel,
linux-pci, kexec, alex.williamson, ddutile, ishii.hironobu,
bhelgaas, doug.hatch, jerry.hoemann, tom.vaden, li.zhang6,
lisa.mitchell, billsumnerlinux, rwright
On Mon, Jan 12, 2015 at 03:06:20PM +0800, Li, Zhen-Hua wrote:
> +
> +#ifdef CONFIG_CRASH_DUMP
> +
> +/*
> + * Fix Crashdump failure caused by leftover DMA through a hardware IOMMU
> + *
> + * Fixes the crashdump kernel to deal with an active iommu and legacy
> + * DMA from the (old) panicked kernel in a manner similar to how legacy
> + * DMA is handled when no hardware iommu was in use by the old kernel --
> + * allow the legacy DMA to continue into its current buffers.
> + *
> + * In the crashdump kernel, this code:
> + * 1. skips disabling the IOMMU's translating of IO Virtual Addresses (IOVA).
> + * 2. Do not re-enable IOMMU's translating.
> + * 3. In kdump kernel, use the old root entry table.
> + * 4. Leaves the current translations in-place so that legacy DMA will
> + * continue to use its current buffers.
> + * 5. Allocates to the device drivers in the crashdump kernel
> + * portions of the iova address ranges that are different
> + * from the iova address ranges that were being used by the old kernel
> + * at the time of the panic.
> + *
> + */
It looks like you are still copying the io-page-tables from the old
kernel into the kdump kernel, is that right? With the approach that was
proposed you only need to copy over the context entries 1-1. They are
still pointing to the page-tables in the old kernel's memory (which is
just fine).
The root-entry of the old kernel is also re-used, and when the kdump
kernel starts to use a device, its context entry is updated to point to
a newly allocated page-table.
Joerg
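For illustration, a minimal sketch of the 1-1 copy described above (the function and helper names are made up for this sketch; it is not code from the patch set). The new context entries keep pointing at the page tables that still sit in the old kernel's memory, which are left untouched:

/* Illustrative sketch: reuse the old kernel's root-entry table address
 * and duplicate each present context-entry table 1-1.  The copied
 * context entries still point at page tables in the old kernel's
 * memory; nothing below the context level is copied or modified.
 */
static int copy_translation_tables_sketch(struct intel_iommu *iommu)
{
        struct root_entry *old_rt;
        unsigned long old_rt_phys;
        int bus, ret = 0;

        old_rt_phys = dmar_readq(iommu->reg + DMAR_RTADDR_REG) & VTD_PAGE_MASK;
        if (!old_rt_phys)
                return -1;

        old_rt = (struct root_entry *)ioremap_cache(old_rt_phys, PAGE_SIZE);
        if (!old_rt)
                return -ENOMEM;

        for (bus = 0; bus < 256; bus++) {
                if (!root_present(&old_rt[bus]))
                        continue;
                /* hypothetical helper: duplicate this bus's context table */
                ret = copy_one_context_table(iommu, &old_rt[bus]);
                if (ret)
                        break;
        }

        iounmap((void __iomem *)old_rt);
        return ret;
}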
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v8 02/10] iommu/vt-d: Items required for kdump
2015-01-12 15:22 ` Joerg Roedel
@ 2015-01-12 15:29 ` Vivek Goyal
2015-01-12 16:06 ` Joerg Roedel
2015-01-13 11:41 ` Baoquan He
2015-01-13 8:12 ` Li, ZhenHua
1 sibling, 2 replies; 27+ messages in thread
From: Vivek Goyal @ 2015-01-12 15:29 UTC (permalink / raw)
To: Joerg Roedel
Cc: Li, Zhen-Hua, dwmw2, indou.takao, bhe, dyoung, iommu,
linux-kernel, linux-pci, kexec, alex.williamson, ddutile,
ishii.hironobu, bhelgaas, doug.hatch, jerry.hoemann, tom.vaden,
li.zhang6, lisa.mitchell, billsumnerlinux, rwright
On Mon, Jan 12, 2015 at 04:22:08PM +0100, Joerg Roedel wrote:
> On Mon, Jan 12, 2015 at 03:06:20PM +0800, Li, Zhen-Hua wrote:
> > +
> > +#ifdef CONFIG_CRASH_DUMP
> > +
> > +/*
> > + * Fix Crashdump failure caused by leftover DMA through a hardware IOMMU
> > + *
> > + * Fixes the crashdump kernel to deal with an active iommu and legacy
> > + * DMA from the (old) panicked kernel in a manner similar to how legacy
> > + * DMA is handled when no hardware iommu was in use by the old kernel --
> > + * allow the legacy DMA to continue into its current buffers.
> > + *
> > + * In the crashdump kernel, this code:
> > + * 1. skips disabling the IOMMU's translating of IO Virtual Addresses (IOVA).
> > + * 2. Do not re-enable IOMMU's translating.
> > + * 3. In kdump kernel, use the old root entry table.
> > + * 4. Leaves the current translations in-place so that legacy DMA will
> > + * continue to use its current buffers.
> > + * 5. Allocates to the device drivers in the crashdump kernel
> > + * portions of the iova address ranges that are different
> > + * from the iova address ranges that were being used by the old kernel
> > + * at the time of the panic.
> > + *
> > + */
>
> It looks like you are still copying the io-page-tables from the old
> kernel into the kdump kernel, is that right? With the approach that was
> proposed you only need to copy over the context entries 1-1. They are
> still pointing to the page-tables in the old kernels memory (which is
> just fine).
Kdump has the notion of a backup region, where certain parts of the old
kernel's memory can be moved to a different location (the first 640K on
x86 as of now) so that the new kernel can make use of this memory.
So we will have to just make sure that no parts of this old page table
fall into the backup region.
Thanks
Vivek
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v8 02/10] iommu/vt-d: Items required for kdump
2015-01-12 15:29 ` Vivek Goyal
@ 2015-01-12 16:06 ` Joerg Roedel
2015-01-12 16:15 ` Vivek Goyal
2015-01-13 11:41 ` Baoquan He
1 sibling, 1 reply; 27+ messages in thread
From: Joerg Roedel @ 2015-01-12 16:06 UTC (permalink / raw)
To: Vivek Goyal
Cc: Li, Zhen-Hua, dwmw2, indou.takao, bhe, dyoung, iommu,
linux-kernel, linux-pci, kexec, alex.williamson, ddutile,
ishii.hironobu, bhelgaas, doug.hatch, jerry.hoemann, tom.vaden,
li.zhang6, lisa.mitchell, billsumnerlinux, rwright
On Mon, Jan 12, 2015 at 10:29:19AM -0500, Vivek Goyal wrote:
> Kdump has the notion of backup region. Where certain parts of old kernels
> memory can be moved to a different location (first 640K on x86 as of now)
> and new kernel can make use of this memory now.
>
> So we will have to just make sure that no parts of this old page table
> fall into backup region.
Uuh, looks like the 'iommu-with-kdump-issue' isn't complicated enough
yet ;)
Sadly, your above statement is true for all hardware-accessible data
structures in IOMMU code. I'll think about how we can solve this; is there
an easy way to allocate memory that is not in any backup region?
Thanks,
Joerg
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v8 02/10] iommu/vt-d: Items required for kdump
2015-01-12 16:06 ` Joerg Roedel
@ 2015-01-12 16:15 ` Vivek Goyal
2015-01-12 16:48 ` Joerg Roedel
0 siblings, 1 reply; 27+ messages in thread
From: Vivek Goyal @ 2015-01-12 16:15 UTC (permalink / raw)
To: Joerg Roedel
Cc: Li, Zhen-Hua, dwmw2, indou.takao, bhe, dyoung, iommu,
linux-kernel, linux-pci, kexec, alex.williamson, ddutile,
ishii.hironobu, bhelgaas, doug.hatch, jerry.hoemann, tom.vaden,
li.zhang6, lisa.mitchell, billsumnerlinux, rwright
On Mon, Jan 12, 2015 at 05:06:46PM +0100, Joerg Roedel wrote:
> On Mon, Jan 12, 2015 at 10:29:19AM -0500, Vivek Goyal wrote:
> > Kdump has the notion of backup region. Where certain parts of old kernels
> > memory can be moved to a different location (first 640K on x86 as of now)
> > and new kernel can make use of this memory now.
> >
> > So we will have to just make sure that no parts of this old page table
> > fall into backup region.
>
> Uuh, looks like the 'iommu-with-kdump-issue' isn't complicated enough
> yet ;)
> Sadly, your above statement is true for all hardware-accessible data
> structures in IOMMU code. I think about how we can solve this, is there
> an easy way to allocate memory that is not in any backup region?
Hmm, there does not seem to be any easy way to do this. In fact, as of
now, the kernel does not even know where the backup region is. All these
details are managed completely by user space (except for the new
kexec_file_load() syscall).
That means we are left with ugly options now.
- Define per-arch kexec backup regions in the kernel and export them to
user space, and let kexec-tools make use of that definition (instead of
defining its own). That way, memory allocation code in the kernel can
look at this backup area and skip it for certain allocations; a rough
sketch of such a check follows below.
Thanks
Vivek
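A purely hypothetical sketch of the kind of check such an exported definition would enable (neither the structure nor the helper below exists in the kernel or in this series; the names are made up for illustration):

/* Hypothetical: a per-arch exported backup-region definition and a
 * helper that allocation paths could use to reject memory that
 * overlaps it.
 */
struct kexec_backup_region {
        unsigned long start;    /* physical start of the backup region */
        unsigned long size;     /* size of the backup region in bytes  */
};

extern struct kexec_backup_region kexec_backup_region; /* hypothetical */

static bool overlaps_kexec_backup_region(unsigned long phys,
                                         unsigned long size)
{
        unsigned long b_start = kexec_backup_region.start;
        unsigned long b_end   = b_start + kexec_backup_region.size;

        return phys < b_end && phys + size > b_start;
}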
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v8 02/10] iommu/vt-d: Items required for kdump
2015-01-12 16:15 ` Vivek Goyal
@ 2015-01-12 16:48 ` Joerg Roedel
0 siblings, 0 replies; 27+ messages in thread
From: Joerg Roedel @ 2015-01-12 16:48 UTC (permalink / raw)
To: Vivek Goyal
Cc: Li, Zhen-Hua, dwmw2, indou.takao, bhe, dyoung, iommu,
linux-kernel, linux-pci, kexec, alex.williamson, ddutile,
ishii.hironobu, bhelgaas, doug.hatch, jerry.hoemann, tom.vaden,
li.zhang6, lisa.mitchell, billsumnerlinux, rwright
On Mon, Jan 12, 2015 at 11:15:38AM -0500, Vivek Goyal wrote:
> On Mon, Jan 12, 2015 at 05:06:46PM +0100, Joerg Roedel wrote:
> > On Mon, Jan 12, 2015 at 10:29:19AM -0500, Vivek Goyal wrote:
> > > Kdump has the notion of backup region. Where certain parts of old kernels
> > > memory can be moved to a different location (first 640K on x86 as of now)
> > > and new kernel can make use of this memory now.
> > >
> > > So we will have to just make sure that no parts of this old page table
> > > fall into backup region.
> >
> > Uuh, looks like the 'iommu-with-kdump-issue' isn't complicated enough
> > yet ;)
> > Sadly, your above statement is true for all hardware-accessible data
> > structures in IOMMU code. I think about how we can solve this, is there
> > an easy way to allocate memory that is not in any backup region?
>
> Hmm..., there does not seem to be any easy way to do this. In fact, as of
> now, kernel does not even know where is backup region. All these details are
> managed by user space completely (except for new kexec_file_load() syscall).
>
> That means we are left with ugly options now.
>
> - Define per arch kexec backup regions in kernel and export it to user
> space and let kexec-tools make use of that deinition (instead of
> defining its own). That way memory allocation code in kernel can look
> at this backup area and skip it for certain allocations.
Yes, that makes sense. In fact, I think all allocations for DMA memory
need to take this into account to avoid potentially serious data
corruption.
If any memory for a disk superblock gets allocated in backup memory and
a kdump happens, the new kernel might zero out that area and the disk
controller then writes the zeroes to disk instead of the superblock.
Joerg
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v8 01/10] iommu/vt-d: Update iommu_attach_domain() and its callers
2015-01-12 15:18 ` Joerg Roedel
@ 2015-01-13 1:28 ` Li, ZhenHua
0 siblings, 0 replies; 27+ messages in thread
From: Li, ZhenHua @ 2015-01-13 1:28 UTC (permalink / raw)
To: Joerg Roedel
Cc: dwmw2, indou.takao, bhe, vgoyal, dyoung, iommu, linux-kernel,
linux-pci, kexec, alex.williamson, ddutile, ishii.hironobu,
bhelgaas, doug.hatch, jerry.hoemann, tom.vaden, li.zhang6,
lisa.mitchell, billsumnerlinux, rwright, Li, ZhenHua
On 01/12/2015 11:18 PM, Joerg Roedel wrote:
> On Mon, Jan 12, 2015 at 03:06:19PM +0800, Li, Zhen-Hua wrote:
>> Allow specification of the domain-id for the new domain.
>> This patch only adds the 'did' parameter to iommu_attach_domain()
>> and modifies all of its callers to specify the default value of -1
>> which says "no did specified, allocate a new one".
>
> I think its better to keep the old iommu_attach_domain() interface in
> place and introduce a new function (like iommu_attach_domain_with_id()
> or something) which has the additional parameter. Then you can rewrite
> iommu_attach_domain():
>
> iommu_attach_domai(...)
> {
> return iommu_attach_domain_with_id(..., -1);
> }
>
> This way you don't have to update all the callers of
> iommu_attach_domain() and the interface is more readable.
>
>
> Joerg
>
That's a good way. I will do this in the next version.
Thanks
Zhenhua
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v8 02/10] iommu/vt-d: Items required for kdump
2015-01-12 15:22 ` Joerg Roedel
2015-01-12 15:29 ` Vivek Goyal
@ 2015-01-13 8:12 ` Li, ZhenHua
2015-01-13 11:52 ` Joerg Roedel
1 sibling, 1 reply; 27+ messages in thread
From: Li, ZhenHua @ 2015-01-13 8:12 UTC (permalink / raw)
To: Joerg Roedel
Cc: dwmw2, indou.takao, bhe, vgoyal, dyoung, iommu, linux-kernel,
linux-pci, kexec, alex.williamson, ddutile, ishii.hironobu,
bhelgaas, doug.hatch, jerry.hoemann, tom.vaden, li.zhang6,
lisa.mitchell, billsumnerlinux, rwright, Li, ZhenHua
On 01/12/2015 11:22 PM, Joerg Roedel wrote:
> On Mon, Jan 12, 2015 at 03:06:20PM +0800, Li, Zhen-Hua wrote:
>> +
>> +#ifdef CONFIG_CRASH_DUMP
>> +
>> +/*
>> + * Fix Crashdump failure caused by leftover DMA through a hardware IOMMU
>> + *
>> + * Fixes the crashdump kernel to deal with an active iommu and legacy
>> + * DMA from the (old) panicked kernel in a manner similar to how legacy
>> + * DMA is handled when no hardware iommu was in use by the old kernel --
>> + * allow the legacy DMA to continue into its current buffers.
>> + *
>> + * In the crashdump kernel, this code:
>> + * 1. skips disabling the IOMMU's translating of IO Virtual Addresses (IOVA).
>> + * 2. Do not re-enable IOMMU's translating.
>> + * 3. In kdump kernel, use the old root entry table.
>> + * 4. Leaves the current translations in-place so that legacy DMA will
>> + * continue to use its current buffers.
>> + * 5. Allocates to the device drivers in the crashdump kernel
>> + * portions of the iova address ranges that are different
>> + * from the iova address ranges that were being used by the old kernel
>> + * at the time of the panic.
>> + *
>> + */
>
> It looks like you are still copying the io-page-tables from the old
> kernel into the kdump kernel, is that right? With the approach that was
> proposed you only need to copy over the context entries 1-1. They are
> still pointing to the page-tables in the old kernels memory (which is
> just fine).
>
> The root-entry of the old kernel is also re-used, and when the kdump
> kernel starts to use a device, its context entry is updated to point to
> a newly allocated page-table.
>
Current status:
1. Use old root entry table,
2. Copy old context entry tables.
3. Copy old page tables.
4. Allocate new page table when device driver is loading.
I remember the progress we discussed was this. Checking the old mails, I
found we did not clearly discuss the old page tables; we only agreed on
allocating new page tables when the device driver is loading.
To me, both of these options are fine: copying the old page tables, or
re-using the old ones. My debugging shows both of them work fine.
If we use the old page tables, the patchset will use less code.
Functions copy_page_addr, copy_page_table are no longer needed, and
functions copy_context_entry and copy_context_entry_table will be
reduced to several lines. The patchset will be much simpler.
Zhenhua
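A rough sketch of what the per-table copy could shrink to once the old page tables are reused in place (illustrative names and signature, not the actual next-version code): the context entries are duplicated as-is, so they keep pointing at the old kernel's page tables.

/* Illustrative sketch only: duplicate one context-entry table and flush
 * it so the IOMMU sees the copy; the page tables referenced by these
 * entries stay in the old kernel's memory and are not touched.
 */
static int copy_one_context_table(struct intel_iommu *iommu,
                                  unsigned long old_ce_phys,
                                  struct context_entry *new_ce)
{
        void __iomem *old_ce;

        old_ce = ioremap_cache(old_ce_phys, VTD_PAGE_SIZE);
        if (!old_ce)
                return -ENOMEM;

        memcpy_fromio(new_ce, old_ce, VTD_PAGE_SIZE);
        __iommu_flush_cache(iommu, new_ce, VTD_PAGE_SIZE);

        iounmap(old_ce);
        return 0;
}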
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v8 02/10] iommu/vt-d: Items required for kdump
2015-01-12 15:29 ` Vivek Goyal
2015-01-12 16:06 ` Joerg Roedel
@ 2015-01-13 11:41 ` Baoquan He
1 sibling, 0 replies; 27+ messages in thread
From: Baoquan He @ 2015-01-13 11:41 UTC (permalink / raw)
To: Vivek Goyal
Cc: Joerg Roedel, Li, Zhen-Hua, dwmw2, indou.takao, dyoung, iommu,
linux-kernel, linux-pci, kexec, alex.williamson, ddutile,
ishii.hironobu, bhelgaas, doug.hatch, jerry.hoemann, tom.vaden,
li.zhang6, lisa.mitchell, billsumnerlinux, rwright
On 01/12/15 at 10:29am, Vivek Goyal wrote:
> On Mon, Jan 12, 2015 at 04:22:08PM +0100, Joerg Roedel wrote:
> > It looks like you are still copying the io-page-tables from the old
> > kernel into the kdump kernel, is that right? With the approach that was
> > proposed you only need to copy over the context entries 1-1. They are
> > still pointing to the page-tables in the old kernels memory (which is
> > just fine).
>
> Kdump has the notion of backup region. Where certain parts of old kernels
> memory can be moved to a different location (first 640K on x86 as of now)
> and new kernel can make use of this memory now.
Hi Vivek,
About the backup region I am a bit confused. Take x86: we usually copy
the first 640K to a backup region, and this first 640K is then used as a
usable memory region in the 2nd kernel since its content has been copied
to the backup region. And that backup region is taken from the
crashkernel reserved memory and is not passed to the 2nd kernel as a
usable memory region.
Did you mean the old page table could fall into the first 640K memory
region, or into that reserved backup region? As I understand it, the
backup region is taken from crashkernel memory, which is not used by any
1st-kernel process.
Thanks
Baoquan
>
> So we will have to just make sure that no parts of this old page table
> fall into backup region.
>
> Thanks
> Vivek
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v8 02/10] iommu/vt-d: Items required for kdump
2015-01-13 8:12 ` Li, ZhenHua
@ 2015-01-13 11:52 ` Joerg Roedel
0 siblings, 0 replies; 27+ messages in thread
From: Joerg Roedel @ 2015-01-13 11:52 UTC (permalink / raw)
To: Li, ZhenHua
Cc: dwmw2, indou.takao, bhe, vgoyal, dyoung, iommu, linux-kernel,
linux-pci, kexec, alex.williamson, ddutile, ishii.hironobu,
bhelgaas, doug.hatch, jerry.hoemann, tom.vaden, li.zhang6,
lisa.mitchell, billsumnerlinux, rwright
On Tue, Jan 13, 2015 at 04:12:29PM +0800, Li, ZhenHua wrote:
> Current status:
> 1. Use old root entry table,
> 2. Copy old context entry tables.
> 3. Copy old page tables.
> 4. Allocate new page table when device driver is loading.
>
> I remember the progress we discussed was this. Checking the old mails,
> founding we did not clearly discuss about the old page tables, only
> agrred with allocating new page tables when device driver is loading.
>
> To me, both of these two options are fine, copying old page tables, or
> re-use the old ones. My debugging shows both of them works fine.
>
> If we use the old page tables, the patchset will use less code.
> Functions copy_page_addr, copy_page_table are no longer needed, and
> functions copy_context_entry and copy_context_entry_table will be
> reduced to several lines. The patchset will be much simpler.
Yes, please do it this way. There is no point in copying the old
page-tables. We never make changes to them, so they can stay where they
are.
The problem with the backup region still exists, but it is not
specific to IOMMUs. This problem should be addressed separately.
Joerg
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v8 06/10] iommu/vt-d: datatypes and functions used for kdump
2015-01-12 7:06 ` [PATCH v8 06/10] iommu/vt-d: datatypes and functions used for kdump Li, Zhen-Hua
@ 2015-01-15 3:28 ` Baoquan He
2015-01-15 5:45 ` Li, ZhenHua
0 siblings, 1 reply; 27+ messages in thread
From: Baoquan He @ 2015-01-15 3:28 UTC (permalink / raw)
To: Li, Zhen-Hua
Cc: dwmw2, indou.takao, joro, vgoyal, dyoung, jerry.hoemann,
tom.vaden, rwright, linux-pci, kexec, iommu, lisa.mitchell,
linux-kernel, alex.williamson, ddutile, doug.hatch,
ishii.hironobu, bhelgaas, billsumnerlinux, li.zhang6
On 01/12/15 at 03:06pm, Li, Zhen-Hua wrote:
> +/*
> + * Interface to the "copy translation tables" set of functions
> + * from mainline code.
> + */
> +static int intel_iommu_load_translation_tables(struct dmar_drhd_unit *drhd,
> + int g_num_of_iommus)
The argument g_num_of_iommus has the same name as the global variable; it's
better to rename it to num_of_iommus. It can even be removed, since you can
just use the global variable g_num_of_iommus in this function.
The argument drhd could be an intel_iommu instead, because no other member
of drhd is needed.
> +{
> + struct intel_iommu *iommu; /* Virt(iommu hardware registers) */
> + unsigned long long q; /* quadword scratch */
> + int ret = 0; /* Integer return code */
> + int i = 0; /* Loop index */
> + unsigned long flags;
> +
> + /* Structure so copy_page_addr() can accumulate things
> + * over multiple calls and returns
> + */
> + struct copy_page_addr_parms ppa_parms = copy_page_addr_parms_init;
> + struct copy_page_addr_parms *ppap = &ppa_parms;
> +
> +
> + iommu = drhd->iommu;
> + q = dmar_readq(iommu->reg + DMAR_RTADDR_REG);
> + if (!q)
> + return -1;
> +
> + /* If (list needs initializing) do it here */
This initialization should not be here, because it's not only for this
drhd. It should be done in init_dmars().
> + if (!domain_values_list) {
> + domain_values_list =
> + kcalloc(g_num_of_iommus, sizeof(struct list_head),
> + GFP_KERNEL);
> +
> + if (!domain_values_list) {
> + pr_err("Allocation failed for domain_values_list array\n");
> + return -ENOMEM;
> + }
> + for (i = 0; i < g_num_of_iommus; i++)
> + INIT_LIST_HEAD(&domain_values_list[i]);
> + }
> +
> + spin_lock_irqsave(&iommu->lock, flags);
> +
> + /* Load the root-entry table from the old kernel
> + * foreach context_entry_table in root_entry
> + * foreach context_entry in context_entry_table
> + * foreach level-1 page_table_entry in context_entry
> + * foreach level-2 page_table_entry in level 1 page_table_entry
> + * Above pattern continues up to 6 levels of page tables
> + * Sanity-check the entry
> + * Process the bus, devfn, page_address, page_size
> + */
> + if (!iommu->root_entry) {
> + iommu->root_entry =
> + (struct root_entry *)alloc_pgtable_page(iommu->node);
> + if (!iommu->root_entry) {
> + spin_unlock_irqrestore(&iommu->lock, flags);
> + return -ENOMEM;
> + }
> + }
> +
> + iommu->root_entry_old_phys = q & VTD_PAGE_MASK;
> + if (!iommu->root_entry_old_phys) {
> + pr_err("Could not read old root entry address.");
> + return -1;
> + }
> +
> + iommu->root_entry_old_virt = ioremap_cache(iommu->root_entry_old_phys,
> + VTD_PAGE_SIZE);
> + if (!iommu->root_entry_old_virt) {
> + pr_err("Could not map the old root entry.");
> + return -ENOMEM;
> + }
> +
> + __iommu_load_old_root_entry(iommu);
> + ret = copy_root_entry_table(iommu, ppap);
> + __iommu_flush_cache(iommu, iommu->root_entry, PAGE_SIZE);
> + __iommu_update_old_root_entry(iommu, -1);
> +
> + spin_unlock_irqrestore(&iommu->lock, flags);
> +
> + __iommu_free_mapped_mem();
> +
> + if (ret)
> + return ret;
> +
> + ppa_parms.last = 1;
> + copy_page_addr(0, 0, 0, 0, 0, NULL, ppap);
> +
> + return 0;
> +}
> +
> #endif /* CONFIG_CRASH_DUMP */
> --
> 2.0.0-rc0
>
>
> _______________________________________________
> kexec mailing list
> kexec@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/kexec
* Re: [PATCH v8 06/10] iommu/vt-d: datatypes and functions used for kdump
2015-01-15 3:28 ` Baoquan He
@ 2015-01-15 5:45 ` Li, ZhenHua
2015-01-15 7:01 ` Baoquan He
0 siblings, 1 reply; 27+ messages in thread
From: Li, ZhenHua @ 2015-01-15 5:45 UTC (permalink / raw)
To: Baoquan He
Cc: dwmw2, indou.takao, joro, vgoyal, dyoung, jerry.hoemann,
tom.vaden, rwright, linux-pci, kexec, iommu, lisa.mitchell,
linux-kernel, alex.williamson, ddutile, doug.hatch,
ishii.hironobu, bhelgaas, billsumnerlinux, li.zhang6
Hi Baoquan,
Thank you very much for your review. According to the latest discussion,
however, the page tables will not be copied from the old kernel; we keep
using the old page tables until the device driver is loaded. So there will
be many changes in the next version.
See my comments.
On 01/15/2015 11:28 AM, Baoquan He wrote:
> On 01/12/15 at 03:06pm, Li, Zhen-Hua wrote:
>> +/*
>> + * Interface to the "copy translation tables" set of functions
>> + * from mainline code.
>> + */
>> +static int intel_iommu_load_translation_tables(struct dmar_drhd_unit *drhd,
>> + int g_num_of_iommus)
>
> The argument g_num_of_iommus has the same name as the global variable; it
> would be better to rename it to num_of_iommus. It can even be removed, since
> you can just use the global g_num_of_iommus in this function.
>
> The drhd argument can be changed to struct intel_iommu, because no other
> member of drhd is needed.
This function is no longer used. So forget the parameters.
>
>> +{
>> + struct intel_iommu *iommu; /* Virt(iommu hardware registers) */
>> + unsigned long long q; /* quadword scratch */
>> + int ret = 0; /* Integer return code */
>> + int i = 0; /* Loop index */
>> + unsigned long flags;
>> +
>> + /* Structure so copy_page_addr() can accumulate things
>> + * over multiple calls and returns
>> + */
>> + struct copy_page_addr_parms ppa_parms = copy_page_addr_parms_init;
>> + struct copy_page_addr_parms *ppap = &ppa_parms;
>> +
>> +
>> + iommu = drhd->iommu;
>> + q = dmar_readq(iommu->reg + DMAR_RTADDR_REG);
>> + if (!q)
>> + return -1;
>> +
>> + /* If (list needs initializing) do it here */
>
> This initialization should not be here, because it is not specific to
> this drhd. It should be done in init_dmars().
>
Yes, you are right. Though the variable domain_values_list will not be
used in the next version, I need to check whether there are any other
similar problems.
>> + if (!domain_values_list) {
>> + domain_values_list =
>> + kcalloc(g_num_of_iommus, sizeof(struct list_head),
>> + GFP_KERNEL);
>> +
>> + if (!domain_values_list) {
>> + pr_err("Allocation failed for domain_values_list array\n");
>> + return -ENOMEM;
>> + }
>> + for (i = 0; i < g_num_of_iommus; i++)
>> + INIT_LIST_HEAD(&domain_values_list[i]);
>> + }
>> +
>> + spin_lock_irqsave(&iommu->lock, flags);
>> +
>> + /* Load the root-entry table from the old kernel
>> + * foreach context_entry_table in root_entry
>> + * foreach context_entry in context_entry_table
>> + * foreach level-1 page_table_entry in context_entry
>> + * foreach level-2 page_table_entry in level 1 page_table_entry
>> + * Above pattern continues up to 6 levels of page tables
>> + * Sanity-check the entry
>> + * Process the bus, devfn, page_address, page_size
>> + */
>> + if (!iommu->root_entry) {
>> + iommu->root_entry =
>> + (struct root_entry *)alloc_pgtable_page(iommu->node);
>> + if (!iommu->root_entry) {
>> + spin_unlock_irqrestore(&iommu->lock, flags);
>> + return -ENOMEM;
>> + }
>> + }
>> +
>> + iommu->root_entry_old_phys = q & VTD_PAGE_MASK;
>> + if (!iommu->root_entry_old_phys) {
>> + pr_err("Could not read old root entry address.");
>> + return -1;
>> + }
>> +
>> + iommu->root_entry_old_virt = ioremap_cache(iommu->root_entry_old_phys,
>> + VTD_PAGE_SIZE);
>> + if (!iommu->root_entry_old_virt) {
>> + pr_err("Could not map the old root entry.");
>> + return -ENOMEM;
>> + }
>> +
>> + __iommu_load_old_root_entry(iommu);
>> + ret = copy_root_entry_table(iommu, ppap);
>> + __iommu_flush_cache(iommu, iommu->root_entry, PAGE_SIZE);
>> + __iommu_update_old_root_entry(iommu, -1);
>> +
>> + spin_unlock_irqrestore(&iommu->lock, flags);
>> +
>> + __iommu_free_mapped_mem();
>> +
>> + if (ret)
>> + return ret;
>> +
>> + ppa_parms.last = 1;
>> + copy_page_addr(0, 0, 0, 0, 0, NULL, ppap);
>> +
>> + return 0;
>> +}
>> +
>> #endif /* CONFIG_CRASH_DUMP */
>> --
>> 2.0.0-rc0
>>
>>
>> _______________________________________________
>> kexec mailing list
>> kexec@lists.infradead.org
>> http://lists.infradead.org/mailman/listinfo/kexec
* Re: [PATCH v8 06/10] iommu/vt-d: datatypes and functions used for kdump
2015-01-15 5:45 ` Li, ZhenHua
@ 2015-01-15 7:01 ` Baoquan He
0 siblings, 0 replies; 27+ messages in thread
From: Baoquan He @ 2015-01-15 7:01 UTC (permalink / raw)
To: Li, ZhenHua
Cc: dwmw2, indou.takao, joro, vgoyal, dyoung, jerry.hoemann,
tom.vaden, rwright, linux-pci, kexec, iommu, lisa.mitchell,
linux-kernel, alex.williamson, ddutile, doug.hatch,
ishii.hironobu, bhelgaas, billsumnerlinux, li.zhang6
On 01/15/15 at 01:45pm, Li, ZhenHua wrote:
> Hi Baoquan,
> Thank you very much for your review. According to the latest discussion,
> however, the page tables will not be copied from the old kernel; we keep
> using the old page tables until the device driver is loaded. So there will
> be many changes in the next version.
Oh, yes. So please ignore this comment.
>
> See my comments.
>
> On 01/15/2015 11:28 AM, Baoquan He wrote:
> >On 01/12/15 at 03:06pm, Li, Zhen-Hua wrote:
> >>+/*
> >>+ * Interface to the "copy translation tables" set of functions
> >>+ * from mainline code.
> >>+ */
> >>+static int intel_iommu_load_translation_tables(struct dmar_drhd_unit *drhd,
> >>+ int g_num_of_iommus)
> >
> >The argument g_num_of_iommus has the same name as the global variable; it
> >would be better to rename it to num_of_iommus. It can even be removed, since
> >you can just use the global g_num_of_iommus in this function.
> >
> >The drhd argument can be changed to struct intel_iommu, because no other
> >member of drhd is needed.
>
> This function is no longer used. So forget the parameters.
>
> >
> >>+{
> >>+ struct intel_iommu *iommu; /* Virt(iommu hardware registers) */
> >>+ unsigned long long q; /* quadword scratch */
> >>+ int ret = 0; /* Integer return code */
> >>+ int i = 0; /* Loop index */
> >>+ unsigned long flags;
> >>+
> >>+ /* Structure so copy_page_addr() can accumulate things
> >>+ * over multiple calls and returns
> >>+ */
> >>+ struct copy_page_addr_parms ppa_parms = copy_page_addr_parms_init;
> >>+ struct copy_page_addr_parms *ppap = &ppa_parms;
> >>+
> >>+
> >>+ iommu = drhd->iommu;
> >>+ q = dmar_readq(iommu->reg + DMAR_RTADDR_REG);
> >>+ if (!q)
> >>+ return -1;
> >>+
> >>+ /* If (list needs initializing) do it here */
> >
> >This initialization should not be here, because it is not specific to
> >this drhd. It should be done in init_dmars().
> >
> Yes, you are right. Though the variable domain_values_list will not be
> used in the next version, I need to check whether there are any other
> similar problems.
>
> >>+ if (!domain_values_list) {
> >>+ domain_values_list =
> >>+ kcalloc(g_num_of_iommus, sizeof(struct list_head),
> >>+ GFP_KERNEL);
> >>+
> >>+ if (!domain_values_list) {
> >>+ pr_err("Allocation failed for domain_values_list array\n");
> >>+ return -ENOMEM;
> >>+ }
> >>+ for (i = 0; i < g_num_of_iommus; i++)
> >>+ INIT_LIST_HEAD(&domain_values_list[i]);
> >>+ }
> >>+
> >>+ spin_lock_irqsave(&iommu->lock, flags);
> >>+
> >>+ /* Load the root-entry table from the old kernel
> >>+ * foreach context_entry_table in root_entry
> >>+ * foreach context_entry in context_entry_table
> >>+ * foreach level-1 page_table_entry in context_entry
> >>+ * foreach level-2 page_table_entry in level 1 page_table_entry
> >>+ * Above pattern continues up to 6 levels of page tables
> >>+ * Sanity-check the entry
> >>+ * Process the bus, devfn, page_address, page_size
> >>+ */
> >>+ if (!iommu->root_entry) {
> >>+ iommu->root_entry =
> >>+ (struct root_entry *)alloc_pgtable_page(iommu->node);
> >>+ if (!iommu->root_entry) {
> >>+ spin_unlock_irqrestore(&iommu->lock, flags);
> >>+ return -ENOMEM;
> >>+ }
> >>+ }
> >>+
> >>+ iommu->root_entry_old_phys = q & VTD_PAGE_MASK;
> >>+ if (!iommu->root_entry_old_phys) {
> >>+ pr_err("Could not read old root entry address.");
> >>+ return -1;
> >>+ }
> >>+
> >>+ iommu->root_entry_old_virt = ioremap_cache(iommu->root_entry_old_phys,
> >>+ VTD_PAGE_SIZE);
> >>+ if (!iommu->root_entry_old_virt) {
> >>+ pr_err("Could not map the old root entry.");
> >>+ return -ENOMEM;
> >>+ }
> >>+
> >>+ __iommu_load_old_root_entry(iommu);
> >>+ ret = copy_root_entry_table(iommu, ppap);
> >>+ __iommu_flush_cache(iommu, iommu->root_entry, PAGE_SIZE);
> >>+ __iommu_update_old_root_entry(iommu, -1);
> >>+
> >>+ spin_unlock_irqrestore(&iommu->lock, flags);
> >>+
> >>+ __iommu_free_mapped_mem();
> >>+
> >>+ if (ret)
> >>+ return ret;
> >>+
> >>+ ppa_parms.last = 1;
> >>+ copy_page_addr(0, 0, 0, 0, 0, NULL, ppap);
> >>+
> >>+ return 0;
> >>+}
> >>+
> >> #endif /* CONFIG_CRASH_DUMP */
> >>--
> >>2.0.0-rc0
> >>
> >>
> >>_______________________________________________
> >>kexec mailing list
> >>kexec@lists.infradead.org
> >>http://lists.infradead.org/mailman/listinfo/kexec
>
Thread overview: 27+ messages
2015-01-12 7:06 [PATCH v8 0/10] iommu/vt-d: Fix intel vt-d faults in kdump kernel Li, Zhen-Hua
2015-01-12 7:06 ` [PATCH v8 01/10] iommu/vt-d: Update iommu_attach_domain() and its callers Li, Zhen-Hua
2015-01-12 15:18 ` Joerg Roedel
2015-01-13 1:28 ` Li, ZhenHua
2015-01-12 7:06 ` [PATCH v8 02/10] iommu/vt-d: Items required for kdump Li, Zhen-Hua
2015-01-12 15:22 ` Joerg Roedel
2015-01-12 15:29 ` Vivek Goyal
2015-01-12 16:06 ` Joerg Roedel
2015-01-12 16:15 ` Vivek Goyal
2015-01-12 16:48 ` Joerg Roedel
2015-01-13 11:41 ` Baoquan He
2015-01-13 8:12 ` Li, ZhenHua
2015-01-13 11:52 ` Joerg Roedel
2015-01-12 7:06 ` [PATCH v8 03/10] iommu/vt-d: Add domain-id functions Li, Zhen-Hua
2015-01-12 7:06 ` [PATCH v8 04/10] iommu/vt-d: functions to copy data from old mem Li, Zhen-Hua
2015-01-12 7:06 ` [PATCH v8 05/10] iommu/vt-d: Add functions to load and save old re Li, Zhen-Hua
2015-01-12 7:06 ` [PATCH v8 06/10] iommu/vt-d: datatypes and functions used for kdump Li, Zhen-Hua
2015-01-15 3:28 ` Baoquan He
2015-01-15 5:45 ` Li, ZhenHua
2015-01-15 7:01 ` Baoquan He
2015-01-12 7:06 ` [PATCH v8 07/10] iommu/vt-d: enable kdump support in iommu module Li, Zhen-Hua
2015-01-12 7:06 ` [PATCH v8 08/10] iommu/vt-d: assign new page table for dma_map Li, Zhen-Hua
2015-01-12 7:06 ` [PATCH v8 09/10] iommu/vt-d: Copy functions for irte Li, Zhen-Hua
2015-01-12 7:06 ` [PATCH v8 10/10] iommu/vt-d: Use old irte in kdump kernel Li, Zhen-Hua
2015-01-12 8:00 ` [PATCH v8 0/10] iommu/vt-d: Fix intel vt-d faults " Li, ZhenHua
2015-01-12 9:07 ` Baoquan He
2015-01-12 9:28 ` Li, ZhenHua