* Re: [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap
[not found] <20111113175006.1EF1C72C36F@homiemail-mx8.g.dreamhost.com>
@ 2011-11-14 14:17 ` Andres Lagar-Cavilla
0 siblings, 0 replies; 15+ messages in thread
From: Andres Lagar-Cavilla @ 2011-11-14 14:17 UTC (permalink / raw)
To: xen-devel, jean.guyader
> Date: Sun, 13 Nov 2011 17:40:49 +0000
> From: Jean Guyader <jean.guyader@eu.citrix.com>
> Subject: [Xen-devel] [PATCH 3/6] add_to_physmap: Move the code for
> XENMEM_add_to_physmap
> To: <xen-devel@lists.xensource.com>
> Cc: tim@xen.org, allen.m.kay@intel.com, keir@xen.org, Jean Guyader
> <jean.guyader@eu.citrix.com>, JBeulich@suse.com
> Message-ID:
> <1321206052-18833-4-git-send-email-jean.guyader@eu.citrix.com>
> Content-Type: text/plain; charset="utf-8"
>
>
> Move the code for the XENMEM_add_to_physmap case into its own
> function (xenmem_add_to_physmap).
>
> Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
> ---
Jean, with the p2m API changes, your patch won't compile any more:
gfn_to_mfn* is now get_gfn*, and each lookup must be balanced by a put_gfn
once you are done working with the p2m translation. From what I've seen,
your patch need not worry about put_gfn, although it does need rebasing.
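Roughly, the conversion looks like this (a sketch only -- the exact
accessor names and signatures depend on the tree you rebase onto):

    /* Old pattern: */
    mfn = mfn_x(gfn_to_mfn_unshare(d, gfn, &p2mt));
    /* ... use the translation ... */

    /* New pattern: every get_gfn* lookup must be balanced by a put_gfn
     * once the caller is done with the p2m translation. */
    mfn = mfn_x(get_gfn_unshare(d, gfn, &p2mt));
    /* ... work with mfn while the p2m entry is held ... */
    put_gfn(d, gfn);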
Andres
> xen/arch/x86/mm.c | 189
> ++++++++++++++++++++++++++++-------------------------
> 1 files changed, 100 insertions(+), 89 deletions(-)
>
* [PATCH 0/6] IOMMU, vtd and iotlb flush rework (v8)
@ 2011-11-16 19:25 Jean Guyader
2011-11-16 19:25 ` [PATCH 1/6] vtd: Refactor iotlb flush code Jean Guyader
0 siblings, 1 reply; 15+ messages in thread
From: Jean Guyader @ 2011-11-16 19:25 UTC (permalink / raw)
To: xen-devel; +Cc: tim, allen.m.kay, keir, Jean Guyader, JBeulich
In one of my previous emails I detailed a bug I was seeing when passing
through an Intel GPU to a guest that has more than 4G of RAM.
Allen suggested that I go for Plan B, but after a discussion with Tim
we agreed that Plan B was far too disruptive in terms of code change.
This patch series implements Plan A.
http://xen.1045712.n5.nabble.com/VTD-Intel-iommu-IOTLB-flush-really-slow-td4952866.html
Changes between v7 and v8:
- Rebase on new add_to_physmap from Andres Lagar-Cavilla
- Only do copy_to_guest on the new XENMAPSPACE (XENMAPSPACE_gmfn_range).
- Convert the xatp argument to a pointer in xenmem_add_to_physmap.
- Modify xenmem_add_to_physmap so it doesn't touch the input (xatp is const now).
Changes between v6 and v7:
- xenmem_add_to_physmap_once can't take a pointer to xatp because it modifies .idx and .gpfn.
- Cancel the hypercall continuation in the compat layer if copy_to_guest fails.
Changes between v5 and v6:
- Rework the patch queue to make it more readable.
- Modify xatp in place in xenmem_add_to_physmap
- Only check for preemption if we are not at the last iteration.
- Copy the xatp guest handle back to the guest only in case of continuation.
- Add a continuation only when dealing with the new xenmem space
  (XENMAPSPACE_gmfn_range); see the sketch after this list.
Changes between v4 and v5:
- Fix hypercall continuation for add_to_physmap in compat mode.
Changes between v3 and v4:
- Move the loop for gmfn_range from arch_memory_op to xenmem_add_to_physmap.
- Add a comment to explain the purpose of iommu_dont_flush_iotlb.
Changes between v2 and v3:
- Check for the presence of the iotlb_flush_all callback before calling it.
Changes between v1 and v2:
- Move .size in struct xen_add_to_physmap into the padding between .domid and .space.
- Store iommu_dont_flush per cpu
- Change the code in hvmloader to relocate in batches of 64K; .size is now 16 bits.
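For orientation, the continuation scheme described in the v6->v7 and v7->v8
notes above boils down to something like the following fragment (a sketch
only; the real loop lands in patch 4 and differs in detail, e.g. in where
copy_to_guest and hypercall_create_continuation happen):

    /* Sketch: map a gmfn range one page at a time, preempting between
     * pages. xatp here is a local copy of the guest's argument. */
    while ( xatp.size > 0 )
    {
        rc = xenmem_add_to_physmap_once(d, xatp);  /* single-page case */
        if ( rc )
            break;

        xatp.idx++;
        xatp.gpfn++;
        xatp.size--;

        /* Only check for preemption if this was not the last iteration. */
        if ( xatp.size > 0 && hypercall_preempt_check() )
        {
            rc = -EAGAIN;  /* the caller copies the updated xatp back to
                            * the guest and creates a continuation */
            break;
        }
    }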
Jean Guyader (6):
vtd: Refactor iotlb flush code
iommu: Introduce iommu_flush and iommu_flush_all.
add_to_physmap: Move the code for XENMEM_add_to_physmap
mm: New XENMEM space, XENMAPSPACE_gmfn_range
hvmloader: Change memory relocation loop when overlap with PCI hole
Introduce per cpu flag (iommu_dont_flush_iotlb) to avoid unnecessary
iotlb flush
tools/firmware/hvmloader/pci.c | 20 +++-
xen/arch/x86/mm.c | 212 +++++++++++++++++++++++------------
xen/arch/x86/x86_64/compat/mm.c | 18 +++
xen/drivers/passthrough/iommu.c | 25 ++++
xen/drivers/passthrough/vtd/iommu.c | 100 ++++++++++-------
xen/include/public/memory.h | 4 +
xen/include/xen/iommu.h | 17 +++
7 files changed, 277 insertions(+), 119 deletions(-)
* [PATCH 1/6] vtd: Refactor iotlb flush code
2011-11-16 19:25 [PATCH 0/6] IOMMU, vtd and iotlb flush rework (v8) Jean Guyader
@ 2011-11-16 19:25 ` Jean Guyader
2011-11-16 19:25 ` [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all Jean Guyader
0 siblings, 1 reply; 15+ messages in thread
From: Jean Guyader @ 2011-11-16 19:25 UTC (permalink / raw)
To: xen-devel; +Cc: tim, allen.m.kay, keir, Jean Guyader, JBeulich
Factorize the iotlb flush code from map_page and unmap_page into
its own function.
Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
---
xen/drivers/passthrough/vtd/iommu.c | 86 +++++++++++++++++-----------------
1 files changed, 43 insertions(+), 43 deletions(-)
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index d125b93..97c7ffa 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -579,16 +579,53 @@ static void iommu_flush_all(void)
}
}
+static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
+ int dma_old_pte_present, unsigned int page_count)
+{
+ struct hvm_iommu *hd = domain_hvm_iommu(d);
+ struct acpi_drhd_unit *drhd;
+ struct iommu *iommu;
+ int flush_dev_iotlb;
+ int iommu_domid;
+
+ /*
+ * No need pcideves_lock here because we have flush
+ * when assign/deassign device
+ */
+ for_each_drhd_unit ( drhd )
+ {
+ iommu = drhd->iommu;
+
+ if ( !test_bit(iommu->index, &hd->iommu_bitmap) )
+ continue;
+
+ flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
+ iommu_domid= domain_iommu_domid(d, iommu);
+ if ( iommu_domid == -1 )
+ continue;
+
+ if ( page_count > 1 || gfn == -1 )
+ {
+ if ( iommu_flush_iotlb_dsi(iommu, iommu_domid,
+ 0, flush_dev_iotlb) )
+ iommu_flush_write_buffer(iommu);
+ }
+ else
+ {
+ if ( iommu_flush_iotlb_psi(iommu, iommu_domid,
+ (paddr_t)gfn << PAGE_SHIFT_4K, 0,
+ !dma_old_pte_present, flush_dev_iotlb) )
+ iommu_flush_write_buffer(iommu);
+ }
+ }
+}
+
/* clear one page's page table */
static void dma_pte_clear_one(struct domain *domain, u64 addr)
{
struct hvm_iommu *hd = domain_hvm_iommu(domain);
- struct acpi_drhd_unit *drhd;
- struct iommu *iommu;
struct dma_pte *page = NULL, *pte = NULL;
u64 pg_maddr;
- int flush_dev_iotlb;
- int iommu_domid;
struct mapped_rmrr *mrmrr;
spin_lock(&hd->mapping_lock);
@@ -614,21 +651,7 @@ static void dma_pte_clear_one(struct domain *domain, u64 addr)
spin_unlock(&hd->mapping_lock);
iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
- /* No need pcidevs_lock here since do that on assign/deassign device*/
- for_each_drhd_unit ( drhd )
- {
- iommu = drhd->iommu;
- if ( test_bit(iommu->index, &hd->iommu_bitmap) )
- {
- flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
- iommu_domid= domain_iommu_domid(domain, iommu);
- if ( iommu_domid == -1 )
- continue;
- if ( iommu_flush_iotlb_psi(iommu, iommu_domid, addr,
- 0, 0, flush_dev_iotlb) )
- iommu_flush_write_buffer(iommu);
- }
- }
+ __intel_iommu_iotlb_flush(domain, addr >> PAGE_SHIFT_4K , 0, 1);
unmap_vtd_domain_page(page);
@@ -1678,12 +1701,8 @@ static int intel_iommu_map_page(
unsigned int flags)
{
struct hvm_iommu *hd = domain_hvm_iommu(d);
- struct acpi_drhd_unit *drhd;
- struct iommu *iommu;
struct dma_pte *page = NULL, *pte = NULL, old, new = { 0 };
u64 pg_maddr;
- int flush_dev_iotlb;
- int iommu_domid;
/* Do nothing if VT-d shares EPT page table */
if ( iommu_use_hap_pt(d) )
@@ -1725,26 +1744,7 @@ static int intel_iommu_map_page(
spin_unlock(&hd->mapping_lock);
unmap_vtd_domain_page(page);
- /*
- * No need pcideves_lock here because we have flush
- * when assign/deassign device
- */
- for_each_drhd_unit ( drhd )
- {
- iommu = drhd->iommu;
-
- if ( !test_bit(iommu->index, &hd->iommu_bitmap) )
- continue;
-
- flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
- iommu_domid= domain_iommu_domid(d, iommu);
- if ( iommu_domid == -1 )
- continue;
- if ( iommu_flush_iotlb_psi(iommu, iommu_domid,
- (paddr_t)gfn << PAGE_SHIFT_4K, 0,
- !dma_pte_present(old), flush_dev_iotlb) )
- iommu_flush_write_buffer(iommu);
- }
+ __intel_iommu_iotlb_flush(d, gfn, dma_pte_present(old), 1);
return 0;
}
* [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all.
2011-11-16 19:25 ` [PATCH 1/6] vtd: Refactor iotlb flush code Jean Guyader
@ 2011-11-16 19:25 ` Jean Guyader
2011-11-16 19:25 ` [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap Jean Guyader
0 siblings, 1 reply; 15+ messages in thread
From: Jean Guyader @ 2011-11-16 19:25 UTC (permalink / raw)
To: xen-devel; +Cc: tim, allen.m.kay, keir, Jean Guyader, JBeulich
Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
---
xen/drivers/passthrough/iommu.c | 20 ++++++++++++++++++++
xen/drivers/passthrough/vtd/iommu.c | 12 ++++++++++++
xen/include/xen/iommu.h | 5 +++++
3 files changed, 37 insertions(+), 0 deletions(-)
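The hooks are wired up in the patch below. As a purely hypothetical usage
sketch (the real callers only arrive with the per-cpu iommu_dont_flush_iotlb
flag in patch 6; names and flags assumed from the rest of this series):

    /* Update a run of IOMMU mappings, then issue one ranged IOTLB flush
     * instead of one flush per page. */
    this_cpu(iommu_dont_flush_iotlb) = 1;   /* suppression: patch 6 */
    for ( i = 0; i < nr_pages; i++ )
        rc = iommu_map_page(d, gfn + i, mfn + i,
                            IOMMUF_readable | IOMMUF_writable);
    this_cpu(iommu_dont_flush_iotlb) = 0;

    iommu_iotlb_flush(d, gfn, nr_pages);    /* no-op when the driver has
                                             * no iotlb_flush hook */

Both wrappers bail out early when the IOMMU is disabled or the driver does
not implement the callback, so callers need not guard the calls themselves.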
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index cd6174d..ca7b37b 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -301,6 +301,26 @@ int iommu_unmap_page(struct domain *d, unsigned long gfn)
return hd->platform_ops->unmap_page(d, gfn);
}
+void iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count)
+{
+ struct hvm_iommu *hd = domain_hvm_iommu(d);
+
+ if ( !iommu_enabled || !hd->platform_ops || !hd->platform_ops->iotlb_flush )
+ return;
+
+ hd->platform_ops->iotlb_flush(d, gfn, page_count);
+}
+
+void iommu_iotlb_flush_all(struct domain *d)
+{
+ struct hvm_iommu *hd = domain_hvm_iommu(d);
+
+ if ( !iommu_enabled || !hd->platform_ops || !hd->platform_ops->iotlb_flush_all )
+ return;
+
+ hd->platform_ops->iotlb_flush_all(d);
+}
+
/* caller should hold the pcidevs_lock */
int deassign_device(struct domain *d, u16 seg, u8 bus, u8 devfn)
{
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 97c7ffa..f5d48ec 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -620,6 +620,16 @@ static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
}
}
+static void intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count)
+{
+ __intel_iommu_iotlb_flush(d, gfn, 1, page_count);
+}
+
+static void intel_iommu_iotlb_flush_all(struct domain *d)
+{
+ __intel_iommu_iotlb_flush(d, 0, 0, 0);
+}
+
/* clear one page's page table */
static void dma_pte_clear_one(struct domain *domain, u64 addr)
{
@@ -2331,6 +2341,8 @@ const struct iommu_ops intel_iommu_ops = {
.resume = vtd_resume,
.share_p2m = iommu_set_pgd,
.crash_shutdown = vtd_crash_shutdown,
+ .iotlb_flush = intel_iommu_iotlb_flush,
+ .iotlb_flush_all = intel_iommu_iotlb_flush_all,
};
/*
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 837e60d..a1034df 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -139,6 +139,8 @@ struct iommu_ops {
void (*resume)(void);
void (*share_p2m)(struct domain *d);
void (*crash_shutdown)(void);
+ void (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int page_count);
+ void (*iotlb_flush_all)(struct domain *d);
};
void iommu_update_ire_from_apic(unsigned int apic, unsigned int reg, unsigned int value);
@@ -155,4 +157,7 @@ void iommu_share_p2m_table(struct domain *d);
int iommu_do_domctl(struct xen_domctl *, XEN_GUEST_HANDLE(xen_domctl_t));
+void iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count);
+void iommu_iotlb_flush_all(struct domain *d);
+
#endif /* _IOMMU_H_ */
* [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap
2011-11-16 19:25 ` [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all Jean Guyader
@ 2011-11-16 19:25 ` Jean Guyader
2011-11-19 21:58 ` Olaf Hering
0 siblings, 1 reply; 15+ messages in thread
From: Jean Guyader @ 2011-11-16 19:25 UTC (permalink / raw)
To: xen-devel; +Cc: tim, allen.m.kay, keir, Jean Guyader, JBeulich
Move the code for the XENMEM_add_to_physmap case into its own
function (xenmem_add_to_physmap).
Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
---
xen/arch/x86/mm.c | 161 ++++++++++++++++++++++++++++------------------------
1 files changed, 87 insertions(+), 74 deletions(-)
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index a41a1d6..f093e93 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4677,37 +4677,18 @@ static int handle_iomem_range(unsigned long s, unsigned long e, void *p)
return 0;
}
-long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+static int xenmem_add_to_physmap(struct domain *d,
+ const struct xen_add_to_physmap *xatp)
{
struct page_info *page = NULL;
unsigned long gfn = 0; /* gcc ... */
+ unsigned long prev_mfn, mfn = 0, gpfn, idx;
int rc;
- switch ( op )
+ switch ( xatp->space )
{
- case XENMEM_add_to_physmap:
- {
- struct xen_add_to_physmap xatp;
- unsigned long prev_mfn, mfn = 0, gpfn;
- struct domain *d;
-
- if ( copy_from_guest(&xatp, arg, 1) )
- return -EFAULT;
-
- rc = rcu_lock_target_domain_by_id(xatp.domid, &d);
- if ( rc != 0 )
- return rc;
-
- if ( xsm_add_to_physmap(current->domain, d) )
- {
- rcu_unlock_domain(d);
- return -EPERM;
- }
-
- switch ( xatp.space )
- {
case XENMAPSPACE_shared_info:
- if ( xatp.idx == 0 )
+ if ( xatp->idx == 0 )
mfn = virt_to_mfn(d->shared_info);
break;
case XENMAPSPACE_grant_table:
@@ -4716,21 +4697,22 @@ long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
if ( d->grant_table->gt_version == 0 )
d->grant_table->gt_version = 1;
+ idx = xatp->idx;
if ( d->grant_table->gt_version == 2 &&
- (xatp.idx & XENMAPIDX_grant_table_status) )
+ (xatp->idx & XENMAPIDX_grant_table_status) )
{
- xatp.idx &= ~XENMAPIDX_grant_table_status;
- if ( xatp.idx < nr_status_frames(d->grant_table) )
- mfn = virt_to_mfn(d->grant_table->status[xatp.idx]);
+ idx &= ~XENMAPIDX_grant_table_status;
+ if ( xatp->idx < nr_status_frames(d->grant_table) )
+ mfn = virt_to_mfn(d->grant_table->status[idx]);
}
else
{
- if ( (xatp.idx >= nr_grant_frames(d->grant_table)) &&
- (xatp.idx < max_nr_grant_frames) )
- gnttab_grow_table(d, xatp.idx + 1);
+ if ( (idx >= nr_grant_frames(d->grant_table)) &&
+ (idx < max_nr_grant_frames) )
+ gnttab_grow_table(d, idx + 1);
- if ( xatp.idx < nr_grant_frames(d->grant_table) )
- mfn = virt_to_mfn(d->grant_table->shared_raw[xatp.idx]);
+ if ( idx < nr_grant_frames(d->grant_table) )
+ mfn = virt_to_mfn(d->grant_table->shared_raw[idx]);
}
spin_unlock(&d->grant_table->lock);
@@ -4738,9 +4720,9 @@ long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
case XENMAPSPACE_gmfn:
{
p2m_type_t p2mt;
- gfn = xatp.idx;
+ gfn = xatp->idx;
- xatp.idx = mfn_x(get_gfn_unshare(d, xatp.idx, &p2mt));
+ idx = mfn_x(get_gfn_unshare(d, xatp->idx, &p2mt));
/* If the page is still shared, exit early */
if ( p2m_is_shared(p2mt) )
{
@@ -4748,58 +4730,89 @@ long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
rcu_unlock_domain(d);
return -ENOMEM;
}
- if ( !get_page_from_pagenr(xatp.idx, d) )
+ if ( !get_page_from_pagenr(idx, d) )
break;
- mfn = xatp.idx;
+ mfn = idx;
page = mfn_to_page(mfn);
break;
}
default:
break;
- }
-
- if ( !paging_mode_translate(d) || (mfn == 0) )
- {
- if ( page )
- put_page(page);
- if ( xatp.space == XENMAPSPACE_gmfn )
- put_gfn(d, gfn);
- rcu_unlock_domain(d);
- return -EINVAL;
- }
-
- domain_lock(d);
+ }
+ if ( !paging_mode_translate(d) || (mfn == 0) )
+ {
if ( page )
put_page(page);
+ if ( xatp->space == XENMAPSPACE_gmfn )
+ put_gfn(d, gfn);
+ rcu_unlock_domain(d);
+ return -EINVAL;
+ }
- /* Remove previously mapped page if it was present. */
- prev_mfn = get_gfn_untyped(d, xatp.gpfn);
- if ( mfn_valid(prev_mfn) )
- {
- if ( is_xen_heap_mfn(prev_mfn) )
- /* Xen heap frames are simply unhooked from this phys slot. */
- guest_physmap_remove_page(d, xatp.gpfn, prev_mfn, PAGE_ORDER_4K);
- else
- /* Normal domain memory is freed, to avoid leaking memory. */
- guest_remove_page(d, xatp.gpfn);
- }
- /* In the XENMAPSPACE_gmfn case we still hold a ref on the old page. */
- put_gfn(d, xatp.gpfn);
+ domain_lock(d);
+
+ if ( page )
+ put_page(page);
+
+ /* Remove previously mapped page if it was present. */
+ prev_mfn = get_gfn_untyped(d, xatp->gpfn);
+ if ( mfn_valid(prev_mfn) )
+ {
+ if ( is_xen_heap_mfn(prev_mfn) )
+ /* Xen heap frames are simply unhooked from this phys slot. */
+ guest_physmap_remove_page(d, xatp->gpfn, prev_mfn, PAGE_ORDER_4K);
+ else
+ /* Normal domain memory is freed, to avoid leaking memory. */
+ guest_remove_page(d, xatp->gpfn);
+ }
+ /* In the XENMAPSPACE_gmfn case we still hold a ref on the old page. */
+ put_gfn(d, xatp->gpfn);
- /* Unmap from old location, if any. */
- gpfn = get_gpfn_from_mfn(mfn);
- ASSERT( gpfn != SHARED_M2P_ENTRY );
- if ( gpfn != INVALID_M2P_ENTRY )
- guest_physmap_remove_page(d, gpfn, mfn, PAGE_ORDER_4K);
+ /* Unmap from old location, if any. */
+ gpfn = get_gpfn_from_mfn(mfn);
+ ASSERT( gpfn != SHARED_M2P_ENTRY );
+ if ( gpfn != INVALID_M2P_ENTRY )
+ guest_physmap_remove_page(d, gpfn, mfn, PAGE_ORDER_4K);
- /* Map at new location. */
- rc = guest_physmap_add_page(d, xatp.gpfn, mfn, PAGE_ORDER_4K);
+ /* Map at new location. */
+ rc = guest_physmap_add_page(d, xatp->gpfn, mfn, PAGE_ORDER_4K);
- /* In the XENMAPSPACE_gmfn, we took a ref and locked the p2m at the top */
- if ( xatp.space == XENMAPSPACE_gmfn )
- put_gfn(d, gfn);
- domain_unlock(d);
+ /* In the XENMAPSPACE_gmfn, we took a ref and locked the p2m at the top */
+ if ( xatp->space == XENMAPSPACE_gmfn )
+ put_gfn(d, gfn);
+ domain_unlock(d);
+
+ rcu_unlock_domain(d);
+
+ return rc;
+}
+
+long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+{
+ int rc;
+
+ switch ( op )
+ {
+ case XENMEM_add_to_physmap:
+ {
+ struct xen_add_to_physmap xatp;
+ struct domain *d;
+
+ if ( copy_from_guest(&xatp, arg, 1) )
+ return -EFAULT;
+
+ rc = rcu_lock_target_domain_by_id(xatp.domid, &d);
+ if ( rc != 0 )
+ return rc;
+
+ if ( xsm_add_to_physmap(current->domain, d) )
+ {
+ rcu_unlock_domain(d);
+ return -EPERM;
+ }
+
+ rc = xenmem_add_to_physmap(d, &xatp);
rcu_unlock_domain(d);
* Re: [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap
2011-11-16 19:25 ` [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap Jean Guyader
@ 2011-11-19 21:58 ` Olaf Hering
2011-11-19 22:14 ` Keir Fraser
0 siblings, 1 reply; 15+ messages in thread
From: Olaf Hering @ 2011-11-19 21:58 UTC (permalink / raw)
To: Jean Guyader; +Cc: allen.m.kay, keir, xen-devel, tim, JBeulich
On Wed, Nov 16, Jean Guyader wrote:
> Move the code for the XENMEM_add_to_physmap case into its own
> function (xenmem_add_to_physmap).
This changeset 24163:7a9a1261a6b0 seems to cause the current testsuite failures.
(XEN) Assertion '!in_atomic()' failed at softirq.c:61
preempt_count is like fffffc52 or fffffc00 in my testing.
Olaf
* Re: [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap
2011-11-19 21:58 ` Olaf Hering
@ 2011-11-19 22:14 ` Keir Fraser
2011-11-19 22:37 ` Jean Guyader
2011-11-20 13:25 ` Olaf Hering
0 siblings, 2 replies; 15+ messages in thread
From: Keir Fraser @ 2011-11-19 22:14 UTC (permalink / raw)
To: Olaf Hering, Jean Guyader; +Cc: allen.m.kay, xen-devel, tim, JBeulich
On 19/11/2011 21:58, "Olaf Hering" <olaf@aepfle.de> wrote:
> On Wed, Nov 16, Jean Guyader wrote:
>
>> Move the code for the XENMEM_add_to_physmap case into its own
>> function (xenmem_add_to_physmap).
>
> This changeset 24163:7a9a1261a6b0 seems to cause the current testsuite
> failures.
> (XEN) Assertion '!in_atomic()' failed at softirq.c:61
>
> preempt_count is like fffffc52 or fffffc00 in my testing.
Thanks, hopefully fixed by c/s 24167.
-- keir
> Olaf
* Re: [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap
2011-11-19 22:14 ` Keir Fraser
@ 2011-11-19 22:37 ` Jean Guyader
2011-11-20 13:25 ` Olaf Hering
1 sibling, 0 replies; 15+ messages in thread
From: Jean Guyader @ 2011-11-19 22:37 UTC (permalink / raw)
To: Keir Fraser
Cc: Olaf Hering, xen-devel, allen.m.kay, tim, Jean Guyader, JBeulich
On 19 November 2011 14:14, Keir Fraser <keir.xen@gmail.com> wrote:
> On 19/11/2011 21:58, "Olaf Hering" <olaf@aepfle.de> wrote:
>
>> On Wed, Nov 16, Jean Guyader wrote:
>>
>>> Move the code for the XENMEM_add_to_physmap case into its own
>>> function (xenmem_add_to_physmap).
>>
>> This changeset 24163:7a9a1261a6b0 seems to cause the current testsuite
>> failures.
>> (XEN) Assertion '!in_atomic()' failed at softirq.c:61
>>
>> preempt_count is like fffffc52 or fffffc00 in my testing.
>
> Thanks, hopefully fixed by c/s 24167.
>
Thanks, sorry about that.
Jean
* Re: [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap
2011-11-19 22:14 ` Keir Fraser
2011-11-19 22:37 ` Jean Guyader
@ 2011-11-20 13:25 ` Olaf Hering
2011-11-20 19:59 ` Keir Fraser
1 sibling, 1 reply; 15+ messages in thread
From: Olaf Hering @ 2011-11-20 13:25 UTC (permalink / raw)
To: Keir Fraser; +Cc: allen.m.kay, xen-devel, tim, Jean Guyader, JBeulich
On Sat, Nov 19, Keir Fraser wrote:
> On 19/11/2011 21:58, "Olaf Hering" <olaf@aepfle.de> wrote:
>
> > On Wed, Nov 16, Jean Guyader wrote:
> >
> >> Move the code for the XENMEM_add_to_physmap case into its own
> >> function (xenmem_add_to_physmap).
> >
> > This changeset 24163:7a9a1261a6b0 seems to cause the current testsuite
> > failures.
> > (XEN) Assertion '!in_atomic()' failed at softirq.c:61
> >
> > preempt_count is like fffffc52 or fffffc00 in my testing.
>
> Thanks, hopefully fixed by c/s 24167.
Yes, the ASSERT does not trigger anymore.
The remaining issue is this:
Nov 20 06:21:11.744519 (XEN) hvm.c:2312:d1 guest attempted write to read-only memory page. gfn=0xc0, mfn=0x201979
See
http://www.chiark.greenend.org.uk/~xensrcts/logs/9893/test-amd64-i386-rhel6hvm-amd/serial-potato-beetle.log
Olaf
* Re: [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap
2011-11-20 13:25 ` Olaf Hering
@ 2011-11-20 19:59 ` Keir Fraser
2011-11-21 8:39 ` Olaf Hering
0 siblings, 1 reply; 15+ messages in thread
From: Keir Fraser @ 2011-11-20 19:59 UTC (permalink / raw)
To: Olaf Hering; +Cc: allen.m.kay, xen-devel, tim, Jean Guyader, JBeulich
On 20/11/2011 13:25, "Olaf Hering" <olaf@aepfle.de> wrote:
> On Sat, Nov 19, Keir Fraser wrote:
>
>> On 19/11/2011 21:58, "Olaf Hering" <olaf@aepfle.de> wrote:
>>
>>> On Wed, Nov 16, Jean Guyader wrote:
>>>
>>>> Move the code for the XENMEM_add_to_physmap case into its own
>>>> function (xenmem_add_to_physmap).
>>>
>>> This changeset 24163:7a9a1261a6b0 seems to cause the current testsuite
>>> failures.
>>> (XEN) Assertion '!in_atomic()' failed at softirq.c:61
>>>
>>> preempt_count is like fffffc52 or fffffc00 in my testing.
>>
>> Thanks, hopefully fixed by c/s 24167.
>
> Yes, the ASSERT does not trigger anymore.
>
> The remaining issue is this:
>
> Nov 20 06:21:11.744519 (XEN) hvm.c:2312:d1 guest attempted write to read-only
> memory page. gfn=0xc0, mfn=0x201979
Is that new behaviour? It may be unrelated to whatever HVM test failure
we're seeing, or else be a mere symptom of a guest gone haywire for other
reasons (we write-protect that memory range because it is supposed to be
ROM).
-- Keir
> See
> http://www.chiark.greenend.org.uk/~xensrcts/logs/9893/test-amd64-i386-rhel6hvm
> -amd/serial-potato-beetle.log
>
> Olaf
* Re: [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap
2011-11-20 19:59 ` Keir Fraser
@ 2011-11-21 8:39 ` Olaf Hering
0 siblings, 0 replies; 15+ messages in thread
From: Olaf Hering @ 2011-11-21 8:39 UTC (permalink / raw)
To: Keir Fraser; +Cc: allen.m.kay, xen-devel, tim, Jean Guyader, JBeulich
On Sun, Nov 20, Keir Fraser wrote:
> > Nov 20 06:21:11.744519 (XEN) hvm.c:2312:d1 guest attempted write to read-only
> > memory page. gfn=0xc0, mfn=0x201979
>
> Is that new behaviour? It may be unrelated to whatever HVM test failure
> we're seeing, or else be a mere symptom of a guest gone haywire for other
> reasons (we write-protect that memory range because it is supposed to be
> ROM).
The message does not trigger with changeset 24162, but it does with
24167.
Olaf
* [PATCH 0/6] IOMMU, vtd and iotlb flush rework (v7)
@ 2011-11-13 17:40 Jean Guyader
2011-11-13 17:40 ` [PATCH 1/6] vtd: Refactor iotlb flush code Jean Guyader
0 siblings, 1 reply; 15+ messages in thread
From: Jean Guyader @ 2011-11-13 17:40 UTC (permalink / raw)
To: xen-devel; +Cc: tim, allen.m.kay, keir, Jean Guyader, JBeulich
In one of my previous emails I detailed a bug I was seeing when passing
through an Intel GPU to a guest that has more than 4G of RAM.
Allen suggested that I go for Plan B, but after a discussion with Tim
we agreed that Plan B was far too disruptive in terms of code change.
This patch series implements Plan A.
http://xen.1045712.n5.nabble.com/VTD-Intel-iommu-IOTLB-flush-really-slow-td4952866.html
Changes between v6 and v7:
- xenmem_add_to_physmap_once can't take a pointer to xatp because it modifies .idx and .gpfn.
- Cancel the hypercall continuation in the compat layer if copy_to_guest fails.
Changes between v5 and v6:
- Rework the patch queue to make it more readable.
- Modify xatp in place in xenmem_add_to_physmap
- Only check for preemption if we are not at the last iteration
- Copy the xatp guest handle back to the guest only in case of continuation.
- Add a continuation only when dealing with the new xenmem space
  (XENMAPSPACE_gmfn_range).
Changes between v4 and v5:
- Fix hypercall continuation for add_to_physmap in compat mode.
Changes between v3 and v4:
- Move the loop for gmfn_range from arch_memory_op to xenmem_add_to_physmap.
- Add a comment to explain the purpose of iommu_dont_flush_iotlb.
Changes between v2 and v3:
- Check for the presence of the iotlb_flush_all callback before calling it.
Changes between v1 and v2:
- Move .size in struct xen_add_to_physmap into the padding between .domid and .space.
- Store iommu_dont_flush per cpu
- Change the code in hvmloader to relocate in batches of 64K; .size is now 16 bits.
Jean Guyader (6):
vtd: Refactor iotlb flush code
iommu: Introduce iommu_flush and iommu_flush_all.
add_to_physmap: Move the code for XENMEM_add_to_physmap
mm: New XENMEM space, XENMAPSPACE_gmfn_range
hvmloader: Change memory relocation loop when overlap with PCI hole
Introduce per cpu flag (iommu_dont_flush_iotlb) to avoid unnecessary
iotlb flush
tools/firmware/hvmloader/pci.c | 20 ++-
xen/arch/x86/mm.c | 231 ++++++++++++++++++++++-------------
xen/arch/x86/x86_64/compat/mm.c | 14 ++
xen/drivers/passthrough/iommu.c | 25 ++++
xen/drivers/passthrough/vtd/iommu.c | 100 +++++++++-------
xen/include/public/memory.h | 4 +
xen/include/xen/iommu.h | 17 +++
7 files changed, 279 insertions(+), 132 deletions(-)
* [PATCH 1/6] vtd: Refactor iotlb flush code
2011-11-13 17:40 [PATCH 0/6] IOMMU, vtd and iotlb flush rework (v7) Jean Guyader
@ 2011-11-13 17:40 ` Jean Guyader
2011-11-13 17:40 ` [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all Jean Guyader
0 siblings, 1 reply; 15+ messages in thread
From: Jean Guyader @ 2011-11-13 17:40 UTC (permalink / raw)
To: xen-devel; +Cc: tim, allen.m.kay, keir, Jean Guyader, JBeulich
Factorize the iotlb flush code from map_page and unmap_page into
its own function.
Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
Acked-by: Allen M Kay <allen.m.kay@intel.com>
---
xen/drivers/passthrough/vtd/iommu.c | 86 +++++++++++++++++-----------------
1 files changed, 43 insertions(+), 43 deletions(-)
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 7717ab4..5a5b6be 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -578,16 +578,53 @@ static void iommu_flush_all(void)
}
}
+static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
+ int dma_old_pte_present, unsigned int page_count)
+{
+ struct hvm_iommu *hd = domain_hvm_iommu(d);
+ struct acpi_drhd_unit *drhd;
+ struct iommu *iommu;
+ int flush_dev_iotlb;
+ int iommu_domid;
+
+ /*
+ * No need pcideves_lock here because we have flush
+ * when assign/deassign device
+ */
+ for_each_drhd_unit ( drhd )
+ {
+ iommu = drhd->iommu;
+
+ if ( !test_bit(iommu->index, &hd->iommu_bitmap) )
+ continue;
+
+ flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
+ iommu_domid= domain_iommu_domid(d, iommu);
+ if ( iommu_domid == -1 )
+ continue;
+
+ if ( page_count > 1 || gfn == -1 )
+ {
+ if ( iommu_flush_iotlb_dsi(iommu, iommu_domid,
+ 0, flush_dev_iotlb) )
+ iommu_flush_write_buffer(iommu);
+ }
+ else
+ {
+ if ( iommu_flush_iotlb_psi(iommu, iommu_domid,
+ (paddr_t)gfn << PAGE_SHIFT_4K, 0,
+ !dma_old_pte_present, flush_dev_iotlb) )
+ iommu_flush_write_buffer(iommu);
+ }
+ }
+}
+
/* clear one page's page table */
static void dma_pte_clear_one(struct domain *domain, u64 addr)
{
struct hvm_iommu *hd = domain_hvm_iommu(domain);
- struct acpi_drhd_unit *drhd;
- struct iommu *iommu;
struct dma_pte *page = NULL, *pte = NULL;
u64 pg_maddr;
- int flush_dev_iotlb;
- int iommu_domid;
struct mapped_rmrr *mrmrr;
spin_lock(&hd->mapping_lock);
@@ -613,21 +650,7 @@ static void dma_pte_clear_one(struct domain *domain, u64 addr)
spin_unlock(&hd->mapping_lock);
iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
- /* No need pcidevs_lock here since do that on assign/deassign device*/
- for_each_drhd_unit ( drhd )
- {
- iommu = drhd->iommu;
- if ( test_bit(iommu->index, &hd->iommu_bitmap) )
- {
- flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
- iommu_domid= domain_iommu_domid(domain, iommu);
- if ( iommu_domid == -1 )
- continue;
- if ( iommu_flush_iotlb_psi(iommu, iommu_domid, addr,
- 0, 0, flush_dev_iotlb) )
- iommu_flush_write_buffer(iommu);
- }
- }
+ __intel_iommu_iotlb_flush(domain, addr >> PAGE_SHIFT_4K , 0, 1);
unmap_vtd_domain_page(page);
@@ -1677,12 +1700,8 @@ static int intel_iommu_map_page(
unsigned int flags)
{
struct hvm_iommu *hd = domain_hvm_iommu(d);
- struct acpi_drhd_unit *drhd;
- struct iommu *iommu;
struct dma_pte *page = NULL, *pte = NULL, old, new = { 0 };
u64 pg_maddr;
- int flush_dev_iotlb;
- int iommu_domid;
/* Do nothing if VT-d shares EPT page table */
if ( iommu_use_hap_pt(d) )
@@ -1724,26 +1743,7 @@ static int intel_iommu_map_page(
spin_unlock(&hd->mapping_lock);
unmap_vtd_domain_page(page);
- /*
- * No need pcideves_lock here because we have flush
- * when assign/deassign device
- */
- for_each_drhd_unit ( drhd )
- {
- iommu = drhd->iommu;
-
- if ( !test_bit(iommu->index, &hd->iommu_bitmap) )
- continue;
-
- flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
- iommu_domid= domain_iommu_domid(d, iommu);
- if ( iommu_domid == -1 )
- continue;
- if ( iommu_flush_iotlb_psi(iommu, iommu_domid,
- (paddr_t)gfn << PAGE_SHIFT_4K, 0,
- !dma_pte_present(old), flush_dev_iotlb) )
- iommu_flush_write_buffer(iommu);
- }
+ __intel_iommu_iotlb_flush(d, gfn, dma_pte_present(old), 1);
return 0;
}
* [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all.
2011-11-13 17:40 ` [PATCH 1/6] vtd: Refactor iotlb flush code Jean Guyader
@ 2011-11-13 17:40 ` Jean Guyader
2011-11-13 17:40 ` [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap Jean Guyader
0 siblings, 1 reply; 15+ messages in thread
From: Jean Guyader @ 2011-11-13 17:40 UTC (permalink / raw)
To: xen-devel; +Cc: tim, allen.m.kay, keir, Jean Guyader, JBeulich
Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
Acked-by: Allen M Kay <allen.m.kay@intel.com>
---
xen/drivers/passthrough/iommu.c | 20 ++++++++++++++++++++
xen/drivers/passthrough/vtd/iommu.c | 12 ++++++++++++
xen/include/xen/iommu.h | 5 +++++
3 files changed, 37 insertions(+), 0 deletions(-)
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index cd6174d..ca7b37b 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -301,6 +301,26 @@ int iommu_unmap_page(struct domain *d, unsigned long gfn)
return hd->platform_ops->unmap_page(d, gfn);
}
+void iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count)
+{
+ struct hvm_iommu *hd = domain_hvm_iommu(d);
+
+ if ( !iommu_enabled || !hd->platform_ops || !hd->platform_ops->iotlb_flush )
+ return;
+
+ hd->platform_ops->iotlb_flush(d, gfn, page_count);
+}
+
+void iommu_iotlb_flush_all(struct domain *d)
+{
+ struct hvm_iommu *hd = domain_hvm_iommu(d);
+
+ if ( !iommu_enabled || !hd->platform_ops || !hd->platform_ops->iotlb_flush_all )
+ return;
+
+ hd->platform_ops->iotlb_flush_all(d);
+}
+
/* caller should hold the pcidevs_lock */
int deassign_device(struct domain *d, u16 seg, u8 bus, u8 devfn)
{
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 5a5b6be..7ec9541 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -619,6 +619,16 @@ static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
}
}
+static void intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count)
+{
+ __intel_iommu_iotlb_flush(d, gfn, 1, page_count);
+}
+
+static void intel_iommu_iotlb_flush_all(struct domain *d)
+{
+ __intel_iommu_iotlb_flush(d, 0, 0, 0);
+}
+
/* clear one page's page table */
static void dma_pte_clear_one(struct domain *domain, u64 addr)
{
@@ -2329,6 +2339,8 @@ const struct iommu_ops intel_iommu_ops = {
.resume = vtd_resume,
.share_p2m = iommu_set_pgd,
.crash_shutdown = vtd_crash_shutdown,
+ .iotlb_flush = intel_iommu_iotlb_flush,
+ .iotlb_flush_all = intel_iommu_iotlb_flush_all,
};
/*
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 837e60d..a1034df 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -139,6 +139,8 @@ struct iommu_ops {
void (*resume)(void);
void (*share_p2m)(struct domain *d);
void (*crash_shutdown)(void);
+ void (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int page_count);
+ void (*iotlb_flush_all)(struct domain *d);
};
void iommu_update_ire_from_apic(unsigned int apic, unsigned int reg, unsigned int value);
@@ -155,4 +157,7 @@ void iommu_share_p2m_table(struct domain *d);
int iommu_do_domctl(struct xen_domctl *, XEN_GUEST_HANDLE(xen_domctl_t));
+void iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count);
+void iommu_iotlb_flush_all(struct domain *d);
+
#endif /* _IOMMU_H_ */
* [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap
2011-11-13 17:40 ` [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all Jean Guyader
@ 2011-11-13 17:40 ` Jean Guyader
0 siblings, 0 replies; 15+ messages in thread
From: Jean Guyader @ 2011-11-13 17:40 UTC (permalink / raw)
To: xen-devel; +Cc: tim, allen.m.kay, keir, Jean Guyader, JBeulich
Move the code for the XENMEM_add_to_physmap case into its own
function (xenmem_add_to_physmap).
Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
---
xen/arch/x86/mm.c | 189 ++++++++++++++++++++++++++++-------------------------
1 files changed, 100 insertions(+), 89 deletions(-)
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index acc1f34..7cbbb07 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4592,9 +4592,107 @@ static int handle_iomem_range(unsigned long s, unsigned long e, void *p)
return 0;
}
+static int xenmem_add_to_physmap(struct domain *d,
+ struct xen_add_to_physmap xatp)
+{
+ struct page_info* page = NULL;
+ unsigned long mfn = 0;
+ unsigned long prev_mfn, gpfn;
+ int rc;
+
+ switch ( xatp.space )
+ {
+ case XENMAPSPACE_shared_info:
+ if ( xatp.idx == 0 )
+ mfn = virt_to_mfn(d->shared_info);
+ break;
+ case XENMAPSPACE_grant_table:
+ spin_lock(&d->grant_table->lock);
+
+ if ( d->grant_table->gt_version == 0 )
+ d->grant_table->gt_version = 1;
+
+ if ( d->grant_table->gt_version == 2 &&
+ (xatp.idx & XENMAPIDX_grant_table_status) )
+ {
+ xatp.idx &= ~XENMAPIDX_grant_table_status;
+ if ( xatp.idx < nr_status_frames(d->grant_table) )
+ mfn = virt_to_mfn(d->grant_table->status[xatp.idx]);
+ }
+ else
+ {
+ if ( (xatp.idx >= nr_grant_frames(d->grant_table)) &&
+ (xatp.idx < max_nr_grant_frames) )
+ gnttab_grow_table(d, xatp.idx + 1);
+
+ if ( xatp.idx < nr_grant_frames(d->grant_table) )
+ mfn = virt_to_mfn(d->grant_table->shared_raw[xatp.idx]);
+ }
+
+ spin_unlock(&d->grant_table->lock);
+ break;
+ case XENMAPSPACE_gmfn:
+ {
+ p2m_type_t p2mt;
+
+ xatp.idx = mfn_x(gfn_to_mfn_unshare(d, xatp.idx, &p2mt));
+ /* If the page is still shared, exit early */
+ if ( p2m_is_shared(p2mt) )
+ {
+ rcu_unlock_domain(d);
+ return -ENOMEM;
+ }
+ if ( !get_page_from_pagenr(xatp.idx, d) )
+ break;
+ mfn = xatp.idx;
+ page = mfn_to_page(mfn);
+ break;
+ }
+ default:
+ break;
+ }
+
+ if ( !paging_mode_translate(d) || (mfn == 0) )
+ {
+ if ( page )
+ put_page(page);
+ rcu_unlock_domain(d);
+ return -EINVAL;
+ }
+
+ domain_lock(d);
+
+ if ( page )
+ put_page(page);
+
+ /* Remove previously mapped page if it was present. */
+ prev_mfn = gmfn_to_mfn(d, xatp.gpfn);
+ if ( mfn_valid(prev_mfn) )
+ {
+ if ( is_xen_heap_mfn(prev_mfn) )
+ /* Xen heap frames are simply unhooked from this phys slot. */
+ guest_physmap_remove_page(d, xatp.gpfn, prev_mfn, 0);
+ else
+ /* Normal domain memory is freed, to avoid leaking memory. */
+ guest_remove_page(d, xatp.gpfn);
+ }
+
+ /* Unmap from old location, if any. */
+ gpfn = get_gpfn_from_mfn(mfn);
+ ASSERT( gpfn != SHARED_M2P_ENTRY );
+ if ( gpfn != INVALID_M2P_ENTRY )
+ guest_physmap_remove_page(d, gpfn, mfn, 0);
+
+ /* Map at new location. */
+ rc = guest_physmap_add_page(d, xatp.gpfn, mfn, 0);
+
+ domain_unlock(d);
+
+ return rc;
+}
+
long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
{
- struct page_info *page = NULL;
int rc;
switch ( op )
@@ -4602,7 +4700,6 @@ long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
case XENMEM_add_to_physmap:
{
struct xen_add_to_physmap xatp;
- unsigned long prev_mfn, mfn = 0, gpfn;
struct domain *d;
if ( copy_from_guest(&xatp, arg, 1) )
@@ -4618,93 +4715,7 @@ long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
return -EPERM;
}
- switch ( xatp.space )
- {
- case XENMAPSPACE_shared_info:
- if ( xatp.idx == 0 )
- mfn = virt_to_mfn(d->shared_info);
- break;
- case XENMAPSPACE_grant_table:
- spin_lock(&d->grant_table->lock);
-
- if ( d->grant_table->gt_version == 0 )
- d->grant_table->gt_version = 1;
-
- if ( d->grant_table->gt_version == 2 &&
- (xatp.idx & XENMAPIDX_grant_table_status) )
- {
- xatp.idx &= ~XENMAPIDX_grant_table_status;
- if ( xatp.idx < nr_status_frames(d->grant_table) )
- mfn = virt_to_mfn(d->grant_table->status[xatp.idx]);
- }
- else
- {
- if ( (xatp.idx >= nr_grant_frames(d->grant_table)) &&
- (xatp.idx < max_nr_grant_frames) )
- gnttab_grow_table(d, xatp.idx + 1);
-
- if ( xatp.idx < nr_grant_frames(d->grant_table) )
- mfn = virt_to_mfn(d->grant_table->shared_raw[xatp.idx]);
- }
-
- spin_unlock(&d->grant_table->lock);
- break;
- case XENMAPSPACE_gmfn:
- {
- p2m_type_t p2mt;
-
- xatp.idx = mfn_x(gfn_to_mfn_unshare(d, xatp.idx, &p2mt));
- /* If the page is still shared, exit early */
- if ( p2m_is_shared(p2mt) )
- {
- rcu_unlock_domain(d);
- return -ENOMEM;
- }
- if ( !get_page_from_pagenr(xatp.idx, d) )
- break;
- mfn = xatp.idx;
- page = mfn_to_page(mfn);
- break;
- }
- default:
- break;
- }
-
- if ( !paging_mode_translate(d) || (mfn == 0) )
- {
- if ( page )
- put_page(page);
- rcu_unlock_domain(d);
- return -EINVAL;
- }
-
- domain_lock(d);
-
- if ( page )
- put_page(page);
-
- /* Remove previously mapped page if it was present. */
- prev_mfn = gmfn_to_mfn(d, xatp.gpfn);
- if ( mfn_valid(prev_mfn) )
- {
- if ( is_xen_heap_mfn(prev_mfn) )
- /* Xen heap frames are simply unhooked from this phys slot. */
- guest_physmap_remove_page(d, xatp.gpfn, prev_mfn, 0);
- else
- /* Normal domain memory is freed, to avoid leaking memory. */
- guest_remove_page(d, xatp.gpfn);
- }
-
- /* Unmap from old location, if any. */
- gpfn = get_gpfn_from_mfn(mfn);
- ASSERT( gpfn != SHARED_M2P_ENTRY );
- if ( gpfn != INVALID_M2P_ENTRY )
- guest_physmap_remove_page(d, gpfn, mfn, 0);
-
- /* Map at new location. */
- rc = guest_physmap_add_page(d, xatp.gpfn, mfn, 0);
-
- domain_unlock(d);
+ rc = xenmem_add_to_physmap(d, xatp);
rcu_unlock_domain(d);
* [PATCH 0/6] IOMMU, vtd and iotlb flush rework (v5)
@ 2011-11-10 8:43 Jean Guyader
2011-11-10 8:43 ` [PATCH 1/6] vtd: Refactor iotlb flush code Jean Guyader
0 siblings, 1 reply; 15+ messages in thread
From: Jean Guyader @ 2011-11-10 8:43 UTC (permalink / raw)
To: xen-devel; +Cc: tim, allen.m.kay, keir, Jean Guyader, JBeulich
In one of my previous emails I detailed a bug I was seeing when passing
through an Intel GPU to a guest that has more than 4G of RAM.
Allen suggested that I go for Plan B, but after a discussion with Tim
we agreed that Plan B was far too disruptive in terms of code change.
This patch series implements Plan A.
http://xen.1045712.n5.nabble.com/VTD-Intel-iommu-IOTLB-flush-really-slow-td4952866.html
Changes between v4 and v5:
- Fix hypercall continuation for add_to_physmap in compat mode.
Changes between v3 and v4:
- Move the loop for gmfn_range from arch_memory_op to xenmem_add_to_physmap.
- Add a comment to explain the purpose of iommu_dont_flush_iotlb.
Changes between v2 and v3:
- Check for the presence of the iotlb_flush_all callback before calling it.
Changes between v1 and v2:
- Move .size in struct xen_add_to_physmap into the padding between .domid and .space.
- Store iommu_dont_flush per cpu
- Change the code in hvmloader to relocate in batches of 64K; .size is now 16 bits (see the sketch below).
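As a rough illustration of the hvmloader side (a sketch under this series'
assumptions: the .size field and XENMAPSPACE_gmfn_range are introduced by
later patches, and src_gpfn/dst_gpfn are placeholder variables; the helper
names follow tools/firmware/hvmloader):

    /* Sketch: relocate one 64K batch (16 pages) per hypercall. */
    struct xen_add_to_physmap xatp;

    xatp.domid = DOMID_SELF;
    xatp.space = XENMAPSPACE_gmfn_range;
    xatp.idx   = src_gpfn;                   /* first source page */
    xatp.gpfn  = dst_gpfn;                   /* first destination page */
    xatp.size  = (64 * 1024) >> PAGE_SHIFT;  /* 16 pages per batch */

    if ( hypercall_memory_op(XENMEM_add_to_physmap, &xatp) != 0 )
        BUG();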
Jean Guyader (6):
vtd: Refactor iotlb flush code
iommu: Introduce iommu_flush and iommu_flush_all.
add_to_physmap: Move the code for XENMEM_add_to_physmap.
mm: New XENMEM space, XENMAPSPACE_gmfn_range
hvmloader: Change memory relocation loop when overlap with PCI hole.
Introduce per cpu flag (iommu_dont_flush_iotlb) to avoid unnecessary
iotlb flush
tools/firmware/hvmloader/pci.c | 20 +++++--
xen/arch/x86/mm.c | 82 ++++++++++++++++++++++------
xen/arch/x86/x86_64/compat/mm.c | 10 ++++
xen/drivers/passthrough/iommu.c | 25 +++++++++
xen/drivers/passthrough/vtd/iommu.c | 100 ++++++++++++++++++++---------------
xen/include/public/memory.h | 4 ++
xen/include/xen/iommu.h | 17 ++++++
7 files changed, 192 insertions(+), 66 deletions(-)
* [PATCH 1/6] vtd: Refactor iotlb flush code
2011-11-10 8:43 [PATCH 0/6] IOMMU, vtd and iotlb flush rework (v5) Jean Guyader
@ 2011-11-10 8:43 ` Jean Guyader
2011-11-10 8:44 ` [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all Jean Guyader
0 siblings, 1 reply; 15+ messages in thread
From: Jean Guyader @ 2011-11-10 8:43 UTC (permalink / raw)
To: xen-devel; +Cc: tim, allen.m.kay, keir, Jean Guyader, JBeulich
Factorize the iotlb flush code from map_page and unmap_page into
its own function.
Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
Acked-by: Allen M Kay <allen.m.kay@intel.com>
---
xen/drivers/passthrough/vtd/iommu.c | 86 +++++++++++++++++-----------------
1 files changed, 43 insertions(+), 43 deletions(-)
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 7717ab4..5a5b6be 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -578,16 +578,53 @@ static void iommu_flush_all(void)
}
}
+static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
+ int dma_old_pte_present, unsigned int page_count)
+{
+ struct hvm_iommu *hd = domain_hvm_iommu(d);
+ struct acpi_drhd_unit *drhd;
+ struct iommu *iommu;
+ int flush_dev_iotlb;
+ int iommu_domid;
+
+ /*
+ * No need pcideves_lock here because we have flush
+ * when assign/deassign device
+ */
+ for_each_drhd_unit ( drhd )
+ {
+ iommu = drhd->iommu;
+
+ if ( !test_bit(iommu->index, &hd->iommu_bitmap) )
+ continue;
+
+ flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
+ iommu_domid= domain_iommu_domid(d, iommu);
+ if ( iommu_domid == -1 )
+ continue;
+
+ if ( page_count > 1 || gfn == -1 )
+ {
+ if ( iommu_flush_iotlb_dsi(iommu, iommu_domid,
+ 0, flush_dev_iotlb) )
+ iommu_flush_write_buffer(iommu);
+ }
+ else
+ {
+ if ( iommu_flush_iotlb_psi(iommu, iommu_domid,
+ (paddr_t)gfn << PAGE_SHIFT_4K, 0,
+ !dma_old_pte_present, flush_dev_iotlb) )
+ iommu_flush_write_buffer(iommu);
+ }
+ }
+}
+
/* clear one page's page table */
static void dma_pte_clear_one(struct domain *domain, u64 addr)
{
struct hvm_iommu *hd = domain_hvm_iommu(domain);
- struct acpi_drhd_unit *drhd;
- struct iommu *iommu;
struct dma_pte *page = NULL, *pte = NULL;
u64 pg_maddr;
- int flush_dev_iotlb;
- int iommu_domid;
struct mapped_rmrr *mrmrr;
spin_lock(&hd->mapping_lock);
@@ -613,21 +650,7 @@ static void dma_pte_clear_one(struct domain *domain, u64 addr)
spin_unlock(&hd->mapping_lock);
iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
- /* No need pcidevs_lock here since do that on assign/deassign device*/
- for_each_drhd_unit ( drhd )
- {
- iommu = drhd->iommu;
- if ( test_bit(iommu->index, &hd->iommu_bitmap) )
- {
- flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
- iommu_domid= domain_iommu_domid(domain, iommu);
- if ( iommu_domid == -1 )
- continue;
- if ( iommu_flush_iotlb_psi(iommu, iommu_domid, addr,
- 0, 0, flush_dev_iotlb) )
- iommu_flush_write_buffer(iommu);
- }
- }
+ __intel_iommu_iotlb_flush(domain, addr >> PAGE_SHIFT_4K , 0, 1);
unmap_vtd_domain_page(page);
@@ -1677,12 +1700,8 @@ static int intel_iommu_map_page(
unsigned int flags)
{
struct hvm_iommu *hd = domain_hvm_iommu(d);
- struct acpi_drhd_unit *drhd;
- struct iommu *iommu;
struct dma_pte *page = NULL, *pte = NULL, old, new = { 0 };
u64 pg_maddr;
- int flush_dev_iotlb;
- int iommu_domid;
/* Do nothing if VT-d shares EPT page table */
if ( iommu_use_hap_pt(d) )
@@ -1724,26 +1743,7 @@ static int intel_iommu_map_page(
spin_unlock(&hd->mapping_lock);
unmap_vtd_domain_page(page);
- /*
- * No need pcideves_lock here because we have flush
- * when assign/deassign device
- */
- for_each_drhd_unit ( drhd )
- {
- iommu = drhd->iommu;
-
- if ( !test_bit(iommu->index, &hd->iommu_bitmap) )
- continue;
-
- flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
- iommu_domid= domain_iommu_domid(d, iommu);
- if ( iommu_domid == -1 )
- continue;
- if ( iommu_flush_iotlb_psi(iommu, iommu_domid,
- (paddr_t)gfn << PAGE_SHIFT_4K, 0,
- !dma_pte_present(old), flush_dev_iotlb) )
- iommu_flush_write_buffer(iommu);
- }
+ __intel_iommu_iotlb_flush(d, gfn, dma_pte_present(old), 1);
return 0;
}
* [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all.
2011-11-10 8:43 ` [PATCH 1/6] vtd: Refactor iotlb flush code Jean Guyader
@ 2011-11-10 8:44 ` Jean Guyader
2011-11-10 8:44 ` [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap Jean Guyader
0 siblings, 1 reply; 15+ messages in thread
From: Jean Guyader @ 2011-11-10 8:44 UTC (permalink / raw)
To: xen-devel; +Cc: tim, allen.m.kay, keir, Jean Guyader, JBeulich
Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
Acked-by: Allen M Kay <allen.m.kay@intel.com>
---
xen/drivers/passthrough/iommu.c | 20 ++++++++++++++++++++
xen/drivers/passthrough/vtd/iommu.c | 12 ++++++++++++
xen/include/xen/iommu.h | 5 +++++
3 files changed, 37 insertions(+), 0 deletions(-)
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index cd6174d..ca7b37b 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -301,6 +301,26 @@ int iommu_unmap_page(struct domain *d, unsigned long gfn)
return hd->platform_ops->unmap_page(d, gfn);
}
+void iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count)
+{
+ struct hvm_iommu *hd = domain_hvm_iommu(d);
+
+ if ( !iommu_enabled || !hd->platform_ops || !hd->platform_ops->iotlb_flush )
+ return;
+
+ hd->platform_ops->iotlb_flush(d, gfn, page_count);
+}
+
+void iommu_iotlb_flush_all(struct domain *d)
+{
+ struct hvm_iommu *hd = domain_hvm_iommu(d);
+
+ if ( !iommu_enabled || !hd->platform_ops || !hd->platform_ops->iotlb_flush_all )
+ return;
+
+ hd->platform_ops->iotlb_flush_all(d);
+}
+
/* caller should hold the pcidevs_lock */
int deassign_device(struct domain *d, u16 seg, u8 bus, u8 devfn)
{
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 5a5b6be..7ec9541 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -619,6 +619,16 @@ static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
}
}
+static void intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count)
+{
+ __intel_iommu_iotlb_flush(d, gfn, 1, page_count);
+}
+
+static void intel_iommu_iotlb_flush_all(struct domain *d)
+{
+ __intel_iommu_iotlb_flush(d, 0, 0, 0);
+}
+
/* clear one page's page table */
static void dma_pte_clear_one(struct domain *domain, u64 addr)
{
@@ -2329,6 +2339,8 @@ const struct iommu_ops intel_iommu_ops = {
.resume = vtd_resume,
.share_p2m = iommu_set_pgd,
.crash_shutdown = vtd_crash_shutdown,
+ .iotlb_flush = intel_iommu_iotlb_flush,
+ .iotlb_flush_all = intel_iommu_iotlb_flush_all,
};
/*
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 837e60d..a1034df 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -139,6 +139,8 @@ struct iommu_ops {
void (*resume)(void);
void (*share_p2m)(struct domain *d);
void (*crash_shutdown)(void);
+ void (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int page_count);
+ void (*iotlb_flush_all)(struct domain *d);
};
void iommu_update_ire_from_apic(unsigned int apic, unsigned int reg, unsigned int value);
@@ -155,4 +157,7 @@ void iommu_share_p2m_table(struct domain *d);
int iommu_do_domctl(struct xen_domctl *, XEN_GUEST_HANDLE(xen_domctl_t));
+void iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count);
+void iommu_iotlb_flush_all(struct domain *d);
+
#endif /* _IOMMU_H_ */
* [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap.
2011-11-10 8:44 ` [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all Jean Guyader
@ 2011-11-10 8:44 ` Jean Guyader
0 siblings, 0 replies; 15+ messages in thread
From: Jean Guyader @ 2011-11-10 8:44 UTC (permalink / raw)
To: xen-devel; +Cc: tim, allen.m.kay, keir, Jean Guyader, JBeulich
Move the code for the XENMEM_add_to_physmap case into its own
function (xenmem_add_to_physmap).
Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
Acked-by: Tim Deegan <tim@xen.org>
---
xen/arch/x86/mm.c | 188 ++++++++++++++++++++++++++++-------------------------
1 files changed, 99 insertions(+), 89 deletions(-)
[-- Attachment #2: 0003-add_to_physmap-Move-the-code-for-XENMEM_add_to_physm.patch --]
[-- Type: text/x-patch; name="0003-add_to_physmap-Move-the-code-for-XENMEM_add_to_physm.patch", Size: 7065 bytes --]
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index acc1f34..f75011e 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4592,119 +4592,129 @@ static int handle_iomem_range(unsigned long s, unsigned long e, void *p)
return 0;
}
-long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+static int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap xatp)
{
struct page_info *page = NULL;
+ unsigned long prev_mfn, mfn = 0, gpfn;
int rc;
- switch ( op )
- {
- case XENMEM_add_to_physmap:
+ switch ( xatp.space )
{
- struct xen_add_to_physmap xatp;
- unsigned long prev_mfn, mfn = 0, gpfn;
- struct domain *d;
-
- if ( copy_from_guest(&xatp, arg, 1) )
- return -EFAULT;
+ case XENMAPSPACE_shared_info:
+ if ( xatp.idx == 0 )
+ mfn = virt_to_mfn(d->shared_info);
+ break;
+ case XENMAPSPACE_grant_table:
+ spin_lock(&d->grant_table->lock);
- rc = rcu_lock_target_domain_by_id(xatp.domid, &d);
- if ( rc != 0 )
- return rc;
+ if ( d->grant_table->gt_version == 0 )
+ d->grant_table->gt_version = 1;
- if ( xsm_add_to_physmap(current->domain, d) )
+ if ( d->grant_table->gt_version == 2 &&
+ (xatp.idx & XENMAPIDX_grant_table_status) )
{
- rcu_unlock_domain(d);
- return -EPERM;
+ xatp.idx &= ~XENMAPIDX_grant_table_status;
+ if ( xatp.idx < nr_status_frames(d->grant_table) )
+ mfn = virt_to_mfn(d->grant_table->status[xatp.idx]);
+ }
+ else
+ {
+ if ( (xatp.idx >= nr_grant_frames(d->grant_table)) &&
+ (xatp.idx < max_nr_grant_frames) )
+ gnttab_grow_table(d, xatp.idx + 1);
+
+ if ( xatp.idx < nr_grant_frames(d->grant_table) )
+ mfn = virt_to_mfn(d->grant_table->shared_raw[xatp.idx]);
}
- switch ( xatp.space )
+ spin_unlock(&d->grant_table->lock);
+ break;
+ case XENMAPSPACE_gmfn:
+ {
+ p2m_type_t p2mt;
+
+ xatp.idx = mfn_x(gfn_to_mfn_unshare(d, xatp.idx, &p2mt));
+ /* If the page is still shared, exit early */
+ if ( p2m_is_shared(p2mt) )
{
- case XENMAPSPACE_shared_info:
- if ( xatp.idx == 0 )
- mfn = virt_to_mfn(d->shared_info);
+ rcu_unlock_domain(d);
+ return -ENOMEM;
+ }
+ if ( !get_page_from_pagenr(xatp.idx, d) )
break;
- case XENMAPSPACE_grant_table:
- spin_lock(&d->grant_table->lock);
+ mfn = xatp.idx;
+ page = mfn_to_page(mfn);
+ break;
+ }
+ default:
+ break;
+ }
- if ( d->grant_table->gt_version == 0 )
- d->grant_table->gt_version = 1;
+ if ( !paging_mode_translate(d) || (mfn == 0) )
+ {
+ if ( page )
+ put_page(page);
+ rcu_unlock_domain(d);
+ return -EINVAL;
+ }
- if ( d->grant_table->gt_version == 2 &&
- (xatp.idx & XENMAPIDX_grant_table_status) )
- {
- xatp.idx &= ~XENMAPIDX_grant_table_status;
- if ( xatp.idx < nr_status_frames(d->grant_table) )
- mfn = virt_to_mfn(d->grant_table->status[xatp.idx]);
- }
- else
- {
- if ( (xatp.idx >= nr_grant_frames(d->grant_table)) &&
- (xatp.idx < max_nr_grant_frames) )
- gnttab_grow_table(d, xatp.idx + 1);
+ domain_lock(d);
- if ( xatp.idx < nr_grant_frames(d->grant_table) )
- mfn = virt_to_mfn(d->grant_table->shared_raw[xatp.idx]);
- }
+ if ( page )
+ put_page(page);
- spin_unlock(&d->grant_table->lock);
- break;
- case XENMAPSPACE_gmfn:
- {
- p2m_type_t p2mt;
+ /* Remove previously mapped page if it was present. */
+ prev_mfn = gmfn_to_mfn(d, xatp.gpfn);
+ if ( mfn_valid(prev_mfn) )
+ {
+ if ( is_xen_heap_mfn(prev_mfn) )
+ /* Xen heap frames are simply unhooked from this phys slot. */
+ guest_physmap_remove_page(d, xatp.gpfn, prev_mfn, 0);
+ else
+ /* Normal domain memory is freed, to avoid leaking memory. */
+ guest_remove_page(d, xatp.gpfn);
+ }
- xatp.idx = mfn_x(gfn_to_mfn_unshare(d, xatp.idx, &p2mt));
- /* If the page is still shared, exit early */
- if ( p2m_is_shared(p2mt) )
- {
- rcu_unlock_domain(d);
- return -ENOMEM;
- }
- if ( !get_page_from_pagenr(xatp.idx, d) )
- break;
- mfn = xatp.idx;
- page = mfn_to_page(mfn);
- break;
- }
- default:
- break;
- }
+ /* Unmap from old location, if any. */
+ gpfn = get_gpfn_from_mfn(mfn);
+ ASSERT( gpfn != SHARED_M2P_ENTRY );
+ if ( gpfn != INVALID_M2P_ENTRY )
+ guest_physmap_remove_page(d, gpfn, mfn, 0);
- if ( !paging_mode_translate(d) || (mfn == 0) )
- {
- if ( page )
- put_page(page);
- rcu_unlock_domain(d);
- return -EINVAL;
- }
+ /* Map at new location. */
+ rc = guest_physmap_add_page(d, xatp.gpfn, mfn, 0);
- domain_lock(d);
+ domain_unlock(d);
- if ( page )
- put_page(page);
+ return rc;
+}
- /* Remove previously mapped page if it was present. */
- prev_mfn = gmfn_to_mfn(d, xatp.gpfn);
- if ( mfn_valid(prev_mfn) )
- {
- if ( is_xen_heap_mfn(prev_mfn) )
- /* Xen heap frames are simply unhooked from this phys slot. */
- guest_physmap_remove_page(d, xatp.gpfn, prev_mfn, 0);
- else
- /* Normal domain memory is freed, to avoid leaking memory. */
- guest_remove_page(d, xatp.gpfn);
- }
- /* Unmap from old location, if any. */
- gpfn = get_gpfn_from_mfn(mfn);
- ASSERT( gpfn != SHARED_M2P_ENTRY );
- if ( gpfn != INVALID_M2P_ENTRY )
- guest_physmap_remove_page(d, gpfn, mfn, 0);
+long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+{
+ int rc;
+
+ switch ( op )
+ {
+ case XENMEM_add_to_physmap:
+ {
+ struct xen_add_to_physmap xatp;
+ struct domain *d;
- /* Map at new location. */
- rc = guest_physmap_add_page(d, xatp.gpfn, mfn, 0);
+ if ( copy_from_guest(&xatp, arg, 1) )
+ return -EFAULT;
+
+ rc = rcu_lock_target_domain_by_id(xatp.domid, &d);
+ if ( rc != 0 )
+ return rc;
+
+ if ( xsm_add_to_physmap(current->domain, d) )
+ {
+ rcu_unlock_domain(d);
+ return -EPERM;
+ }
- domain_unlock(d);
+ rc = xenmem_add_to_physmap(d, xatp);
rcu_unlock_domain(d);
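Pulling the body out into xenmem_add_to_physmap() is what lets a later patch in the series wrap it in a loop for the new gmfn-range space. A sketch of that loop, based only on the series description — the hypercall-continuation handling of the real patch is omitted, and the wrapper name is made up:

/* Hypothetical range wrapper around the helper introduced above. */
static int demo_add_to_physmap_range(struct domain *d,
                                     struct xen_add_to_physmap xatp,
                                     unsigned int nr)
{
    int rc = 0;
    unsigned int i;

    for ( i = 0; i < nr && rc == 0; i++ )
    {
        rc = xenmem_add_to_physmap(d, xatp);
        xatp.idx++;   /* next source frame */
        xatp.gpfn++;  /* next destination slot */
    }

    return rc;
}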
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH 0/6] IOMMU, vtd and iotlb flush rework (v4)
@ 2011-11-08 20:04 Jean Guyader
2011-11-08 20:04 ` [PATCH 1/6] vtd: Refactor iotlb flush code Jean Guyader
0 siblings, 1 reply; 15+ messages in thread
From: Jean Guyader @ 2011-11-08 20:04 UTC (permalink / raw)
To: xen-devel; +Cc: allen.m.kay, tim
In one of my previous emails I detailed a bug I was seeing when passing
through an Intel GPU to a guest that has more than 4G of RAM.
Allen suggested that I go for Plan B, but after a discussion with Tim
we agreed that Plan B was way too disruptive in terms of code change.
This patch series implements Plan A.
http://xen.1045712.n5.nabble.com/VTD-Intel-iommu-IOTLB-flush-really-slow-td4952866.html
Changes between v3 and v4:
- Move the loop for gmfn_range from arch_memory_op to xenmem_add_to_physmap.
- Add a comment to explain the purpose of iommu_dont_flush_iotlb.
Changes between v2 and v3:
- Check for the presence of the iotlb_flush_all callback before calling it.
Changes between v1 and v2:
- Move .size in struct xen_add_to_physmap into the padding between .domid and .space.
- Store iommu_dont_flush per cpu.
- Change the code in hvmloader to relocate in batches of 64K; .size is now 16 bits.
tools/firmware/hvmloader/pci.c | 20 +++++--
xen/arch/x86/mm.c | 82 ++++++++++++++++++++++------
xen/arch/x86/x86_64/compat/mm.c | 4 ++
xen/drivers/passthrough/iommu.c | 25 +++++++++
xen/drivers/passthrough/vtd/iommu.c | 100 ++++++++++++++++++++---------------
xen/include/public/memory.h | 4 ++
xen/include/xen/iommu.h | 17 ++++++
7 files changed, 186 insertions(+), 66 deletions(-)
Jean Guyader (6):
vtd: Refactor iotlb flush code
iommu: Introduce iommu_flush and iommu_flush_all.
add_to_physmap: Move the code for XENMEM_add_to_physmap.
mm: New XENMEM space, XENMAPSPACE_gmfn_range
hvmloader: Change memory relocation loop when overlap with PCI hole.
Introduce per cpu flag (iommu_dont_flush_iotlb) to avoid unnecessary iotlb flush
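Patches 4/6 and 6/6 combine into the pattern sketched below: suppress the per-page IOTLB flush around a batched mapping operation, then invalidate the affected range once. The per-cpu flag name comes from the series; demo_map_one() and the surrounding code are illustrative only.

/* Sketch only: batched mapping with a single trailing flush. */
static int demo_batched_map(struct domain *d, unsigned long gfn,
                            unsigned long gpfn, unsigned int nr)
{
    unsigned int i;
    int rc = 0;

    this_cpu(iommu_dont_flush_iotlb) = 1;

    for ( i = 0; i < nr && rc == 0; i++ )
        rc = demo_map_one(d, gfn + i, gpfn + i);  /* hypothetical per-page step */

    this_cpu(iommu_dont_flush_iotlb) = 0;

    /* One ranged flush for the whole batch instead of nr individual ones. */
    iommu_iotlb_flush(d, gpfn, i);

    return rc;
}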
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH 1/6] vtd: Refactor iotlb flush code
2011-11-08 20:04 [PATCH 0/6] IOMMU, vtd and iotlb flush rework (v4) Jean Guyader
@ 2011-11-08 20:04 ` Jean Guyader
2011-11-08 20:04 ` [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all Jean Guyader
0 siblings, 1 reply; 15+ messages in thread
From: Jean Guyader @ 2011-11-08 20:04 UTC (permalink / raw)
To: xen-devel; +Cc: allen.m.kay, tim, Jean Guyader
Factor the iotlb flush code from map_page and unmap_page out into
its own function.
Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
---
xen/drivers/passthrough/vtd/iommu.c | 86 +++++++++++++++++-----------------
1 files changed, 43 insertions(+), 43 deletions(-)
[-- Attachment #2: 0001-vtd-Refactor-iotlb-flush-code.patch --]
[-- Type: text/x-patch; name="0001-vtd-Refactor-iotlb-flush-code.patch", Size: 4122 bytes --]
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 7717ab4..5a5b6be 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -578,16 +578,53 @@ static void iommu_flush_all(void)
}
}
+static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
+ int dma_old_pte_present, unsigned int page_count)
+{
+ struct hvm_iommu *hd = domain_hvm_iommu(d);
+ struct acpi_drhd_unit *drhd;
+ struct iommu *iommu;
+ int flush_dev_iotlb;
+ int iommu_domid;
+
+ /*
+ * No need pcideves_lock here because we have flush
+ * when assign/deassign device
+ */
+ for_each_drhd_unit ( drhd )
+ {
+ iommu = drhd->iommu;
+
+ if ( !test_bit(iommu->index, &hd->iommu_bitmap) )
+ continue;
+
+ flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
+ iommu_domid= domain_iommu_domid(d, iommu);
+ if ( iommu_domid == -1 )
+ continue;
+
+ if ( page_count > 1 || gfn == -1 )
+ {
+ if ( iommu_flush_iotlb_dsi(iommu, iommu_domid,
+ 0, flush_dev_iotlb) )
+ iommu_flush_write_buffer(iommu);
+ }
+ else
+ {
+ if ( iommu_flush_iotlb_psi(iommu, iommu_domid,
+ (paddr_t)gfn << PAGE_SHIFT_4K, 0,
+ !dma_old_pte_present, flush_dev_iotlb) )
+ iommu_flush_write_buffer(iommu);
+ }
+ }
+}
+
/* clear one page's page table */
static void dma_pte_clear_one(struct domain *domain, u64 addr)
{
struct hvm_iommu *hd = domain_hvm_iommu(domain);
- struct acpi_drhd_unit *drhd;
- struct iommu *iommu;
struct dma_pte *page = NULL, *pte = NULL;
u64 pg_maddr;
- int flush_dev_iotlb;
- int iommu_domid;
struct mapped_rmrr *mrmrr;
spin_lock(&hd->mapping_lock);
@@ -613,21 +650,7 @@ static void dma_pte_clear_one(struct domain *domain, u64 addr)
spin_unlock(&hd->mapping_lock);
iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
- /* No need pcidevs_lock here since do that on assign/deassign device*/
- for_each_drhd_unit ( drhd )
- {
- iommu = drhd->iommu;
- if ( test_bit(iommu->index, &hd->iommu_bitmap) )
- {
- flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
- iommu_domid= domain_iommu_domid(domain, iommu);
- if ( iommu_domid == -1 )
- continue;
- if ( iommu_flush_iotlb_psi(iommu, iommu_domid, addr,
- 0, 0, flush_dev_iotlb) )
- iommu_flush_write_buffer(iommu);
- }
- }
+ __intel_iommu_iotlb_flush(domain, addr >> PAGE_SHIFT_4K, 0, 1);
unmap_vtd_domain_page(page);
@@ -1677,12 +1700,8 @@ static int intel_iommu_map_page(
unsigned int flags)
{
struct hvm_iommu *hd = domain_hvm_iommu(d);
- struct acpi_drhd_unit *drhd;
- struct iommu *iommu;
struct dma_pte *page = NULL, *pte = NULL, old, new = { 0 };
u64 pg_maddr;
- int flush_dev_iotlb;
- int iommu_domid;
/* Do nothing if VT-d shares EPT page table */
if ( iommu_use_hap_pt(d) )
@@ -1724,26 +1743,7 @@ static int intel_iommu_map_page(
spin_unlock(&hd->mapping_lock);
unmap_vtd_domain_page(page);
- /*
- * No need pcideves_lock here because we have flush
- * when assign/deassign device
- */
- for_each_drhd_unit ( drhd )
- {
- iommu = drhd->iommu;
-
- if ( !test_bit(iommu->index, &hd->iommu_bitmap) )
- continue;
-
- flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
- iommu_domid= domain_iommu_domid(d, iommu);
- if ( iommu_domid == -1 )
- continue;
- if ( iommu_flush_iotlb_psi(iommu, iommu_domid,
- (paddr_t)gfn << PAGE_SHIFT_4K, 0,
- !dma_pte_present(old), flush_dev_iotlb) )
- iommu_flush_write_buffer(iommu);
- }
+ __intel_iommu_iotlb_flush(d, gfn, dma_pte_present(old), 1);
return 0;
}
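The refactored helper picks the invalidation granularity from its arguments: page_count > 1 or gfn == -1 selects a domain-selective invalidation, anything else a page-selective one. Illustrative call shapes within vtd/iommu.c (not taken from the patch):

/* One page, no previous PTE present: page-selective invalidation. */
__intel_iommu_iotlb_flush(d, gfn, 0, 1);

/* One page that overwrote a present PTE: also page-selective. */
__intel_iommu_iotlb_flush(d, gfn, 1, 1);

/* A batch of pages (or gfn == -1): one domain-selective invalidation. */
__intel_iommu_iotlb_flush(d, gfn, 0, 64);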
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all.
2011-11-08 20:04 ` [PATCH 1/6] vtd: Refactor iotlb flush code Jean Guyader
@ 2011-11-08 20:04 ` Jean Guyader
2011-11-08 20:04 ` [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap Jean Guyader
0 siblings, 1 reply; 15+ messages in thread
From: Jean Guyader @ 2011-11-08 20:04 UTC (permalink / raw)
To: xen-devel; +Cc: allen.m.kay, tim, Jean Guyader
Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
---
xen/drivers/passthrough/iommu.c | 20 ++++++++++++++++++++
xen/drivers/passthrough/vtd/iommu.c | 12 ++++++++++++
xen/include/xen/iommu.h | 5 +++++
3 files changed, 37 insertions(+), 0 deletions(-)
[-- Attachment #2: 0002-iommu-Introduce-iommu_flush-and-iommu_flush_all.patch --]
[-- Type: text/x-patch; name="0002-iommu-Introduce-iommu_flush-and-iommu_flush_all.patch", Size: 2780 bytes --]
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index cd6174d..ca7b37b 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -301,6 +301,26 @@ int iommu_unmap_page(struct domain *d, unsigned long gfn)
return hd->platform_ops->unmap_page(d, gfn);
}
+void iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count)
+{
+ struct hvm_iommu *hd = domain_hvm_iommu(d);
+
+ if ( !iommu_enabled || !hd->platform_ops || !hd->platform_ops->iotlb_flush )
+ return;
+
+ hd->platform_ops->iotlb_flush(d, gfn, page_count);
+}
+
+void iommu_iotlb_flush_all(struct domain *d)
+{
+ struct hvm_iommu *hd = domain_hvm_iommu(d);
+
+ if ( !iommu_enabled || !hd->platform_ops || !hd->platform_ops->iotlb_flush_all )
+ return;
+
+ hd->platform_ops->iotlb_flush_all(d);
+}
+
/* caller should hold the pcidevs_lock */
int deassign_device(struct domain *d, u16 seg, u8 bus, u8 devfn)
{
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 5a5b6be..7ec9541 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -619,6 +619,16 @@ static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
}
}
+static void intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count)
+{
+ __intel_iommu_iotlb_flush(d, gfn, 1, page_count);
+}
+
+static void intel_iommu_iotlb_flush_all(struct domain *d)
+{
+ __intel_iommu_iotlb_flush(d, 0, 0, 0);
+}
+
/* clear one page's page table */
static void dma_pte_clear_one(struct domain *domain, u64 addr)
{
@@ -2329,6 +2339,8 @@ const struct iommu_ops intel_iommu_ops = {
.resume = vtd_resume,
.share_p2m = iommu_set_pgd,
.crash_shutdown = vtd_crash_shutdown,
+ .iotlb_flush = intel_iommu_iotlb_flush,
+ .iotlb_flush_all = intel_iommu_iotlb_flush_all,
};
/*
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 837e60d..a1034df 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -139,6 +139,8 @@ struct iommu_ops {
void (*resume)(void);
void (*share_p2m)(struct domain *d);
void (*crash_shutdown)(void);
+ void (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int page_count);
+ void (*iotlb_flush_all)(struct domain *d);
};
void iommu_update_ire_from_apic(unsigned int apic, unsigned int reg, unsigned int value);
@@ -155,4 +157,7 @@ void iommu_share_p2m_table(struct domain *d);
int iommu_do_domctl(struct xen_domctl *, XEN_GUEST_HANDLE(xen_domctl_t));
+void iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count);
+void iommu_iotlb_flush_all(struct domain *d);
+
#endif /* _IOMMU_H_ */
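Both wrappers bail out when iommu_enabled is clear, platform_ops is absent, or the vendor hook is missing, so call sites need no guards of their own. A hypothetical caller, for illustration:

/* Safe on hosts without an IOMMU: the wrappers degrade to no-ops. */
static void demo_flush(struct domain *d, unsigned long gfn, unsigned int nr)
{
    if ( nr )
        iommu_iotlb_flush(d, gfn, nr);   /* ranged invalidation */
    else
        iommu_iotlb_flush_all(d);        /* whole-domain invalidation */
}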
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap.
2011-11-08 20:04 ` [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all Jean Guyader
@ 2011-11-08 20:04 ` Jean Guyader
0 siblings, 0 replies; 15+ messages in thread
From: Jean Guyader @ 2011-11-08 20:04 UTC (permalink / raw)
To: xen-devel; +Cc: allen.m.kay, tim, Jean Guyader
Move the code for the XENMEM_add_to_physmap case into its own
function (xenmem_add_to_physmap).
Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
---
xen/arch/x86/mm.c | 188 ++++++++++++++++++++++++++++-------------------------
1 files changed, 99 insertions(+), 89 deletions(-)
[-- Attachment #2: 0003-add_to_physmap-Move-the-code-for-XENMEM_add_to_physm.patch --]
[-- Type: text/x-patch; name="0003-add_to_physmap-Move-the-code-for-XENMEM_add_to_physm.patch", Size: 7065 bytes --]
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index acc1f34..f75011e 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4592,119 +4592,129 @@ static int handle_iomem_range(unsigned long s, unsigned long e, void *p)
return 0;
}
-long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+static int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap xatp)
{
struct page_info *page = NULL;
+ unsigned long prev_mfn, mfn = 0, gpfn;
int rc;
- switch ( op )
- {
- case XENMEM_add_to_physmap:
+ switch ( xatp.space )
{
- struct xen_add_to_physmap xatp;
- unsigned long prev_mfn, mfn = 0, gpfn;
- struct domain *d;
-
- if ( copy_from_guest(&xatp, arg, 1) )
- return -EFAULT;
+ case XENMAPSPACE_shared_info:
+ if ( xatp.idx == 0 )
+ mfn = virt_to_mfn(d->shared_info);
+ break;
+ case XENMAPSPACE_grant_table:
+ spin_lock(&d->grant_table->lock);
- rc = rcu_lock_target_domain_by_id(xatp.domid, &d);
- if ( rc != 0 )
- return rc;
+ if ( d->grant_table->gt_version == 0 )
+ d->grant_table->gt_version = 1;
- if ( xsm_add_to_physmap(current->domain, d) )
+ if ( d->grant_table->gt_version == 2 &&
+ (xatp.idx & XENMAPIDX_grant_table_status) )
{
- rcu_unlock_domain(d);
- return -EPERM;
+ xatp.idx &= ~XENMAPIDX_grant_table_status;
+ if ( xatp.idx < nr_status_frames(d->grant_table) )
+ mfn = virt_to_mfn(d->grant_table->status[xatp.idx]);
+ }
+ else
+ {
+ if ( (xatp.idx >= nr_grant_frames(d->grant_table)) &&
+ (xatp.idx < max_nr_grant_frames) )
+ gnttab_grow_table(d, xatp.idx + 1);
+
+ if ( xatp.idx < nr_grant_frames(d->grant_table) )
+ mfn = virt_to_mfn(d->grant_table->shared_raw[xatp.idx]);
}
- switch ( xatp.space )
+ spin_unlock(&d->grant_table->lock);
+ break;
+ case XENMAPSPACE_gmfn:
+ {
+ p2m_type_t p2mt;
+
+ xatp.idx = mfn_x(gfn_to_mfn_unshare(d, xatp.idx, &p2mt));
+ /* If the page is still shared, exit early */
+ if ( p2m_is_shared(p2mt) )
{
- case XENMAPSPACE_shared_info:
- if ( xatp.idx == 0 )
- mfn = virt_to_mfn(d->shared_info);
+ rcu_unlock_domain(d);
+ return -ENOMEM;
+ }
+ if ( !get_page_from_pagenr(xatp.idx, d) )
break;
- case XENMAPSPACE_grant_table:
- spin_lock(&d->grant_table->lock);
+ mfn = xatp.idx;
+ page = mfn_to_page(mfn);
+ break;
+ }
+ default:
+ break;
+ }
- if ( d->grant_table->gt_version == 0 )
- d->grant_table->gt_version = 1;
+ if ( !paging_mode_translate(d) || (mfn == 0) )
+ {
+ if ( page )
+ put_page(page);
+ rcu_unlock_domain(d);
+ return -EINVAL;
+ }
- if ( d->grant_table->gt_version == 2 &&
- (xatp.idx & XENMAPIDX_grant_table_status) )
- {
- xatp.idx &= ~XENMAPIDX_grant_table_status;
- if ( xatp.idx < nr_status_frames(d->grant_table) )
- mfn = virt_to_mfn(d->grant_table->status[xatp.idx]);
- }
- else
- {
- if ( (xatp.idx >= nr_grant_frames(d->grant_table)) &&
- (xatp.idx < max_nr_grant_frames) )
- gnttab_grow_table(d, xatp.idx + 1);
+ domain_lock(d);
- if ( xatp.idx < nr_grant_frames(d->grant_table) )
- mfn = virt_to_mfn(d->grant_table->shared_raw[xatp.idx]);
- }
+ if ( page )
+ put_page(page);
- spin_unlock(&d->grant_table->lock);
- break;
- case XENMAPSPACE_gmfn:
- {
- p2m_type_t p2mt;
+ /* Remove previously mapped page if it was present. */
+ prev_mfn = gmfn_to_mfn(d, xatp.gpfn);
+ if ( mfn_valid(prev_mfn) )
+ {
+ if ( is_xen_heap_mfn(prev_mfn) )
+ /* Xen heap frames are simply unhooked from this phys slot. */
+ guest_physmap_remove_page(d, xatp.gpfn, prev_mfn, 0);
+ else
+ /* Normal domain memory is freed, to avoid leaking memory. */
+ guest_remove_page(d, xatp.gpfn);
+ }
- xatp.idx = mfn_x(gfn_to_mfn_unshare(d, xatp.idx, &p2mt));
- /* If the page is still shared, exit early */
- if ( p2m_is_shared(p2mt) )
- {
- rcu_unlock_domain(d);
- return -ENOMEM;
- }
- if ( !get_page_from_pagenr(xatp.idx, d) )
- break;
- mfn = xatp.idx;
- page = mfn_to_page(mfn);
- break;
- }
- default:
- break;
- }
+ /* Unmap from old location, if any. */
+ gpfn = get_gpfn_from_mfn(mfn);
+ ASSERT( gpfn != SHARED_M2P_ENTRY );
+ if ( gpfn != INVALID_M2P_ENTRY )
+ guest_physmap_remove_page(d, gpfn, mfn, 0);
- if ( !paging_mode_translate(d) || (mfn == 0) )
- {
- if ( page )
- put_page(page);
- rcu_unlock_domain(d);
- return -EINVAL;
- }
+ /* Map at new location. */
+ rc = guest_physmap_add_page(d, xatp.gpfn, mfn, 0);
- domain_lock(d);
+ domain_unlock(d);
- if ( page )
- put_page(page);
+ return rc;
+}
- /* Remove previously mapped page if it was present. */
- prev_mfn = gmfn_to_mfn(d, xatp.gpfn);
- if ( mfn_valid(prev_mfn) )
- {
- if ( is_xen_heap_mfn(prev_mfn) )
- /* Xen heap frames are simply unhooked from this phys slot. */
- guest_physmap_remove_page(d, xatp.gpfn, prev_mfn, 0);
- else
- /* Normal domain memory is freed, to avoid leaking memory. */
- guest_remove_page(d, xatp.gpfn);
- }
- /* Unmap from old location, if any. */
- gpfn = get_gpfn_from_mfn(mfn);
- ASSERT( gpfn != SHARED_M2P_ENTRY );
- if ( gpfn != INVALID_M2P_ENTRY )
- guest_physmap_remove_page(d, gpfn, mfn, 0);
+long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+{
+ int rc;
+
+ switch ( op )
+ {
+ case XENMEM_add_to_physmap:
+ {
+ struct xen_add_to_physmap xatp;
+ struct domain *d;
- /* Map at new location. */
- rc = guest_physmap_add_page(d, xatp.gpfn, mfn, 0);
+ if ( copy_from_guest(&xatp, arg, 1) )
+ return -EFAULT;
+
+ rc = rcu_lock_target_domain_by_id(xatp.domid, &d);
+ if ( rc != 0 )
+ return rc;
+
+ if ( xsm_add_to_physmap(current->domain, d) )
+ {
+ rcu_unlock_domain(d);
+ return -EPERM;
+ }
- domain_unlock(d);
+ rc = xenmem_add_to_physmap(d, xatp);
rcu_unlock_domain(d);
^ permalink raw reply related [flat|nested] 15+ messages in thread
* IOMMU, vtd and iotlb flush rework (v3)
@ 2011-11-07 18:25 Jean Guyader
2011-11-07 18:25 ` [PATCH 1/6] vtd: Refactor iotlb flush code Jean Guyader
0 siblings, 1 reply; 15+ messages in thread
From: Jean Guyader @ 2011-11-07 18:25 UTC (permalink / raw)
To: xen-devel; +Cc: allen.m.kay, tim
In one of my previous emails I detailed a bug I was seeing when passing
through an Intel GPU to a guest that has more than 4G of RAM.
Allen suggested that I go for Plan B, but after a discussion with Tim
we agreed that Plan B was way too disruptive in terms of code change.
This patch series implements Plan A.
http://xen.1045712.n5.nabble.com/VTD-Intel-iommu-IOTLB-flush-really-slow-td4952866.html
Changes between v2 and v3:
- Check for the presence of the iotlb_flush_all callback before calling it.
Changes between v1 and v2:
- Move .size in struct xen_add_to_physmap into the padding between .domid and .space.
- Store iommu_dont_flush per cpu.
- Change the code in hvmloader to relocate in batches of 64K; .size is now 16 bits.
Jean Guyader (6):
vtd: Refactor iotlb flush code
iommu: Introduce iommu_flush and iommu_flush_all.
add_to_physmap: Move the code for XENMEM_add_to_physmap.
mm: New XENMEM, XENMEM_add_to_physmap_gmfn_range
hvmloader: Change memory relocation loop when overlap with PCI hole.
Introduce per cpu flag (iommu_dont_flush_iotlb) to avoid unnecessary iotlb flush
tools/firmware/hvmloader/pci.c | 20 +++-
xen/arch/x86/mm.c | 203 +++++++++++++++++++++--------------
xen/drivers/passthrough/iommu.c | 25 +++++
xen/drivers/passthrough/vtd/iommu.c | 100 ++++++++++--------
xen/include/public/memory.h | 4 +
xen/include/xen/iommu.h | 7 ++
6 files changed, 230 insertions(+), 129 deletions(-)
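For reference, the hvmloader change listed above amounts to a loop of roughly the following shape. hypercall_memory_op() is hvmloader's existing hypercall wrapper; the batch bound, the error handling, and the field/space names (which follow the later revisions of this series) are assumptions drawn from the changelog, not code from the patch.

/* Sketch: relocate nr_pages guest frames in 16-bit-bounded batches. */
static void demo_relocate(unsigned long src, unsigned long dst,
                          unsigned long nr_pages)
{
    struct xen_add_to_physmap xatp = {
        .domid = DOMID_SELF,
        .space = XENMAPSPACE_gmfn_range,
    };

    while ( nr_pages )
    {
        unsigned int batch = nr_pages > 0xffff ? 0xffff : nr_pages;

        xatp.idx  = src;
        xatp.gpfn = dst;
        xatp.size = batch;            /* .size is 16 bits per the changelog */

        if ( hypercall_memory_op(XENMEM_add_to_physmap, &xatp) != 0 )
            break;

        src += batch;
        dst += batch;
        nr_pages -= batch;
    }
}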
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH 1/6] vtd: Refactor iotlb flush code
2011-11-07 18:25 IOMMU, vtd and iotlb flush rework (v3) Jean Guyader
@ 2011-11-07 18:25 ` Jean Guyader
2011-11-07 18:25 ` [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all Jean Guyader
0 siblings, 1 reply; 15+ messages in thread
From: Jean Guyader @ 2011-11-07 18:25 UTC (permalink / raw)
To: xen-devel; +Cc: allen.m.kay, tim, Jean Guyader
Factor the iotlb flush code from map_page and unmap_page out into
its own function.
Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
---
xen/drivers/passthrough/vtd/iommu.c | 86 +++++++++++++++++-----------------
1 files changed, 43 insertions(+), 43 deletions(-)
[-- Attachment #2: 0001-vtd-Refactor-iotlb-flush-code.patch --]
[-- Type: text/x-patch; name="0001-vtd-Refactor-iotlb-flush-code.patch", Size: 4122 bytes --]
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 7717ab4..5a5b6be 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -578,16 +578,53 @@ static void iommu_flush_all(void)
}
}
+static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
+ int dma_old_pte_present, unsigned int page_count)
+{
+ struct hvm_iommu *hd = domain_hvm_iommu(d);
+ struct acpi_drhd_unit *drhd;
+ struct iommu *iommu;
+ int flush_dev_iotlb;
+ int iommu_domid;
+
+ /*
+ * No need for pcidevs_lock here because we flush
+ * when we assign/deassign a device
+ */
+ for_each_drhd_unit ( drhd )
+ {
+ iommu = drhd->iommu;
+
+ if ( !test_bit(iommu->index, &hd->iommu_bitmap) )
+ continue;
+
+ flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
+ iommu_domid= domain_iommu_domid(d, iommu);
+ if ( iommu_domid == -1 )
+ continue;
+
+ if ( page_count > 1 || gfn == -1 )
+ {
+ if ( iommu_flush_iotlb_dsi(iommu, iommu_domid,
+ 0, flush_dev_iotlb) )
+ iommu_flush_write_buffer(iommu);
+ }
+ else
+ {
+ if ( iommu_flush_iotlb_psi(iommu, iommu_domid,
+ (paddr_t)gfn << PAGE_SHIFT_4K, 0,
+ !dma_old_pte_present, flush_dev_iotlb) )
+ iommu_flush_write_buffer(iommu);
+ }
+ }
+}
+
/* clear one page's page table */
static void dma_pte_clear_one(struct domain *domain, u64 addr)
{
struct hvm_iommu *hd = domain_hvm_iommu(domain);
- struct acpi_drhd_unit *drhd;
- struct iommu *iommu;
struct dma_pte *page = NULL, *pte = NULL;
u64 pg_maddr;
- int flush_dev_iotlb;
- int iommu_domid;
struct mapped_rmrr *mrmrr;
spin_lock(&hd->mapping_lock);
@@ -613,21 +650,7 @@ static void dma_pte_clear_one(struct domain *domain, u64 addr)
spin_unlock(&hd->mapping_lock);
iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
- /* No need pcidevs_lock here since do that on assign/deassign device*/
- for_each_drhd_unit ( drhd )
- {
- iommu = drhd->iommu;
- if ( test_bit(iommu->index, &hd->iommu_bitmap) )
- {
- flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
- iommu_domid= domain_iommu_domid(domain, iommu);
- if ( iommu_domid == -1 )
- continue;
- if ( iommu_flush_iotlb_psi(iommu, iommu_domid, addr,
- 0, 0, flush_dev_iotlb) )
- iommu_flush_write_buffer(iommu);
- }
- }
+ __intel_iommu_iotlb_flush(domain, addr >> PAGE_SHIFT_4K, 0, 1);
unmap_vtd_domain_page(page);
@@ -1677,12 +1700,8 @@ static int intel_iommu_map_page(
unsigned int flags)
{
struct hvm_iommu *hd = domain_hvm_iommu(d);
- struct acpi_drhd_unit *drhd;
- struct iommu *iommu;
struct dma_pte *page = NULL, *pte = NULL, old, new = { 0 };
u64 pg_maddr;
- int flush_dev_iotlb;
- int iommu_domid;
/* Do nothing if VT-d shares EPT page table */
if ( iommu_use_hap_pt(d) )
@@ -1724,26 +1743,7 @@ static int intel_iommu_map_page(
spin_unlock(&hd->mapping_lock);
unmap_vtd_domain_page(page);
- /*
- * No need pcideves_lock here because we have flush
- * when assign/deassign device
- */
- for_each_drhd_unit ( drhd )
- {
- iommu = drhd->iommu;
-
- if ( !test_bit(iommu->index, &hd->iommu_bitmap) )
- continue;
-
- flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
- iommu_domid= domain_iommu_domid(d, iommu);
- if ( iommu_domid == -1 )
- continue;
- if ( iommu_flush_iotlb_psi(iommu, iommu_domid,
- (paddr_t)gfn << PAGE_SHIFT_4K, 0,
- !dma_pte_present(old), flush_dev_iotlb) )
- iommu_flush_write_buffer(iommu);
- }
+ __intel_iommu_iotlb_flush(d, gfn, dma_pte_present(old), 1);
return 0;
}
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all.
2011-11-07 18:25 ` [PATCH 1/6] vtd: Refactor iotlb flush code Jean Guyader
@ 2011-11-07 18:25 ` Jean Guyader
2011-11-07 18:25 ` [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap Jean Guyader
0 siblings, 1 reply; 15+ messages in thread
From: Jean Guyader @ 2011-11-07 18:25 UTC (permalink / raw)
To: xen-devel; +Cc: allen.m.kay, tim, Jean Guyader
Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
---
xen/drivers/passthrough/iommu.c | 20 ++++++++++++++++++++
xen/drivers/passthrough/vtd/iommu.c | 12 ++++++++++++
xen/include/xen/iommu.h | 5 +++++
3 files changed, 37 insertions(+), 0 deletions(-)
[-- Attachment #2: 0002-iommu-Introduce-iommu_flush-and-iommu_flush_all.patch --]
[-- Type: text/x-patch; name="0002-iommu-Introduce-iommu_flush-and-iommu_flush_all.patch", Size: 2780 bytes --]
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index cd6174d..ca7b37b 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -301,6 +301,26 @@ int iommu_unmap_page(struct domain *d, unsigned long gfn)
return hd->platform_ops->unmap_page(d, gfn);
}
+void iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count)
+{
+ struct hvm_iommu *hd = domain_hvm_iommu(d);
+
+ if ( !iommu_enabled || !hd->platform_ops || !hd->platform_ops->iotlb_flush )
+ return;
+
+ hd->platform_ops->iotlb_flush(d, gfn, page_count);
+}
+
+void iommu_iotlb_flush_all(struct domain *d)
+{
+ struct hvm_iommu *hd = domain_hvm_iommu(d);
+
+ if ( !iommu_enabled || !hd->platform_ops || !hd->platform_ops->iotlb_flush_all )
+ return;
+
+ hd->platform_ops->iotlb_flush_all(d);
+}
+
/* caller should hold the pcidevs_lock */
int deassign_device(struct domain *d, u16 seg, u8 bus, u8 devfn)
{
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 5a5b6be..7ec9541 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -619,6 +619,16 @@ static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
}
}
+static void intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count)
+{
+ __intel_iommu_iotlb_flush(d, gfn, 1, page_count);
+}
+
+static void intel_iommu_iotlb_flush_all(struct domain *d)
+{
+ __intel_iommu_iotlb_flush(d, 0, 0, 0);
+}
+
/* clear one page's page table */
static void dma_pte_clear_one(struct domain *domain, u64 addr)
{
@@ -2329,6 +2339,8 @@ const struct iommu_ops intel_iommu_ops = {
.resume = vtd_resume,
.share_p2m = iommu_set_pgd,
.crash_shutdown = vtd_crash_shutdown,
+ .iotlb_flush = intel_iommu_iotlb_flush,
+ .iotlb_flush_all = intel_iommu_iotlb_flush_all,
};
/*
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 837e60d..a1034df 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -139,6 +139,8 @@ struct iommu_ops {
void (*resume)(void);
void (*share_p2m)(struct domain *d);
void (*crash_shutdown)(void);
+ void (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int page_count);
+ void (*iotlb_flush_all)(struct domain *d);
};
void iommu_update_ire_from_apic(unsigned int apic, unsigned int reg, unsigned int value);
@@ -155,4 +157,7 @@ void iommu_share_p2m_table(struct domain *d);
int iommu_do_domctl(struct xen_domctl *, XEN_GUEST_HANDLE(xen_domctl_t));
+void iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count);
+void iommu_iotlb_flush_all(struct domain *d);
+
#endif /* _IOMMU_H_ */
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap.
2011-11-07 18:25 ` [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all Jean Guyader
@ 2011-11-07 18:25 ` Jean Guyader
2011-11-08 13:24 ` Tim Deegan
0 siblings, 1 reply; 15+ messages in thread
From: Jean Guyader @ 2011-11-07 18:25 UTC (permalink / raw)
To: xen-devel; +Cc: allen.m.kay, tim, Jean Guyader
Move the code for the XENMEM_add_to_physmap case into its own
function (xenmem_add_to_physmap).
Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
---
xen/arch/x86/mm.c | 188 ++++++++++++++++++++++++++++-------------------------
1 files changed, 99 insertions(+), 89 deletions(-)
[-- Attachment #2: 0003-add_to_physmap-Move-the-code-for-XENMEM_add_to_physm.patch --]
[-- Type: text/x-patch; name="0003-add_to_physmap-Move-the-code-for-XENMEM_add_to_physm.patch", Size: 7065 bytes --]
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index acc1f34..f75011e 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4592,119 +4592,129 @@ static int handle_iomem_range(unsigned long s, unsigned long e, void *p)
return 0;
}
-long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+static int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap xatp)
{
struct page_info *page = NULL;
+ unsigned long prev_mfn, mfn = 0, gpfn;
int rc;
- switch ( op )
- {
- case XENMEM_add_to_physmap:
+ switch ( xatp.space )
{
- struct xen_add_to_physmap xatp;
- unsigned long prev_mfn, mfn = 0, gpfn;
- struct domain *d;
-
- if ( copy_from_guest(&xatp, arg, 1) )
- return -EFAULT;
+ case XENMAPSPACE_shared_info:
+ if ( xatp.idx == 0 )
+ mfn = virt_to_mfn(d->shared_info);
+ break;
+ case XENMAPSPACE_grant_table:
+ spin_lock(&d->grant_table->lock);
- rc = rcu_lock_target_domain_by_id(xatp.domid, &d);
- if ( rc != 0 )
- return rc;
+ if ( d->grant_table->gt_version == 0 )
+ d->grant_table->gt_version = 1;
- if ( xsm_add_to_physmap(current->domain, d) )
+ if ( d->grant_table->gt_version == 2 &&
+ (xatp.idx & XENMAPIDX_grant_table_status) )
{
- rcu_unlock_domain(d);
- return -EPERM;
+ xatp.idx &= ~XENMAPIDX_grant_table_status;
+ if ( xatp.idx < nr_status_frames(d->grant_table) )
+ mfn = virt_to_mfn(d->grant_table->status[xatp.idx]);
+ }
+ else
+ {
+ if ( (xatp.idx >= nr_grant_frames(d->grant_table)) &&
+ (xatp.idx < max_nr_grant_frames) )
+ gnttab_grow_table(d, xatp.idx + 1);
+
+ if ( xatp.idx < nr_grant_frames(d->grant_table) )
+ mfn = virt_to_mfn(d->grant_table->shared_raw[xatp.idx]);
}
- switch ( xatp.space )
+ spin_unlock(&d->grant_table->lock);
+ break;
+ case XENMAPSPACE_gmfn:
+ {
+ p2m_type_t p2mt;
+
+ xatp.idx = mfn_x(gfn_to_mfn_unshare(d, xatp.idx, &p2mt));
+ /* If the page is still shared, exit early */
+ if ( p2m_is_shared(p2mt) )
{
- case XENMAPSPACE_shared_info:
- if ( xatp.idx == 0 )
- mfn = virt_to_mfn(d->shared_info);
+ rcu_unlock_domain(d);
+ return -ENOMEM;
+ }
+ if ( !get_page_from_pagenr(xatp.idx, d) )
break;
- case XENMAPSPACE_grant_table:
- spin_lock(&d->grant_table->lock);
+ mfn = xatp.idx;
+ page = mfn_to_page(mfn);
+ break;
+ }
+ default:
+ break;
+ }
- if ( d->grant_table->gt_version == 0 )
- d->grant_table->gt_version = 1;
+ if ( !paging_mode_translate(d) || (mfn == 0) )
+ {
+ if ( page )
+ put_page(page);
+ rcu_unlock_domain(d);
+ return -EINVAL;
+ }
- if ( d->grant_table->gt_version == 2 &&
- (xatp.idx & XENMAPIDX_grant_table_status) )
- {
- xatp.idx &= ~XENMAPIDX_grant_table_status;
- if ( xatp.idx < nr_status_frames(d->grant_table) )
- mfn = virt_to_mfn(d->grant_table->status[xatp.idx]);
- }
- else
- {
- if ( (xatp.idx >= nr_grant_frames(d->grant_table)) &&
- (xatp.idx < max_nr_grant_frames) )
- gnttab_grow_table(d, xatp.idx + 1);
+ domain_lock(d);
- if ( xatp.idx < nr_grant_frames(d->grant_table) )
- mfn = virt_to_mfn(d->grant_table->shared_raw[xatp.idx]);
- }
+ if ( page )
+ put_page(page);
- spin_unlock(&d->grant_table->lock);
- break;
- case XENMAPSPACE_gmfn:
- {
- p2m_type_t p2mt;
+ /* Remove previously mapped page if it was present. */
+ prev_mfn = gmfn_to_mfn(d, xatp.gpfn);
+ if ( mfn_valid(prev_mfn) )
+ {
+ if ( is_xen_heap_mfn(prev_mfn) )
+ /* Xen heap frames are simply unhooked from this phys slot. */
+ guest_physmap_remove_page(d, xatp.gpfn, prev_mfn, 0);
+ else
+ /* Normal domain memory is freed, to avoid leaking memory. */
+ guest_remove_page(d, xatp.gpfn);
+ }
- xatp.idx = mfn_x(gfn_to_mfn_unshare(d, xatp.idx, &p2mt));
- /* If the page is still shared, exit early */
- if ( p2m_is_shared(p2mt) )
- {
- rcu_unlock_domain(d);
- return -ENOMEM;
- }
- if ( !get_page_from_pagenr(xatp.idx, d) )
- break;
- mfn = xatp.idx;
- page = mfn_to_page(mfn);
- break;
- }
- default:
- break;
- }
+ /* Unmap from old location, if any. */
+ gpfn = get_gpfn_from_mfn(mfn);
+ ASSERT( gpfn != SHARED_M2P_ENTRY );
+ if ( gpfn != INVALID_M2P_ENTRY )
+ guest_physmap_remove_page(d, gpfn, mfn, 0);
- if ( !paging_mode_translate(d) || (mfn == 0) )
- {
- if ( page )
- put_page(page);
- rcu_unlock_domain(d);
- return -EINVAL;
- }
+ /* Map at new location. */
+ rc = guest_physmap_add_page(d, xatp.gpfn, mfn, 0);
- domain_lock(d);
+ domain_unlock(d);
- if ( page )
- put_page(page);
+ return rc;
+}
- /* Remove previously mapped page if it was present. */
- prev_mfn = gmfn_to_mfn(d, xatp.gpfn);
- if ( mfn_valid(prev_mfn) )
- {
- if ( is_xen_heap_mfn(prev_mfn) )
- /* Xen heap frames are simply unhooked from this phys slot. */
- guest_physmap_remove_page(d, xatp.gpfn, prev_mfn, 0);
- else
- /* Normal domain memory is freed, to avoid leaking memory. */
- guest_remove_page(d, xatp.gpfn);
- }
- /* Unmap from old location, if any. */
- gpfn = get_gpfn_from_mfn(mfn);
- ASSERT( gpfn != SHARED_M2P_ENTRY );
- if ( gpfn != INVALID_M2P_ENTRY )
- guest_physmap_remove_page(d, gpfn, mfn, 0);
+long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+{
+ int rc;
+
+ switch ( op )
+ {
+ case XENMEM_add_to_physmap:
+ {
+ struct xen_add_to_physmap xatp;
+ struct domain *d;
- /* Map at new location. */
- rc = guest_physmap_add_page(d, xatp.gpfn, mfn, 0);
+ if ( copy_from_guest(&xatp, arg, 1) )
+ return -EFAULT;
+
+ rc = rcu_lock_target_domain_by_id(xatp.domid, &d);
+ if ( rc != 0 )
+ return rc;
+
+ if ( xsm_add_to_physmap(current->domain, d) )
+ {
+ rcu_unlock_domain(d);
+ return -EPERM;
+ }
- domain_unlock(d);
+ rc = xenmem_add_to_physmap(d, xatp);
rcu_unlock_domain(d);
^ permalink raw reply related [flat|nested] 15+ messages in thread
* IOMMU, vtd and iotlb flush rework (v2)
@ 2011-11-07 15:16 Jean Guyader
2011-11-07 15:16 ` [PATCH 1/6] vtd: Refactor iotlb flush code Jean Guyader
0 siblings, 1 reply; 15+ messages in thread
From: Jean Guyader @ 2011-11-07 15:16 UTC (permalink / raw)
To: xen-devel; +Cc: tim, allen.m.kay
In one of my previous emails I detailed a bug I was seeing when passing
through an Intel GPU to a guest that has more than 4G of RAM.
Allen suggested that I go for Plan B, but after a discussion with Tim
we agreed that Plan B was way too disruptive in terms of code change.
This patch series implements Plan A.
http://xen.1045712.n5.nabble.com/VTD-Intel-iommu-IOTLB-flush-really-slow-td4952866.html
Changes between v1 and v2:
- Move .size in struct xen_add_to_physmap into the padding between .domid and .space.
- Store iommu_dont_flush per cpu.
- Change the code in hvmloader to relocate in batches of 64K; .size is now 16 bits.
Jean Guyader (6):
vtd: Refactor iotlb flush code
iommu: Introduce iommu_flush and iommu_flush_all.
add_to_physmap: Move the code for XENMEM_add_to_physmap.
mm: New XENMEM, XENMEM_add_to_physmap_gmfn_range
hvmloader: Change memory relocation loop when overlap with PCI hole.
Introduce per cpu flag (iommu_dont_flush_iotlb) to avoid unnecessary iotlb flush
tools/firmware/hvmloader/pci.c | 20 +++-
xen/arch/x86/mm.c | 203 +++++++++++++++++++++--------------
xen/drivers/passthrough/iommu.c | 25 +++++
xen/drivers/passthrough/vtd/iommu.c | 100 ++++++++++--------
xen/include/public/memory.h | 5 +-
xen/include/xen/iommu.h | 7 ++
6 files changed, 230 insertions(+), 130 deletions(-)
Jean
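The layout change described in the changelog above can be pictured as follows: the new 16-bit .size slots into what used to be padding between .domid and .space, so the structure's ABI size is unchanged. Field order follows the changelog; the comments are interpretation, not quoted header text.

struct xen_add_to_physmap {
    domid_t domid;       /* which domain to change the mapping for      */
    uint16_t size;       /* number of pages (gmfn range); fills the old */
                         /* padding between .domid and .space           */
    unsigned int space;  /* XENMAPSPACE_* source of the mapping         */
    xen_ulong_t idx;     /* index into the source mapping space         */
    xen_ulong_t gpfn;    /* GPFN where the mapping should appear        */
};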
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH 1/6] vtd: Refactor iotlb flush code
2011-11-07 15:16 IOMMU, vtd and iotlb flush rework (v2) Jean Guyader
@ 2011-11-07 15:16 ` Jean Guyader
2011-11-07 15:16 ` [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all Jean Guyader
0 siblings, 1 reply; 15+ messages in thread
From: Jean Guyader @ 2011-11-07 15:16 UTC (permalink / raw)
To: xen-devel; +Cc: tim, allen.m.kay, Jean Guyader
Factor the iotlb flush code from map_page and unmap_page out into
its own function.
Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
---
xen/drivers/passthrough/vtd/iommu.c | 86 +++++++++++++++++-----------------
1 files changed, 43 insertions(+), 43 deletions(-)
[-- Attachment #2: 0001-vtd-Refactor-iotlb-flush-code.patch --]
[-- Type: text/x-patch; name="0001-vtd-Refactor-iotlb-flush-code.patch", Size: 4122 bytes --]
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 7717ab4..5a5b6be 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -578,16 +578,53 @@ static void iommu_flush_all(void)
}
}
+static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
+ int dma_old_pte_present, unsigned int page_count)
+{
+ struct hvm_iommu *hd = domain_hvm_iommu(d);
+ struct acpi_drhd_unit *drhd;
+ struct iommu *iommu;
+ int flush_dev_iotlb;
+ int iommu_domid;
+
+ /*
+ * No need for pcidevs_lock here because we flush
+ * when we assign/deassign a device
+ */
+ for_each_drhd_unit ( drhd )
+ {
+ iommu = drhd->iommu;
+
+ if ( !test_bit(iommu->index, &hd->iommu_bitmap) )
+ continue;
+
+ flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
+ iommu_domid= domain_iommu_domid(d, iommu);
+ if ( iommu_domid == -1 )
+ continue;
+
+ if ( page_count > 1 || gfn == -1 )
+ {
+ if ( iommu_flush_iotlb_dsi(iommu, iommu_domid,
+ 0, flush_dev_iotlb) )
+ iommu_flush_write_buffer(iommu);
+ }
+ else
+ {
+ if ( iommu_flush_iotlb_psi(iommu, iommu_domid,
+ (paddr_t)gfn << PAGE_SHIFT_4K, 0,
+ !dma_old_pte_present, flush_dev_iotlb) )
+ iommu_flush_write_buffer(iommu);
+ }
+ }
+}
+
/* clear one page's page table */
static void dma_pte_clear_one(struct domain *domain, u64 addr)
{
struct hvm_iommu *hd = domain_hvm_iommu(domain);
- struct acpi_drhd_unit *drhd;
- struct iommu *iommu;
struct dma_pte *page = NULL, *pte = NULL;
u64 pg_maddr;
- int flush_dev_iotlb;
- int iommu_domid;
struct mapped_rmrr *mrmrr;
spin_lock(&hd->mapping_lock);
@@ -613,21 +650,7 @@ static void dma_pte_clear_one(struct domain *domain, u64 addr)
spin_unlock(&hd->mapping_lock);
iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
- /* No need pcidevs_lock here since do that on assign/deassign device*/
- for_each_drhd_unit ( drhd )
- {
- iommu = drhd->iommu;
- if ( test_bit(iommu->index, &hd->iommu_bitmap) )
- {
- flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
- iommu_domid= domain_iommu_domid(domain, iommu);
- if ( iommu_domid == -1 )
- continue;
- if ( iommu_flush_iotlb_psi(iommu, iommu_domid, addr,
- 0, 0, flush_dev_iotlb) )
- iommu_flush_write_buffer(iommu);
- }
- }
+ __intel_iommu_iotlb_flush(domain, addr >> PAGE_SHIFT_4K, 0, 1);
unmap_vtd_domain_page(page);
@@ -1677,12 +1700,8 @@ static int intel_iommu_map_page(
unsigned int flags)
{
struct hvm_iommu *hd = domain_hvm_iommu(d);
- struct acpi_drhd_unit *drhd;
- struct iommu *iommu;
struct dma_pte *page = NULL, *pte = NULL, old, new = { 0 };
u64 pg_maddr;
- int flush_dev_iotlb;
- int iommu_domid;
/* Do nothing if VT-d shares EPT page table */
if ( iommu_use_hap_pt(d) )
@@ -1724,26 +1743,7 @@ static int intel_iommu_map_page(
spin_unlock(&hd->mapping_lock);
unmap_vtd_domain_page(page);
- /*
- * No need pcideves_lock here because we have flush
- * when assign/deassign device
- */
- for_each_drhd_unit ( drhd )
- {
- iommu = drhd->iommu;
-
- if ( !test_bit(iommu->index, &hd->iommu_bitmap) )
- continue;
-
- flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
- iommu_domid= domain_iommu_domid(d, iommu);
- if ( iommu_domid == -1 )
- continue;
- if ( iommu_flush_iotlb_psi(iommu, iommu_domid,
- (paddr_t)gfn << PAGE_SHIFT_4K, 0,
- !dma_pte_present(old), flush_dev_iotlb) )
- iommu_flush_write_buffer(iommu);
- }
+ __intel_iommu_iotlb_flush(d, gfn, dma_pte_present(old), 1);
return 0;
}
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all.
2011-11-07 15:16 ` [PATCH 1/6] vtd: Refactor iotlb flush code Jean Guyader
@ 2011-11-07 15:16 ` Jean Guyader
2011-11-07 15:16 ` [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap Jean Guyader
0 siblings, 1 reply; 15+ messages in thread
From: Jean Guyader @ 2011-11-07 15:16 UTC (permalink / raw)
To: xen-devel; +Cc: tim, allen.m.kay, Jean Guyader
Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
---
xen/drivers/passthrough/iommu.c | 20 ++++++++++++++++++++
xen/drivers/passthrough/vtd/iommu.c | 12 ++++++++++++
xen/include/xen/iommu.h | 5 +++++
3 files changed, 37 insertions(+), 0 deletions(-)
[-- Attachment #2: 0002-iommu-Introduce-iommu_flush-and-iommu_flush_all.patch --]
[-- Type: text/x-patch; name="0002-iommu-Introduce-iommu_flush-and-iommu_flush_all.patch", Size: 2742 bytes --]
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index cd6174d..9c62861 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -301,6 +301,26 @@ int iommu_unmap_page(struct domain *d, unsigned long gfn)
return hd->platform_ops->unmap_page(d, gfn);
}
+void iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count)
+{
+ struct hvm_iommu *hd = domain_hvm_iommu(d);
+
+ if ( !iommu_enabled || !hd->platform_ops || !hd->platform_ops->iotlb_flush )
+ return;
+
+ hd->platform_ops->iotlb_flush(d, gfn, page_count);
+}
+
+void iommu_iotlb_flush_all(struct domain *d)
+{
+ struct hvm_iommu *hd = domain_hvm_iommu(d);
+
+ if ( !iommu_enabled || !hd->platform_ops )
+ return;
+
+ hd->platform_ops->iotlb_flush_all(d);
+}
+
/* caller should hold the pcidevs_lock */
int deassign_device(struct domain *d, u16 seg, u8 bus, u8 devfn)
{
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 5a5b6be..7ec9541 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -619,6 +619,16 @@ static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
}
}
+static void intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count)
+{
+ __intel_iommu_iotlb_flush(d, gfn, 1, page_count);
+}
+
+static void intel_iommu_iotlb_flush_all(struct domain *d)
+{
+ __intel_iommu_iotlb_flush(d, 0, 0, 0);
+}
+
/* clear one page's page table */
static void dma_pte_clear_one(struct domain *domain, u64 addr)
{
@@ -2329,6 +2339,8 @@ const struct iommu_ops intel_iommu_ops = {
.resume = vtd_resume,
.share_p2m = iommu_set_pgd,
.crash_shutdown = vtd_crash_shutdown,
+ .iotlb_flush = intel_iommu_iotlb_flush,
+ .iotlb_flush_all = intel_iommu_iotlb_flush_all,
};
/*
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 837e60d..a1034df 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -139,6 +139,8 @@ struct iommu_ops {
void (*resume)(void);
void (*share_p2m)(struct domain *d);
void (*crash_shutdown)(void);
+ void (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int page_count);
+ void (*iotlb_flush_all)(struct domain *d);
};
void iommu_update_ire_from_apic(unsigned int apic, unsigned int reg, unsigned int value);
@@ -155,4 +157,7 @@ void iommu_share_p2m_table(struct domain *d);
int iommu_do_domctl(struct xen_domctl *, XEN_GUEST_HANDLE(xen_domctl_t));
+void iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count);
+void iommu_iotlb_flush_all(struct domain *d);
+
#endif /* _IOMMU_H_ */
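Note that this v2 form of iommu_iotlb_flush_all checks hd->platform_ops but then dereferences iotlb_flush_all unconditionally; the v3 changelog earlier in the thread adds the missing test, i.e. the guard becomes:

    if ( !iommu_enabled || !hd->platform_ops ||
         !hd->platform_ops->iotlb_flush_all )
        return;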
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap.
2011-11-07 15:16 ` [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all Jean Guyader
@ 2011-11-07 15:16 ` Jean Guyader
0 siblings, 0 replies; 15+ messages in thread
From: Jean Guyader @ 2011-11-07 15:16 UTC (permalink / raw)
To: xen-devel; +Cc: tim, allen.m.kay, Jean Guyader
Move the code for the XENMEM_add_to_physmap case into its own
function (xenmem_add_to_physmap).
Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
---
xen/arch/x86/mm.c | 188 ++++++++++++++++++++++++++++-------------------------
1 files changed, 99 insertions(+), 89 deletions(-)
[-- Attachment #2: 0003-add_to_physmap-Move-the-code-for-XENMEM_add_to_physm.patch --]
[-- Type: text/x-patch; name="0003-add_to_physmap-Move-the-code-for-XENMEM_add_to_physm.patch", Size: 7065 bytes --]
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index acc1f34..f75011e 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4592,119 +4592,129 @@ static int handle_iomem_range(unsigned long s, unsigned long e, void *p)
return 0;
}
-long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+static int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap xatp)
{
struct page_info *page = NULL;
+ unsigned long prev_mfn, mfn = 0, gpfn;
int rc;
- switch ( op )
- {
- case XENMEM_add_to_physmap:
+ switch ( xatp.space )
{
- struct xen_add_to_physmap xatp;
- unsigned long prev_mfn, mfn = 0, gpfn;
- struct domain *d;
-
- if ( copy_from_guest(&xatp, arg, 1) )
- return -EFAULT;
+ case XENMAPSPACE_shared_info:
+ if ( xatp.idx == 0 )
+ mfn = virt_to_mfn(d->shared_info);
+ break;
+ case XENMAPSPACE_grant_table:
+ spin_lock(&d->grant_table->lock);
- rc = rcu_lock_target_domain_by_id(xatp.domid, &d);
- if ( rc != 0 )
- return rc;
+ if ( d->grant_table->gt_version == 0 )
+ d->grant_table->gt_version = 1;
- if ( xsm_add_to_physmap(current->domain, d) )
+ if ( d->grant_table->gt_version == 2 &&
+ (xatp.idx & XENMAPIDX_grant_table_status) )
{
- rcu_unlock_domain(d);
- return -EPERM;
+ xatp.idx &= ~XENMAPIDX_grant_table_status;
+ if ( xatp.idx < nr_status_frames(d->grant_table) )
+ mfn = virt_to_mfn(d->grant_table->status[xatp.idx]);
+ }
+ else
+ {
+ if ( (xatp.idx >= nr_grant_frames(d->grant_table)) &&
+ (xatp.idx < max_nr_grant_frames) )
+ gnttab_grow_table(d, xatp.idx + 1);
+
+ if ( xatp.idx < nr_grant_frames(d->grant_table) )
+ mfn = virt_to_mfn(d->grant_table->shared_raw[xatp.idx]);
}
- switch ( xatp.space )
+ spin_unlock(&d->grant_table->lock);
+ break;
+ case XENMAPSPACE_gmfn:
+ {
+ p2m_type_t p2mt;
+
+ xatp.idx = mfn_x(gfn_to_mfn_unshare(d, xatp.idx, &p2mt));
+ /* If the page is still shared, exit early */
+ if ( p2m_is_shared(p2mt) )
{
- case XENMAPSPACE_shared_info:
- if ( xatp.idx == 0 )
- mfn = virt_to_mfn(d->shared_info);
+ rcu_unlock_domain(d);
+ return -ENOMEM;
+ }
+ if ( !get_page_from_pagenr(xatp.idx, d) )
break;
- case XENMAPSPACE_grant_table:
- spin_lock(&d->grant_table->lock);
+ mfn = xatp.idx;
+ page = mfn_to_page(mfn);
+ break;
+ }
+ default:
+ break;
+ }
- if ( d->grant_table->gt_version == 0 )
- d->grant_table->gt_version = 1;
+ if ( !paging_mode_translate(d) || (mfn == 0) )
+ {
+ if ( page )
+ put_page(page);
+ rcu_unlock_domain(d);
+ return -EINVAL;
+ }
- if ( d->grant_table->gt_version == 2 &&
- (xatp.idx & XENMAPIDX_grant_table_status) )
- {
- xatp.idx &= ~XENMAPIDX_grant_table_status;
- if ( xatp.idx < nr_status_frames(d->grant_table) )
- mfn = virt_to_mfn(d->grant_table->status[xatp.idx]);
- }
- else
- {
- if ( (xatp.idx >= nr_grant_frames(d->grant_table)) &&
- (xatp.idx < max_nr_grant_frames) )
- gnttab_grow_table(d, xatp.idx + 1);
+ domain_lock(d);
- if ( xatp.idx < nr_grant_frames(d->grant_table) )
- mfn = virt_to_mfn(d->grant_table->shared_raw[xatp.idx]);
- }
+ if ( page )
+ put_page(page);
- spin_unlock(&d->grant_table->lock);
- break;
- case XENMAPSPACE_gmfn:
- {
- p2m_type_t p2mt;
+ /* Remove previously mapped page if it was present. */
+ prev_mfn = gmfn_to_mfn(d, xatp.gpfn);
+ if ( mfn_valid(prev_mfn) )
+ {
+ if ( is_xen_heap_mfn(prev_mfn) )
+ /* Xen heap frames are simply unhooked from this phys slot. */
+ guest_physmap_remove_page(d, xatp.gpfn, prev_mfn, 0);
+ else
+ /* Normal domain memory is freed, to avoid leaking memory. */
+ guest_remove_page(d, xatp.gpfn);
+ }
- xatp.idx = mfn_x(gfn_to_mfn_unshare(d, xatp.idx, &p2mt));
- /* If the page is still shared, exit early */
- if ( p2m_is_shared(p2mt) )
- {
- rcu_unlock_domain(d);
- return -ENOMEM;
- }
- if ( !get_page_from_pagenr(xatp.idx, d) )
- break;
- mfn = xatp.idx;
- page = mfn_to_page(mfn);
- break;
- }
- default:
- break;
- }
+ /* Unmap from old location, if any. */
+ gpfn = get_gpfn_from_mfn(mfn);
+ ASSERT( gpfn != SHARED_M2P_ENTRY );
+ if ( gpfn != INVALID_M2P_ENTRY )
+ guest_physmap_remove_page(d, gpfn, mfn, 0);
- if ( !paging_mode_translate(d) || (mfn == 0) )
- {
- if ( page )
- put_page(page);
- rcu_unlock_domain(d);
- return -EINVAL;
- }
+ /* Map at new location. */
+ rc = guest_physmap_add_page(d, xatp.gpfn, mfn, 0);
- domain_lock(d);
+ domain_unlock(d);
- if ( page )
- put_page(page);
+ return rc;
+}
- /* Remove previously mapped page if it was present. */
- prev_mfn = gmfn_to_mfn(d, xatp.gpfn);
- if ( mfn_valid(prev_mfn) )
- {
- if ( is_xen_heap_mfn(prev_mfn) )
- /* Xen heap frames are simply unhooked from this phys slot. */
- guest_physmap_remove_page(d, xatp.gpfn, prev_mfn, 0);
- else
- /* Normal domain memory is freed, to avoid leaking memory. */
- guest_remove_page(d, xatp.gpfn);
- }
- /* Unmap from old location, if any. */
- gpfn = get_gpfn_from_mfn(mfn);
- ASSERT( gpfn != SHARED_M2P_ENTRY );
- if ( gpfn != INVALID_M2P_ENTRY )
- guest_physmap_remove_page(d, gpfn, mfn, 0);
+long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+{
+ int rc;
+
+ switch ( op )
+ {
+ case XENMEM_add_to_physmap:
+ {
+ struct xen_add_to_physmap xatp;
+ struct domain *d;
- /* Map at new location. */
- rc = guest_physmap_add_page(d, xatp.gpfn, mfn, 0);
+ if ( copy_from_guest(&xatp, arg, 1) )
+ return -EFAULT;
+
+ rc = rcu_lock_target_domain_by_id(xatp.domid, &d);
+ if ( rc != 0 )
+ return rc;
+
+ if ( xsm_add_to_physmap(current->domain, d) )
+ {
+ rcu_unlock_domain(d);
+ return -EPERM;
+ }
- domain_unlock(d);
+ rc = xenmem_add_to_physmap(d, xatp);
rcu_unlock_domain(d);
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH 0/6] IOMMU, vtd and iotlb flush rework
@ 2011-11-04 10:38 Jean Guyader
2011-11-04 10:38 ` [PATCH 1/6] vtd: Refactor iotlb flush code Jean Guyader
0 siblings, 1 reply; 15+ messages in thread
From: Jean Guyader @ 2011-11-04 10:38 UTC (permalink / raw)
To: xen-devel; +Cc: allen.m.kay, tim
In one of my previous emails I detailed a bug I was seeing when passing
through an Intel GPU to a guest that has more than 4G of RAM.
Allen suggested that I go for Plan B, but after a discussion with Tim
we agreed that Plan B was far too disruptive in terms of code change.
This patch series implements Plan A.
http://xen.1045712.n5.nabble.com/VTD-Intel-iommu-IOTLB-flush-really-slow-td4952866.html
Jean Guyader (6):
vtd: Refactor iotlb flush code
iommu: Introduce iommu_flush and iommu_flush_all.
add_to_physmap: Move the code for XENMEM_add_to_physmap.
mm: Add new map space for add_to_physmap, XENMAPSPACE_gmfn_range.
hvmloader: Change memory relocation loop when overlap with PCI hole.
Introduce domain flag (dont_flush_iotlb) to avoid unnecessary iotlb flush.
tools/firmware/hvmloader/pci.c | 20 +++-
xen/arch/x86/mm.c | 197 ++++++++++++++++++++---------------
xen/drivers/passthrough/iommu.c | 26 +++++
xen/drivers/passthrough/vtd/iommu.c | 100 ++++++++++--------
xen/include/public/memory.h | 4 +
xen/include/xen/iommu.h | 5 +
xen/include/xen/sched.h | 1 +
7 files changed, 222 insertions(+), 131 deletions(-)
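As an illustration of where the series is heading, here is a minimal sketch of how a caller such as hvmloader might use the new range map space. This is illustrative only: the `size` field and the exact XENMAPSPACE_gmfn_range semantics are assumed from patch 4/6 (not quoted in this excerpt), and hypercall_memory_op stands in for whatever hypercall wrapper the caller already has.

/* Sketch only: remap nr_pages guest frames in a single
 * XENMEM_add_to_physmap call instead of one call per page.
 * Assumes the 'size' field added by patch 4/6 of this series. */
#include <xen/memory.h>

static int remap_range(unsigned long src_gmfn, unsigned long dst_gpfn,
                       unsigned long nr_pages)
{
    struct xen_add_to_physmap xatp = {
        .domid = DOMID_SELF,
        .space = XENMAPSPACE_gmfn_range,  /* new map space (patch 4/6) */
        .idx   = src_gmfn,                /* first source frame */
        .gpfn  = dst_gpfn,                /* first destination frame */
        .size  = nr_pages,                /* frames handled in one call */
    };

    /* Xen may internally break this up with hypercall continuations;
     * the caller only sees the final return code. */
    return hypercall_memory_op(XENMEM_add_to_physmap, &xatp);
}

With an interface along these lines, hvmloader's per-page relocation loop (reworked in patch 5/6) shrinks to a handful of hypercalls, and the hypervisor can defer the IOTLB flush to the end of each range.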
^ permalink raw reply [flat|nested] 15+ messages in thread

* [PATCH 1/6] vtd: Refactor iotlb flush code
2011-11-04 10:38 [PATCH 0/6] IOMMU, vtd and iotlb flush rework Jean Guyader
@ 2011-11-04 10:38 ` Jean Guyader
2011-11-04 10:38 ` [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all Jean Guyader
0 siblings, 1 reply; 15+ messages in thread
From: Jean Guyader @ 2011-11-04 10:38 UTC (permalink / raw)
To: xen-devel; +Cc: allen.m.kay, tim, Jean Guyader
[-- Attachment #1: Type: text/plain, Size: 279 bytes --]
Factorize the iotlb flush code from map_page and unmap_page into
its own function.
Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
---
xen/drivers/passthrough/vtd/iommu.c | 86 +++++++++++++++++-----------------
1 files changed, 43 insertions(+), 43 deletions(-)
[-- Attachment #2: 0001-vtd-Refactor-iotlb-flush-code.patch --]
[-- Type: text/x-patch; name="0001-vtd-Refactor-iotlb-flush-code.patch", Size: 4122 bytes --]
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 7717ab4..5a5b6be 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -578,16 +578,53 @@ static void iommu_flush_all(void)
}
}
+static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
+ int dma_old_pte_present, unsigned int page_count)
+{
+ struct hvm_iommu *hd = domain_hvm_iommu(d);
+ struct acpi_drhd_unit *drhd;
+ struct iommu *iommu;
+ int flush_dev_iotlb;
+ int iommu_domid;
+
+ /*
+ * No need pcideves_lock here because we have flush
+ * when assign/deassign device
+ */
+ for_each_drhd_unit ( drhd )
+ {
+ iommu = drhd->iommu;
+
+ if ( !test_bit(iommu->index, &hd->iommu_bitmap) )
+ continue;
+
+ flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
+ iommu_domid= domain_iommu_domid(d, iommu);
+ if ( iommu_domid == -1 )
+ continue;
+
+ if ( page_count > 1 || gfn == -1 )
+ {
+ if ( iommu_flush_iotlb_dsi(iommu, iommu_domid,
+ 0, flush_dev_iotlb) )
+ iommu_flush_write_buffer(iommu);
+ }
+ else
+ {
+ if ( iommu_flush_iotlb_psi(iommu, iommu_domid,
+ (paddr_t)gfn << PAGE_SHIFT_4K, 0,
+ !dma_old_pte_present, flush_dev_iotlb) )
+ iommu_flush_write_buffer(iommu);
+ }
+ }
+}
+
/* clear one page's page table */
static void dma_pte_clear_one(struct domain *domain, u64 addr)
{
struct hvm_iommu *hd = domain_hvm_iommu(domain);
- struct acpi_drhd_unit *drhd;
- struct iommu *iommu;
struct dma_pte *page = NULL, *pte = NULL;
u64 pg_maddr;
- int flush_dev_iotlb;
- int iommu_domid;
struct mapped_rmrr *mrmrr;
spin_lock(&hd->mapping_lock);
@@ -613,21 +650,7 @@ static void dma_pte_clear_one(struct domain *domain, u64 addr)
spin_unlock(&hd->mapping_lock);
iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
- /* No need pcidevs_lock here since do that on assign/deassign device*/
- for_each_drhd_unit ( drhd )
- {
- iommu = drhd->iommu;
- if ( test_bit(iommu->index, &hd->iommu_bitmap) )
- {
- flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
- iommu_domid= domain_iommu_domid(domain, iommu);
- if ( iommu_domid == -1 )
- continue;
- if ( iommu_flush_iotlb_psi(iommu, iommu_domid, addr,
- 0, 0, flush_dev_iotlb) )
- iommu_flush_write_buffer(iommu);
- }
- }
+ __intel_iommu_iotlb_flush(domain, addr >> PAGE_SHIFT_4K , 0, 1);
unmap_vtd_domain_page(page);
@@ -1677,12 +1700,8 @@ static int intel_iommu_map_page(
unsigned int flags)
{
struct hvm_iommu *hd = domain_hvm_iommu(d);
- struct acpi_drhd_unit *drhd;
- struct iommu *iommu;
struct dma_pte *page = NULL, *pte = NULL, old, new = { 0 };
u64 pg_maddr;
- int flush_dev_iotlb;
- int iommu_domid;
/* Do nothing if VT-d shares EPT page table */
if ( iommu_use_hap_pt(d) )
@@ -1724,26 +1743,7 @@ static int intel_iommu_map_page(
spin_unlock(&hd->mapping_lock);
unmap_vtd_domain_page(page);
- /*
- * No need pcideves_lock here because we have flush
- * when assign/deassign device
- */
- for_each_drhd_unit ( drhd )
- {
- iommu = drhd->iommu;
-
- if ( !test_bit(iommu->index, &hd->iommu_bitmap) )
- continue;
-
- flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
- iommu_domid= domain_iommu_domid(d, iommu);
- if ( iommu_domid == -1 )
- continue;
- if ( iommu_flush_iotlb_psi(iommu, iommu_domid,
- (paddr_t)gfn << PAGE_SHIFT_4K, 0,
- !dma_pte_present(old), flush_dev_iotlb) )
- iommu_flush_write_buffer(iommu);
- }
+ __intel_iommu_iotlb_flush(d, gfn, dma_pte_present(old), 1);
return 0;
}
^ permalink raw reply related [flat|nested] 15+ messages in thread

* [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all.
2011-11-04 10:38 ` [PATCH 1/6] vtd: Refactor iotlb flush code Jean Guyader
@ 2011-11-04 10:38 ` Jean Guyader
2011-11-04 10:38 ` [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap Jean Guyader
0 siblings, 1 reply; 15+ messages in thread
From: Jean Guyader @ 2011-11-04 10:38 UTC (permalink / raw)
To: xen-devel; +Cc: allen.m.kay, tim, Jean Guyader
[-- Attachment #1: Type: text/plain, Size: 286 bytes --]
Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
---
xen/drivers/passthrough/iommu.c | 20 ++++++++++++++++++++
xen/drivers/passthrough/vtd/iommu.c | 12 ++++++++++++
xen/include/xen/iommu.h | 5 +++++
3 files changed, 37 insertions(+), 0 deletions(-)
[-- Attachment #2: 0002-iommu-Introduce-iommu_flush-and-iommu_flush_all.patch --]
[-- Type: text/x-patch; name="0002-iommu-Introduce-iommu_flush-and-iommu_flush_all.patch", Size: 2742 bytes --]
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index cd6174d..9c62861 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -301,6 +301,26 @@ int iommu_unmap_page(struct domain *d, unsigned long gfn)
return hd->platform_ops->unmap_page(d, gfn);
}
+void iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count)
+{
+ struct hvm_iommu *hd = domain_hvm_iommu(d);
+
+ if ( !iommu_enabled || !hd->platform_ops || !hd->platform_ops->iotlb_flush )
+ return;
+
+ hd->platform_ops->iotlb_flush(d, gfn, page_count);
+}
+
+void iommu_iotlb_flush_all(struct domain *d)
+{
+ struct hvm_iommu *hd = domain_hvm_iommu(d);
+
+ if ( !iommu_enabled || !hd->platform_ops )
+ return;
+
+ hd->platform_ops->iotlb_flush_all(d);
+}
+
/* caller should hold the pcidevs_lock */
int deassign_device(struct domain *d, u16 seg, u8 bus, u8 devfn)
{
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 5a5b6be..7ec9541 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -619,6 +619,16 @@ static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
}
}
+static void intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count)
+{
+ __intel_iommu_iotlb_flush(d, gfn, 1, page_count);
+}
+
+static void intel_iommu_iotlb_flush_all(struct domain *d)
+{
+ __intel_iommu_iotlb_flush(d, 0, 0, 0);
+}
+
/* clear one page's page table */
static void dma_pte_clear_one(struct domain *domain, u64 addr)
{
@@ -2329,6 +2339,8 @@ const struct iommu_ops intel_iommu_ops = {
.resume = vtd_resume,
.share_p2m = iommu_set_pgd,
.crash_shutdown = vtd_crash_shutdown,
+ .iotlb_flush = intel_iommu_iotlb_flush,
+ .iotlb_flush_all = intel_iommu_iotlb_flush_all,
};
/*
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 837e60d..a1034df 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -139,6 +139,8 @@ struct iommu_ops {
void (*resume)(void);
void (*share_p2m)(struct domain *d);
void (*crash_shutdown)(void);
+ void (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int page_count);
+ void (*iotlb_flush_all)(struct domain *d);
};
void iommu_update_ire_from_apic(unsigned int apic, unsigned int reg, unsigned int value);
@@ -155,4 +157,7 @@ void iommu_share_p2m_table(struct domain *d);
int iommu_do_domctl(struct xen_domctl *, XEN_GUEST_HANDLE(xen_domctl_t));
+void iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count);
+void iommu_iotlb_flush_all(struct domain *d);
+
#endif /* _IOMMU_H_ */
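
To make the intent behind these hooks concrete, here is a small sketch of the batching pattern they enable. map_gfn_range is a hypothetical helper invented for illustration; the per-page flush suppression it alludes to only arrives with patch 6/6 (dont_flush_iotlb), while the iommu_map_page and iommu_iotlb_flush calls match the tree this patch applies to.

/* Hypothetical helper, for illustration only: map a run of pages,
 * then issue one ranged IOTLB flush instead of a page-selective
 * flush per mapping. */
static int map_gfn_range(struct domain *d, unsigned long gfn,
                         unsigned long mfn, unsigned int nr_pages,
                         unsigned int flags)
{
    unsigned int i;
    int rc = 0;

    /* With patch 6/6's dont_flush_iotlb set, iommu_map_page() would
     * skip its per-page flush inside this loop. */
    for ( i = 0; i < nr_pages && !rc; i++ )
        rc = iommu_map_page(d, gfn + i, mfn + i, flags);

    /* One flush covers the whole range; on VT-d a multi-page count
     * falls back to a domain-selective flush (see patch 1/6). */
    iommu_iotlb_flush(d, gfn, nr_pages);

    return rc;
}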
^ permalink raw reply related [flat|nested] 15+ messages in thread

* [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap.
2011-11-04 10:38 ` [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all Jean Guyader
@ 2011-11-04 10:38 ` Jean Guyader
0 siblings, 0 replies; 15+ messages in thread
From: Jean Guyader @ 2011-11-04 10:38 UTC (permalink / raw)
To: xen-devel; +Cc: allen.m.kay, tim, Jean Guyader
[-- Attachment #1: Type: text/plain, Size: 293 bytes --]
Move the code for the XENMEM_add_to_physmap case into its own
function (xenmem_add_to_physmap).
Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
---
xen/arch/x86/mm.c | 188 ++++++++++++++++++++++++++++-------------------------
1 files changed, 99 insertions(+), 89 deletions(-)
[-- Attachment #2: 0003-add_to_physmap-Move-the-code-for-XENMEM_add_to_physm.patch --]
[-- Type: text/x-patch; name="0003-add_to_physmap-Move-the-code-for-XENMEM_add_to_physm.patch", Size: 7065 bytes --]
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index acc1f34..f75011e 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4592,119 +4592,129 @@ static int handle_iomem_range(unsigned long s, unsigned long e, void *p)
return 0;
}
-long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+static int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap xatp)
{
struct page_info *page = NULL;
+ unsigned long prev_mfn, mfn = 0, gpfn;
int rc;
- switch ( op )
- {
- case XENMEM_add_to_physmap:
+ switch ( xatp.space )
{
- struct xen_add_to_physmap xatp;
- unsigned long prev_mfn, mfn = 0, gpfn;
- struct domain *d;
-
- if ( copy_from_guest(&xatp, arg, 1) )
- return -EFAULT;
+ case XENMAPSPACE_shared_info:
+ if ( xatp.idx == 0 )
+ mfn = virt_to_mfn(d->shared_info);
+ break;
+ case XENMAPSPACE_grant_table:
+ spin_lock(&d->grant_table->lock);
- rc = rcu_lock_target_domain_by_id(xatp.domid, &d);
- if ( rc != 0 )
- return rc;
+ if ( d->grant_table->gt_version == 0 )
+ d->grant_table->gt_version = 1;
- if ( xsm_add_to_physmap(current->domain, d) )
+ if ( d->grant_table->gt_version == 2 &&
+ (xatp.idx & XENMAPIDX_grant_table_status) )
{
- rcu_unlock_domain(d);
- return -EPERM;
+ xatp.idx &= ~XENMAPIDX_grant_table_status;
+ if ( xatp.idx < nr_status_frames(d->grant_table) )
+ mfn = virt_to_mfn(d->grant_table->status[xatp.idx]);
+ }
+ else
+ {
+ if ( (xatp.idx >= nr_grant_frames(d->grant_table)) &&
+ (xatp.idx < max_nr_grant_frames) )
+ gnttab_grow_table(d, xatp.idx + 1);
+
+ if ( xatp.idx < nr_grant_frames(d->grant_table) )
+ mfn = virt_to_mfn(d->grant_table->shared_raw[xatp.idx]);
}
- switch ( xatp.space )
+ spin_unlock(&d->grant_table->lock);
+ break;
+ case XENMAPSPACE_gmfn:
+ {
+ p2m_type_t p2mt;
+
+ xatp.idx = mfn_x(gfn_to_mfn_unshare(d, xatp.idx, &p2mt));
+ /* If the page is still shared, exit early */
+ if ( p2m_is_shared(p2mt) )
{
- case XENMAPSPACE_shared_info:
- if ( xatp.idx == 0 )
- mfn = virt_to_mfn(d->shared_info);
+ rcu_unlock_domain(d);
+ return -ENOMEM;
+ }
+ if ( !get_page_from_pagenr(xatp.idx, d) )
break;
- case XENMAPSPACE_grant_table:
- spin_lock(&d->grant_table->lock);
+ mfn = xatp.idx;
+ page = mfn_to_page(mfn);
+ break;
+ }
+ default:
+ break;
+ }
- if ( d->grant_table->gt_version == 0 )
- d->grant_table->gt_version = 1;
+ if ( !paging_mode_translate(d) || (mfn == 0) )
+ {
+ if ( page )
+ put_page(page);
+ rcu_unlock_domain(d);
+ return -EINVAL;
+ }
- if ( d->grant_table->gt_version == 2 &&
- (xatp.idx & XENMAPIDX_grant_table_status) )
- {
- xatp.idx &= ~XENMAPIDX_grant_table_status;
- if ( xatp.idx < nr_status_frames(d->grant_table) )
- mfn = virt_to_mfn(d->grant_table->status[xatp.idx]);
- }
- else
- {
- if ( (xatp.idx >= nr_grant_frames(d->grant_table)) &&
- (xatp.idx < max_nr_grant_frames) )
- gnttab_grow_table(d, xatp.idx + 1);
+ domain_lock(d);
- if ( xatp.idx < nr_grant_frames(d->grant_table) )
- mfn = virt_to_mfn(d->grant_table->shared_raw[xatp.idx]);
- }
+ if ( page )
+ put_page(page);
- spin_unlock(&d->grant_table->lock);
- break;
- case XENMAPSPACE_gmfn:
- {
- p2m_type_t p2mt;
+ /* Remove previously mapped page if it was present. */
+ prev_mfn = gmfn_to_mfn(d, xatp.gpfn);
+ if ( mfn_valid(prev_mfn) )
+ {
+ if ( is_xen_heap_mfn(prev_mfn) )
+ /* Xen heap frames are simply unhooked from this phys slot. */
+ guest_physmap_remove_page(d, xatp.gpfn, prev_mfn, 0);
+ else
+ /* Normal domain memory is freed, to avoid leaking memory. */
+ guest_remove_page(d, xatp.gpfn);
+ }
- xatp.idx = mfn_x(gfn_to_mfn_unshare(d, xatp.idx, &p2mt));
- /* If the page is still shared, exit early */
- if ( p2m_is_shared(p2mt) )
- {
- rcu_unlock_domain(d);
- return -ENOMEM;
- }
- if ( !get_page_from_pagenr(xatp.idx, d) )
- break;
- mfn = xatp.idx;
- page = mfn_to_page(mfn);
- break;
- }
- default:
- break;
- }
+ /* Unmap from old location, if any. */
+ gpfn = get_gpfn_from_mfn(mfn);
+ ASSERT( gpfn != SHARED_M2P_ENTRY );
+ if ( gpfn != INVALID_M2P_ENTRY )
+ guest_physmap_remove_page(d, gpfn, mfn, 0);
- if ( !paging_mode_translate(d) || (mfn == 0) )
- {
- if ( page )
- put_page(page);
- rcu_unlock_domain(d);
- return -EINVAL;
- }
+ /* Map at new location. */
+ rc = guest_physmap_add_page(d, xatp.gpfn, mfn, 0);
- domain_lock(d);
+ domain_unlock(d);
- if ( page )
- put_page(page);
+ return rc;
+}
- /* Remove previously mapped page if it was present. */
- prev_mfn = gmfn_to_mfn(d, xatp.gpfn);
- if ( mfn_valid(prev_mfn) )
- {
- if ( is_xen_heap_mfn(prev_mfn) )
- /* Xen heap frames are simply unhooked from this phys slot. */
- guest_physmap_remove_page(d, xatp.gpfn, prev_mfn, 0);
- else
- /* Normal domain memory is freed, to avoid leaking memory. */
- guest_remove_page(d, xatp.gpfn);
- }
- /* Unmap from old location, if any. */
- gpfn = get_gpfn_from_mfn(mfn);
- ASSERT( gpfn != SHARED_M2P_ENTRY );
- if ( gpfn != INVALID_M2P_ENTRY )
- guest_physmap_remove_page(d, gpfn, mfn, 0);
+long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+{
+ int rc;
+
+ switch ( op )
+ {
+ case XENMEM_add_to_physmap:
+ {
+ struct xen_add_to_physmap xatp;
+ struct domain *d;
- /* Map at new location. */
- rc = guest_physmap_add_page(d, xatp.gpfn, mfn, 0);
+ if ( copy_from_guest(&xatp, arg, 1) )
+ return -EFAULT;
+
+ rc = rcu_lock_target_domain_by_id(xatp.domid, &d);
+ if ( rc != 0 )
+ return rc;
+
+ if ( xsm_add_to_physmap(current->domain, d) )
+ {
+ rcu_unlock_domain(d);
+ return -EPERM;
+ }
- domain_unlock(d);
+ xenmem_add_to_physmap(d, xatp);
rcu_unlock_domain(d);
^ permalink raw reply related [flat|nested] 15+ messages in thread
end of thread, other threads: [~2011-11-21  8:39 UTC | newest]

Thread overview: 15+ messages (entries below jump to the message on this page)
[not found] <20111113175006.1EF1C72C36F@homiemail-mx8.g.dreamhost.com>
2011-11-14 14:17 ` [PATCH 3/6] add_to_physmap: Move the code for > XENMEM_add_to_physmap Andres Lagar-Cavilla
2011-11-16 19:25 [PATCH 0/6] IOMMU, vtd and iotlb flush rework (v8) Jean Guyader
2011-11-16 19:25 ` [PATCH 1/6] vtd: Refactor iotlb flush code Jean Guyader
2011-11-16 19:25 ` [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all Jean Guyader
2011-11-16 19:25 ` [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap Jean Guyader
2011-11-19 21:58 ` Olaf Hering
2011-11-19 22:14 ` Keir Fraser
2011-11-19 22:37 ` Jean Guyader
2011-11-20 13:25 ` Olaf Hering
2011-11-20 19:59 ` Keir Fraser
2011-11-21 8:39 ` Olaf Hering
-- strict thread matches above, loose matches on Subject: below --
2011-11-13 17:40 [PATCH 0/6] IOMMU, vtd and iotlb flush rework (v7) Jean Guyader
2011-11-13 17:40 ` [PATCH 1/6] vtd: Refactor iotlb flush code Jean Guyader
2011-11-13 17:40 ` [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all Jean Guyader
2011-11-13 17:40 ` [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap Jean Guyader
2011-11-10 8:43 [PATCH 0/6] IOMMU, vtd and iotlb flush rework (v5) Jean Guyader
2011-11-10 8:43 ` [PATCH 1/6] vtd: Refactor iotlb flush code Jean Guyader
2011-11-10 8:44 ` [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all Jean Guyader
2011-11-10 8:44 ` [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap Jean Guyader
2011-11-08 20:04 [PATCH 0/6] IOMMU, vtd and iotlb flush rework (v4) Jean Guyader
2011-11-08 20:04 ` [PATCH 1/6] vtd: Refactor iotlb flush code Jean Guyader
2011-11-08 20:04 ` [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all Jean Guyader
2011-11-08 20:04 ` [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap Jean Guyader
2011-11-07 18:25 IOMMU, vtd and iotlb flush rework (v3) Jean Guyader
2011-11-07 18:25 ` [PATCH 1/6] vtd: Refactor iotlb flush code Jean Guyader
2011-11-07 18:25 ` [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all Jean Guyader
2011-11-07 18:25 ` [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap Jean Guyader
2011-11-08 13:24 ` Tim Deegan
2011-11-07 15:16 IOMMU, vtd and iotlb flush rework (v2) Jean Guyader
2011-11-07 15:16 ` [PATCH 1/6] vtd: Refactor iotlb flush code Jean Guyader
2011-11-07 15:16 ` [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all Jean Guyader
2011-11-07 15:16 ` [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap Jean Guyader
2011-11-04 10:38 [PATCH 0/6] IOMMU, vtd and iotlb flush rework Jean Guyader
2011-11-04 10:38 ` [PATCH 1/6] vtd: Refactor iotlb flush code Jean Guyader
2011-11-04 10:38 ` [PATCH 2/6] iommu: Introduce iommu_flush and iommu_flush_all Jean Guyader
2011-11-04 10:38 ` [PATCH 3/6] add_to_physmap: Move the code for XENMEM_add_to_physmap Jean Guyader