From: Jean Guyader <jean.guyader@eu.citrix.com>
To: xen-devel@lists.xensource.com
Cc: tim@xen.org, allen.m.kay@intel.com,
Jean Guyader <jean.guyader@eu.citrix.com>
Subject: [PATCH 6/6] Introduce per cpu flag (iommu_dont_flush_iotlb) to avoid unnecessary iotlb flush
Date: Mon, 7 Nov 2011 15:16:39 +0000
Message-ID: <1320678999-24995-7-git-send-email-jean.guyader@eu.citrix.com>
In-Reply-To: <1320678999-24995-6-git-send-email-jean.guyader@eu.citrix.com>
Add a per-CPU flag that the low-level IOMMU code checks in order to
skip IOTLB flushes while it is set. A caller that sets the flag must
call iommu_iotlb_flush explicitly once its batch of updates is complete.
Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
---
xen/arch/x86/mm.c | 15 ++++++++++++++-
xen/drivers/passthrough/iommu.c | 5 +++++
xen/drivers/passthrough/vtd/iommu.c | 6 ++++--
xen/include/xen/iommu.h | 2 ++
4 files changed, 25 insertions(+), 3 deletions(-)
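
The flag establishes a simple batching protocol for callers: set it,
perform the map operations, clear it, then flush once. Below is a
minimal caller-side sketch of that protocol; map_gfn_range() and
map_one_page() are made-up names used purely for illustration, while
this_cpu(), need_iommu() and iommu_iotlb_flush() are the interfaces
this series actually touches:

    /* Illustrative only: map a batch of pages, then flush the IOTLB once. */
    static int map_gfn_range(struct domain *d, unsigned long gfn,
                             unsigned long mfn, unsigned int nr)
    {
        unsigned int i;
        int rc = 0;

        if ( need_iommu(d) )
            this_cpu(iommu_dont_flush_iotlb) = 1;    /* defer per-page flushes */

        for ( i = 0; i < nr && !rc; i++ )
            rc = map_one_page(d, gfn + i, mfn + i);  /* hypothetical helper */

        if ( need_iommu(d) )
        {
            this_cpu(iommu_dont_flush_iotlb) = 0;    /* restore normal flushing */
            iommu_iotlb_flush(d, gfn, i);            /* one flush for the batch */
        }

        return rc;
    }
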
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index cca64b8..7aece56 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4698,7 +4698,7 @@ long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
     {
     case XENMEM_add_to_physmap:
     {
-        struct xen_add_to_physmap xatp;
+        struct xen_add_to_physmap xatp, start_xatp;
         struct domain *d;
 
         if ( copy_from_guest(&xatp, arg, 1) )
@@ -4716,7 +4716,13 @@ long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
 
         if ( xatp.space != XENMAPSPACE_gmfn_range )
             xatp.size = 1;
+        else
+        {
+            if ( need_iommu(d) )
+                this_cpu(iommu_dont_flush_iotlb) = 1;
+        }
 
+        start_xatp = xatp;
         for ( ; xatp.size > 0; xatp.size-- )
         {
             if ( hypercall_preempt_check() )
@@ -4729,6 +4735,13 @@ long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
             xatp.gpfn++;
         }
 
+        if ( xatp.space == XENMAPSPACE_gmfn_range && need_iommu(d) )
+        {
+            this_cpu(iommu_dont_flush_iotlb) = 0;
+            iommu_iotlb_flush(d, start_xatp.idx, start_xatp.size - xatp.size);
+            iommu_iotlb_flush(d, start_xatp.gpfn, start_xatp.size - xatp.size);
+        }
+
         rcu_unlock_domain(d);
         if ( rc == -EAGAIN )
         {
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 9c62861..a06c94a 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -52,6 +52,8 @@ bool_t __read_mostly iommu_hap_pt_share = 1;
 bool_t __read_mostly iommu_debug;
 bool_t __read_mostly amd_iommu_perdev_intremap;
 
+DEFINE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
+
 static void __init parse_iommu_param(char *s)
 {
     char *ss;
@@ -227,6 +229,7 @@ static int iommu_populate_page_table(struct domain *d)
 
     spin_lock(&d->page_alloc_lock);
 
+    this_cpu(iommu_dont_flush_iotlb) = 1;
     page_list_for_each ( page, &d->page_list )
     {
         if ( is_hvm_domain(d) ||
@@ -244,6 +247,8 @@ static int iommu_populate_page_table(struct domain *d)
             }
         }
     }
+    this_cpu(iommu_dont_flush_iotlb) = 0;
+    iommu_iotlb_flush_all(d);
     spin_unlock(&d->page_alloc_lock);
     return 0;
 }
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 7ec9541..a3dd018 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -660,7 +660,8 @@ static void dma_pte_clear_one(struct domain *domain, u64 addr)
     spin_unlock(&hd->mapping_lock);
     iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
 
-    __intel_iommu_iotlb_flush(domain, addr >> PAGE_SHIFT_4K , 0, 1);
+    if ( !this_cpu(iommu_dont_flush_iotlb) )
+        __intel_iommu_iotlb_flush(domain, addr >> PAGE_SHIFT_4K , 0, 1);
 
     unmap_vtd_domain_page(page);
 }
@@ -1753,7 +1754,8 @@ static int intel_iommu_map_page(
     spin_unlock(&hd->mapping_lock);
     unmap_vtd_domain_page(page);
 
-    __intel_iommu_iotlb_flush(d, gfn, dma_pte_present(old), 1);
+    if ( !this_cpu(iommu_dont_flush_iotlb) )
+        __intel_iommu_iotlb_flush(d, gfn, dma_pte_present(old), 1);
 
     return 0;
 }
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index a1034df..c757a78 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -160,4 +160,6 @@ int iommu_do_domctl(struct xen_domctl *, XEN_GUEST_HANDLE(xen_domctl_t));
 void iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count);
 void iommu_iotlb_flush_all(struct domain *d);
 
+DECLARE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
+
 #endif /* _IOMMU_H_ */
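
As an aside, the per-CPU plumbing above follows Xen's usual percpu
pattern. Roughly, treating the exact macro semantics as approximate:

    /* Defined in exactly one translation unit
     * (here xen/drivers/passthrough/iommu.c): */
    DEFINE_PER_CPU(bool_t, iommu_dont_flush_iotlb);

    /* Declared in a shared header (here xen/include/xen/iommu.h)
     * so other files can reference it: */
    DECLARE_PER_CPU(bool_t, iommu_dont_flush_iotlb);

    /* this_cpu() names the calling CPU's private copy, so no locking is
     * needed: each CPU sets and clears only its own flag, around a batch
     * of map/unmap operations it performs itself. */
    this_cpu(iommu_dont_flush_iotlb) = 1;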