* [PATCH 0/5 v5] xen/arm: fix guest builder cache coherency (again, again)
@ 2014-02-11 14:09 Ian Campbell
2014-02-11 14:11 ` [PATCH v5 1/5] xen: arm: rename create_p2m_entries to apply_p2m_changes Ian Campbell
` (4 more replies)
0 siblings, 5 replies; 8+ messages in thread
From: Ian Campbell @ 2014-02-11 14:09 UTC (permalink / raw)
To: xen-devel
Cc: Keir Fraser, Ian Jackson, Tim Deegan, Julien Grall,
Stefano Stabellini, Jan Beulich
George gave a release ack to v3.
Both 32 and 64 bit have survived ~10,000 boots with v4.
Changes in v5:
avoid get_order_from_pages and just use 1<<MAX_ORDER
s/sync_page_to_ram/flush_page_to_ram/g
remove hard tab, add an emacs magic block
Changes in v4:
make sure to actually invalidate the cache, not just
clean it
rename existing cache flush functions to avoid catching
me out that way again.
switch to using a start + length in the domctl interface
Changes in v3:
s/cacheflush_page/sync_page_to_ram/
xc interface takes a length instead of an end
make the domctl range inclusive.
make xc interface internal -- it isn't needed from libxl
in the current design and it is easier to expose an
interface in the future than to hide it.
Changes in v2:
Flush on page alloc and do targeted flushes at domain build time
rather than a big flush after domain build. This adds a new call
to common code, which is stubbed out on x86. This avoids needing
to worry about preemptability of the new domctl and also catches
cases related to ballooning where things might not be flushed
(e.g. a guest scrubs a page but doesn't clean the cache)
This has done 12000 boot loops on arm32 and 10000 on arm64.
Given the security aspect I would like to put this in 4.4.
Original blurb:
On ARM we need to take care of cache coherency for guests which we have
just built because they start with their caches disabled.
Our current strategy for dealing with this, which is to make guest
memory default to cacheable regardless of the in-guest configuration
(the HCR.DC bit), is flawed because it doesn't handle guests which
enable their MMU before enabling their caches, which at least FreeBSD
does. (NB: Setting HCR.DC while the guest MMU is enabled is
UNPREDICTABLE, hence we must disable it when the guest turns its MMU
on).
There is also a security aspect here since the current strategy means
that a guest which enables its MMU before its caches can potentially see
unscrubbed data in RAM (because the scrubbed bytes are still held in the
cache).
As well as the new stuff this series removes the HCR.DC support and
performs two purely cosmetic renames.
Ian.
* [PATCH v5 1/5] xen: arm: rename create_p2m_entries to apply_p2m_changes
2014-02-11 14:09 [PATCH 0/5 v5] xen/arm: fix guest builder cache coherency (again, again) Ian Campbell
@ 2014-02-11 14:11 ` Ian Campbell
2014-02-11 14:11 ` [PATCH v5 2/5] xen: arm: rename p2m next_gfn_to_relinquish to lowest_mapped_gfn Ian Campbell
` (3 subsequent siblings)
4 siblings, 0 replies; 8+ messages in thread
From: Ian Campbell @ 2014-02-11 14:11 UTC (permalink / raw)
To: xen-devel; +Cc: julien.grall, tim, Ian Campbell, stefano.stabellini
This function hasn't been only about creating entries for quite a while.
This is purely a rename.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Julien Grall <julien.grall@linaro.org>
---
xen/arch/arm/p2m.c | 28 ++++++++++++++--------------
1 file changed, 14 insertions(+), 14 deletions(-)
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 85ca330..ace3c54 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -230,7 +230,7 @@ enum p2m_operation {
RELINQUISH,
};
-static int create_p2m_entries(struct domain *d,
+static int apply_p2m_changes(struct domain *d,
enum p2m_operation op,
paddr_t start_gpaddr,
paddr_t end_gpaddr,
@@ -438,8 +438,8 @@ int p2m_populate_ram(struct domain *d,
paddr_t start,
paddr_t end)
{
- return create_p2m_entries(d, ALLOCATE, start, end,
- 0, MATTR_MEM, p2m_ram_rw);
+ return apply_p2m_changes(d, ALLOCATE, start, end,
+ 0, MATTR_MEM, p2m_ram_rw);
}
int map_mmio_regions(struct domain *d,
@@ -447,8 +447,8 @@ int map_mmio_regions(struct domain *d,
paddr_t end_gaddr,
paddr_t maddr)
{
- return create_p2m_entries(d, INSERT, start_gaddr, end_gaddr,
- maddr, MATTR_DEV, p2m_mmio_direct);
+ return apply_p2m_changes(d, INSERT, start_gaddr, end_gaddr,
+ maddr, MATTR_DEV, p2m_mmio_direct);
}
int guest_physmap_add_entry(struct domain *d,
@@ -457,20 +457,20 @@ int guest_physmap_add_entry(struct domain *d,
unsigned long page_order,
p2m_type_t t)
{
- return create_p2m_entries(d, INSERT,
- pfn_to_paddr(gpfn),
- pfn_to_paddr(gpfn + (1 << page_order)),
- pfn_to_paddr(mfn), MATTR_MEM, t);
+ return apply_p2m_changes(d, INSERT,
+ pfn_to_paddr(gpfn),
+ pfn_to_paddr(gpfn + (1 << page_order)),
+ pfn_to_paddr(mfn), MATTR_MEM, t);
}
void guest_physmap_remove_page(struct domain *d,
unsigned long gpfn,
unsigned long mfn, unsigned int page_order)
{
- create_p2m_entries(d, REMOVE,
- pfn_to_paddr(gpfn),
- pfn_to_paddr(gpfn + (1<<page_order)),
- pfn_to_paddr(mfn), MATTR_MEM, p2m_invalid);
+ apply_p2m_changes(d, REMOVE,
+ pfn_to_paddr(gpfn),
+ pfn_to_paddr(gpfn + (1<<page_order)),
+ pfn_to_paddr(mfn), MATTR_MEM, p2m_invalid);
}
int p2m_alloc_table(struct domain *d)
@@ -618,7 +618,7 @@ int relinquish_p2m_mapping(struct domain *d)
{
struct p2m_domain *p2m = &d->arch.p2m;
- return create_p2m_entries(d, RELINQUISH,
+ return apply_p2m_changes(d, RELINQUISH,
pfn_to_paddr(p2m->next_gfn_to_relinquish),
pfn_to_paddr(p2m->max_mapped_gfn),
pfn_to_paddr(INVALID_MFN),
--
1.7.10.4
* [PATCH v5 2/5] xen: arm: rename p2m next_gfn_to_relinquish to lowest_mapped_gfn
2014-02-11 14:09 [PATCH 0/5 v5] xen/arm: fix guest builder cache coherency (again, again) Ian Campbell
2014-02-11 14:11 ` [PATCH v5 1/5] xen: arm: rename create_p2m_entries to apply_p2m_changes Ian Campbell
@ 2014-02-11 14:11 ` Ian Campbell
2014-02-11 14:11 ` [PATCH v5 3/5] xen/arm: clean and invalidate all guest caches by VMID after domain build Ian Campbell
` (2 subsequent siblings)
4 siblings, 0 replies; 8+ messages in thread
From: Ian Campbell @ 2014-02-11 14:11 UTC (permalink / raw)
To: xen-devel; +Cc: julien.grall, tim, Ian Campbell, stefano.stabellini
This has uses other than during relinquish, so rename it for clarity.
This is a pure rename.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Julien Grall <julien.grall@linaro.org>
---
xen/arch/arm/p2m.c | 9 ++++-----
xen/include/asm-arm/p2m.h | 8 +++++---
2 files changed, 9 insertions(+), 8 deletions(-)
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index ace3c54..a61edeb 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -388,7 +388,7 @@ static int apply_p2m_changes(struct domain *d,
{
if ( hypercall_preempt_check() )
{
- p2m->next_gfn_to_relinquish = addr >> PAGE_SHIFT;
+ p2m->lowest_mapped_gfn = addr >> PAGE_SHIFT;
rc = -EAGAIN;
goto out;
}
@@ -415,8 +415,7 @@ static int apply_p2m_changes(struct domain *d,
unsigned long egfn = paddr_to_pfn(end_gpaddr);
p2m->max_mapped_gfn = MAX(p2m->max_mapped_gfn, egfn);
- /* Use next_gfn_to_relinquish to store the lowest gfn mapped */
- p2m->next_gfn_to_relinquish = MIN(p2m->next_gfn_to_relinquish, sgfn);
+ p2m->lowest_mapped_gfn = MIN(p2m->lowest_mapped_gfn, sgfn);
}
rc = 0;
@@ -606,7 +605,7 @@ int p2m_init(struct domain *d)
p2m->first_level = NULL;
p2m->max_mapped_gfn = 0;
- p2m->next_gfn_to_relinquish = ULONG_MAX;
+ p2m->lowest_mapped_gfn = ULONG_MAX;
err:
spin_unlock(&p2m->lock);
@@ -619,7 +618,7 @@ int relinquish_p2m_mapping(struct domain *d)
struct p2m_domain *p2m = &d->arch.p2m;
return apply_p2m_changes(d, RELINQUISH,
- pfn_to_paddr(p2m->next_gfn_to_relinquish),
+ pfn_to_paddr(p2m->lowest_mapped_gfn),
pfn_to_paddr(p2m->max_mapped_gfn),
pfn_to_paddr(INVALID_MFN),
MATTR_MEM, p2m_invalid);
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 53b3266..e9c884a 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -24,9 +24,11 @@ struct p2m_domain {
*/
unsigned long max_mapped_gfn;
- /* When releasing mapped gfn's in a preemptible manner, recall where
- * to resume the search */
- unsigned long next_gfn_to_relinquish;
+ /* Lowest mapped gfn in the p2m. When releasing mapped gfn's in a
+ * preemptible manner this is updated to track where to
+ * resume the search. Apart from during teardown this can only
+ * decrease. */
+ unsigned long lowest_mapped_gfn;
};
/* List of possible type for each page in the p2m entry.
--
1.7.10.4
* [PATCH v5 3/5] xen/arm: clean and invalidate all guest caches by VMID after domain build.
2014-02-11 14:09 [PATCH 0/5 v5] xen/arm: fix guest builder cache coherency (again, again) Ian Campbell
2014-02-11 14:11 ` [PATCH v5 1/5] xen: arm: rename create_p2m_entries to apply_p2m_changes Ian Campbell
2014-02-11 14:11 ` [PATCH v5 2/5] xen: arm: rename p2m next_gfn_to_relinquish to lowest_mapped_gfn Ian Campbell
@ 2014-02-11 14:11 ` Ian Campbell
2014-02-12 11:52 ` Jan Beulich
2014-02-11 14:11 ` [PATCH v5 4/5] Revert "xen: arm: force guest memory accesses to cacheable when MMU is disabled" Ian Campbell
2014-02-11 14:11 ` [PATCH v5 5/5] xen: arm: correct terminology for cache flush macros Ian Campbell
4 siblings, 1 reply; 8+ messages in thread
From: Ian Campbell @ 2014-02-11 14:11 UTC (permalink / raw)
To: xen-devel
Cc: keir, Ian Campbell, stefano.stabellini, ian.jackson, julien.grall,
tim, jbeulich
Guests are initially started with caches disabled and so we need to make sure
they see consistent data in RAM (requiring a cache clean) but also that they
do not have old stale data suddenly appear in the caches when they enable
their caches (requiring the invalidate).
This can be split into two halves. First we must flush each page as it is
allocated to the guest. It is not sufficient to do the flush at scrub time
since this will miss pages which are ballooned out by the guest (where the
guest must scrub if it cares about not leaking the page content). We need to
clean as well as invalidate to make sure that any scrubbing which has occurred
gets committed to real RAM. To achieve this, add a new flush_page_to_ram
function, which is a stub on x86.
Secondly we need to flush anything which the domain builder touches, which we
do via a new domctl.
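(For illustration only: a minimal, hypothetical sketch of the toolstack-side
pattern the hunks below implement: map a guest page from dom0, write it, then
flush that pfn range through the new libxc-internal xc_domain_cacheflush()
wrapper. write_boot_page() is an invented name; the real call sites are
xc_dom_unmap_one(), xc_copy_to_domain_page() and xc_clear_domain_page() in
the diff. The sketch assumes the usual libxc context, i.e. xc_private.h.)

static int write_boot_page(xc_interface *xch, uint32_t domid,
                           xen_pfn_t pfn, const void *src)
{
    /* Map the guest page into the domain builder's address space. */
    void *va = xc_map_foreign_range(xch, domid, PAGE_SIZE,
                                    PROT_READ | PROT_WRITE, pfn);

    if ( va == NULL )
        return -1;

    memcpy(va, src, PAGE_SIZE);
    munmap(va, PAGE_SIZE);

    /* The guest starts with its caches disabled, so clean+invalidate
     * the page we just dirtied before the guest gets to read it. */
    return xc_domain_cacheflush(xch, domid, pfn, 1);
}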
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
Acked-by: Julien Grall <julien.grall@linaro.org>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: jbeulich@suse.com
Cc: keir@xen.org
Cc: ian.jackson@eu.citrix.com
--
v5: avoid get_order_from_pages and just use 1<<MAX_ORDER
s/sync_page_to_ram/flush_page_to_ram/g
remove hard tab, add an emacs magic block
v4: introduce a function to clean and invalidate as intended
make the domctl take a length not an end.
v3:
s/cacheflush_page/sync_page_to_ram/
xc interface takes a length instead of an end
make the domctl range inclusive.
make xc interface internal -- it isn't needed from libxl in the current
design and it is easier to expose an interface in the future than to hide
it.
v2:
Switch to cleaning at page allocation time + explicit flushing of the
regions which the toolstack touches.
Add XSM for new domctl.
New domctl restricts the amount of space it is willing to flush, to avoid
thinking about preemption.
---
tools/libxc/xc_dom_boot.c | 4 ++++
tools/libxc/xc_dom_core.c | 2 ++
tools/libxc/xc_domain.c | 10 ++++++++++
tools/libxc/xc_private.c | 2 ++
tools/libxc/xc_private.h | 3 +++
xen/arch/arm/domctl.c | 14 ++++++++++++++
xen/arch/arm/mm.c | 12 ++++++++++++
xen/arch/arm/p2m.c | 25 +++++++++++++++++++++++++
xen/common/page_alloc.c | 5 +++++
xen/include/asm-arm/arm32/page.h | 4 ++++
xen/include/asm-arm/arm64/page.h | 4 ++++
xen/include/asm-arm/p2m.h | 3 +++
xen/include/asm-arm/page.h | 3 +++
xen/include/asm-x86/page.h | 3 +++
xen/include/public/domctl.h | 13 +++++++++++++
xen/xsm/flask/hooks.c | 13 +++++++++++++
xen/xsm/flask/policy/access_vectors | 2 ++
17 files changed, 122 insertions(+)
diff --git a/tools/libxc/xc_dom_boot.c b/tools/libxc/xc_dom_boot.c
index 5a9cfc6..3d4d107 100644
--- a/tools/libxc/xc_dom_boot.c
+++ b/tools/libxc/xc_dom_boot.c
@@ -351,6 +351,10 @@ int xc_dom_gnttab_seed(xc_interface *xch, domid_t domid,
return -1;
}
+ /* Guest shouldn't really touch its grant table until it has
+ * enabled its caches. But lets be nice. */
+ xc_domain_cacheflush(xch, domid, gnttab_gmfn, 1);
+
return 0;
}
diff --git a/tools/libxc/xc_dom_core.c b/tools/libxc/xc_dom_core.c
index 77a4e64..b9d1015 100644
--- a/tools/libxc/xc_dom_core.c
+++ b/tools/libxc/xc_dom_core.c
@@ -603,6 +603,8 @@ void xc_dom_unmap_one(struct xc_dom_image *dom, xen_pfn_t pfn)
prev->next = phys->next;
else
dom->phys_pages = phys->next;
+
+ xc_domain_cacheflush(dom->xch, dom->guest_domid, phys->first, phys->count);
}
void xc_dom_unmap_all(struct xc_dom_image *dom)
diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index e1d1bec..369c3f3 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -48,6 +48,16 @@ int xc_domain_create(xc_interface *xch,
return 0;
}
+int xc_domain_cacheflush(xc_interface *xch, uint32_t domid,
+ xen_pfn_t start_pfn, xen_pfn_t nr_pfns)
+{
+ DECLARE_DOMCTL;
+ domctl.cmd = XEN_DOMCTL_cacheflush;
+ domctl.domain = (domid_t)domid;
+ domctl.u.cacheflush.start_pfn = start_pfn;
+ domctl.u.cacheflush.nr_pfns = nr_pfns;
+ return do_domctl(xch, &domctl);
+}
int xc_domain_pause(xc_interface *xch,
uint32_t domid)
diff --git a/tools/libxc/xc_private.c b/tools/libxc/xc_private.c
index 838fd21..33ed15b 100644
--- a/tools/libxc/xc_private.c
+++ b/tools/libxc/xc_private.c
@@ -628,6 +628,7 @@ int xc_copy_to_domain_page(xc_interface *xch,
return -1;
memcpy(vaddr, src_page, PAGE_SIZE);
munmap(vaddr, PAGE_SIZE);
+ xc_domain_cacheflush(xch, domid, dst_pfn, 1);
return 0;
}
@@ -641,6 +642,7 @@ int xc_clear_domain_page(xc_interface *xch,
return -1;
memset(vaddr, 0, PAGE_SIZE);
munmap(vaddr, PAGE_SIZE);
+ xc_domain_cacheflush(xch, domid, dst_pfn, 1);
return 0;
}
diff --git a/tools/libxc/xc_private.h b/tools/libxc/xc_private.h
index 92271c9..a610f0c 100644
--- a/tools/libxc/xc_private.h
+++ b/tools/libxc/xc_private.h
@@ -304,6 +304,9 @@ void bitmap_byte_to_64(uint64_t *lp, const uint8_t *bp, int nbits);
/* Optionally flush file to disk and discard page cache */
void discard_file_cache(xc_interface *xch, int fd, int flush);
+int xc_domain_cacheflush(xc_interface *xch, uint32_t domid,
+ xen_pfn_t start_pfn, xen_pfn_t nr_pfns);
+
#define MAX_MMU_UPDATES 1024
struct xc_mmu {
mmu_update_t updates[MAX_MMU_UPDATES];
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 546e86b..45974e7 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -17,6 +17,20 @@ long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
{
switch ( domctl->cmd )
{
+ case XEN_DOMCTL_cacheflush:
+ {
+ unsigned long s = domctl->u.cacheflush.start_pfn;
+ unsigned long e = s + domctl->u.cacheflush.nr_pfns;
+
+ if ( domctl->u.cacheflush.nr_pfns > (1U<<MAX_ORDER) )
+ return -EINVAL;
+
+ if ( e < s )
+ return -EINVAL;
+
+ return p2m_cache_flush(d, s, e);
+ }
+
default:
return subarch_do_domctl(domctl, d, u_domctl);
}
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index cf4e7d4..98d054b 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -342,6 +342,18 @@ unsigned long domain_page_map_to_mfn(const void *ptr)
}
#endif
+void flush_page_to_ram(unsigned long mfn)
+{
+ void *p, *v = map_domain_page(mfn);
+
+ dsb(); /* So the CPU issues all writes to the range */
+ for ( p = v; p < v + PAGE_SIZE ; p += cacheline_bytes )
+ asm volatile (__clean_and_invalidate_xen_dcache_one(0) : : "r" (p));
+ dsb(); /* So we know the flushes happen before continuing */
+
+ unmap_domain_page(v);
+}
+
void __init arch_init_memory(void)
{
/*
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index a61edeb..d00c882 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -8,6 +8,7 @@
#include <asm/gic.h>
#include <asm/event.h>
#include <asm/hardirq.h>
+#include <asm/page.h>
/* First level P2M is 2 consecutive pages */
#define P2M_FIRST_ORDER 1
@@ -228,6 +229,7 @@ enum p2m_operation {
ALLOCATE,
REMOVE,
RELINQUISH,
+ CACHEFLUSH,
};
static int apply_p2m_changes(struct domain *d,
@@ -381,6 +383,15 @@ static int apply_p2m_changes(struct domain *d,
count++;
}
break;
+
+ case CACHEFLUSH:
+ {
+ if ( !pte.p2m.valid || !p2m_is_ram(pte.p2m.type) )
+ break;
+
+ flush_page_to_ram(pte.p2m.base);
+ }
+ break;
}
/* Preempt every 2MiB (mapped) or 32 MiB (unmapped) - arbitrary */
@@ -624,6 +635,20 @@ int relinquish_p2m_mapping(struct domain *d)
MATTR_MEM, p2m_invalid);
}
+int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn)
+{
+ struct p2m_domain *p2m = &d->arch.p2m;
+
+ start_mfn = MAX(start_mfn, p2m->lowest_mapped_gfn);
+ end_mfn = MIN(end_mfn, p2m->max_mapped_gfn);
+
+ return apply_p2m_changes(d, CACHEFLUSH,
+ pfn_to_paddr(start_mfn),
+ pfn_to_paddr(end_mfn),
+ pfn_to_paddr(INVALID_MFN),
+ MATTR_MEM, p2m_invalid);
+}
+
unsigned long gmfn_to_mfn(struct domain *d, unsigned long gpfn)
{
paddr_t p = p2m_lookup(d, pfn_to_paddr(gpfn), NULL);
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 5f484a2..601319c 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -710,6 +710,11 @@ static struct page_info *alloc_heap_pages(
/* Initialise fields which have other uses for free pages. */
pg[i].u.inuse.type_info = 0;
page_set_owner(&pg[i], NULL);
+
+ /* Ensure cache and RAM are consistent for platforms where the
+ * guest can control its own visibility of/through the cache.
+ */
+ flush_page_to_ram(page_to_mfn(&pg[i]));
}
spin_unlock(&heap_lock);
diff --git a/xen/include/asm-arm/arm32/page.h b/xen/include/asm-arm/arm32/page.h
index cf12a89..cb6add4 100644
--- a/xen/include/asm-arm/arm32/page.h
+++ b/xen/include/asm-arm/arm32/page.h
@@ -22,6 +22,10 @@ static inline void write_pte(lpae_t *p, lpae_t pte)
/* Inline ASM to flush dcache on register R (may be an inline asm operand) */
#define __flush_xen_dcache_one(R) STORE_CP32(R, DCCMVAC)
+/* Inline ASM to clean and invalidate dcache on register R (may be an
+ * inline asm operand) */
+#define __clean_and_invalidate_xen_dcache_one(R) STORE_CP32(R, DCCIMVAC)
+
/*
* Flush all hypervisor mappings from the TLB and branch predictor.
* This is needed after changing Xen code mappings.
diff --git a/xen/include/asm-arm/arm64/page.h b/xen/include/asm-arm/arm64/page.h
index 9551f90..baf8903 100644
--- a/xen/include/asm-arm/arm64/page.h
+++ b/xen/include/asm-arm/arm64/page.h
@@ -17,6 +17,10 @@ static inline void write_pte(lpae_t *p, lpae_t pte)
/* Inline ASM to flush dcache on register R (may be an inline asm operand) */
#define __flush_xen_dcache_one(R) "dc cvac, %" #R ";"
+/* Inline ASM to clean and invalidate dcache on register R (may be an
+ * inline asm operand) */
+#define __clean_and_invalidate_xen_dcache_one(R) "dc civac, %" #R ";"
+
/*
* Flush all hypervisor mappings from the TLB
* This is needed after changing Xen code mappings.
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index e9c884a..3b39c45 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -78,6 +78,9 @@ void p2m_load_VTTBR(struct domain *d);
/* Look up the MFN corresponding to a domain's PFN. */
paddr_t p2m_lookup(struct domain *d, paddr_t gpfn, p2m_type_t *t);
+/* Clean & invalidate caches corresponding to a region of guest address space */
+int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn);
+
/* Setup p2m RAM mapping for domain d from start-end. */
int p2m_populate_ram(struct domain *d, paddr_t start, paddr_t end);
/* Map MMIO regions in the p2m: start_gaddr and end_gaddr is the range
diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
index 670d4e7..5a371da 100644
--- a/xen/include/asm-arm/page.h
+++ b/xen/include/asm-arm/page.h
@@ -253,6 +253,9 @@ static inline void flush_xen_dcache_va_range(void *p, unsigned long size)
: : "r" (_p), "m" (*_p)); \
} while (0)
+/* Flush the dcache for an entire page. */
+void flush_page_to_ram(unsigned long mfn);
+
/* Print a walk of an arbitrary page table */
void dump_pt_walk(lpae_t *table, paddr_t addr);
diff --git a/xen/include/asm-x86/page.h b/xen/include/asm-x86/page.h
index 7a46af5..ccc268d 100644
--- a/xen/include/asm-x86/page.h
+++ b/xen/include/asm-x86/page.h
@@ -346,6 +346,9 @@ static inline uint32_t cacheattr_to_pte_flags(uint32_t cacheattr)
return ((cacheattr & 4) << 5) | ((cacheattr & 3) << 3);
}
+/* No cache maintenance required on x86 architecture. */
+static inline void flush_page_to_ram(unsigned long mfn) {}
+
/* return true if permission increased */
static inline bool_t
perms_strictly_increased(uint32_t old_flags, uint32_t new_flags)
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 91f01fa..f22fe2e 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -885,6 +885,17 @@ struct xen_domctl_set_max_evtchn {
typedef struct xen_domctl_set_max_evtchn xen_domctl_set_max_evtchn_t;
DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_max_evtchn_t);
+/*
+ * ARM: Clean and invalidate caches associated with given region of
+ * guest memory.
+ */
+struct xen_domctl_cacheflush {
+ /* IN: page range to flush. */
+ xen_pfn_t start_pfn, nr_pfns;
+};
+typedef struct xen_domctl_cacheflush xen_domctl_cacheflush_t;
+DEFINE_XEN_GUEST_HANDLE(xen_domctl_cacheflush_t);
+
struct xen_domctl {
uint32_t cmd;
#define XEN_DOMCTL_createdomain 1
@@ -954,6 +965,7 @@ struct xen_domctl {
#define XEN_DOMCTL_setnodeaffinity 68
#define XEN_DOMCTL_getnodeaffinity 69
#define XEN_DOMCTL_set_max_evtchn 70
+#define XEN_DOMCTL_cacheflush 71
#define XEN_DOMCTL_gdbsx_guestmemio 1000
#define XEN_DOMCTL_gdbsx_pausevcpu 1001
#define XEN_DOMCTL_gdbsx_unpausevcpu 1002
@@ -1012,6 +1024,7 @@ struct xen_domctl {
struct xen_domctl_set_max_evtchn set_max_evtchn;
struct xen_domctl_gdbsx_memio gdbsx_guest_memio;
struct xen_domctl_set_broken_page_p2m set_broken_page_p2m;
+ struct xen_domctl_cacheflush cacheflush;
struct xen_domctl_gdbsx_pauseunp_vcpu gdbsx_pauseunp_vcpu;
struct xen_domctl_gdbsx_domstatus gdbsx_domstatus;
uint8_t pad[128];
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 50a35fc..d515702 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -737,6 +737,9 @@ static int flask_domctl(struct domain *d, int cmd)
case XEN_DOMCTL_set_max_evtchn:
return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__SET_MAX_EVTCHN);
+ case XEN_DOMCTL_cacheflush:
+ return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__CACHEFLUSH);
+
default:
printk("flask_domctl: Unknown op %d\n", cmd);
return -EPERM;
@@ -1617,3 +1620,13 @@ static __init int flask_init(void)
}
xsm_initcall(flask_init);
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 1fbe241..a0ed13d 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -196,6 +196,8 @@ class domain2
setclaim
# XEN_DOMCTL_set_max_evtchn
set_max_evtchn
+# XEN_DOMCTL_cacheflush
+ cacheflush
}
# Similar to class domain, but primarily contains domctls related to HVM domains
--
1.7.10.4
* [PATCH v5 4/5] Revert "xen: arm: force guest memory accesses to cacheable when MMU is disabled"
2014-02-11 14:09 [PATCH 0/5 v5] xen/arm: fix guest builder cache coherency (again, again) Ian Campbell
` (2 preceding siblings ...)
2014-02-11 14:11 ` [PATCH v5 3/5] xen/arm: clean and invalidate all guest caches by VMID after domain build Ian Campbell
@ 2014-02-11 14:11 ` Ian Campbell
2014-02-11 14:11 ` [PATCH v5 5/5] xen: arm: correct terminology for cache flush macros Ian Campbell
4 siblings, 0 replies; 8+ messages in thread
From: Ian Campbell @ 2014-02-11 14:11 UTC (permalink / raw)
To: xen-devel; +Cc: julien.grall, tim, Ian Campbell, stefano.stabellini
This reverts commit 89eb02c2204a0b42a0aa169f107bc346a3fef802.
This approach has a shortcoming in that it breaks when a guest enables its
MMU (SCTLR.M, disabling HCR.DC) without enabling caches (SCTLR.C) first/at the
same time. It turns out that FreeBSD does this.
This has now been fixed (yet) another way (third time is the charm!) so remove
this support. The original commit contained some fixes which are still
relevant even with the revert of the bulk of the patch:
- Correction to HSR_SYSREG_CRN_MASK
- Rename of HSR_SYSCTL macros to avoid naming clash
- Definition of some additional cp reg specifications
Since these are still useful they are not reverted.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Julien Grall <julien.grall@linaro.org>
---
v2: Move to end of series
Do not revert useful bits
---
xen/arch/arm/domain.c | 7 --
xen/arch/arm/traps.c | 158 ------------------------------------------
xen/include/asm-arm/domain.h | 2 -
3 files changed, 167 deletions(-)
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index c279a27..8f20fdf 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -19,7 +19,6 @@
#include <xen/errno.h>
#include <xen/bitops.h>
#include <xen/grant_table.h>
-#include <xen/stdbool.h>
#include <asm/current.h>
#include <asm/event.h>
@@ -220,11 +219,6 @@ static void ctxt_switch_to(struct vcpu *n)
else
hcr |= HCR_RW;
- if ( n->arch.default_cache )
- hcr |= (HCR_TVM|HCR_DC);
- else
- hcr &= ~(HCR_TVM|HCR_DC);
-
WRITE_SYSREG(hcr, HCR_EL2);
isb();
@@ -477,7 +471,6 @@ int vcpu_initialise(struct vcpu *v)
return rc;
v->arch.sctlr = SCTLR_GUEST_INIT;
- v->arch.default_cache = true;
/*
* By default exposes an SMP system with AFF0 set to the VCPU ID
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index b8f2e82..a15b59e 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -29,14 +29,12 @@
#include <xen/hypercall.h>
#include <xen/softirq.h>
#include <xen/domain_page.h>
-#include <xen/stdbool.h>
#include <public/sched.h>
#include <public/xen.h>
#include <asm/event.h>
#include <asm/regs.h>
#include <asm/cpregs.h>
#include <asm/psci.h>
-#include <asm/flushtlb.h>
#include "decode.h"
#include "io.h"
@@ -1292,29 +1290,6 @@ static void advance_pc(struct cpu_user_regs *regs, union hsr hsr)
regs->pc += hsr.len ? 4 : 2;
}
-static void update_sctlr(struct vcpu *v, uint32_t val)
-{
- /*
- * If MMU (SCTLR_M) is now enabled then we must disable HCR.DC
- * because they are incompatible.
- *
- * Once HCR.DC is disabled then we do not need HCR_TVM either,
- * since it's only purpose was to catch the MMU being enabled.
- *
- * Both are set appropriately on context switch but we need to
- * clear them now since we may not context switch on return to
- * guest.
- */
- if ( val & SCTLR_M )
- {
- WRITE_SYSREG(READ_SYSREG(HCR_EL2) & ~(HCR_DC|HCR_TVM), HCR_EL2);
- /* ARM ARM 0406C.b B3.2.1: Disabling HCR.DC without changing
- * VMID requires us to flush the TLB for that VMID. */
- flush_tlb();
- v->arch.default_cache = false;
- }
-}
-
static void do_cp15_32(struct cpu_user_regs *regs,
union hsr hsr)
{
@@ -1374,89 +1349,6 @@ static void do_cp15_32(struct cpu_user_regs *regs,
if ( cp32.read )
*r = v->arch.actlr;
break;
-
-/* Passthru a 32-bit AArch32 register which is also 32-bit under AArch64 */
-#define CP32_PASSTHRU32(R...) do { \
- if ( cp32.read ) \
- *r = READ_SYSREG32(R); \
- else \
- WRITE_SYSREG32(*r, R); \
-} while(0)
-
-/*
- * Passthru a 32-bit AArch32 register which is 64-bit under AArch64.
- * Updates the lower 32-bits and clears the upper bits.
- */
-#define CP32_PASSTHRU64(R...) do { \
- if ( cp32.read ) \
- *r = (uint32_t)READ_SYSREG64(R); \
- else \
- WRITE_SYSREG64((uint64_t)*r, R); \
-} while(0)
-
-/*
- * Passthru a 32-bit AArch32 register which is 64-bit under AArch64.
- * Updates either the HI ([63:32]) or LO ([31:0]) 32-bits preserving
- * the other half.
- */
-#ifdef CONFIG_ARM_64
-#define CP32_PASSTHRU64_HI(R...) do { \
- if ( cp32.read ) \
- *r = (uint32_t)(READ_SYSREG64(R) >> 32); \
- else \
- { \
- uint64_t t = READ_SYSREG64(R) & 0xffffffffUL; \
- t |= ((uint64_t)(*r)) << 32; \
- WRITE_SYSREG64(t, R); \
- } \
-} while(0)
-#define CP32_PASSTHRU64_LO(R...) do { \
- if ( cp32.read ) \
- *r = (uint32_t)(READ_SYSREG64(R) & 0xffffffff); \
- else \
- { \
- uint64_t t = READ_SYSREG64(R) & 0xffffffff00000000UL; \
- t |= *r; \
- WRITE_SYSREG64(t, R); \
- } \
-} while(0)
-#endif
-
- /* HCR.TVM */
- case HSR_CPREG32(SCTLR):
- CP32_PASSTHRU32(SCTLR_EL1);
- update_sctlr(v, *r);
- break;
- case HSR_CPREG32(TTBR0_32): CP32_PASSTHRU64(TTBR0_EL1); break;
- case HSR_CPREG32(TTBR1_32): CP32_PASSTHRU64(TTBR1_EL1); break;
- case HSR_CPREG32(TTBCR): CP32_PASSTHRU32(TCR_EL1); break;
- case HSR_CPREG32(DACR): CP32_PASSTHRU32(DACR32_EL2); break;
- case HSR_CPREG32(DFSR): CP32_PASSTHRU32(ESR_EL1); break;
- case HSR_CPREG32(IFSR): CP32_PASSTHRU32(IFSR32_EL2); break;
- case HSR_CPREG32(ADFSR): CP32_PASSTHRU32(AFSR0_EL1); break;
- case HSR_CPREG32(AIFSR): CP32_PASSTHRU32(AFSR1_EL1); break;
- case HSR_CPREG32(CONTEXTIDR): CP32_PASSTHRU32(CONTEXTIDR_EL1); break;
-
-#ifdef CONFIG_ARM_64
- case HSR_CPREG32(DFAR): CP32_PASSTHRU64_LO(FAR_EL1); break;
- case HSR_CPREG32(IFAR): CP32_PASSTHRU64_HI(FAR_EL1); break;
- case HSR_CPREG32(MAIR0): CP32_PASSTHRU64_LO(MAIR_EL1); break;
- case HSR_CPREG32(MAIR1): CP32_PASSTHRU64_HI(MAIR_EL1); break;
- case HSR_CPREG32(AMAIR0): CP32_PASSTHRU64_LO(AMAIR_EL1); break;
- case HSR_CPREG32(AMAIR1): CP32_PASSTHRU64_HI(AMAIR_EL1); break;
-#else
- case HSR_CPREG32(DFAR): CP32_PASSTHRU32(DFAR); break;
- case HSR_CPREG32(IFAR): CP32_PASSTHRU32(IFAR); break;
- case HSR_CPREG32(MAIR0): CP32_PASSTHRU32(MAIR0); break;
- case HSR_CPREG32(MAIR1): CP32_PASSTHRU32(MAIR1); break;
- case HSR_CPREG32(AMAIR0): CP32_PASSTHRU32(AMAIR0); break;
- case HSR_CPREG32(AMAIR1): CP32_PASSTHRU32(AMAIR1); break;
-#endif
-
-#undef CP32_PASSTHRU32
-#undef CP32_PASSTHRU64
-#undef CP32_PASSTHRU64_LO
-#undef CP32_PASSTHRU64_HI
default:
printk("%s p15, %d, r%d, cr%d, cr%d, %d @ 0x%"PRIregister"\n",
cp32.read ? "mrc" : "mcr",
@@ -1470,9 +1362,6 @@ static void do_cp15_64(struct cpu_user_regs *regs,
union hsr hsr)
{
struct hsr_cp64 cp64 = hsr.cp64;
- uint32_t *r1 = (uint32_t *)select_user_reg(regs, cp64.reg1);
- uint32_t *r2 = (uint32_t *)select_user_reg(regs, cp64.reg2);
- uint64_t r;
if ( !check_conditional_instr(regs, hsr) )
{
@@ -1490,26 +1379,6 @@ static void do_cp15_64(struct cpu_user_regs *regs,
domain_crash_synchronous();
}
break;
-
-#define CP64_PASSTHRU(R...) do { \
- if ( cp64.read ) \
- { \
- r = READ_SYSREG64(R); \
- *r1 = r & 0xffffffffUL; \
- *r2 = r >> 32; \
- } \
- else \
- { \
- r = (*r1) | (((uint64_t)(*r2))<<32); \
- WRITE_SYSREG64(r, R); \
- } \
-} while(0)
-
- case HSR_CPREG64(TTBR0): CP64_PASSTHRU(TTBR0_EL1); break;
- case HSR_CPREG64(TTBR1): CP64_PASSTHRU(TTBR1_EL1); break;
-
-#undef CP64_PASSTHRU
-
default:
printk("%s p15, %d, r%d, r%d, cr%d @ 0x%"PRIregister"\n",
cp64.read ? "mrrc" : "mcrr",
@@ -1524,8 +1393,6 @@ static void do_sysreg(struct cpu_user_regs *regs,
union hsr hsr)
{
struct hsr_sysreg sysreg = hsr.sysreg;
- register_t *x = select_user_reg(regs, sysreg.reg);
- struct vcpu *v = current;
switch ( hsr.bits & HSR_SYSREG_REGS_MASK )
{
@@ -1538,31 +1405,6 @@ static void do_sysreg(struct cpu_user_regs *regs,
domain_crash_synchronous();
}
break;
-
-#define SYSREG_PASSTHRU(R...) do { \
- if ( sysreg.read ) \
- *x = READ_SYSREG(R); \
- else \
- WRITE_SYSREG(*x, R); \
-} while(0)
-
- case HSR_SYSREG_SCTLR_EL1:
- SYSREG_PASSTHRU(SCTLR_EL1);
- update_sctlr(v, *x);
- break;
- case HSR_SYSREG_TTBR0_EL1: SYSREG_PASSTHRU(TTBR0_EL1); break;
- case HSR_SYSREG_TTBR1_EL1: SYSREG_PASSTHRU(TTBR1_EL1); break;
- case HSR_SYSREG_TCR_EL1: SYSREG_PASSTHRU(TCR_EL1); break;
- case HSR_SYSREG_ESR_EL1: SYSREG_PASSTHRU(ESR_EL1); break;
- case HSR_SYSREG_FAR_EL1: SYSREG_PASSTHRU(FAR_EL1); break;
- case HSR_SYSREG_AFSR0_EL1: SYSREG_PASSTHRU(AFSR0_EL1); break;
- case HSR_SYSREG_AFSR1_EL1: SYSREG_PASSTHRU(AFSR1_EL1); break;
- case HSR_SYSREG_MAIR_EL1: SYSREG_PASSTHRU(MAIR_EL1); break;
- case HSR_SYSREG_AMAIR_EL1: SYSREG_PASSTHRU(AMAIR_EL1); break;
- case HSR_SYSREG_CONTEXTIDR_EL1: SYSREG_PASSTHRU(CONTEXTIDR_EL1); break;
-
-#undef SYSREG_PASSTHRU
-
default:
printk("%s %d, %d, c%d, c%d, %d %s x%d @ 0x%"PRIregister"\n",
sysreg.read ? "mrs" : "msr",
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index af8c64b..bc20a15 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -257,8 +257,6 @@ struct arch_vcpu
uint64_t event_mask;
uint64_t lr_mask;
- bool_t default_cache;
-
struct {
/*
* SGIs and PPIs are per-VCPU, SPIs are domain global and in
--
1.7.10.4
* [PATCH v5 5/5] xen: arm: correct terminology for cache flush macros
2014-02-11 14:09 [PATCH 0/5 v5] xen/arm: fix guest builder cache coherency (again, again) Ian Campbell
` (3 preceding siblings ...)
2014-02-11 14:11 ` [PATCH v5 4/5] Revert "xen: arm: force guest memory accesses to cacheable when MMU is disabled" Ian Campbell
@ 2014-02-11 14:11 ` Ian Campbell
4 siblings, 0 replies; 8+ messages in thread
From: Ian Campbell @ 2014-02-11 14:11 UTC (permalink / raw)
To: xen-devel; +Cc: julien.grall, tim, Ian Campbell, stefano.stabellini
The term "flush" is slightly ambiguous. The correct ARM term for for this
operaton is clean, as opposed to clean+invalidate for which we also now have a
function.
This is a pure rename, no functional change.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Julien Grall <julien.grall@linaro.org>
---
This could easily be left for 4.5.
---
xen/arch/arm/guestcopy.c | 2 +-
xen/arch/arm/kernel.c | 2 +-
xen/arch/arm/mm.c | 16 ++++++++--------
xen/arch/arm/smpboot.c | 2 +-
xen/include/asm-arm/arm32/page.h | 2 +-
xen/include/asm-arm/arm64/page.h | 2 +-
xen/include/asm-arm/page.h | 10 +++++-----
7 files changed, 18 insertions(+), 18 deletions(-)
diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index bd0a355..af0af6b 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -24,7 +24,7 @@ static unsigned long raw_copy_to_guest_helper(void *to, const void *from,
p += offset;
memcpy(p, from, size);
if ( flush_dcache )
- flush_xen_dcache_va_range(p, size);
+ clean_xen_dcache_va_range(p, size);
unmap_domain_page(p - offset);
len -= size;
diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
index 6a5772b..1e3107d 100644
--- a/xen/arch/arm/kernel.c
+++ b/xen/arch/arm/kernel.c
@@ -58,7 +58,7 @@ void copy_from_paddr(void *dst, paddr_t paddr, unsigned long len, int attrindx)
set_fixmap(FIXMAP_MISC, p, attrindx);
memcpy(dst, src + s, l);
- flush_xen_dcache_va_range(dst, l);
+ clean_xen_dcache_va_range(dst, l);
paddr += l;
dst += l;
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 98d054b..af7b189 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -484,13 +484,13 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
/* Clear the copy of the boot pagetables. Each secondary CPU
* rebuilds these itself (see head.S) */
memset(boot_pgtable, 0x0, PAGE_SIZE);
- flush_xen_dcache(boot_pgtable);
+ clean_xen_dcache(boot_pgtable);
#ifdef CONFIG_ARM_64
memset(boot_first, 0x0, PAGE_SIZE);
- flush_xen_dcache(boot_first);
+ clean_xen_dcache(boot_first);
#endif
memset(boot_second, 0x0, PAGE_SIZE);
- flush_xen_dcache(boot_second);
+ clean_xen_dcache(boot_second);
/* Break up the Xen mapping into 4k pages and protect them separately. */
for ( i = 0; i < LPAE_ENTRIES; i++ )
@@ -528,7 +528,7 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
/* Make sure it is clear */
memset(this_cpu(xen_dommap), 0, DOMHEAP_SECOND_PAGES*PAGE_SIZE);
- flush_xen_dcache_va_range(this_cpu(xen_dommap),
+ clean_xen_dcache_va_range(this_cpu(xen_dommap),
DOMHEAP_SECOND_PAGES*PAGE_SIZE);
#endif
}
@@ -539,7 +539,7 @@ int init_secondary_pagetables(int cpu)
/* Set init_ttbr for this CPU coming up. All CPus share a single setof
* pagetables, but rewrite it each time for consistency with 32 bit. */
init_ttbr = (uintptr_t) xen_pgtable + phys_offset;
- flush_xen_dcache(init_ttbr);
+ clean_xen_dcache(init_ttbr);
return 0;
}
#else
@@ -574,15 +574,15 @@ int init_secondary_pagetables(int cpu)
write_pte(&first[first_table_offset(DOMHEAP_VIRT_START+i*FIRST_SIZE)], pte);
}
- flush_xen_dcache_va_range(first, PAGE_SIZE);
- flush_xen_dcache_va_range(domheap, DOMHEAP_SECOND_PAGES*PAGE_SIZE);
+ clean_xen_dcache_va_range(first, PAGE_SIZE);
+ clean_xen_dcache_va_range(domheap, DOMHEAP_SECOND_PAGES*PAGE_SIZE);
per_cpu(xen_pgtable, cpu) = first;
per_cpu(xen_dommap, cpu) = domheap;
/* Set init_ttbr for this CPU coming up */
init_ttbr = __pa(first);
- flush_xen_dcache(init_ttbr);
+ clean_xen_dcache(init_ttbr);
return 0;
}
diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
index c53c765..a829957 100644
--- a/xen/arch/arm/smpboot.c
+++ b/xen/arch/arm/smpboot.c
@@ -378,7 +378,7 @@ int __cpu_up(unsigned int cpu)
/* Open the gate for this CPU */
smp_up_cpu = cpu_logical_map(cpu);
- flush_xen_dcache(smp_up_cpu);
+ clean_xen_dcache(smp_up_cpu);
rc = arch_cpu_up(cpu);
diff --git a/xen/include/asm-arm/arm32/page.h b/xen/include/asm-arm/arm32/page.h
index cb6add4..b8221ca 100644
--- a/xen/include/asm-arm/arm32/page.h
+++ b/xen/include/asm-arm/arm32/page.h
@@ -20,7 +20,7 @@ static inline void write_pte(lpae_t *p, lpae_t pte)
}
/* Inline ASM to flush dcache on register R (may be an inline asm operand) */
-#define __flush_xen_dcache_one(R) STORE_CP32(R, DCCMVAC)
+#define __clean_xen_dcache_one(R) STORE_CP32(R, DCCMVAC)
/* Inline ASM to clean and invalidate dcache on register R (may be an
* inline asm operand) */
diff --git a/xen/include/asm-arm/arm64/page.h b/xen/include/asm-arm/arm64/page.h
index baf8903..3352821 100644
--- a/xen/include/asm-arm/arm64/page.h
+++ b/xen/include/asm-arm/arm64/page.h
@@ -15,7 +15,7 @@ static inline void write_pte(lpae_t *p, lpae_t pte)
}
/* Inline ASM to flush dcache on register R (may be an inline asm operand) */
-#define __flush_xen_dcache_one(R) "dc cvac, %" #R ";"
+#define __clean_xen_dcache_one(R) "dc cvac, %" #R ";"
/* Inline ASM to clean and invalidate dcache on register R (may be an
* inline asm operand) */
diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
index 5a371da..e00be9e 100644
--- a/xen/include/asm-arm/page.h
+++ b/xen/include/asm-arm/page.h
@@ -229,26 +229,26 @@ extern size_t cacheline_bytes;
/* Function for flushing medium-sized areas.
* if 'range' is large enough we might want to use model-specific
* full-cache flushes. */
-static inline void flush_xen_dcache_va_range(void *p, unsigned long size)
+static inline void clean_xen_dcache_va_range(void *p, unsigned long size)
{
void *end;
dsb(); /* So the CPU issues all writes to the range */
for ( end = p + size; p < end; p += cacheline_bytes )
- asm volatile (__flush_xen_dcache_one(0) : : "r" (p));
+ asm volatile (__clean_xen_dcache_one(0) : : "r" (p));
dsb(); /* So we know the flushes happen before continuing */
}
/* Macro for flushing a single small item. The predicate is always
* compile-time constant so this will compile down to 3 instructions in
* the common case. */
-#define flush_xen_dcache(x) do { \
+#define clean_xen_dcache(x) do { \
typeof(x) *_p = &(x); \
if ( sizeof(x) > MIN_CACHELINE_BYTES || sizeof(x) > alignof(x) ) \
- flush_xen_dcache_va_range(_p, sizeof(x)); \
+ clean_xen_dcache_va_range(_p, sizeof(x)); \
else \
asm volatile ( \
"dsb sy;" /* Finish all earlier writes */ \
- __flush_xen_dcache_one(0) \
+ __clean_xen_dcache_one(0) \
"dsb sy;" /* Finish flush before continuing */ \
: : "r" (_p), "m" (*_p)); \
} while (0)
--
1.7.10.4
* Re: [PATCH v5 3/5] xen/arm: clean and invalidate all guest caches by VMID after domain build.
2014-02-11 14:11 ` [PATCH v5 3/5] xen/arm: clean and invalidate all guest caches by VMID after domain build Ian Campbell
@ 2014-02-12 11:52 ` Jan Beulich
2014-02-12 13:20 ` Ian Campbell
0 siblings, 1 reply; 8+ messages in thread
From: Jan Beulich @ 2014-02-12 11:52 UTC (permalink / raw)
To: Ian Campbell
Cc: keir, stefano.stabellini, tim, julien.grall, ian.jackson,
xen-devel
>>> On 11.02.14 at 15:11, Ian Campbell <ian.campbell@citrix.com> wrote:
> Guests are initially started with caches disabled and so we need to make sure
> they see consistent data in RAM (requiring a cache clean) but also that they
> do not have old stale data suddenly appear in the caches when they enable
> their caches (requiring the invalidate).
>
> This can be split into two halves. First we must flush each page as it is
> allocated to the guest. It is not sufficient to do the flush at scrub time
> since this will miss pages which are ballooned out by the guest (where the
> guest must scrub if it cares about not leaking the pagecontent). We need to
> clean as well as invalidate to make sure that any scrubbing which has
> occured
> gets committed to real RAM. To achieve this add a new cacheflush_page
> function,
> which is a stub on x86.
>
> Secondly we need to flush anything which the domain builder touches, which
> we
> do via a new domctl.
>
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
> Acked-by: Julien Grall <julien.grall@linaro.org>
> Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
* Re: [PATCH v5 3/5] xen/arm: clean and invalidate all guest caches by VMID after domain build.
2014-02-12 11:52 ` Jan Beulich
@ 2014-02-12 13:20 ` Ian Campbell
0 siblings, 0 replies; 8+ messages in thread
From: Ian Campbell @ 2014-02-12 13:20 UTC (permalink / raw)
To: Jan Beulich
Cc: keir, stefano.stabellini, tim, julien.grall, ian.jackson,
xen-devel
On Wed, 2014-02-12 at 11:52 +0000, Jan Beulich wrote:
> >>> On 11.02.14 at 15:11, Ian Campbell <ian.campbell@citrix.com> wrote:
> > Guests are initially started with caches disabled and so we need to make sure
> > they see consistent data in RAM (requiring a cache clean) but also that they
> > do not have old stale data suddenly appear in the caches when they enable
> > their caches (requiring the invalidate).
> >
> > This can be split into two halves. First we must flush each page as it is
> > allocated to the guest. It is not sufficient to do the flush at scrub time
> > since this will miss pages which are ballooned out by the guest (where the
> > guest must scrub if it cares about not leaking the pagecontent). We need to
> > clean as well as invalidate to make sure that any scrubbing which has
> > occured
> > gets committed to real RAM. To achieve this add a new cacheflush_page
> > function,
> > which is a stub on x86.
> >
> > Secondly we need to flush anything which the domain builder touches, which
> > we
> > do via a new domctl.
> >
> > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> > Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
> > Acked-by: Julien Grall <julien.grall@linaro.org>
> > Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>
> Acked-by: Jan Beulich <jbeulich@suse.com>
Thanks. With that I have applied this series.
Ian.
end of thread
Thread overview: 8+ messages
2014-02-11 14:09 [PATCH 0/5 v5] xen/arm: fix guest builder cache coherency (again, again) Ian Campbell
2014-02-11 14:11 ` [PATCH v5 1/5] xen: arm: rename create_p2m_entries to apply_p2m_changes Ian Campbell
2014-02-11 14:11 ` [PATCH v5 2/5] xen: arm: rename p2m next_gfn_to_relinquish to lowest_mapped_gfn Ian Campbell
2014-02-11 14:11 ` [PATCH v5 3/5] xen/arm: clean and invalidate all guest caches by VMID after domain build Ian Campbell
2014-02-12 11:52 ` Jan Beulich
2014-02-12 13:20 ` Ian Campbell
2014-02-11 14:11 ` [PATCH v5 4/5] Revert "xen: arm: force guest memory accesses to cacheable when MMU is disabled" Ian Campbell
2014-02-11 14:11 ` [PATCH v5 5/5] xen: arm: correct terminology for cache flush macros Ian Campbell