* [PATCH v2 01/10] arm64: KVM: force cache clean on page fault when caches are off
2014-01-22 14:56 [PATCH v2 00/10] arm/arm64: KVM: host cache maintainance when guest caches are off Marc Zyngier
@ 2014-01-22 14:56 ` Marc Zyngier
2014-01-29 20:06 ` Christoffer Dall
2014-01-22 14:56 ` [PATCH v2 02/10] arm64: KVM: allows discrimination of AArch32 sysreg access Marc Zyngier
` (9 subsequent siblings)
10 siblings, 1 reply; 22+ messages in thread
From: Marc Zyngier @ 2014-01-22 14:56 UTC (permalink / raw)
To: linux-arm-kernel
In order for the guest with caches off to observe data written to
a given page, we need to make sure that page is
committed to memory, and not just hanging in the cache (as
guest accesses are completely bypassing the cache until it
decides to enable it).
For this purpose, hook into the coherent_icache_guest_page
function and flush the region if the guest SCTLR_EL1
register doesn't show the MMU and caches as being enabled.
The function also gets renamed to coherent_cache_guest_page.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
---
arch/arm/include/asm/kvm_mmu.h | 4 ++--
arch/arm/kvm/mmu.c | 4 ++--
arch/arm64/include/asm/kvm_mmu.h | 11 +++++++----
3 files changed, 11 insertions(+), 8 deletions(-)
diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 77de4a4..f997b9e 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -116,8 +116,8 @@ static inline void kvm_set_s2pmd_writable(pmd_t *pmd)
struct kvm;
-static inline void coherent_icache_guest_page(struct kvm *kvm, hva_t hva,
- unsigned long size)
+static inline void coherent_cache_guest_page(struct kvm_vcpu *vcpu, hva_t hva,
+ unsigned long size)
{
/*
* If we are going to insert an instruction page and the icache is
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 5809069..415fd63 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -713,7 +713,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
kvm_set_s2pmd_writable(&new_pmd);
kvm_set_pfn_dirty(pfn);
}
- coherent_icache_guest_page(kvm, hva & PMD_MASK, PMD_SIZE);
+ coherent_cache_guest_page(vcpu, hva & PMD_MASK, PMD_SIZE);
ret = stage2_set_pmd_huge(kvm, memcache, fault_ipa, &new_pmd);
} else {
pte_t new_pte = pfn_pte(pfn, PAGE_S2);
@@ -721,7 +721,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
kvm_set_s2pte_writable(&new_pte);
kvm_set_pfn_dirty(pfn);
}
- coherent_icache_guest_page(kvm, hva, PAGE_SIZE);
+ coherent_cache_guest_page(vcpu, hva, PAGE_SIZE);
ret = stage2_set_pte(kvm, memcache, fault_ipa, &new_pte, false);
}
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 680f74e..2232dd0 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -106,7 +106,6 @@ static inline bool kvm_is_write_fault(unsigned long esr)
return true;
}
-static inline void kvm_clean_dcache_area(void *addr, size_t size) {}
static inline void kvm_clean_pgd(pgd_t *pgd) {}
static inline void kvm_clean_pmd_entry(pmd_t *pmd) {}
static inline void kvm_clean_pte(pte_t *pte) {}
@@ -124,9 +123,14 @@ static inline void kvm_set_s2pmd_writable(pmd_t *pmd)
struct kvm;
-static inline void coherent_icache_guest_page(struct kvm *kvm, hva_t hva,
- unsigned long size)
+#define kvm_flush_dcache_to_poc(a,l) __flush_dcache_area((a), (l))
+
+static inline void coherent_cache_guest_page(struct kvm_vcpu *vcpu, hva_t hva,
+ unsigned long size)
{
+ if ((vcpu_sys_reg(vcpu, SCTLR_EL1) & 0b101) != 0b101)
+ kvm_flush_dcache_to_poc((void *)hva, size);
+
if (!icache_is_aliasing()) { /* PIPT */
flush_icache_range(hva, hva + size);
} else if (!icache_is_aivivt()) { /* non ASID-tagged VIVT */
@@ -135,7 +139,6 @@ static inline void coherent_icache_guest_page(struct kvm *kvm, hva_t hva,
}
}
-#define kvm_flush_dcache_to_poc(a,l) __flush_dcache_area((a), (l))
#endif /* __ASSEMBLY__ */
#endif /* __ARM64_KVM_MMU_H__ */
--
1.8.3.4
^ permalink raw reply related [flat|nested] 22+ messages in thread
* [PATCH v2 01/10] arm64: KVM: force cache clean on page fault when caches are off
2014-01-22 14:56 ` [PATCH v2 01/10] arm64: KVM: force cache clean on page fault when " Marc Zyngier
@ 2014-01-29 20:06 ` Christoffer Dall
0 siblings, 0 replies; 22+ messages in thread
From: Christoffer Dall @ 2014-01-29 20:06 UTC (permalink / raw)
To: linux-arm-kernel
On Wed, Jan 22, 2014 at 02:56:33PM +0000, Marc Zyngier wrote:
> In order for the guest with caches off to observe data written
> contained in a given page, we need to make sure that page is
> committed to memory, and not just hanging in the cache (as
> guest accesses are completely bypassing the cache until it
> decides to enable it).
>
> For this purpose, hook into the coherent_icache_guest_page
> function and flush the region if the guest SCTLR_EL1
> register doesn't show the MMU and caches as being enabled.
> The function also get renamed to coherent_cache_guest_page.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
> ---
> arch/arm/include/asm/kvm_mmu.h | 4 ++--
> arch/arm/kvm/mmu.c | 4 ++--
> arch/arm64/include/asm/kvm_mmu.h | 11 +++++++----
> 3 files changed, 11 insertions(+), 8 deletions(-)
>
> diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
> index 77de4a4..f997b9e 100644
> --- a/arch/arm/include/asm/kvm_mmu.h
> +++ b/arch/arm/include/asm/kvm_mmu.h
> @@ -116,8 +116,8 @@ static inline void kvm_set_s2pmd_writable(pmd_t *pmd)
>
> struct kvm;
>
> -static inline void coherent_icache_guest_page(struct kvm *kvm, hva_t hva,
> - unsigned long size)
> +static inline void coherent_cache_guest_page(struct kvm_vcpu *vcpu, hva_t hva,
> + unsigned long size)
> {
> /*
> * If we are going to insert an instruction page and the icache is
> diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
> index 5809069..415fd63 100644
> --- a/arch/arm/kvm/mmu.c
> +++ b/arch/arm/kvm/mmu.c
> @@ -713,7 +713,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> kvm_set_s2pmd_writable(&new_pmd);
> kvm_set_pfn_dirty(pfn);
> }
> - coherent_icache_guest_page(kvm, hva & PMD_MASK, PMD_SIZE);
> + coherent_cache_guest_page(vcpu, hva & PMD_MASK, PMD_SIZE);
> ret = stage2_set_pmd_huge(kvm, memcache, fault_ipa, &new_pmd);
> } else {
> pte_t new_pte = pfn_pte(pfn, PAGE_S2);
> @@ -721,7 +721,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> kvm_set_s2pte_writable(&new_pte);
> kvm_set_pfn_dirty(pfn);
> }
> - coherent_icache_guest_page(kvm, hva, PAGE_SIZE);
> + coherent_cache_guest_page(vcpu, hva, PAGE_SIZE);
> ret = stage2_set_pte(kvm, memcache, fault_ipa, &new_pte, false);
> }
>
> diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> index 680f74e..2232dd0 100644
> --- a/arch/arm64/include/asm/kvm_mmu.h
> +++ b/arch/arm64/include/asm/kvm_mmu.h
> @@ -106,7 +106,6 @@ static inline bool kvm_is_write_fault(unsigned long esr)
> return true;
> }
>
> -static inline void kvm_clean_dcache_area(void *addr, size_t size) {}
> static inline void kvm_clean_pgd(pgd_t *pgd) {}
> static inline void kvm_clean_pmd_entry(pmd_t *pmd) {}
> static inline void kvm_clean_pte(pte_t *pte) {}
> @@ -124,9 +123,14 @@ static inline void kvm_set_s2pmd_writable(pmd_t *pmd)
>
> struct kvm;
>
> -static inline void coherent_icache_guest_page(struct kvm *kvm, hva_t hva,
> - unsigned long size)
> +#define kvm_flush_dcache_to_poc(a,l) __flush_dcache_area((a), (l))
> +
> +static inline void coherent_cache_guest_page(struct kvm_vcpu *vcpu, hva_t hva,
> + unsigned long size)
> {
> + if ((vcpu_sys_reg(vcpu, SCTLR_EL1) & 0b101) != 0b101)
> + kvm_flush_dcache_to_poc((void *)hva, size);
> +
This deserves a comment or a static inline...
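For illustration only, something along these lines -- the helper name is made up here, and it assumes the 0b101 mask is meant to cover SCTLR_EL1.M (bit 0) and SCTLR_EL1.C (bit 2):

/* Guest accesses bypass the D-cache until both the MMU and D-cache are on. */
static inline bool vcpu_has_cache_enabled(struct kvm_vcpu *vcpu)
{
	return (vcpu_sys_reg(vcpu, SCTLR_EL1) & 0b101) == 0b101;
}

which would turn the check above into:

	if (!vcpu_has_cache_enabled(vcpu))
		kvm_flush_dcache_to_poc((void *)hva, size);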
> if (!icache_is_aliasing()) { /* PIPT */
> flush_icache_range(hva, hva + size);
> } else if (!icache_is_aivivt()) { /* non ASID-tagged VIVT */
> @@ -135,7 +139,6 @@ static inline void coherent_icache_guest_page(struct kvm *kvm, hva_t hva,
> }
> }
>
> -#define kvm_flush_dcache_to_poc(a,l) __flush_dcache_area((a), (l))
>
> #endif /* __ASSEMBLY__ */
> #endif /* __ARM64_KVM_MMU_H__ */
> --
> 1.8.3.4
>
Otherwise:
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
^ permalink raw reply [flat|nested] 22+ messages in thread
* [PATCH v2 02/10] arm64: KVM: allows discrimination of AArch32 sysreg access
2014-01-22 14:56 [PATCH v2 00/10] arm/arm64: KVM: host cache maintainance when guest caches are off Marc Zyngier
2014-01-22 14:56 ` [PATCH v2 01/10] arm64: KVM: force cache clean on page fault when " Marc Zyngier
@ 2014-01-22 14:56 ` Marc Zyngier
2014-01-29 20:06 ` Christoffer Dall
2014-01-22 14:56 ` [PATCH v2 03/10] arm64: KVM: trap VM system registers until MMU and caches are ON Marc Zyngier
` (8 subsequent siblings)
10 siblings, 1 reply; 22+ messages in thread
From: Marc Zyngier @ 2014-01-22 14:56 UTC (permalink / raw)
To: linux-arm-kernel
The current handling of AArch32 trapping is slightly less than
perfect, as it is not possible (from a handler point of view)
to distinguish it from an AArch64 access, nor to tell a 32bit
from a 64bit access either.
Fix this by introducing two additional flags:
- is_aarch32: true if the access was made in AArch32 mode
- is_32bit: true if is_aarch32 == true and a MCR/MRC instruction
was used to perform the access (as opposed to MCRR/MRRC).
This allows a handler to cover all the possible conditions in which
a system register gets trapped.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm64/kvm/sys_regs.c | 5 +++++
arch/arm64/kvm/sys_regs.h | 2 ++
2 files changed, 7 insertions(+)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 02e9d09..f063750 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -437,6 +437,8 @@ int kvm_handle_cp15_64(struct kvm_vcpu *vcpu, struct kvm_run *run)
u32 hsr = kvm_vcpu_get_hsr(vcpu);
int Rt2 = (hsr >> 10) & 0xf;
+ params.is_aarch32 = true;
+ params.is_32bit = false;
params.CRm = (hsr >> 1) & 0xf;
params.Rt = (hsr >> 5) & 0xf;
params.is_write = ((hsr & 1) == 0);
@@ -480,6 +482,8 @@ int kvm_handle_cp15_32(struct kvm_vcpu *vcpu, struct kvm_run *run)
struct sys_reg_params params;
u32 hsr = kvm_vcpu_get_hsr(vcpu);
+ params.is_aarch32 = true;
+ params.is_32bit = true;
params.CRm = (hsr >> 1) & 0xf;
params.Rt = (hsr >> 5) & 0xf;
params.is_write = ((hsr & 1) == 0);
@@ -549,6 +553,7 @@ int kvm_handle_sys_reg(struct kvm_vcpu *vcpu, struct kvm_run *run)
struct sys_reg_params params;
unsigned long esr = kvm_vcpu_get_hsr(vcpu);
+ params.is_aarch32 = false;
params.Op0 = (esr >> 20) & 3;
params.Op1 = (esr >> 14) & 0x7;
params.CRn = (esr >> 10) & 0xf;
diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h
index d50d372..d411e25 100644
--- a/arch/arm64/kvm/sys_regs.h
+++ b/arch/arm64/kvm/sys_regs.h
@@ -30,6 +30,8 @@ struct sys_reg_params {
u8 Op2;
u8 Rt;
bool is_write;
+ bool is_aarch32;
+ bool is_32bit; /* Only valid if is_aarch32 is true */
};
struct sys_reg_desc {
--
1.8.3.4
^ permalink raw reply related [flat|nested] 22+ messages in thread
* [PATCH v2 02/10] arm64: KVM: allows discrimination of AArch32 sysreg access
2014-01-22 14:56 ` [PATCH v2 02/10] arm64: KVM: allows discrimination of AArch32 sysreg access Marc Zyngier
@ 2014-01-29 20:06 ` Christoffer Dall
0 siblings, 0 replies; 22+ messages in thread
From: Christoffer Dall @ 2014-01-29 20:06 UTC (permalink / raw)
To: linux-arm-kernel
On Wed, Jan 22, 2014 at 02:56:34PM +0000, Marc Zyngier wrote:
> The current handling of AArch32 trapping is slightly less than
> perfect, as it is not possible (from a handler point of view)
> to distinguish it from an AArch64 access, nor to tell a 32bit
> from a 64bit access either.
>
> Fix this by introducing two additional flags:
> - is_aarch32: true if the access was made in AArch32 mode
> - is_32bit: true if is_aarch32 == true and a MCR/MRC instruction
> was used to perform the access (as opposed to MCRR/MRRC).
>
> This allows a handler to cover all the possible conditions in which
> a system register gets trapped.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
> arch/arm64/kvm/sys_regs.c | 5 +++++
> arch/arm64/kvm/sys_regs.h | 2 ++
> 2 files changed, 7 insertions(+)
>
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 02e9d09..f063750 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -437,6 +437,8 @@ int kvm_handle_cp15_64(struct kvm_vcpu *vcpu, struct kvm_run *run)
> u32 hsr = kvm_vcpu_get_hsr(vcpu);
> int Rt2 = (hsr >> 10) & 0xf;
>
> + params.is_aarch32 = true;
> + params.is_32bit = false;
> params.CRm = (hsr >> 1) & 0xf;
> params.Rt = (hsr >> 5) & 0xf;
> params.is_write = ((hsr & 1) == 0);
> @@ -480,6 +482,8 @@ int kvm_handle_cp15_32(struct kvm_vcpu *vcpu, struct kvm_run *run)
> struct sys_reg_params params;
> u32 hsr = kvm_vcpu_get_hsr(vcpu);
>
> + params.is_aarch32 = true;
> + params.is_32bit = true;
> params.CRm = (hsr >> 1) & 0xf;
> params.Rt = (hsr >> 5) & 0xf;
> params.is_write = ((hsr & 1) == 0);
> @@ -549,6 +553,7 @@ int kvm_handle_sys_reg(struct kvm_vcpu *vcpu, struct kvm_run *run)
> struct sys_reg_params params;
> unsigned long esr = kvm_vcpu_get_hsr(vcpu);
>
> + params.is_aarch32 = false;
I'm wondering if we should set is_32bit = false, just for clarity...
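i.e. just adding, purely as a sketch:

	params.is_aarch32 = false;
	params.is_32bit = false;

so a handler never has to reason about a stale is_32bit value.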
> params.Op0 = (esr >> 20) & 3;
> params.Op1 = (esr >> 14) & 0x7;
> params.CRn = (esr >> 10) & 0xf;
> diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h
> index d50d372..d411e25 100644
> --- a/arch/arm64/kvm/sys_regs.h
> +++ b/arch/arm64/kvm/sys_regs.h
> @@ -30,6 +30,8 @@ struct sys_reg_params {
> u8 Op2;
> u8 Rt;
> bool is_write;
> + bool is_aarch32;
> + bool is_32bit; /* Only valid if is_aarch32 is true */
> };
>
> struct sys_reg_desc {
> --
> 1.8.3.4
>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
^ permalink raw reply [flat|nested] 22+ messages in thread
* [PATCH v2 03/10] arm64: KVM: trap VM system registers until MMU and caches are ON
2014-01-22 14:56 [PATCH v2 00/10] arm/arm64: KVM: host cache maintainance when guest caches are off Marc Zyngier
2014-01-22 14:56 ` [PATCH v2 01/10] arm64: KVM: force cache clean on page fault when " Marc Zyngier
2014-01-22 14:56 ` [PATCH v2 02/10] arm64: KVM: allows discrimination of AArch32 sysreg access Marc Zyngier
@ 2014-01-22 14:56 ` Marc Zyngier
2014-01-29 20:07 ` Christoffer Dall
2014-01-22 14:56 ` [PATCH v2 04/10] arm64: KVM: flush VM pages before letting the guest enable caches Marc Zyngier
` (7 subsequent siblings)
10 siblings, 1 reply; 22+ messages in thread
From: Marc Zyngier @ 2014-01-22 14:56 UTC (permalink / raw)
To: linux-arm-kernel
In order to be able to detect the point where the guest enables
its MMU and caches, trap all the VM related system registers.
Once we see the guest enabling both the MMU and the caches, we
can go back to a saner mode of operation, which is to leave these
registers in complete control of the guest.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
---
arch/arm64/include/asm/kvm_arm.h | 3 +-
arch/arm64/include/asm/kvm_asm.h | 3 +-
arch/arm64/kvm/sys_regs.c | 99 +++++++++++++++++++++++++++++++++++-----
3 files changed, 91 insertions(+), 14 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index c98ef47..fd0a651 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -62,6 +62,7 @@
* RW: 64bit by default, can be overriden for 32bit VMs
* TAC: Trap ACTLR
* TSC: Trap SMC
+ * TVM: Trap VM ops (until M+C set in SCTLR_EL1)
* TSW: Trap cache operations by set/way
* TWE: Trap WFE
* TWI: Trap WFI
@@ -74,7 +75,7 @@
* SWIO: Turn set/way invalidates into set/way clean+invalidate
*/
#define HCR_GUEST_FLAGS (HCR_TSC | HCR_TSW | HCR_TWE | HCR_TWI | HCR_VM | \
- HCR_BSU_IS | HCR_FB | HCR_TAC | \
+ HCR_TVM | HCR_BSU_IS | HCR_FB | HCR_TAC | \
HCR_AMO | HCR_IMO | HCR_FMO | \
HCR_SWIO | HCR_TIDCP | HCR_RW)
#define HCR_VIRT_EXCP_MASK (HCR_VA | HCR_VI | HCR_VF)
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 3d796b4..89d7796 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -81,7 +81,8 @@
#define c13_TID_URW (TPIDR_EL0 * 2) /* Thread ID, User R/W */
#define c13_TID_URO (TPIDRRO_EL0 * 2)/* Thread ID, User R/O */
#define c13_TID_PRIV (TPIDR_EL1 * 2) /* Thread ID, Privileged */
-#define c10_AMAIR (AMAIR_EL1 * 2) /* Aux Memory Attr Indirection Reg */
+#define c10_AMAIR0 (AMAIR_EL1 * 2) /* Aux Memory Attr Indirection Reg */
+#define c10_AMAIR1 (c10_AMAIR0 + 1)/* Aux Memory Attr Indirection Reg */
#define c14_CNTKCTL (CNTKCTL_EL1 * 2) /* Timer Control Register (PL1) */
#define NR_CP15_REGS (NR_SYS_REGS * 2)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index f063750..1a4731b 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -121,6 +121,55 @@ done:
}
/*
+ * Generic accessor for VM registers. Only called as long as HCR_TVM
+ * is set.
+ */
+static bool access_vm_reg(struct kvm_vcpu *vcpu,
+ const struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ unsigned long val;
+
+ BUG_ON(!p->is_write);
+
+ val = *vcpu_reg(vcpu, p->Rt);
+ if (!p->is_aarch32) {
+ vcpu_sys_reg(vcpu, r->reg) = val;
+ } else {
+ vcpu_cp15(vcpu, r->reg) = val & 0xffffffffUL;
+ if (!p->is_32bit)
+ vcpu_cp15(vcpu, r->reg + 1) = val >> 32;
+ }
+ return true;
+}
+
+/*
+ * SCTLR_EL1 accessor. Only called as long as HCR_TVM is set. If the
+ * guest enables the MMU, we stop trapping the VM sys_regs and leave
+ * it in complete control of the caches.
+ */
+static bool access_sctlr(struct kvm_vcpu *vcpu,
+ const struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ unsigned long val;
+
+ BUG_ON(!p->is_write);
+
+ val = *vcpu_reg(vcpu, p->Rt);
+
+ if (!p->is_aarch32)
+ vcpu_sys_reg(vcpu, r->reg) = val;
+ else
+ vcpu_cp15(vcpu, r->reg) = val & 0xffffffffUL;
+
+ if ((val & (0b101)) == 0b101) /* MMU+Caches enabled? */
+ vcpu->arch.hcr_el2 &= ~HCR_TVM;
+
+ return true;
+}
+
+/*
* We could trap ID_DFR0 and tell the guest we don't support performance
* monitoring. Unfortunately the patch to make the kernel check ID_DFR0 was
* NAKed, so it will read the PMCR anyway.
@@ -185,32 +234,32 @@ static const struct sys_reg_desc sys_reg_descs[] = {
NULL, reset_mpidr, MPIDR_EL1 },
/* SCTLR_EL1 */
{ Op0(0b11), Op1(0b000), CRn(0b0001), CRm(0b0000), Op2(0b000),
- NULL, reset_val, SCTLR_EL1, 0x00C50078 },
+ access_sctlr, reset_val, SCTLR_EL1, 0x00C50078 },
/* CPACR_EL1 */
{ Op0(0b11), Op1(0b000), CRn(0b0001), CRm(0b0000), Op2(0b010),
NULL, reset_val, CPACR_EL1, 0 },
/* TTBR0_EL1 */
{ Op0(0b11), Op1(0b000), CRn(0b0010), CRm(0b0000), Op2(0b000),
- NULL, reset_unknown, TTBR0_EL1 },
+ access_vm_reg, reset_unknown, TTBR0_EL1 },
/* TTBR1_EL1 */
{ Op0(0b11), Op1(0b000), CRn(0b0010), CRm(0b0000), Op2(0b001),
- NULL, reset_unknown, TTBR1_EL1 },
+ access_vm_reg, reset_unknown, TTBR1_EL1 },
/* TCR_EL1 */
{ Op0(0b11), Op1(0b000), CRn(0b0010), CRm(0b0000), Op2(0b010),
- NULL, reset_val, TCR_EL1, 0 },
+ access_vm_reg, reset_val, TCR_EL1, 0 },
/* AFSR0_EL1 */
{ Op0(0b11), Op1(0b000), CRn(0b0101), CRm(0b0001), Op2(0b000),
- NULL, reset_unknown, AFSR0_EL1 },
+ access_vm_reg, reset_unknown, AFSR0_EL1 },
/* AFSR1_EL1 */
{ Op0(0b11), Op1(0b000), CRn(0b0101), CRm(0b0001), Op2(0b001),
- NULL, reset_unknown, AFSR1_EL1 },
+ access_vm_reg, reset_unknown, AFSR1_EL1 },
/* ESR_EL1 */
{ Op0(0b11), Op1(0b000), CRn(0b0101), CRm(0b0010), Op2(0b000),
- NULL, reset_unknown, ESR_EL1 },
+ access_vm_reg, reset_unknown, ESR_EL1 },
/* FAR_EL1 */
{ Op0(0b11), Op1(0b000), CRn(0b0110), CRm(0b0000), Op2(0b000),
- NULL, reset_unknown, FAR_EL1 },
+ access_vm_reg, reset_unknown, FAR_EL1 },
/* PAR_EL1 */
{ Op0(0b11), Op1(0b000), CRn(0b0111), CRm(0b0100), Op2(0b000),
NULL, reset_unknown, PAR_EL1 },
@@ -224,17 +273,17 @@ static const struct sys_reg_desc sys_reg_descs[] = {
/* MAIR_EL1 */
{ Op0(0b11), Op1(0b000), CRn(0b1010), CRm(0b0010), Op2(0b000),
- NULL, reset_unknown, MAIR_EL1 },
+ access_vm_reg, reset_unknown, MAIR_EL1 },
/* AMAIR_EL1 */
{ Op0(0b11), Op1(0b000), CRn(0b1010), CRm(0b0011), Op2(0b000),
- NULL, reset_amair_el1, AMAIR_EL1 },
+ access_vm_reg, reset_amair_el1, AMAIR_EL1 },
/* VBAR_EL1 */
{ Op0(0b11), Op1(0b000), CRn(0b1100), CRm(0b0000), Op2(0b000),
NULL, reset_val, VBAR_EL1, 0 },
/* CONTEXTIDR_EL1 */
{ Op0(0b11), Op1(0b000), CRn(0b1101), CRm(0b0000), Op2(0b001),
- NULL, reset_val, CONTEXTIDR_EL1, 0 },
+ access_vm_reg, reset_val, CONTEXTIDR_EL1, 0 },
/* TPIDR_EL1 */
{ Op0(0b11), Op1(0b000), CRn(0b1101), CRm(0b0000), Op2(0b100),
NULL, reset_unknown, TPIDR_EL1 },
@@ -305,14 +354,32 @@ static const struct sys_reg_desc sys_reg_descs[] = {
NULL, reset_val, FPEXC32_EL2, 0x70 },
};
-/* Trapped cp15 registers */
+/*
+ * Trapped cp15 registers. TTBR0/TTBR1 get a double encoding,
+ * depending on the way they are accessed (as a 32bit or a 64bit
+ * register).
+ */
static const struct sys_reg_desc cp15_regs[] = {
+ { Op1( 0), CRn( 0), CRm( 2), Op2( 0), access_vm_reg, NULL, c2_TTBR0 },
+ { Op1( 0), CRn( 1), CRm( 0), Op2( 0), access_sctlr, NULL, c1_SCTLR },
+ { Op1( 0), CRn( 2), CRm( 0), Op2( 0), access_vm_reg, NULL, c2_TTBR0 },
+ { Op1( 0), CRn( 2), CRm( 0), Op2( 1), access_vm_reg, NULL, c2_TTBR1 },
+ { Op1( 0), CRn( 2), CRm( 0), Op2( 2), access_vm_reg, NULL, c2_TTBCR },
+ { Op1( 0), CRn( 3), CRm( 0), Op2( 0), access_vm_reg, NULL, c3_DACR },
+ { Op1( 0), CRn( 5), CRm( 0), Op2( 0), access_vm_reg, NULL, c5_DFSR },
+ { Op1( 0), CRn( 5), CRm( 0), Op2( 1), access_vm_reg, NULL, c5_IFSR },
+ { Op1( 0), CRn( 5), CRm( 1), Op2( 0), access_vm_reg, NULL, c5_ADFSR },
+ { Op1( 0), CRn( 5), CRm( 1), Op2( 1), access_vm_reg, NULL, c5_AIFSR },
+ { Op1( 0), CRn( 6), CRm( 0), Op2( 0), access_vm_reg, NULL, c6_DFAR },
+ { Op1( 0), CRn( 6), CRm( 0), Op2( 2), access_vm_reg, NULL, c6_IFAR },
+
/*
* DC{C,I,CI}SW operations:
*/
{ Op1( 0), CRn( 7), CRm( 6), Op2( 2), access_dcsw },
{ Op1( 0), CRn( 7), CRm(10), Op2( 2), access_dcsw },
{ Op1( 0), CRn( 7), CRm(14), Op2( 2), access_dcsw },
+
{ Op1( 0), CRn( 9), CRm(12), Op2( 0), pm_fake },
{ Op1( 0), CRn( 9), CRm(12), Op2( 1), pm_fake },
{ Op1( 0), CRn( 9), CRm(12), Op2( 2), pm_fake },
@@ -326,6 +393,14 @@ static const struct sys_reg_desc cp15_regs[] = {
{ Op1( 0), CRn( 9), CRm(14), Op2( 0), pm_fake },
{ Op1( 0), CRn( 9), CRm(14), Op2( 1), pm_fake },
{ Op1( 0), CRn( 9), CRm(14), Op2( 2), pm_fake },
+
+ { Op1( 0), CRn(10), CRm( 2), Op2( 0), access_vm_reg, NULL, c10_PRRR },
+ { Op1( 0), CRn(10), CRm( 2), Op2( 1), access_vm_reg, NULL, c10_NMRR },
+ { Op1( 0), CRn(10), CRm( 3), Op2( 0), access_vm_reg, NULL, c10_AMAIR0 },
+ { Op1( 0), CRn(10), CRm( 3), Op2( 1), access_vm_reg, NULL, c10_AMAIR1 },
+ { Op1( 0), CRn(13), CRm( 0), Op2( 1), access_vm_reg, NULL, c13_CID },
+
+ { Op1( 1), CRn( 0), CRm( 2), Op2( 0), access_vm_reg, NULL, c2_TTBR1 },
};
/* Target specific emulation tables */
--
1.8.3.4
^ permalink raw reply related [flat|nested] 22+ messages in thread
* [PATCH v2 03/10] arm64: KVM: trap VM system registers until MMU and caches are ON
2014-01-22 14:56 ` [PATCH v2 03/10] arm64: KVM: trap VM system registers until MMU and caches are ON Marc Zyngier
@ 2014-01-29 20:07 ` Christoffer Dall
0 siblings, 0 replies; 22+ messages in thread
From: Christoffer Dall @ 2014-01-29 20:07 UTC (permalink / raw)
To: linux-arm-kernel
On Wed, Jan 22, 2014 at 02:56:35PM +0000, Marc Zyngier wrote:
> In order to be able to detect the point where the guest enables
> its MMU and caches, trap all the VM related system registers.
>
> Once we see the guest enabling both the MMU and the caches, we
> can go back to a saner mode of operation, which is to leave these
> registers in complete control of the guest.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
> ---
> arch/arm64/include/asm/kvm_arm.h | 3 +-
> arch/arm64/include/asm/kvm_asm.h | 3 +-
> arch/arm64/kvm/sys_regs.c | 99 +++++++++++++++++++++++++++++++++++-----
> 3 files changed, 91 insertions(+), 14 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
> index c98ef47..fd0a651 100644
> --- a/arch/arm64/include/asm/kvm_arm.h
> +++ b/arch/arm64/include/asm/kvm_arm.h
> @@ -62,6 +62,7 @@
> * RW: 64bit by default, can be overriden for 32bit VMs
> * TAC: Trap ACTLR
> * TSC: Trap SMC
> + * TVM: Trap VM ops (until M+C set in SCTLR_EL1)
> * TSW: Trap cache operations by set/way
> * TWE: Trap WFE
> * TWI: Trap WFI
> @@ -74,7 +75,7 @@
> * SWIO: Turn set/way invalidates into set/way clean+invalidate
> */
> #define HCR_GUEST_FLAGS (HCR_TSC | HCR_TSW | HCR_TWE | HCR_TWI | HCR_VM | \
> - HCR_BSU_IS | HCR_FB | HCR_TAC | \
> + HCR_TVM | HCR_BSU_IS | HCR_FB | HCR_TAC | \
> HCR_AMO | HCR_IMO | HCR_FMO | \
> HCR_SWIO | HCR_TIDCP | HCR_RW)
> #define HCR_VIRT_EXCP_MASK (HCR_VA | HCR_VI | HCR_VF)
> diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
> index 3d796b4..89d7796 100644
> --- a/arch/arm64/include/asm/kvm_asm.h
> +++ b/arch/arm64/include/asm/kvm_asm.h
> @@ -81,7 +81,8 @@
> #define c13_TID_URW (TPIDR_EL0 * 2) /* Thread ID, User R/W */
> #define c13_TID_URO (TPIDRRO_EL0 * 2)/* Thread ID, User R/O */
> #define c13_TID_PRIV (TPIDR_EL1 * 2) /* Thread ID, Privileged */
> -#define c10_AMAIR (AMAIR_EL1 * 2) /* Aux Memory Attr Indirection Reg */
> +#define c10_AMAIR0 (AMAIR_EL1 * 2) /* Aux Memory Attr Indirection Reg */
> +#define c10_AMAIR1 (c10_AMAIR0 + 1)/* Aux Memory Attr Indirection Reg */
> #define c14_CNTKCTL (CNTKCTL_EL1 * 2) /* Timer Control Register (PL1) */
> #define NR_CP15_REGS (NR_SYS_REGS * 2)
>
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index f063750..1a4731b 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -121,6 +121,55 @@ done:
> }
>
> /*
> + * Generic accessor for VM registers. Only called as long as HCR_TVM
> + * is set.
> + */
> +static bool access_vm_reg(struct kvm_vcpu *vcpu,
> + const struct sys_reg_params *p,
> + const struct sys_reg_desc *r)
> +{
> + unsigned long val;
> +
> + BUG_ON(!p->is_write);
> +
> + val = *vcpu_reg(vcpu, p->Rt);
> + if (!p->is_aarch32) {
> + vcpu_sys_reg(vcpu, r->reg) = val;
> + } else {
> + vcpu_cp15(vcpu, r->reg) = val & 0xffffffffUL;
> + if (!p->is_32bit)
> + vcpu_cp15(vcpu, r->reg + 1) = val >> 32;
> + }
> + return true;
> +}
> +
> +/*
> + * SCTLR_EL1 accessor. Only called as long as HCR_TVM is set. If the
> + * guest enables the MMU, we stop trapping the VM sys_regs and leave
> + * it in complete control of the caches.
> + */
> +static bool access_sctlr(struct kvm_vcpu *vcpu,
> + const struct sys_reg_params *p,
> + const struct sys_reg_desc *r)
> +{
> + unsigned long val;
> +
> + BUG_ON(!p->is_write);
> +
> + val = *vcpu_reg(vcpu, p->Rt);
> +
> + if (!p->is_aarch32)
> + vcpu_sys_reg(vcpu, r->reg) = val;
> + else
> + vcpu_cp15(vcpu, r->reg) = val & 0xffffffffUL;
call access_vm_reg()?
> +
> + if ((val & (0b101)) == 0b101) /* MMU+Caches enabled? */
> + vcpu->arch.hcr_el2 &= ~HCR_TVM;
static inline and reuse in patch 1?
static inline bool sctlr_caches_enabled(unsigned long val)
{
return (val & 0b101) == 0b101;
}
> +
> + return true;
> +}
> +
> +/*
> * We could trap ID_DFR0 and tell the guest we don't support performance
> * monitoring. Unfortunately the patch to make the kernel check ID_DFR0 was
> * NAKed, so it will read the PMCR anyway.
> @@ -185,32 +234,32 @@ static const struct sys_reg_desc sys_reg_descs[] = {
> NULL, reset_mpidr, MPIDR_EL1 },
> /* SCTLR_EL1 */
> { Op0(0b11), Op1(0b000), CRn(0b0001), CRm(0b0000), Op2(0b000),
> - NULL, reset_val, SCTLR_EL1, 0x00C50078 },
> + access_sctlr, reset_val, SCTLR_EL1, 0x00C50078 },
> /* CPACR_EL1 */
> { Op0(0b11), Op1(0b000), CRn(0b0001), CRm(0b0000), Op2(0b010),
> NULL, reset_val, CPACR_EL1, 0 },
> /* TTBR0_EL1 */
> { Op0(0b11), Op1(0b000), CRn(0b0010), CRm(0b0000), Op2(0b000),
> - NULL, reset_unknown, TTBR0_EL1 },
> + access_vm_reg, reset_unknown, TTBR0_EL1 },
> /* TTBR1_EL1 */
> { Op0(0b11), Op1(0b000), CRn(0b0010), CRm(0b0000), Op2(0b001),
> - NULL, reset_unknown, TTBR1_EL1 },
> + access_vm_reg, reset_unknown, TTBR1_EL1 },
> /* TCR_EL1 */
> { Op0(0b11), Op1(0b000), CRn(0b0010), CRm(0b0000), Op2(0b010),
> - NULL, reset_val, TCR_EL1, 0 },
> + access_vm_reg, reset_val, TCR_EL1, 0 },
>
> /* AFSR0_EL1 */
> { Op0(0b11), Op1(0b000), CRn(0b0101), CRm(0b0001), Op2(0b000),
> - NULL, reset_unknown, AFSR0_EL1 },
> + access_vm_reg, reset_unknown, AFSR0_EL1 },
> /* AFSR1_EL1 */
> { Op0(0b11), Op1(0b000), CRn(0b0101), CRm(0b0001), Op2(0b001),
> - NULL, reset_unknown, AFSR1_EL1 },
> + access_vm_reg, reset_unknown, AFSR1_EL1 },
> /* ESR_EL1 */
> { Op0(0b11), Op1(0b000), CRn(0b0101), CRm(0b0010), Op2(0b000),
> - NULL, reset_unknown, ESR_EL1 },
> + access_vm_reg, reset_unknown, ESR_EL1 },
> /* FAR_EL1 */
> { Op0(0b11), Op1(0b000), CRn(0b0110), CRm(0b0000), Op2(0b000),
> - NULL, reset_unknown, FAR_EL1 },
> + access_vm_reg, reset_unknown, FAR_EL1 },
> /* PAR_EL1 */
> { Op0(0b11), Op1(0b000), CRn(0b0111), CRm(0b0100), Op2(0b000),
> NULL, reset_unknown, PAR_EL1 },
> @@ -224,17 +273,17 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>
> /* MAIR_EL1 */
> { Op0(0b11), Op1(0b000), CRn(0b1010), CRm(0b0010), Op2(0b000),
> - NULL, reset_unknown, MAIR_EL1 },
> + access_vm_reg, reset_unknown, MAIR_EL1 },
> /* AMAIR_EL1 */
> { Op0(0b11), Op1(0b000), CRn(0b1010), CRm(0b0011), Op2(0b000),
> - NULL, reset_amair_el1, AMAIR_EL1 },
> + access_vm_reg, reset_amair_el1, AMAIR_EL1 },
>
> /* VBAR_EL1 */
> { Op0(0b11), Op1(0b000), CRn(0b1100), CRm(0b0000), Op2(0b000),
> NULL, reset_val, VBAR_EL1, 0 },
> /* CONTEXTIDR_EL1 */
> { Op0(0b11), Op1(0b000), CRn(0b1101), CRm(0b0000), Op2(0b001),
> - NULL, reset_val, CONTEXTIDR_EL1, 0 },
> + access_vm_reg, reset_val, CONTEXTIDR_EL1, 0 },
> /* TPIDR_EL1 */
> { Op0(0b11), Op1(0b000), CRn(0b1101), CRm(0b0000), Op2(0b100),
> NULL, reset_unknown, TPIDR_EL1 },
> @@ -305,14 +354,32 @@ static const struct sys_reg_desc sys_reg_descs[] = {
> NULL, reset_val, FPEXC32_EL2, 0x70 },
> };
>
> -/* Trapped cp15 registers */
> +/*
> + * Trapped cp15 registers. TTBR0/TTBR1 get a double encoding,
> + * depending on the way they are accessed (as a 32bit or a 64bit
> + * register).
> + */
> static const struct sys_reg_desc cp15_regs[] = {
> + { Op1( 0), CRn( 0), CRm( 2), Op2( 0), access_vm_reg, NULL, c2_TTBR0 },
> + { Op1( 0), CRn( 1), CRm( 0), Op2( 0), access_sctlr, NULL, c1_SCTLR },
> + { Op1( 0), CRn( 2), CRm( 0), Op2( 0), access_vm_reg, NULL, c2_TTBR0 },
> + { Op1( 0), CRn( 2), CRm( 0), Op2( 1), access_vm_reg, NULL, c2_TTBR1 },
> + { Op1( 0), CRn( 2), CRm( 0), Op2( 2), access_vm_reg, NULL, c2_TTBCR },
> + { Op1( 0), CRn( 3), CRm( 0), Op2( 0), access_vm_reg, NULL, c3_DACR },
> + { Op1( 0), CRn( 5), CRm( 0), Op2( 0), access_vm_reg, NULL, c5_DFSR },
> + { Op1( 0), CRn( 5), CRm( 0), Op2( 1), access_vm_reg, NULL, c5_IFSR },
> + { Op1( 0), CRn( 5), CRm( 1), Op2( 0), access_vm_reg, NULL, c5_ADFSR },
> + { Op1( 0), CRn( 5), CRm( 1), Op2( 1), access_vm_reg, NULL, c5_AIFSR },
> + { Op1( 0), CRn( 6), CRm( 0), Op2( 0), access_vm_reg, NULL, c6_DFAR },
> + { Op1( 0), CRn( 6), CRm( 0), Op2( 2), access_vm_reg, NULL, c6_IFAR },
> +
> /*
> * DC{C,I,CI}SW operations:
> */
> { Op1( 0), CRn( 7), CRm( 6), Op2( 2), access_dcsw },
> { Op1( 0), CRn( 7), CRm(10), Op2( 2), access_dcsw },
> { Op1( 0), CRn( 7), CRm(14), Op2( 2), access_dcsw },
> +
> { Op1( 0), CRn( 9), CRm(12), Op2( 0), pm_fake },
> { Op1( 0), CRn( 9), CRm(12), Op2( 1), pm_fake },
> { Op1( 0), CRn( 9), CRm(12), Op2( 2), pm_fake },
> @@ -326,6 +393,14 @@ static const struct sys_reg_desc cp15_regs[] = {
> { Op1( 0), CRn( 9), CRm(14), Op2( 0), pm_fake },
> { Op1( 0), CRn( 9), CRm(14), Op2( 1), pm_fake },
> { Op1( 0), CRn( 9), CRm(14), Op2( 2), pm_fake },
> +
> + { Op1( 0), CRn(10), CRm( 2), Op2( 0), access_vm_reg, NULL, c10_PRRR },
> + { Op1( 0), CRn(10), CRm( 2), Op2( 1), access_vm_reg, NULL, c10_NMRR },
> + { Op1( 0), CRn(10), CRm( 3), Op2( 0), access_vm_reg, NULL, c10_AMAIR0 },
> + { Op1( 0), CRn(10), CRm( 3), Op2( 1), access_vm_reg, NULL, c10_AMAIR1 },
> + { Op1( 0), CRn(13), CRm( 0), Op2( 1), access_vm_reg, NULL, c13_CID },
> +
> + { Op1( 1), CRn( 0), CRm( 2), Op2( 0), access_vm_reg, NULL, c2_TTBR1 },
> };
>
> /* Target specific emulation tables */
> --
> 1.8.3.4
>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
^ permalink raw reply [flat|nested] 22+ messages in thread
* [PATCH v2 04/10] arm64: KVM: flush VM pages before letting the guest enable caches
2014-01-22 14:56 [PATCH v2 00/10] arm/arm64: KVM: host cache maintainance when guest caches are off Marc Zyngier
` (2 preceding siblings ...)
2014-01-22 14:56 ` [PATCH v2 03/10] arm64: KVM: trap VM system registers until MMU and caches are ON Marc Zyngier
@ 2014-01-22 14:56 ` Marc Zyngier
2014-01-29 20:07 ` Christoffer Dall
2014-01-22 14:56 ` [PATCH v2 05/10] ARM: KVM: force cache clean on page fault when caches are off Marc Zyngier
` (6 subsequent siblings)
10 siblings, 1 reply; 22+ messages in thread
From: Marc Zyngier @ 2014-01-22 14:56 UTC (permalink / raw)
To: linux-arm-kernel
When the guest runs with caches disabled (like in an early boot
sequence, for example), all the writes are directly going to RAM,
bypassing the caches altogether.
Once the MMU and caches are enabled, whatever sits in the cache
becomes suddently visible, which isn't what the guest expects.
A way to avoid this potential disaster is to invalidate the cache
when the MMU is being turned on. For this, we hook into the SCTLR_EL1
trapping code, and scan the stage-2 page tables, invalidating the
pages/sections that have already been mapped in.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
---
arch/arm/include/asm/kvm_mmu.h | 2 +
arch/arm/kvm/mmu.c | 83 ++++++++++++++++++++++++++++++++++++++++
arch/arm64/include/asm/kvm_mmu.h | 1 +
arch/arm64/kvm/sys_regs.c | 5 ++-
4 files changed, 90 insertions(+), 1 deletion(-)
diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index f997b9e..cbab9ba 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -141,6 +141,8 @@ static inline void coherent_cache_guest_page(struct kvm_vcpu *vcpu, hva_t hva,
#define kvm_flush_dcache_to_poc(a,l) __cpuc_flush_dcache_area((a), (l))
+void stage2_flush_vm(struct kvm *kvm);
+
#endif /* !__ASSEMBLY__ */
#endif /* __ARM_KVM_MMU_H__ */
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 415fd63..52f8b7d 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -187,6 +187,89 @@ static void unmap_range(struct kvm *kvm, pgd_t *pgdp,
}
}
+void stage2_flush_ptes(struct kvm *kvm, pmd_t *pmd,
+ unsigned long addr, unsigned long end)
+{
+ pte_t *pte;
+
+ pte = pte_offset_kernel(pmd, addr);
+ do {
+ if (!pte_none(*pte)) {
+ hva_t hva = gfn_to_hva(kvm, addr >> PAGE_SHIFT);
+ kvm_flush_dcache_to_poc((void*)hva, PAGE_SIZE);
+ }
+ } while(pte++, addr += PAGE_SIZE, addr != end);
+}
+
+void stage2_flush_pmds(struct kvm *kvm, pud_t *pud,
+ unsigned long addr, unsigned long end)
+{
+ pmd_t *pmd;
+ unsigned long next;
+
+ pmd = pmd_offset(pud, addr);
+ do {
+ next = pmd_addr_end(addr, end);
+ if (!pmd_none(*pmd)) {
+ if (kvm_pmd_huge(*pmd)) {
+ hva_t hva = gfn_to_hva(kvm, addr >> PAGE_SHIFT);
+ kvm_flush_dcache_to_poc((void*)hva, PMD_SIZE);
+ } else {
+ stage2_flush_ptes(kvm, pmd, addr, next);
+ }
+ }
+ } while(pmd++, addr = next, addr != end);
+}
+
+void stage2_flush_puds(struct kvm *kvm, pgd_t *pgd,
+ unsigned long addr, unsigned long end)
+{
+ pud_t *pud;
+ unsigned long next;
+
+ pud = pud_offset(pgd, addr);
+ do {
+ next = pud_addr_end(addr, end);
+ if (!pud_none(*pud)) {
+ if (pud_huge(*pud)) {
+ hva_t hva = gfn_to_hva(kvm, addr >> PAGE_SHIFT);
+ kvm_flush_dcache_to_poc((void*)hva, PUD_SIZE);
+ } else {
+ stage2_flush_pmds(kvm, pud, addr, next);
+ }
+ }
+ } while(pud++, addr = next, addr != end);
+}
+
+static void stage2_flush_memslot(struct kvm *kvm,
+ struct kvm_memory_slot *memslot)
+{
+ unsigned long addr = memslot->base_gfn << PAGE_SHIFT;
+ unsigned long end = addr + PAGE_SIZE * memslot->npages;
+ unsigned long next;
+ pgd_t *pgd;
+
+ pgd = kvm->arch.pgd + pgd_index(addr);
+ do {
+ next = pgd_addr_end(addr, end);
+ stage2_flush_puds(kvm, pgd, addr, next);
+ } while(pgd++, addr = next, addr != end);
+}
+
+void stage2_flush_vm(struct kvm *kvm)
+{
+ struct kvm_memslots *slots;
+ struct kvm_memory_slot *memslot;
+
+ spin_lock(&kvm->mmu_lock);
+
+ slots = kvm_memslots(kvm);
+ kvm_for_each_memslot(memslot, slots)
+ stage2_flush_memslot(kvm, memslot);
+
+ spin_unlock(&kvm->mmu_lock);
+}
+
/**
* free_boot_hyp_pgd - free HYP boot page tables
*
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 2232dd0..b7b2ca3 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -139,6 +139,7 @@ static inline void coherent_cache_guest_page(struct kvm_vcpu *vcpu, hva_t hva,
}
}
+void stage2_flush_vm(struct kvm *kvm);
#endif /* __ASSEMBLY__ */
#endif /* __ARM64_KVM_MMU_H__ */
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 1a4731b..3d27bb8 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -27,6 +27,7 @@
#include <asm/kvm_host.h>
#include <asm/kvm_emulate.h>
#include <asm/kvm_coproc.h>
+#include <asm/kvm_mmu.h>
#include <asm/cacheflush.h>
#include <asm/cputype.h>
#include <trace/events/kvm.h>
@@ -163,8 +164,10 @@ static bool access_sctlr(struct kvm_vcpu *vcpu,
else
vcpu_cp15(vcpu, r->reg) = val & 0xffffffffUL;
- if ((val & (0b101)) == 0b101) /* MMU+Caches enabled? */
+ if ((val & (0b101)) == 0b101) { /* MMU+Caches enabled? */
vcpu->arch.hcr_el2 &= ~HCR_TVM;
+ stage2_flush_vm(vcpu->kvm);
+ }
return true;
}
--
1.8.3.4
^ permalink raw reply related [flat|nested] 22+ messages in thread
* [PATCH v2 04/10] arm64: KVM: flush VM pages before letting the guest enable caches
2014-01-22 14:56 ` [PATCH v2 04/10] arm64: KVM: flush VM pages before letting the guest enable caches Marc Zyngier
@ 2014-01-29 20:07 ` Christoffer Dall
0 siblings, 0 replies; 22+ messages in thread
From: Christoffer Dall @ 2014-01-29 20:07 UTC (permalink / raw)
To: linux-arm-kernel
On Wed, Jan 22, 2014 at 02:56:36PM +0000, Marc Zyngier wrote:
> When the guest runs with caches disabled (like in an early boot
> sequence, for example), all the writes are diectly going to RAM,
> bypassing the caches altogether.
>
> Once the MMU and caches are enabled, whatever sits in the cache
> becomes suddently visible, which isn't what the guest expects.
nit: s/suddently/suddenly/
>
> A way to avoid this potential disaster is to invalidate the cache
> when the MMU is being turned on. For this, we hook into the SCTLR_EL1
> trapping code, and scan the stage-2 page tables, invalidating the
> pages/sections that have already been mapped in.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
> ---
> arch/arm/include/asm/kvm_mmu.h | 2 +
> arch/arm/kvm/mmu.c | 83 ++++++++++++++++++++++++++++++++++++++++
> arch/arm64/include/asm/kvm_mmu.h | 1 +
> arch/arm64/kvm/sys_regs.c | 5 ++-
> 4 files changed, 90 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
> index f997b9e..cbab9ba 100644
> --- a/arch/arm/include/asm/kvm_mmu.h
> +++ b/arch/arm/include/asm/kvm_mmu.h
> @@ -141,6 +141,8 @@ static inline void coherent_cache_guest_page(struct kvm_vcpu *vcpu, hva_t hva,
>
> #define kvm_flush_dcache_to_poc(a,l) __cpuc_flush_dcache_area((a), (l))
>
> +void stage2_flush_vm(struct kvm *kvm);
> +
> #endif /* !__ASSEMBLY__ */
>
> #endif /* __ARM_KVM_MMU_H__ */
> diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
> index 415fd63..52f8b7d 100644
> --- a/arch/arm/kvm/mmu.c
> +++ b/arch/arm/kvm/mmu.c
> @@ -187,6 +187,89 @@ static void unmap_range(struct kvm *kvm, pgd_t *pgdp,
> }
> }
>
> +void stage2_flush_ptes(struct kvm *kvm, pmd_t *pmd,
> + unsigned long addr, unsigned long end)
> +{
> + pte_t *pte;
> +
> + pte = pte_offset_kernel(pmd, addr);
> + do {
> + if (!pte_none(*pte)) {
> + hva_t hva = gfn_to_hva(kvm, addr >> PAGE_SHIFT);
> + kvm_flush_dcache_to_poc((void*)hva, PAGE_SIZE);
> + }
> + } while(pte++, addr += PAGE_SIZE, addr != end);
> +}
> +
> +void stage2_flush_pmds(struct kvm *kvm, pud_t *pud,
> + unsigned long addr, unsigned long end)
> +{
> + pmd_t *pmd;
> + unsigned long next;
> +
> + pmd = pmd_offset(pud, addr);
> + do {
> + next = pmd_addr_end(addr, end);
> + if (!pmd_none(*pmd)) {
> + if (kvm_pmd_huge(*pmd)) {
> + hva_t hva = gfn_to_hva(kvm, addr >> PAGE_SHIFT);
> + kvm_flush_dcache_to_poc((void*)hva, PMD_SIZE);
> + } else {
> + stage2_flush_ptes(kvm, pmd, addr, next);
> + }
> + }
> + } while(pmd++, addr = next, addr != end);
> +}
> +
> +void stage2_flush_puds(struct kvm *kvm, pgd_t *pgd,
> + unsigned long addr, unsigned long end)
> +{
> + pud_t *pud;
> + unsigned long next;
> +
> + pud = pud_offset(pgd, addr);
> + do {
> + next = pud_addr_end(addr, end);
> + if (!pud_none(*pud)) {
> + if (pud_huge(*pud)) {
> + hva_t hva = gfn_to_hva(kvm, addr >> PAGE_SHIFT);
> + kvm_flush_dcache_to_poc((void*)hva, PUD_SIZE);
> + } else {
> + stage2_flush_pmds(kvm, pud, addr, next);
> + }
> + }
> + } while(pud++, addr = next, addr != end);
> +}
> +
> +static void stage2_flush_memslot(struct kvm *kvm,
> + struct kvm_memory_slot *memslot)
> +{
> + unsigned long addr = memslot->base_gfn << PAGE_SHIFT;
> + unsigned long end = addr + PAGE_SIZE * memslot->npages;
> + unsigned long next;
hmm, these are IPAs right, which can be larger than 32 bits with LPAE
on arm, so shouldn't this be phys_addr_t ?
If so, that propagates throughout the child subroutines, and you should
check if you can actually use pgd_addr_end.
> + pgd_t *pgd;
> +
> + pgd = kvm->arch.pgd + pgd_index(addr);
> + do {
> + next = pgd_addr_end(addr, end);
> + stage2_flush_puds(kvm, pgd, addr, next);
> + } while(pgd++, addr = next, addr != end);
I think the CodingStyle mandates a space after the while, which applies
to all the functions above.
> +}
> +
Could you add documentation for stage2_flush_vm?
/**
* stage2_flush_vm - Invalidate cache for pages mapped in stage 2
* @kvm: The struct kvm pointer
*
* Go through the stage 2 page tables and invalidate any cache lines
* backing memory already mapped to the VM.
*/
or some variation thereof...
> +void stage2_flush_vm(struct kvm *kvm)
> +{
> + struct kvm_memslots *slots;
> + struct kvm_memory_slot *memslot;
> +
> + spin_lock(&kvm->mmu_lock);
> +
> + slots = kvm_memslots(kvm);
> + kvm_for_each_memslot(memslot, slots)
> + stage2_flush_memslot(kvm, memslot);
> +
> + spin_unlock(&kvm->mmu_lock);
don't you need to also srcu_read_lock(kvm->srcu) here?
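Something along these lines perhaps (just a sketch, assuming the usual srcu_read_lock/srcu_read_unlock idiom around the memslot walk):

	int idx;

	idx = srcu_read_lock(&kvm->srcu);
	spin_lock(&kvm->mmu_lock);

	slots = kvm_memslots(kvm);
	kvm_for_each_memslot(memslot, slots)
		stage2_flush_memslot(kvm, memslot);

	spin_unlock(&kvm->mmu_lock);
	srcu_read_unlock(&kvm->srcu, idx);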
> +}
> +
> /**
> * free_boot_hyp_pgd - free HYP boot page tables
> *
> diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> index 2232dd0..b7b2ca3 100644
> --- a/arch/arm64/include/asm/kvm_mmu.h
> +++ b/arch/arm64/include/asm/kvm_mmu.h
> @@ -139,6 +139,7 @@ static inline void coherent_cache_guest_page(struct kvm_vcpu *vcpu, hva_t hva,
> }
> }
>
> +void stage2_flush_vm(struct kvm *kvm);
>
> #endif /* __ASSEMBLY__ */
> #endif /* __ARM64_KVM_MMU_H__ */
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 1a4731b..3d27bb8 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -27,6 +27,7 @@
> #include <asm/kvm_host.h>
> #include <asm/kvm_emulate.h>
> #include <asm/kvm_coproc.h>
> +#include <asm/kvm_mmu.h>
> #include <asm/cacheflush.h>
> #include <asm/cputype.h>
> #include <trace/events/kvm.h>
> @@ -163,8 +164,10 @@ static bool access_sctlr(struct kvm_vcpu *vcpu,
> else
> vcpu_cp15(vcpu, r->reg) = val & 0xffffffffUL;
>
> - if ((val & (0b101)) == 0b101) /* MMU+Caches enabled? */
> + if ((val & (0b101)) == 0b101) { /* MMU+Caches enabled? */
> vcpu->arch.hcr_el2 &= ~HCR_TVM;
> + stage2_flush_vm(vcpu->kvm);
> + }
>
> return true;
> }
> --
> 1.8.3.4
>
Thanks,
--
Christoffer
^ permalink raw reply [flat|nested] 22+ messages in thread
* [PATCH v2 05/10] ARM: KVM: force cache clean on page fault when caches are off
2014-01-22 14:56 [PATCH v2 00/10] arm/arm64: KVM: host cache maintainance when guest caches are off Marc Zyngier
` (3 preceding siblings ...)
2014-01-22 14:56 ` [PATCH v2 04/10] arm64: KVM: flush VM pages before letting the guest enable caches Marc Zyngier
@ 2014-01-22 14:56 ` Marc Zyngier
2014-01-29 20:07 ` Christoffer Dall
2014-01-22 14:56 ` [PATCH v2 06/10] ARM: KVM: fix handling of trapped 64bit coprocessor accesses Marc Zyngier
` (5 subsequent siblings)
10 siblings, 1 reply; 22+ messages in thread
From: Marc Zyngier @ 2014-01-22 14:56 UTC (permalink / raw)
To: linux-arm-kernel
In order for the guest with caches off to observe data written
contained in a given page, we need to make sure that page is
committed to memory, and not just hanging in the cache (as
guest accesses are completely bypassing the cache until it
decides to enable it).
For this purpose, hook into the coherent_cache_guest_page
function and flush the region if the guest SCTLR
register doesn't show the MMU and caches as being enabled.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm/include/asm/kvm_mmu.h | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index cbab9ba..fa023e2 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -116,9 +116,14 @@ static inline void kvm_set_s2pmd_writable(pmd_t *pmd)
struct kvm;
+#define kvm_flush_dcache_to_poc(a,l) __cpuc_flush_dcache_area((a), (l))
+
static inline void coherent_cache_guest_page(struct kvm_vcpu *vcpu, hva_t hva,
unsigned long size)
{
+ if ((vcpu->arch.cp15[c1_SCTLR] & 0b101) != 0b101)
+ kvm_flush_dcache_to_poc((void *)hva, size);
+
/*
* If we are going to insert an instruction page and the icache is
* either VIPT or PIPT, there is a potential problem where the host
@@ -139,8 +144,6 @@ static inline void coherent_cache_guest_page(struct kvm_vcpu *vcpu, hva_t hva,
}
}
-#define kvm_flush_dcache_to_poc(a,l) __cpuc_flush_dcache_area((a), (l))
-
void stage2_flush_vm(struct kvm *kvm);
#endif /* !__ASSEMBLY__ */
--
1.8.3.4
^ permalink raw reply related [flat|nested] 22+ messages in thread
* [PATCH v2 05/10] ARM: KVM: force cache clean on page fault when caches are off
2014-01-22 14:56 ` [PATCH v2 05/10] ARM: KVM: force cache clean on page fault when caches are off Marc Zyngier
@ 2014-01-29 20:07 ` Christoffer Dall
0 siblings, 0 replies; 22+ messages in thread
From: Christoffer Dall @ 2014-01-29 20:07 UTC (permalink / raw)
To: linux-arm-kernel
On Wed, Jan 22, 2014 at 02:56:37PM +0000, Marc Zyngier wrote:
> In order for the guest with caches off to observe data written
nit: s/the guest/a guest/
nit: s/caches off/caches disabled/
> contained in a given page, we need to make sure that page is
> committed to memory, and not just hanging in the cache (as
> guest accesses are completely bypassing the cache until it
nit: s/it/the guest/
> decides to enable it).
>
> For this purpose, hook into the coherent_cache_guest_page
> function and flush the region if the guest SCTLR
> register doesn't show the MMU and caches as being enabled.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
> arch/arm/include/asm/kvm_mmu.h | 7 +++++--
> 1 file changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
> index cbab9ba..fa023e2 100644
> --- a/arch/arm/include/asm/kvm_mmu.h
> +++ b/arch/arm/include/asm/kvm_mmu.h
> @@ -116,9 +116,14 @@ static inline void kvm_set_s2pmd_writable(pmd_t *pmd)
>
> struct kvm;
>
> +#define kvm_flush_dcache_to_poc(a,l) __cpuc_flush_dcache_area((a), (l))
> +
> static inline void coherent_cache_guest_page(struct kvm_vcpu *vcpu, hva_t hva,
> unsigned long size)
> {
> + if ((vcpu->arch.cp15[c1_SCTLR] & 0b101) != 0b101)
> + kvm_flush_dcache_to_poc((void *)hva, size);
> +
Ah, my favorite inline function again...
> /*
> * If we are going to insert an instruction page and the icache is
> * either VIPT or PIPT, there is a potential problem where the host
> @@ -139,8 +144,6 @@ static inline void coherent_cache_guest_page(struct kvm_vcpu *vcpu, hva_t hva,
> }
> }
>
> -#define kvm_flush_dcache_to_poc(a,l) __cpuc_flush_dcache_area((a), (l))
> -
> void stage2_flush_vm(struct kvm *kvm);
>
> #endif /* !__ASSEMBLY__ */
> --
> 1.8.3.4
>
Besides the nits:
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
^ permalink raw reply [flat|nested] 22+ messages in thread
* [PATCH v2 06/10] ARM: KVM: fix handling of trapped 64bit coprocessor accesses
2014-01-22 14:56 [PATCH v2 00/10] arm/arm64: KVM: host cache maintainance when guest caches are off Marc Zyngier
` (4 preceding siblings ...)
2014-01-22 14:56 ` [PATCH v2 05/10] ARM: KVM: force cache clean on page fault when caches are off Marc Zyngier
@ 2014-01-22 14:56 ` Marc Zyngier
2014-01-29 20:07 ` Christoffer Dall
2014-01-22 14:56 ` [PATCH v2 07/10] ARM: KVM: fix ordering of " Marc Zyngier
` (4 subsequent siblings)
10 siblings, 1 reply; 22+ messages in thread
From: Marc Zyngier @ 2014-01-22 14:56 UTC (permalink / raw)
To: linux-arm-kernel
Commit 240e99cbd00a (ARM: KVM: Fix 64-bit coprocessor handling)
changed the way we match the 64bit coprocessor access from
user space, but didn't update the trap handler for the same
set of registers.
The effect is that a trapped 64bit access is never matched, leading
to a fault being injected into the guest. This went unnoticed as we
didn;t really trap any 64bit register so far.
Placing the CRm field of the access into the CRn field of the matching
structure fixes the problem. Also update the debug feature to emit the
expected string in case of failing match.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm/kvm/coproc.c | 4 ++--
arch/arm/kvm/coproc.h | 4 ++--
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/arm/kvm/coproc.c b/arch/arm/kvm/coproc.c
index 78c0885..126c90d 100644
--- a/arch/arm/kvm/coproc.c
+++ b/arch/arm/kvm/coproc.c
@@ -443,7 +443,7 @@ int kvm_handle_cp15_64(struct kvm_vcpu *vcpu, struct kvm_run *run)
{
struct coproc_params params;
- params.CRm = (kvm_vcpu_get_hsr(vcpu) >> 1) & 0xf;
+ params.CRn = (kvm_vcpu_get_hsr(vcpu) >> 1) & 0xf;
params.Rt1 = (kvm_vcpu_get_hsr(vcpu) >> 5) & 0xf;
params.is_write = ((kvm_vcpu_get_hsr(vcpu) & 1) == 0);
params.is_64bit = true;
@@ -451,7 +451,7 @@ int kvm_handle_cp15_64(struct kvm_vcpu *vcpu, struct kvm_run *run)
params.Op1 = (kvm_vcpu_get_hsr(vcpu) >> 16) & 0xf;
params.Op2 = 0;
params.Rt2 = (kvm_vcpu_get_hsr(vcpu) >> 10) & 0xf;
- params.CRn = 0;
+ params.CRm = 0;
return emulate_cp15(vcpu, &params);
}
diff --git a/arch/arm/kvm/coproc.h b/arch/arm/kvm/coproc.h
index 0461d5c..c5ad7ff 100644
--- a/arch/arm/kvm/coproc.h
+++ b/arch/arm/kvm/coproc.h
@@ -58,8 +58,8 @@ static inline void print_cp_instr(const struct coproc_params *p)
{
/* Look, we even formatted it for you to paste into the table! */
if (p->is_64bit) {
- kvm_pr_unimpl(" { CRm(%2lu), Op1(%2lu), is64, func_%s },\n",
- p->CRm, p->Op1, p->is_write ? "write" : "read");
+ kvm_pr_unimpl(" { CRm64(%2lu), Op1(%2lu), is64, func_%s },\n",
+ p->CRn, p->Op1, p->is_write ? "write" : "read");
} else {
kvm_pr_unimpl(" { CRn(%2lu), CRm(%2lu), Op1(%2lu), Op2(%2lu), is32,"
" func_%s },\n",
--
1.8.3.4
^ permalink raw reply related [flat|nested] 22+ messages in thread
* [PATCH v2 06/10] ARM: KVM: fix handling of trapped 64bit coprocessor accesses
2014-01-22 14:56 ` [PATCH v2 06/10] ARM: KVM: fix handling of trapped 64bit coprocessor accesses Marc Zyngier
@ 2014-01-29 20:07 ` Christoffer Dall
0 siblings, 0 replies; 22+ messages in thread
From: Christoffer Dall @ 2014-01-29 20:07 UTC (permalink / raw)
To: linux-arm-kernel
On Wed, Jan 22, 2014 at 02:56:38PM +0000, Marc Zyngier wrote:
> Commit 240e99cbd00a (ARM: KVM: Fix 64-bit coprocessor handling)
> changed the way we match the 64bit coprocessor access from
> user space, but didn't update the trap handler for the same
> set of registers.
>
> The effect is that a trapped 64bit access is never matched, leading
> to a fault being injected into the guest. This went unnoticed as we
> didn;t really trap any 64bit register so far.
didn't
>
> Placing the CRm field of the access into the CRn field of the matching
> structure fixes the problem. Also update the debug feature to emit the
> expected string in case of failing match.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
> arch/arm/kvm/coproc.c | 4 ++--
> arch/arm/kvm/coproc.h | 4 ++--
> 2 files changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/arch/arm/kvm/coproc.c b/arch/arm/kvm/coproc.c
> index 78c0885..126c90d 100644
> --- a/arch/arm/kvm/coproc.c
> +++ b/arch/arm/kvm/coproc.c
> @@ -443,7 +443,7 @@ int kvm_handle_cp15_64(struct kvm_vcpu *vcpu, struct kvm_run *run)
> {
> struct coproc_params params;
>
> - params.CRm = (kvm_vcpu_get_hsr(vcpu) >> 1) & 0xf;
> + params.CRn = (kvm_vcpu_get_hsr(vcpu) >> 1) & 0xf;
> params.Rt1 = (kvm_vcpu_get_hsr(vcpu) >> 5) & 0xf;
> params.is_write = ((kvm_vcpu_get_hsr(vcpu) & 1) == 0);
> params.is_64bit = true;
> @@ -451,7 +451,7 @@ int kvm_handle_cp15_64(struct kvm_vcpu *vcpu, struct kvm_run *run)
> params.Op1 = (kvm_vcpu_get_hsr(vcpu) >> 16) & 0xf;
> params.Op2 = 0;
> params.Rt2 = (kvm_vcpu_get_hsr(vcpu) >> 10) & 0xf;
> - params.CRn = 0;
> + params.CRm = 0;
>
> return emulate_cp15(vcpu, &params);
> }
> diff --git a/arch/arm/kvm/coproc.h b/arch/arm/kvm/coproc.h
> index 0461d5c..c5ad7ff 100644
> --- a/arch/arm/kvm/coproc.h
> +++ b/arch/arm/kvm/coproc.h
> @@ -58,8 +58,8 @@ static inline void print_cp_instr(const struct coproc_params *p)
> {
> /* Look, we even formatted it for you to paste into the table! */
> if (p->is_64bit) {
> - kvm_pr_unimpl(" { CRm(%2lu), Op1(%2lu), is64, func_%s },\n",
> - p->CRm, p->Op1, p->is_write ? "write" : "read");
> + kvm_pr_unimpl(" { CRm64(%2lu), Op1(%2lu), is64, func_%s },\n",
> + p->CRn, p->Op1, p->is_write ? "write" : "read");
> } else {
> kvm_pr_unimpl(" { CRn(%2lu), CRm(%2lu), Op1(%2lu), Op2(%2lu), is32,"
> " func_%s },\n",
> --
> 1.8.3.4
>
Thanks for fixing my broken fix!
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
^ permalink raw reply [flat|nested] 22+ messages in thread
* [PATCH v2 07/10] ARM: KVM: fix ordering of 64bit coprocessor accesses
2014-01-22 14:56 [PATCH v2 00/10] arm/arm64: KVM: host cache maintainance when guest caches are off Marc Zyngier
` (5 preceding siblings ...)
2014-01-22 14:56 ` [PATCH v2 06/10] ARM: KVM: fix handling of trapped 64bit coprocessor accesses Marc Zyngier
@ 2014-01-22 14:56 ` Marc Zyngier
2014-01-29 20:07 ` Christoffer Dall
2014-01-22 14:56 ` [PATCH v2 08/10] ARM: KVM: introduce per-vcpu HYP Configuration Register Marc Zyngier
` (3 subsequent siblings)
10 siblings, 1 reply; 22+ messages in thread
From: Marc Zyngier @ 2014-01-22 14:56 UTC (permalink / raw)
To: linux-arm-kernel
Commit 240e99cbd00a (ARM: KVM: Fix 64-bit coprocessor handling)
added an ordering dependency for the 64bit registers.
The order described is: CRn, CRm, Op1, Op2, 64bit-first.
Unfortunately, the implementation is: CRn, 64bit-first, CRm...
Move the 64bit test to be last in order to match the documentation.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm/kvm/coproc.h | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/arch/arm/kvm/coproc.h b/arch/arm/kvm/coproc.h
index c5ad7ff..1a44bbe 100644
--- a/arch/arm/kvm/coproc.h
+++ b/arch/arm/kvm/coproc.h
@@ -135,13 +135,13 @@ static inline int cmp_reg(const struct coproc_reg *i1,
return -1;
if (i1->CRn != i2->CRn)
return i1->CRn - i2->CRn;
- if (i1->is_64 != i2->is_64)
- return i2->is_64 - i1->is_64;
if (i1->CRm != i2->CRm)
return i1->CRm - i2->CRm;
if (i1->Op1 != i2->Op1)
return i1->Op1 - i2->Op1;
- return i1->Op2 - i2->Op2;
+ if (i1->Op2 != i2->Op2)
+ return i1->Op2 - i2->Op2;
+ return i2->is_64 - i1->is_64;
}
@@ -153,4 +153,8 @@ static inline int cmp_reg(const struct coproc_reg *i1,
#define is64 .is_64 = true
#define is32 .is_64 = false
+bool access_sctlr(struct kvm_vcpu *vcpu,
+ const struct coproc_params *p,
+ const struct coproc_reg *r);
+
#endif /* __ARM_KVM_COPROC_LOCAL_H__ */
--
1.8.3.4
^ permalink raw reply related [flat|nested] 22+ messages in thread
* [PATCH v2 07/10] ARM: KVM: fix ordering of 64bit coprocessor accesses
2014-01-22 14:56 ` [PATCH v2 07/10] ARM: KVM: fix ordering of " Marc Zyngier
@ 2014-01-29 20:07 ` Christoffer Dall
0 siblings, 0 replies; 22+ messages in thread
From: Christoffer Dall @ 2014-01-29 20:07 UTC (permalink / raw)
To: linux-arm-kernel
On Wed, Jan 22, 2014 at 02:56:39PM +0000, Marc Zyngier wrote:
> Commit 240e99cbd00a (ARM: KVM: Fix 64-bit coprocessor handling)
> added an ordering dependency for the 64bit registers.
>
> The order described is: CRn, CRm, Op1, Op2, 64bit-first.
>
> Unfortunately, the implementation is: CRn, 64bit-first, CRm...
>
> Move the 64bit test to be last in order to match the documentation.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
> arch/arm/kvm/coproc.h | 10 +++++++---
> 1 file changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm/kvm/coproc.h b/arch/arm/kvm/coproc.h
> index c5ad7ff..1a44bbe 100644
> --- a/arch/arm/kvm/coproc.h
> +++ b/arch/arm/kvm/coproc.h
> @@ -135,13 +135,13 @@ static inline int cmp_reg(const struct coproc_reg *i1,
> return -1;
> if (i1->CRn != i2->CRn)
> return i1->CRn - i2->CRn;
> - if (i1->is_64 != i2->is_64)
> - return i2->is_64 - i1->is_64;
> if (i1->CRm != i2->CRm)
> return i1->CRm - i2->CRm;
> if (i1->Op1 != i2->Op1)
> return i1->Op1 - i2->Op1;
> - return i1->Op2 - i2->Op2;
> + if (i1->Op2 != i2->Op2)
> + return i1->Op2 - i2->Op2;
> + return i2->is_64 - i1->is_64;
> }
>
>
> @@ -153,4 +153,8 @@ static inline int cmp_reg(const struct coproc_reg *i1,
> #define is64 .is_64 = true
> #define is32 .is_64 = false
>
> +bool access_sctlr(struct kvm_vcpu *vcpu,
> + const struct coproc_params *p,
> + const struct coproc_reg *r);
> +
> #endif /* __ARM_KVM_COPROC_LOCAL_H__ */
stray hunk from another patch?
> --
> 1.8.3.4
>
otherwise,
Thanks for fixing my broken fix, again,
FWIW: Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
* [PATCH v2 08/10] ARM: KVM: introduce per-vcpu HYP Configuration Register
2014-01-22 14:56 [PATCH v2 00/10] arm/arm64: KVM: host cache maintainance when guest caches are off Marc Zyngier
` (6 preceding siblings ...)
2014-01-22 14:56 ` [PATCH v2 07/10] ARM: KVM: fix ordering of " Marc Zyngier
@ 2014-01-22 14:56 ` Marc Zyngier
2014-01-29 20:08 ` Christoffer Dall
2014-01-22 14:56 ` [PATCH v2 09/10] ARM: KVM: trap VM system registers until MMU and caches are ON Marc Zyngier
` (2 subsequent siblings)
10 siblings, 1 reply; 22+ messages in thread
From: Marc Zyngier @ 2014-01-22 14:56 UTC (permalink / raw)
To: linux-arm-kernel
So far, KVM/ARM used a fixed HCR configuration per guest, except for
the VI/VF/VA bits used to control interrupts in the absence of a VGIC.
With the upcoming need to dynamically reconfigure trapping, it becomes
necessary to allow the HCR to be changed on a per-vcpu basis.
The fix here is to mimic what KVM/arm64 already does: a per-vcpu HCR
field, initialized at setup time.
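As a rough sketch of what the per-vcpu field enables (vcpu->arch.hcr as introduced in the guest.c hunk below; the helper name here is hypothetical, and the actual trap toggling only appears later in the series):

	/* Hypothetical helper: with a per-vcpu copy of HCR, a trap bit such
	 * as HCR_TVM can be cleared at runtime for this vcpu only, instead
	 * of the mask being rebuilt from HCR_GUEST_MASK on every entry. */
	static void vcpu_clear_hcr_trap(struct kvm_vcpu *vcpu, u32 bits)
	{
		vcpu->arch.hcr &= ~bits;
	}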
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm/include/asm/kvm_arm.h | 1 -
arch/arm/include/asm/kvm_host.h | 9 ++++++---
arch/arm/kernel/asm-offsets.c | 1 +
arch/arm/kvm/guest.c | 1 +
arch/arm/kvm/interrupts_head.S | 9 +++------
5 files changed, 11 insertions(+), 10 deletions(-)
diff --git a/arch/arm/include/asm/kvm_arm.h b/arch/arm/include/asm/kvm_arm.h
index 1d3153c..a843e74 100644
--- a/arch/arm/include/asm/kvm_arm.h
+++ b/arch/arm/include/asm/kvm_arm.h
@@ -69,7 +69,6 @@
#define HCR_GUEST_MASK (HCR_TSC | HCR_TSW | HCR_TWI | HCR_VM | HCR_BSU_IS | \
HCR_FB | HCR_TAC | HCR_AMO | HCR_IMO | HCR_FMO | \
HCR_TWE | HCR_SWIO | HCR_TIDCP)
-#define HCR_VIRT_EXCP_MASK (HCR_VA | HCR_VI | HCR_VF)
/* System Control Register (SCTLR) bits */
#define SCTLR_TE (1 << 30)
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index ba6d33a..918fdc1 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -101,6 +101,12 @@ struct kvm_vcpu_arch {
/* The CPU type we expose to the VM */
u32 midr;
+ /* HYP trapping configuration */
+ u32 hcr;
+
+ /* Interrupt related fields */
+ u32 irq_lines; /* IRQ and FIQ levels */
+
/* Exception Information */
struct kvm_vcpu_fault_info fault;
@@ -128,9 +134,6 @@ struct kvm_vcpu_arch {
/* IO related fields */
struct kvm_decode mmio_decode;
- /* Interrupt related fields */
- u32 irq_lines; /* IRQ and FIQ levels */
-
/* Cache some mmu pages needed inside spinlock regions */
struct kvm_mmu_memory_cache mmu_page_cache;
diff --git a/arch/arm/kernel/asm-offsets.c b/arch/arm/kernel/asm-offsets.c
index dbe0476..713e807 100644
--- a/arch/arm/kernel/asm-offsets.c
+++ b/arch/arm/kernel/asm-offsets.c
@@ -174,6 +174,7 @@ int main(void)
DEFINE(VCPU_FIQ_REGS, offsetof(struct kvm_vcpu, arch.regs.fiq_regs));
DEFINE(VCPU_PC, offsetof(struct kvm_vcpu, arch.regs.usr_regs.ARM_pc));
DEFINE(VCPU_CPSR, offsetof(struct kvm_vcpu, arch.regs.usr_regs.ARM_cpsr));
+ DEFINE(VCPU_HCR, offsetof(struct kvm_vcpu, arch.hcr));
DEFINE(VCPU_IRQ_LINES, offsetof(struct kvm_vcpu, arch.irq_lines));
DEFINE(VCPU_HSR, offsetof(struct kvm_vcpu, arch.fault.hsr));
DEFINE(VCPU_HxFAR, offsetof(struct kvm_vcpu, arch.fault.hxfar));
diff --git a/arch/arm/kvm/guest.c b/arch/arm/kvm/guest.c
index 20f8d97..0c8c044 100644
--- a/arch/arm/kvm/guest.c
+++ b/arch/arm/kvm/guest.c
@@ -38,6 +38,7 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu)
{
+ vcpu->arch.hcr = HCR_GUEST_MASK;
return 0;
}
diff --git a/arch/arm/kvm/interrupts_head.S b/arch/arm/kvm/interrupts_head.S
index 4a2a97a..7cb41e1 100644
--- a/arch/arm/kvm/interrupts_head.S
+++ b/arch/arm/kvm/interrupts_head.S
@@ -597,17 +597,14 @@ vcpu .req r0 @ vcpu pointer always in r0
/* Enable/Disable: stage-2 trans., trap interrupts, trap wfi, trap smc */
.macro configure_hyp_role operation
- mrc p15, 4, r2, c1, c1, 0 @ HCR
- bic r2, r2, #HCR_VIRT_EXCP_MASK
- ldr r3, =HCR_GUEST_MASK
.if \operation == vmentry
- orr r2, r2, r3
+ ldr r2, [vcpu, #VCPU_HCR]
ldr r3, [vcpu, #VCPU_IRQ_LINES]
orr r2, r2, r3
.else
- bic r2, r2, r3
+ mov r2, #0
.endif
- mcr p15, 4, r2, c1, c1, 0
+ mcr p15, 4, r2, c1, c1, 0 @ HCR
.endm
.macro load_vcpu
--
1.8.3.4
* [PATCH v2 08/10] ARM: KVM: introduce per-vcpu HYP Configuration Register
2014-01-22 14:56 ` [PATCH v2 08/10] ARM: KVM: introduce per-vcpu HYP Configuration Register Marc Zyngier
@ 2014-01-29 20:08 ` Christoffer Dall
0 siblings, 0 replies; 22+ messages in thread
From: Christoffer Dall @ 2014-01-29 20:08 UTC (permalink / raw)
To: linux-arm-kernel
On Wed, Jan 22, 2014 at 02:56:40PM +0000, Marc Zyngier wrote:
> So far, KVM/ARM used a fixed HCR configuration per guest, except for
> the VI/VF/VA bits used to control interrupts in the absence of a VGIC.
>
> With the upcoming need to dynamically reconfigure trapping, it becomes
> necessary to allow the HCR to be changed on a per-vcpu basis.
>
> The fix here is to mimic what KVM/arm64 already does: a per-vcpu HCR
> field, initialized at setup time.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
> arch/arm/include/asm/kvm_arm.h | 1 -
> arch/arm/include/asm/kvm_host.h | 9 ++++++---
> arch/arm/kernel/asm-offsets.c | 1 +
> arch/arm/kvm/guest.c | 1 +
> arch/arm/kvm/interrupts_head.S | 9 +++------
> 5 files changed, 11 insertions(+), 10 deletions(-)
>
> diff --git a/arch/arm/include/asm/kvm_arm.h b/arch/arm/include/asm/kvm_arm.h
> index 1d3153c..a843e74 100644
> --- a/arch/arm/include/asm/kvm_arm.h
> +++ b/arch/arm/include/asm/kvm_arm.h
> @@ -69,7 +69,6 @@
> #define HCR_GUEST_MASK (HCR_TSC | HCR_TSW | HCR_TWI | HCR_VM | HCR_BSU_IS | \
> HCR_FB | HCR_TAC | HCR_AMO | HCR_IMO | HCR_FMO | \
> HCR_TWE | HCR_SWIO | HCR_TIDCP)
> -#define HCR_VIRT_EXCP_MASK (HCR_VA | HCR_VI | HCR_VF)
>
> /* System Control Register (SCTLR) bits */
> #define SCTLR_TE (1 << 30)
> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
> index ba6d33a..918fdc1 100644
> --- a/arch/arm/include/asm/kvm_host.h
> +++ b/arch/arm/include/asm/kvm_host.h
> @@ -101,6 +101,12 @@ struct kvm_vcpu_arch {
> /* The CPU type we expose to the VM */
> u32 midr;
>
> + /* HYP trapping configuration */
> + u32 hcr;
> +
> + /* Interrupt related fields */
> + u32 irq_lines; /* IRQ and FIQ levels */
> +
> /* Exception Information */
> struct kvm_vcpu_fault_info fault;
>
> @@ -128,9 +134,6 @@ struct kvm_vcpu_arch {
> /* IO related fields */
> struct kvm_decode mmio_decode;
>
> - /* Interrupt related fields */
> - u32 irq_lines; /* IRQ and FIQ levels */
> -
> /* Cache some mmu pages needed inside spinlock regions */
> struct kvm_mmu_memory_cache mmu_page_cache;
>
> diff --git a/arch/arm/kernel/asm-offsets.c b/arch/arm/kernel/asm-offsets.c
> index dbe0476..713e807 100644
> --- a/arch/arm/kernel/asm-offsets.c
> +++ b/arch/arm/kernel/asm-offsets.c
> @@ -174,6 +174,7 @@ int main(void)
> DEFINE(VCPU_FIQ_REGS, offsetof(struct kvm_vcpu, arch.regs.fiq_regs));
> DEFINE(VCPU_PC, offsetof(struct kvm_vcpu, arch.regs.usr_regs.ARM_pc));
> DEFINE(VCPU_CPSR, offsetof(struct kvm_vcpu, arch.regs.usr_regs.ARM_cpsr));
> + DEFINE(VCPU_HCR, offsetof(struct kvm_vcpu, arch.hcr));
> DEFINE(VCPU_IRQ_LINES, offsetof(struct kvm_vcpu, arch.irq_lines));
> DEFINE(VCPU_HSR, offsetof(struct kvm_vcpu, arch.fault.hsr));
> DEFINE(VCPU_HxFAR, offsetof(struct kvm_vcpu, arch.fault.hxfar));
> diff --git a/arch/arm/kvm/guest.c b/arch/arm/kvm/guest.c
> index 20f8d97..0c8c044 100644
> --- a/arch/arm/kvm/guest.c
> +++ b/arch/arm/kvm/guest.c
> @@ -38,6 +38,7 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
>
> int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu)
> {
> + vcpu->arch.hcr = HCR_GUEST_MASK;
> return 0;
> }
>
> diff --git a/arch/arm/kvm/interrupts_head.S b/arch/arm/kvm/interrupts_head.S
> index 4a2a97a..7cb41e1 100644
> --- a/arch/arm/kvm/interrupts_head.S
> +++ b/arch/arm/kvm/interrupts_head.S
> @@ -597,17 +597,14 @@ vcpu .req r0 @ vcpu pointer always in r0
>
> /* Enable/Disable: stage-2 trans., trap interrupts, trap wfi, trap smc */
> .macro configure_hyp_role operation
> - mrc p15, 4, r2, c1, c1, 0 @ HCR
> - bic r2, r2, #HCR_VIRT_EXCP_MASK
> - ldr r3, =HCR_GUEST_MASK
> .if \operation == vmentry
> - orr r2, r2, r3
> + ldr r2, [vcpu, #VCPU_HCR]
> ldr r3, [vcpu, #VCPU_IRQ_LINES]
> orr r2, r2, r3
> .else
> - bic r2, r2, r3
> + mov r2, #0
> .endif
> - mcr p15, 4, r2, c1, c1, 0
> + mcr p15, 4, r2, c1, c1, 0 @ HCR
> .endm
>
> .macro load_vcpu
> --
> 1.8.3.4
>
Looks good:
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
* [PATCH v2 09/10] ARM: KVM: trap VM system registers until MMU and caches are ON
2014-01-22 14:56 [PATCH v2 00/10] arm/arm64: KVM: host cache maintainance when guest caches are off Marc Zyngier
` (7 preceding siblings ...)
2014-01-22 14:56 ` [PATCH v2 08/10] ARM: KVM: introduce per-vcpu HYP Configuration Register Marc Zyngier
@ 2014-01-22 14:56 ` Marc Zyngier
2014-01-29 20:08 ` Christoffer Dall
2014-01-22 14:56 ` [PATCH v2 10/10] ARM: KVM: add world-switch for AMAIR{0,1} Marc Zyngier
2014-01-28 12:11 ` [PATCH v2 00/10] arm/arm64: KVM: host cache maintainance when guest caches are off Pranavkumar Sawargaonkar
10 siblings, 1 reply; 22+ messages in thread
From: Marc Zyngier @ 2014-01-22 14:56 UTC (permalink / raw)
To: linux-arm-kernel
In order to be able to detect the point where the guest enables
its MMU and caches, trap all the VM related system registers.
Once we see the guest enabling both the MMU and the caches, we
can go back to a saner mode of operation, which is to leave these
registers in complete control of the guest.
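The point at which trapping can be dropped is when SCTLR has both bit 0 (M, MMU enable) and bit 2 (C, data cache enable) set, which is what the 0b101 test in access_sctlr() below checks. A standalone sketch, with local defines used for illustration only:

	#define SCTLR_M_BIT	(1UL << 0)	/* MMU enable */
	#define SCTLR_C_BIT	(1UL << 2)	/* Data cache enable */

	/* Illustrative: true once the guest runs with MMU and caches on,
	 * at which point HCR_TVM can be cleared and stage-2 flushed. */
	static bool guest_mmu_and_caches_on(unsigned long sctlr)
	{
		return (sctlr & (SCTLR_M_BIT | SCTLR_C_BIT)) ==
		       (SCTLR_M_BIT | SCTLR_C_BIT);
	}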
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm/include/asm/kvm_arm.h | 3 +-
arch/arm/kvm/coproc.c | 85 ++++++++++++++++++++++++++++++++++--------
arch/arm/kvm/coproc_a15.c | 2 +-
arch/arm/kvm/coproc_a7.c | 2 +-
4 files changed, 73 insertions(+), 19 deletions(-)
diff --git a/arch/arm/include/asm/kvm_arm.h b/arch/arm/include/asm/kvm_arm.h
index a843e74..816db0b 100644
--- a/arch/arm/include/asm/kvm_arm.h
+++ b/arch/arm/include/asm/kvm_arm.h
@@ -55,6 +55,7 @@
* The bits we set in HCR:
* TAC: Trap ACTLR
* TSC: Trap SMC
+ * TVM: Trap VM ops (until MMU and caches are on)
* TSW: Trap cache operations by set/way
* TWI: Trap WFI
* TWE: Trap WFE
@@ -68,7 +69,7 @@
*/
#define HCR_GUEST_MASK (HCR_TSC | HCR_TSW | HCR_TWI | HCR_VM | HCR_BSU_IS | \
HCR_FB | HCR_TAC | HCR_AMO | HCR_IMO | HCR_FMO | \
- HCR_TWE | HCR_SWIO | HCR_TIDCP)
+ HCR_TVM | HCR_TWE | HCR_SWIO | HCR_TIDCP)
/* System Control Register (SCTLR) bits */
#define SCTLR_TE (1 << 30)
diff --git a/arch/arm/kvm/coproc.c b/arch/arm/kvm/coproc.c
index 126c90d..1839770 100644
--- a/arch/arm/kvm/coproc.c
+++ b/arch/arm/kvm/coproc.c
@@ -23,6 +23,7 @@
#include <asm/kvm_host.h>
#include <asm/kvm_emulate.h>
#include <asm/kvm_coproc.h>
+#include <asm/kvm_mmu.h>
#include <asm/cacheflush.h>
#include <asm/cputype.h>
#include <trace/events/kvm.h>
@@ -205,6 +206,55 @@ done:
}
/*
+ * Generic accessor for VM registers. Only called as long as HCR_TVM
+ * is set.
+ */
+static bool access_vm_reg(struct kvm_vcpu *vcpu,
+ const struct coproc_params *p,
+ const struct coproc_reg *r)
+{
+ if (p->is_write) {
+ vcpu->arch.cp15[r->reg] = *vcpu_reg(vcpu, p->Rt1);
+ if (p->is_64bit)
+ vcpu->arch.cp15[r->reg + 1] = *vcpu_reg(vcpu, p->Rt2);
+ } else {
+ *vcpu_reg(vcpu, p->Rt1) = vcpu->arch.cp15[r->reg];
+ if (p->is_64bit)
+ *vcpu_reg(vcpu, p->Rt2) = vcpu->arch.cp15[r->reg + 1];
+ }
+
+ return true;
+}
+
+/*
+ * SCTLR accessor. Only called as long as HCR_TVM is set. If the
+ * guest enables the MMU, we stop trapping the VM sys_regs and leave
+ * it in complete control of the caches.
+ *
+ * Used by the cpu-specific code.
+ */
+bool access_sctlr(struct kvm_vcpu *vcpu,
+ const struct coproc_params *p,
+ const struct coproc_reg *r)
+{
+ if (p->is_write) {
+ unsigned long val;
+
+ val = *vcpu_reg(vcpu, p->Rt1);
+ vcpu->arch.cp15[r->reg] = val;
+
+ if ((val & (0b101)) == 0b101) { /* MMU+Caches enabled? */
+ vcpu->arch.hcr &= ~HCR_TVM;
+ stage2_flush_vm(vcpu->kvm);
+ }
+ } else {
+ *vcpu_reg(vcpu, p->Rt1) = vcpu->arch.cp15[r->reg];
+ }
+
+ return true;
+}
+
+/*
* We could trap ID_DFR0 and tell the guest we don't support performance
* monitoring. Unfortunately the patch to make the kernel check ID_DFR0 was
* NAKed, so it will read the PMCR anyway.
@@ -261,33 +311,36 @@ static const struct coproc_reg cp15_regs[] = {
{ CRn( 1), CRm( 0), Op1( 0), Op2( 2), is32,
NULL, reset_val, c1_CPACR, 0x00000000 },
- /* TTBR0/TTBR1: swapped by interrupt.S. */
- { CRm64( 2), Op1( 0), is64, NULL, reset_unknown64, c2_TTBR0 },
- { CRm64( 2), Op1( 1), is64, NULL, reset_unknown64, c2_TTBR1 },
-
- /* TTBCR: swapped by interrupt.S. */
+ /* TTBR0/TTBR1/TTBCR: swapped by interrupt.S. */
+ { CRm64( 2), Op1( 0), is64, access_vm_reg, reset_unknown64, c2_TTBR0 },
+ { CRn(2), CRm( 0), Op1( 0), Op2( 0), is32,
+ access_vm_reg, reset_unknown, c2_TTBR0 },
+ { CRn(2), CRm( 0), Op1( 0), Op2( 1), is32,
+ access_vm_reg, reset_unknown, c2_TTBR1 },
{ CRn( 2), CRm( 0), Op1( 0), Op2( 2), is32,
- NULL, reset_val, c2_TTBCR, 0x00000000 },
+ access_vm_reg, reset_val, c2_TTBCR, 0x00000000 },
+ { CRm64( 2), Op1( 1), is64, access_vm_reg, reset_unknown64, c2_TTBR1 },
+
/* DACR: swapped by interrupt.S. */
{ CRn( 3), CRm( 0), Op1( 0), Op2( 0), is32,
- NULL, reset_unknown, c3_DACR },
+ access_vm_reg, reset_unknown, c3_DACR },
/* DFSR/IFSR/ADFSR/AIFSR: swapped by interrupt.S. */
{ CRn( 5), CRm( 0), Op1( 0), Op2( 0), is32,
- NULL, reset_unknown, c5_DFSR },
+ access_vm_reg, reset_unknown, c5_DFSR },
{ CRn( 5), CRm( 0), Op1( 0), Op2( 1), is32,
- NULL, reset_unknown, c5_IFSR },
+ access_vm_reg, reset_unknown, c5_IFSR },
{ CRn( 5), CRm( 1), Op1( 0), Op2( 0), is32,
- NULL, reset_unknown, c5_ADFSR },
+ access_vm_reg, reset_unknown, c5_ADFSR },
{ CRn( 5), CRm( 1), Op1( 0), Op2( 1), is32,
- NULL, reset_unknown, c5_AIFSR },
+ access_vm_reg, reset_unknown, c5_AIFSR },
/* DFAR/IFAR: swapped by interrupt.S. */
{ CRn( 6), CRm( 0), Op1( 0), Op2( 0), is32,
- NULL, reset_unknown, c6_DFAR },
+ access_vm_reg, reset_unknown, c6_DFAR },
{ CRn( 6), CRm( 0), Op1( 0), Op2( 2), is32,
- NULL, reset_unknown, c6_IFAR },
+ access_vm_reg, reset_unknown, c6_IFAR },
/* PAR swapped by interrupt.S */
{ CRm64( 7), Op1( 0), is64, NULL, reset_unknown64, c7_PAR },
@@ -324,9 +377,9 @@ static const struct coproc_reg cp15_regs[] = {
/* PRRR/NMRR (aka MAIR0/MAIR1): swapped by interrupt.S. */
{ CRn(10), CRm( 2), Op1( 0), Op2( 0), is32,
- NULL, reset_unknown, c10_PRRR},
+ access_vm_reg, reset_unknown, c10_PRRR},
{ CRn(10), CRm( 2), Op1( 0), Op2( 1), is32,
- NULL, reset_unknown, c10_NMRR},
+ access_vm_reg, reset_unknown, c10_NMRR},
/* VBAR: swapped by interrupt.S. */
{ CRn(12), CRm( 0), Op1( 0), Op2( 0), is32,
@@ -334,7 +387,7 @@ static const struct coproc_reg cp15_regs[] = {
/* CONTEXTIDR/TPIDRURW/TPIDRURO/TPIDRPRW: swapped by interrupt.S. */
{ CRn(13), CRm( 0), Op1( 0), Op2( 1), is32,
- NULL, reset_val, c13_CID, 0x00000000 },
+ access_vm_reg, reset_val, c13_CID, 0x00000000 },
{ CRn(13), CRm( 0), Op1( 0), Op2( 2), is32,
NULL, reset_unknown, c13_TID_URW },
{ CRn(13), CRm( 0), Op1( 0), Op2( 3), is32,
diff --git a/arch/arm/kvm/coproc_a15.c b/arch/arm/kvm/coproc_a15.c
index bb0cac1..e6f4ae4 100644
--- a/arch/arm/kvm/coproc_a15.c
+++ b/arch/arm/kvm/coproc_a15.c
@@ -34,7 +34,7 @@
static const struct coproc_reg a15_regs[] = {
/* SCTLR: swapped by interrupt.S. */
{ CRn( 1), CRm( 0), Op1( 0), Op2( 0), is32,
- NULL, reset_val, c1_SCTLR, 0x00C50078 },
+ access_sctlr, reset_val, c1_SCTLR, 0x00C50078 },
};
static struct kvm_coproc_target_table a15_target_table = {
diff --git a/arch/arm/kvm/coproc_a7.c b/arch/arm/kvm/coproc_a7.c
index 1df76733..17fc7cd 100644
--- a/arch/arm/kvm/coproc_a7.c
+++ b/arch/arm/kvm/coproc_a7.c
@@ -37,7 +37,7 @@
static const struct coproc_reg a7_regs[] = {
/* SCTLR: swapped by interrupt.S. */
{ CRn( 1), CRm( 0), Op1( 0), Op2( 0), is32,
- NULL, reset_val, c1_SCTLR, 0x00C50878 },
+ access_sctlr, reset_val, c1_SCTLR, 0x00C50878 },
};
static struct kvm_coproc_target_table a7_target_table = {
--
1.8.3.4
* [PATCH v2 09/10] ARM: KVM: trap VM system registers until MMU and caches are ON
2014-01-22 14:56 ` [PATCH v2 09/10] ARM: KVM: trap VM system registers until MMU and caches are ON Marc Zyngier
@ 2014-01-29 20:08 ` Christoffer Dall
0 siblings, 0 replies; 22+ messages in thread
From: Christoffer Dall @ 2014-01-29 20:08 UTC (permalink / raw)
To: linux-arm-kernel
On Wed, Jan 22, 2014 at 02:56:41PM +0000, Marc Zyngier wrote:
> In order to be able to detect the point where the guest enables
> its MMU and caches, trap all the VM related system registers.
>
> Once we see the guest enabling both the MMU and the caches, we
> can go back to a saner mode of operation, which is to leave these
> registers in complete control of the guest.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
> arch/arm/include/asm/kvm_arm.h | 3 +-
> arch/arm/kvm/coproc.c | 85 ++++++++++++++++++++++++++++++++++--------
> arch/arm/kvm/coproc_a15.c | 2 +-
> arch/arm/kvm/coproc_a7.c | 2 +-
> 4 files changed, 73 insertions(+), 19 deletions(-)
>
> diff --git a/arch/arm/include/asm/kvm_arm.h b/arch/arm/include/asm/kvm_arm.h
> index a843e74..816db0b 100644
> --- a/arch/arm/include/asm/kvm_arm.h
> +++ b/arch/arm/include/asm/kvm_arm.h
> @@ -55,6 +55,7 @@
> * The bits we set in HCR:
> * TAC: Trap ACTLR
> * TSC: Trap SMC
> + * TVM: Trap VM ops (until MMU and caches are on)
> * TSW: Trap cache operations by set/way
> * TWI: Trap WFI
> * TWE: Trap WFE
> @@ -68,7 +69,7 @@
> */
> #define HCR_GUEST_MASK (HCR_TSC | HCR_TSW | HCR_TWI | HCR_VM | HCR_BSU_IS | \
> HCR_FB | HCR_TAC | HCR_AMO | HCR_IMO | HCR_FMO | \
> - HCR_TWE | HCR_SWIO | HCR_TIDCP)
> + HCR_TVM | HCR_TWE | HCR_SWIO | HCR_TIDCP)
>
> /* System Control Register (SCTLR) bits */
> #define SCTLR_TE (1 << 30)
> diff --git a/arch/arm/kvm/coproc.c b/arch/arm/kvm/coproc.c
> index 126c90d..1839770 100644
> --- a/arch/arm/kvm/coproc.c
> +++ b/arch/arm/kvm/coproc.c
> @@ -23,6 +23,7 @@
> #include <asm/kvm_host.h>
> #include <asm/kvm_emulate.h>
> #include <asm/kvm_coproc.h>
> +#include <asm/kvm_mmu.h>
> #include <asm/cacheflush.h>
> #include <asm/cputype.h>
> #include <trace/events/kvm.h>
> @@ -205,6 +206,55 @@ done:
> }
>
> /*
> + * Generic accessor for VM registers. Only called as long as HCR_TVM
> + * is set.
> + */
> +static bool access_vm_reg(struct kvm_vcpu *vcpu,
> + const struct coproc_params *p,
> + const struct coproc_reg *r)
> +{
> + if (p->is_write) {
> + vcpu->arch.cp15[r->reg] = *vcpu_reg(vcpu, p->Rt1);
> + if (p->is_64bit)
> + vcpu->arch.cp15[r->reg + 1] = *vcpu_reg(vcpu, p->Rt2);
> + } else {
hmm, the ARMv7 ARM says TVM only traps write accesses...?
> + *vcpu_reg(vcpu, p->Rt1) = vcpu->arch.cp15[r->reg];
> + if (p->is_64bit)
> + *vcpu_reg(vcpu, p->Rt2) = vcpu->arch.cp15[r->reg + 1];
> + }
> +
> + return true;
> +}
> +
> +/*
> + * SCTLR accessor. Only called as long as HCR_TVM is set. If the
> + * guest enables the MMU, we stop trapping the VM sys_regs and leave
> + * it in complete control of the caches.
> + *
> + * Used by the cpu-specific code.
> + */
> +bool access_sctlr(struct kvm_vcpu *vcpu,
> + const struct coproc_params *p,
> + const struct coproc_reg *r)
> +{
> + if (p->is_write) {
> + unsigned long val;
> +
> + val = *vcpu_reg(vcpu, p->Rt1);
> + vcpu->arch.cp15[r->reg] = val;
again, would it make sense to just call access_vm_reg and do the check
for caches enabled afterwards?
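Something along these lines, perhaps (untested sketch, only meant to illustrate the suggestion):

	bool access_sctlr(struct kvm_vcpu *vcpu,
			  const struct coproc_params *p,
			  const struct coproc_reg *r)
	{
		access_vm_reg(vcpu, p, r);

		if (p->is_write &&
		    (vcpu->arch.cp15[r->reg] & 0b101) == 0b101) {
			vcpu->arch.hcr &= ~HCR_TVM;
			stage2_flush_vm(vcpu->kvm);
		}

		return true;
	}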
> +
> + if ((val & (0b101)) == 0b101) { /* MMU+Caches enabled? */
> + vcpu->arch.hcr &= ~HCR_TVM;
> + stage2_flush_vm(vcpu->kvm);
> + }
my favorite static inline, again, again, ...
> + } else {
> + *vcpu_reg(vcpu, p->Rt1) = vcpu->arch.cp15[r->reg];
> + }
> +
> + return true;
> +}
> +
> +/*
> * We could trap ID_DFR0 and tell the guest we don't support performance
> * monitoring. Unfortunately the patch to make the kernel check ID_DFR0 was
> * NAKed, so it will read the PMCR anyway.
> @@ -261,33 +311,36 @@ static const struct coproc_reg cp15_regs[] = {
> { CRn( 1), CRm( 0), Op1( 0), Op2( 2), is32,
> NULL, reset_val, c1_CPACR, 0x00000000 },
>
> - /* TTBR0/TTBR1: swapped by interrupt.S. */
> - { CRm64( 2), Op1( 0), is64, NULL, reset_unknown64, c2_TTBR0 },
> - { CRm64( 2), Op1( 1), is64, NULL, reset_unknown64, c2_TTBR1 },
> -
> - /* TTBCR: swapped by interrupt.S. */
> + /* TTBR0/TTBR1/TTBCR: swapped by interrupt.S. */
> + { CRm64( 2), Op1( 0), is64, access_vm_reg, reset_unknown64, c2_TTBR0 },
> + { CRn(2), CRm( 0), Op1( 0), Op2( 0), is32,
> + access_vm_reg, reset_unknown, c2_TTBR0 },
> + { CRn(2), CRm( 0), Op1( 0), Op2( 1), is32,
> + access_vm_reg, reset_unknown, c2_TTBR1 },
> { CRn( 2), CRm( 0), Op1( 0), Op2( 2), is32,
> - NULL, reset_val, c2_TTBCR, 0x00000000 },
> + access_vm_reg, reset_val, c2_TTBCR, 0x00000000 },
> + { CRm64( 2), Op1( 1), is64, access_vm_reg, reset_unknown64, c2_TTBR1 },
> +
>
> /* DACR: swapped by interrupt.S. */
> { CRn( 3), CRm( 0), Op1( 0), Op2( 0), is32,
> - NULL, reset_unknown, c3_DACR },
> + access_vm_reg, reset_unknown, c3_DACR },
>
> /* DFSR/IFSR/ADFSR/AIFSR: swapped by interrupt.S. */
> { CRn( 5), CRm( 0), Op1( 0), Op2( 0), is32,
> - NULL, reset_unknown, c5_DFSR },
> + access_vm_reg, reset_unknown, c5_DFSR },
> { CRn( 5), CRm( 0), Op1( 0), Op2( 1), is32,
> - NULL, reset_unknown, c5_IFSR },
> + access_vm_reg, reset_unknown, c5_IFSR },
> { CRn( 5), CRm( 1), Op1( 0), Op2( 0), is32,
> - NULL, reset_unknown, c5_ADFSR },
> + access_vm_reg, reset_unknown, c5_ADFSR },
> { CRn( 5), CRm( 1), Op1( 0), Op2( 1), is32,
> - NULL, reset_unknown, c5_AIFSR },
> + access_vm_reg, reset_unknown, c5_AIFSR },
>
> /* DFAR/IFAR: swapped by interrupt.S. */
> { CRn( 6), CRm( 0), Op1( 0), Op2( 0), is32,
> - NULL, reset_unknown, c6_DFAR },
> + access_vm_reg, reset_unknown, c6_DFAR },
> { CRn( 6), CRm( 0), Op1( 0), Op2( 2), is32,
> - NULL, reset_unknown, c6_IFAR },
> + access_vm_reg, reset_unknown, c6_IFAR },
>
> /* PAR swapped by interrupt.S */
> { CRm64( 7), Op1( 0), is64, NULL, reset_unknown64, c7_PAR },
> @@ -324,9 +377,9 @@ static const struct coproc_reg cp15_regs[] = {
>
> /* PRRR/NMRR (aka MAIR0/MAIR1): swapped by interrupt.S. */
> { CRn(10), CRm( 2), Op1( 0), Op2( 0), is32,
> - NULL, reset_unknown, c10_PRRR},
> + access_vm_reg, reset_unknown, c10_PRRR},
> { CRn(10), CRm( 2), Op1( 0), Op2( 1), is32,
> - NULL, reset_unknown, c10_NMRR},
> + access_vm_reg, reset_unknown, c10_NMRR},
>
> /* VBAR: swapped by interrupt.S. */
> { CRn(12), CRm( 0), Op1( 0), Op2( 0), is32,
> @@ -334,7 +387,7 @@ static const struct coproc_reg cp15_regs[] = {
>
> /* CONTEXTIDR/TPIDRURW/TPIDRURO/TPIDRPRW: swapped by interrupt.S. */
> { CRn(13), CRm( 0), Op1( 0), Op2( 1), is32,
> - NULL, reset_val, c13_CID, 0x00000000 },
> + access_vm_reg, reset_val, c13_CID, 0x00000000 },
> { CRn(13), CRm( 0), Op1( 0), Op2( 2), is32,
> NULL, reset_unknown, c13_TID_URW },
> { CRn(13), CRm( 0), Op1( 0), Op2( 3), is32,
> diff --git a/arch/arm/kvm/coproc_a15.c b/arch/arm/kvm/coproc_a15.c
> index bb0cac1..e6f4ae4 100644
> --- a/arch/arm/kvm/coproc_a15.c
> +++ b/arch/arm/kvm/coproc_a15.c
> @@ -34,7 +34,7 @@
> static const struct coproc_reg a15_regs[] = {
> /* SCTLR: swapped by interrupt.S. */
> { CRn( 1), CRm( 0), Op1( 0), Op2( 0), is32,
> - NULL, reset_val, c1_SCTLR, 0x00C50078 },
> + access_sctlr, reset_val, c1_SCTLR, 0x00C50078 },
> };
>
> static struct kvm_coproc_target_table a15_target_table = {
> diff --git a/arch/arm/kvm/coproc_a7.c b/arch/arm/kvm/coproc_a7.c
> index 1df76733..17fc7cd 100644
> --- a/arch/arm/kvm/coproc_a7.c
> +++ b/arch/arm/kvm/coproc_a7.c
> @@ -37,7 +37,7 @@
> static const struct coproc_reg a7_regs[] = {
> /* SCTLR: swapped by interrupt.S. */
> { CRn( 1), CRm( 0), Op1( 0), Op2( 0), is32,
> - NULL, reset_val, c1_SCTLR, 0x00C50878 },
> + access_sctlr, reset_val, c1_SCTLR, 0x00C50878 },
> };
>
> static struct kvm_coproc_target_table a7_target_table = {
> --
> 1.8.3.4
>
Otherwise looks good,
--
Christoffer
* [PATCH v2 10/10] ARM: KVM: add world-switch for AMAIR{0,1}
2014-01-22 14:56 [PATCH v2 00/10] arm/arm64: KVM: host cache maintainance when guest caches are off Marc Zyngier
` (8 preceding siblings ...)
2014-01-22 14:56 ` [PATCH v2 09/10] ARM: KVM: trap VM system registers until MMU and caches are ON Marc Zyngier
@ 2014-01-22 14:56 ` Marc Zyngier
2014-01-29 20:08 ` Christoffer Dall
2014-01-28 12:11 ` [PATCH v2 00/10] arm/arm64: KVM: host cache maintainance when guest caches are off Pranavkumar Sawargaonkar
10 siblings, 1 reply; 22+ messages in thread
From: Marc Zyngier @ 2014-01-22 14:56 UTC (permalink / raw)
To: linux-arm-kernel
HCR.TVM traps (among other things) accesses to AMAIR0 and AMAIR1.
In order to minimise the amount of surprise a guest could generate by
trying to access these registers with caches off, add them to the
list of registers we switch/handle.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm/include/asm/kvm_asm.h | 4 +++-
arch/arm/kvm/coproc.c | 6 ++++++
arch/arm/kvm/interrupts_head.S | 12 ++++++++++--
3 files changed, 19 insertions(+), 3 deletions(-)
diff --git a/arch/arm/include/asm/kvm_asm.h b/arch/arm/include/asm/kvm_asm.h
index 661da11..53b3c4a 100644
--- a/arch/arm/include/asm/kvm_asm.h
+++ b/arch/arm/include/asm/kvm_asm.h
@@ -48,7 +48,9 @@
#define c13_TID_URO 26 /* Thread ID, User R/O */
#define c13_TID_PRIV 27 /* Thread ID, Privileged */
#define c14_CNTKCTL 28 /* Timer Control Register (PL1) */
-#define NR_CP15_REGS 29 /* Number of regs (incl. invalid) */
+#define c10_AMAIR0 29 /* Auxilary Memory Attribute Indirection Reg0 */
+#define c10_AMAIR1 30 /* Auxilary Memory Attribute Indirection Reg1 */
+#define NR_CP15_REGS 31 /* Number of regs (incl. invalid) */
#define ARM_EXCEPTION_RESET 0
#define ARM_EXCEPTION_UNDEFINED 1
diff --git a/arch/arm/kvm/coproc.c b/arch/arm/kvm/coproc.c
index 1839770..539f6d4 100644
--- a/arch/arm/kvm/coproc.c
+++ b/arch/arm/kvm/coproc.c
@@ -381,6 +381,12 @@ static const struct coproc_reg cp15_regs[] = {
{ CRn(10), CRm( 2), Op1( 0), Op2( 1), is32,
access_vm_reg, reset_unknown, c10_NMRR},
+ /* AMAIR0/AMAIR1: swapped by interrupt.S. */
+ { CRn(10), CRm( 3), Op1( 0), Op2( 0), is32,
+ access_vm_reg, reset_unknown, c10_AMAIR0},
+ { CRn(10), CRm( 3), Op1( 0), Op2( 1), is32,
+ access_vm_reg, reset_unknown, c10_AMAIR1},
+
/* VBAR: swapped by interrupt.S. */
{ CRn(12), CRm( 0), Op1( 0), Op2( 0), is32,
NULL, reset_val, c12_VBAR, 0x00000000 },
diff --git a/arch/arm/kvm/interrupts_head.S b/arch/arm/kvm/interrupts_head.S
index 7cb41e1..e4eaf30 100644
--- a/arch/arm/kvm/interrupts_head.S
+++ b/arch/arm/kvm/interrupts_head.S
@@ -303,13 +303,17 @@ vcpu .req r0 @ vcpu pointer always in r0
mrc p15, 0, r2, c14, c1, 0 @ CNTKCTL
mrrc p15, 0, r4, r5, c7 @ PAR
+ mrc p15, 0, r6, c10, c3, 0 @ AMAIR0
+ mrc p15, 0, r7, c10, c3, 1 @ AMAIR1
.if \store_to_vcpu == 0
- push {r2,r4-r5}
+ push {r2,r4-r7}
.else
str r2, [vcpu, #CP15_OFFSET(c14_CNTKCTL)]
add r12, vcpu, #CP15_OFFSET(c7_PAR)
strd r4, r5, [r12]
+ str r6, [vcpu, #CP15_OFFSET(c10_AMAIR0)]
+ str r7, [vcpu, #CP15_OFFSET(c10_AMAIR1)]
.endif
.endm
@@ -322,15 +326,19 @@ vcpu .req r0 @ vcpu pointer always in r0
*/
.macro write_cp15_state read_from_vcpu
.if \read_from_vcpu == 0
- pop {r2,r4-r5}
+ pop {r2,r4-r7}
.else
ldr r2, [vcpu, #CP15_OFFSET(c14_CNTKCTL)]
add r12, vcpu, #CP15_OFFSET(c7_PAR)
ldrd r4, r5, [r12]
+ ldr r6, [vcpu, #CP15_OFFSET(c10_AMAIR0)]
+ ldr r7, [vcpu, #CP15_OFFSET(c10_AMAIR1)]
.endif
mcr p15, 0, r2, c14, c1, 0 @ CNTKCTL
mcrr p15, 0, r4, r5, c7 @ PAR
+ mcr p15, 0, r6, c10, c3, 0 @ AMAIR0
+ mcr p15, 0, r7, c10, c3, 1 @ AMAIR1
.if \read_from_vcpu == 0
pop {r2-r12}
--
1.8.3.4
* [PATCH v2 10/10] ARM: KVM: add world-switch for AMAIR{0,1}
2014-01-22 14:56 ` [PATCH v2 10/10] ARM: KVM: add world-switch for AMAIR{0,1} Marc Zyngier
@ 2014-01-29 20:08 ` Christoffer Dall
0 siblings, 0 replies; 22+ messages in thread
From: Christoffer Dall @ 2014-01-29 20:08 UTC (permalink / raw)
To: linux-arm-kernel
On Wed, Jan 22, 2014 at 02:56:42PM +0000, Marc Zyngier wrote:
> HCR.TVM traps (among other things) accesses to AMAIR0 and AMAIR1.
> In order to minimise the amount of surprise a guest could generate by
> trying to access these registers with caches off, add them to the
> list of registers we switch/handle.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
> arch/arm/include/asm/kvm_asm.h | 4 +++-
> arch/arm/kvm/coproc.c | 6 ++++++
> arch/arm/kvm/interrupts_head.S | 12 ++++++++++--
> 3 files changed, 19 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm/include/asm/kvm_asm.h b/arch/arm/include/asm/kvm_asm.h
> index 661da11..53b3c4a 100644
> --- a/arch/arm/include/asm/kvm_asm.h
> +++ b/arch/arm/include/asm/kvm_asm.h
> @@ -48,7 +48,9 @@
> #define c13_TID_URO 26 /* Thread ID, User R/O */
> #define c13_TID_PRIV 27 /* Thread ID, Privileged */
> #define c14_CNTKCTL 28 /* Timer Control Register (PL1) */
> -#define NR_CP15_REGS 29 /* Number of regs (incl. invalid) */
> +#define c10_AMAIR0 29 /* Auxilary Memory Attribute Indirection Reg0 */
> +#define c10_AMAIR1 30 /* Auxilary Memory Attribute Indirection Reg1 */
> +#define NR_CP15_REGS 31 /* Number of regs (incl. invalid) */
>
> #define ARM_EXCEPTION_RESET 0
> #define ARM_EXCEPTION_UNDEFINED 1
> diff --git a/arch/arm/kvm/coproc.c b/arch/arm/kvm/coproc.c
> index 1839770..539f6d4 100644
> --- a/arch/arm/kvm/coproc.c
> +++ b/arch/arm/kvm/coproc.c
> @@ -381,6 +381,12 @@ static const struct coproc_reg cp15_regs[] = {
> { CRn(10), CRm( 2), Op1( 0), Op2( 1), is32,
> access_vm_reg, reset_unknown, c10_NMRR},
>
> + /* AMAIR0/AMAIR1: swapped by interrupt.S. */
> + { CRn(10), CRm( 3), Op1( 0), Op2( 0), is32,
> + access_vm_reg, reset_unknown, c10_AMAIR0},
> + { CRn(10), CRm( 3), Op1( 0), Op2( 1), is32,
> + access_vm_reg, reset_unknown, c10_AMAIR1},
> +
> /* VBAR: swapped by interrupt.S. */
> { CRn(12), CRm( 0), Op1( 0), Op2( 0), is32,
> NULL, reset_val, c12_VBAR, 0x00000000 },
> diff --git a/arch/arm/kvm/interrupts_head.S b/arch/arm/kvm/interrupts_head.S
> index 7cb41e1..e4eaf30 100644
> --- a/arch/arm/kvm/interrupts_head.S
> +++ b/arch/arm/kvm/interrupts_head.S
> @@ -303,13 +303,17 @@ vcpu .req r0 @ vcpu pointer always in r0
>
> mrc p15, 0, r2, c14, c1, 0 @ CNTKCTL
> mrrc p15, 0, r4, r5, c7 @ PAR
> + mrc p15, 0, r6, c10, c3, 0 @ AMAIR0
> + mrc p15, 0, r7, c10, c3, 1 @ AMAIR1
>
> .if \store_to_vcpu == 0
> - push {r2,r4-r5}
> + push {r2,r4-r7}
> .else
> str r2, [vcpu, #CP15_OFFSET(c14_CNTKCTL)]
> add r12, vcpu, #CP15_OFFSET(c7_PAR)
> strd r4, r5, [r12]
> + str r6, [vcpu, #CP15_OFFSET(c10_AMAIR0)]
> + str r7, [vcpu, #CP15_OFFSET(c10_AMAIR1)]
> .endif
> .endm
>
> @@ -322,15 +326,19 @@ vcpu .req r0 @ vcpu pointer always in r0
> */
> .macro write_cp15_state read_from_vcpu
> .if \read_from_vcpu == 0
> - pop {r2,r4-r5}
> + pop {r2,r4-r7}
> .else
> ldr r2, [vcpu, #CP15_OFFSET(c14_CNTKCTL)]
> add r12, vcpu, #CP15_OFFSET(c7_PAR)
> ldrd r4, r5, [r12]
> + ldr r6, [vcpu, #CP15_OFFSET(c10_AMAIR0)]
> + ldr r7, [vcpu, #CP15_OFFSET(c10_AMAIR1)]
> .endif
>
> mcr p15, 0, r2, c14, c1, 0 @ CNTKCTL
> mcrr p15, 0, r4, r5, c7 @ PAR
> + mcr p15, 0, r6, c10, c3, 0 @ AMAIR0
> + mcr p15, 0, r7, c10, c3, 1 @ AMAIR1
>
> .if \read_from_vcpu == 0
> pop {r2-r12}
> --
> 1.8.3.4
>
Looks good, but shouldn't this be added before patch 9 to maintain
functional bisectability?
Otherwise:
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
* [PATCH v2 00/10] arm/arm64: KVM: host cache maintainance when guest caches are off
2014-01-22 14:56 [PATCH v2 00/10] arm/arm64: KVM: host cache maintainance when guest caches are off Marc Zyngier
` (9 preceding siblings ...)
2014-01-22 14:56 ` [PATCH v2 10/10] ARM: KVM: add world-switch for AMAIR{0,1} Marc Zyngier
@ 2014-01-28 12:11 ` Pranavkumar Sawargaonkar
10 siblings, 0 replies; 22+ messages in thread
From: Pranavkumar Sawargaonkar @ 2014-01-28 12:11 UTC (permalink / raw)
To: linux-arm-kernel
Hi Marc,
On 22 January 2014 20:26, Marc Zyngier <marc.zyngier@arm.com> wrote:
> When we run a guest with cache disabled, we don't flush the cache to
> the Point of Coherency, hence possibly missing bits of data that have
> been written in the cache, but have not yet reached memory.
>
> We also have the opposite issue: when a guest enables its cache,
> whatever sits in the cache is suddenly going to become visible,
> shadowing whatever the guest has written into RAM.
>
> There are several approaches to these issues:
> - Using the DC bit when caches are off: this breaks guests assuming
> caches off while doing DMA operations. Bootloaders, for example.
> It also breaks the I-D coherency.
> - Fetch the memory attributes on translation fault, and flush the
> cache while handling the fault. This relies on using the PAR_EL1
> register to obtain the Stage-1 memory attributes, and tends to be
> slow.
> - Detecting the translation faults occurring with MMU off (and
> performing a cache clean), and trapping SCTLR_EL1 to detect the
> moment when the guest is turning its caches on (and performing a
> cache invalidation). Trapping of SCTLR_EL1 is then disabled to
> ensure the best performance.
>
> This patch series implements the last solution, for both arm and
> arm64. Tested on TC2 (ARMv7) and FVP model (ARMv8).
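To make the chosen approach concrete, the overall flow can be sketched roughly as follows (pseudo-C; the placeholder function names are not from the series, and the real implementations are spread across the individual patches):

	/* 1. Stage-2 fault path: if the guest still runs with caches off,
	 *    clean the newly mapped region to the Point of Coherency. */
	static void map_guest_page(struct kvm_vcpu *vcpu, void *va, size_t size)
	{
		if (!vcpu_has_cache_enabled(vcpu))	/* placeholder check */
			kvm_flush_dcache_to_poc(va, size);
	}

	/* 2. Trapped SCTLR write: once MMU and caches are enabled,
	 *    flush the VM and stop trapping the VM registers. */
	static void handle_sctlr_write(struct kvm_vcpu *vcpu, unsigned long val)
	{
		if ((val & 0b101) == 0b101) {		/* MMU + D-cache on */
			stage2_flush_vm(vcpu->kvm);
			vcpu->arch.hcr &= ~HCR_TVM;	/* back to the fast path */
		}
	}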
>
> From v1 (http://www.spinics.net/lists/kvm/msg99404.html):
> - Fixed AArch32 VM handling on arm64 (Reported by Anup)
> - Added ARMv7 support:
> * Fixed a couple of issues regarding handling of 64bit cp15 regs
> * Per-vcpu HCR
> * Switching of AMAIR0 and AMAIR1
>
> Marc Zyngier (10):
> arm64: KVM: force cache clean on page fault when caches are off
> arm64: KVM: allows discrimination of AArch32 sysreg access
> arm64: KVM: trap VM system registers until MMU and caches are ON
> arm64: KVM: flush VM pages before letting the guest enable caches
> ARM: KVM: force cache clean on page fault when caches are off
> ARM: KVM: fix handling of trapped 64bit coprocessor accesses
> ARM: KVM: fix ordering of 64bit coprocessor accesses
> ARM: KVM: introduce per-vcpu HYP Configuration Register
> ARM: KVM: trap VM system registers until MMU and caches are ON
> ARM: KVM: add world-switch for AMAIR{0,1}
>
> arch/arm/include/asm/kvm_arm.h | 4 +-
> arch/arm/include/asm/kvm_asm.h | 4 +-
> arch/arm/include/asm/kvm_host.h | 9 ++--
> arch/arm/include/asm/kvm_mmu.h | 11 ++--
> arch/arm/kernel/asm-offsets.c | 1 +
> arch/arm/kvm/coproc.c | 95 +++++++++++++++++++++++++++-------
> arch/arm/kvm/coproc.h | 14 +++--
> arch/arm/kvm/coproc_a15.c | 2 +-
> arch/arm/kvm/coproc_a7.c | 2 +-
> arch/arm/kvm/guest.c | 1 +
> arch/arm/kvm/interrupts_head.S | 21 +++++---
> arch/arm/kvm/mmu.c | 87 ++++++++++++++++++++++++++++++-
> arch/arm64/include/asm/kvm_arm.h | 3 +-
> arch/arm64/include/asm/kvm_asm.h | 3 +-
> arch/arm64/include/asm/kvm_mmu.h | 12 +++--
> arch/arm64/kvm/sys_regs.c | 107 ++++++++++++++++++++++++++++++++++-----
> arch/arm64/kvm/sys_regs.h | 2 +
> 17 files changed, 316 insertions(+), 62 deletions(-)
>
Anup and I have successfully tested this patch set on XGENE and on the
foundation model.
foundation model.
> --
> 1.8.3.4
>
> _______________________________________________
> kvmarm mailing list
> kvmarm at lists.cs.columbia.edu
> https://lists.cs.columbia.edu/cucslists/listinfo/kvmarm
Thanks,
Pranav