* [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel
@ 2015-05-04 1:51 shannon.zhao
2015-05-04 1:51 ` [PATCH for 3.14.y stable 01/47] arm64: KVM: force cache clean on page fault when caches are off shannon.zhao
` (47 more replies)
0 siblings, 48 replies; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:51 UTC (permalink / raw)
To: stable; +Cc: gregkh, christoffer.dall, shannon.zhao
From: Shannon Zhao <shannon.zhao@linaro.org>
Many KVM/ARM fixes have been applied upstream but never picked up by
the stable kernels. This series backports the important ones to the
3.14.y stable kernel.
We have compile-tested each patch on arm/arm64/x86 to make sure the
series is bisectable, booted the resulting kernel on Fast Model and
started two VMs for both arm and arm64, and boot-tested on TC2 and
started a guest.
These patches apply on top of 3.14.40 and can be fetched from the
following address:
https://git.linaro.org/people/shannon.zhao/linux-stable.git linux-3.14.y
Thanks,
Shannon
Alex Bennée (1):
arm64: KVM: export demux regids as KVM_REG_ARM64
Andre Przywara (1):
KVM: arm/arm64: vgic: fix GICD_ICFGR register accesses
Ard Biesheuvel (3):
ARM/arm64: KVM: fix use of WnR bit in kvm_is_write_fault()
arm/arm64: KVM: fix potential NULL dereference in user_mem_abort()
arm/arm64: kvm: drop inappropriate use of kvm_is_mmio_pfn()
Christoffer Dall (11):
arm/arm64: KVM: Fix and refactor unmap_range
arm/arm64: KVM: Fix set_clear_sgi_pend_reg offset
arm/arm64: KVM: Ensure memslots are within KVM_PHYS_SIZE
arm/arm64: KVM: vgic: Fix error code in kvm_vgic_create()
arm/arm64: KVM: Don't clear the VCPU_POWER_OFF flag
arm/arm64: KVM: Correct KVM_ARM_VCPU_INIT power off option
arm/arm64: KVM: Reset the HCR on each vcpu when resetting the vcpu
arm/arm64: KVM: Introduce stage2_unmap_vm
arm/arm64: KVM: Don't allow creating VCPUs after vgic_initialized
arm/arm64: KVM: Require in-kernel vgic for the arch timers
arm/arm64: KVM: Keep elrsr/aisr in sync with software model
Eric Auger (1):
ARM: KVM: Unmap IPA on memslot delete/move
Geoff Levand (1):
arm64/kvm: Fix assembler compatibility of macros
Haibin Wang (1):
KVM: ARM: vgic: Fix the overlap check action about setting the GICD &
GICC base address.
Joel Schopp (1):
arm/arm64: KVM: Fix VTTBR_BADDR_MASK and pgd alloc
Kim Phillips (1):
ARM: KVM: user_mem_abort: support stage 2 MMIO page mapping
Li Liu (1):
ARM: virt: fix wrong HSCTLR.EE bit setting
Marc Zyngier (15):
arm64: KVM: force cache clean on page fault when caches are off
arm64: KVM: allows discrimination of AArch32 sysreg access
arm64: KVM: trap VM system registers until MMU and caches are ON
ARM: KVM: introduce kvm_p*d_addr_end
arm64: KVM: flush VM pages before letting the guest enable caches
ARM: KVM: force cache clean on page fault when caches are off
ARM: KVM: fix handling of trapped 64bit coprocessor accesses
ARM: KVM: fix ordering of 64bit coprocessor accesses
ARM: KVM: introduce per-vcpu HYP Configuration Register
ARM: KVM: add world-switch for AMAIR{0,1}
ARM: KVM: trap VM system registers until MMU and caches are ON
KVM: ARM: vgic: plug irq injection race
arm64: KVM: Fix TLB invalidation by IPA/VMID
arm64: KVM: Fix HCR setting for 32bit guests
arm64: KVM: Do not use pgd_index to index stage-2 pgd
Mark Rutland (1):
arm64: KVM: fix unmapping with 48-bit VAs
Steve Capper (1):
arm: kvm: STRICT_MM_TYPECHECKS fix for user_mem_abort
Victor Kamensky (1):
ARM64: KVM: store kvm_vcpu_fault_info est_el2 as word
Vladimir Murzin (1):
arm: kvm: fix CPU hotplug
Will Deacon (6):
arm64: kvm: use inner-shareable barriers for inner-shareable
maintenance
kvm: arm64: vgic: fix hyp panic with 64k pages on juno platform
KVM: ARM/arm64: fix non-const declaration of function returning const
KVM: ARM/arm64: fix broken __percpu annotation
KVM: ARM/arm64: avoid returning negative error code as bool
KVM: vgic: return int instead of bool when checking I/O ranges
Documentation/virtual/kvm/api.txt | 3 +-
arch/arm/include/asm/kvm_arm.h | 4 +-
arch/arm/include/asm/kvm_asm.h | 4 +-
arch/arm/include/asm/kvm_emulate.h | 5 +
arch/arm/include/asm/kvm_host.h | 11 +-
arch/arm/include/asm/kvm_mmu.h | 55 +++--
arch/arm/kernel/asm-offsets.c | 1 +
arch/arm/kernel/hyp-stub.S | 4 +-
arch/arm/kvm/arm.c | 73 +++----
arch/arm/kvm/coproc.c | 86 ++++++--
arch/arm/kvm/coproc.h | 14 +-
arch/arm/kvm/coproc_a15.c | 2 +-
arch/arm/kvm/coproc_a7.c | 2 +-
arch/arm/kvm/interrupts_head.S | 21 +-
arch/arm/kvm/mmu.c | 398 ++++++++++++++++++++++++++++-------
arch/arm64/include/asm/kvm_arm.h | 35 ++-
arch/arm64/include/asm/kvm_asm.h | 3 +-
arch/arm64/include/asm/kvm_emulate.h | 7 +
arch/arm64/include/asm/kvm_host.h | 4 +-
arch/arm64/include/asm/kvm_mmu.h | 58 +++--
arch/arm64/kvm/guest.c | 1 -
arch/arm64/kvm/hyp.S | 15 +-
arch/arm64/kvm/reset.c | 1 -
arch/arm64/kvm/sys_regs.c | 103 +++++++--
arch/arm64/kvm/sys_regs.h | 2 +
include/kvm/arm_arch_timer.h | 10 +-
virt/kvm/arm/arch_timer.c | 30 ++-
virt/kvm/arm/vgic.c | 65 ++++--
28 files changed, 756 insertions(+), 261 deletions(-)
--
2.1.0
^ permalink raw reply [flat|nested] 58+ messages in thread
* [PATCH for 3.14.y stable 01/47] arm64: KVM: force cache clean on page fault when caches are off
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
@ 2015-05-04 1:51 ` shannon.zhao
2015-05-04 1:51 ` [PATCH for 3.14.y stable 02/47] arm64: KVM: allows discrimination of AArch32 sysreg access shannon.zhao
` (46 subsequent siblings)
47 siblings, 0 replies; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:51 UTC (permalink / raw)
To: stable; +Cc: gregkh, christoffer.dall, shannon.zhao, Marc Zyngier
From: Marc Zyngier <marc.zyngier@arm.com>
commit 2d58b733c87689d3d5144e4ac94ea861cc729145 upstream.
In order for a guest with caches off to observe data written to a
given page, we need to make sure that page is committed to memory,
and not just sitting in the cache (guest accesses bypass the cache
entirely until the guest decides to enable it).
For this purpose, hook into the coherent_icache_guest_page
function and flush the region if the guest SCTLR_EL1
register doesn't show the MMU and caches as being enabled.
The function also gets renamed to coherent_cache_guest_page.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm/include/asm/kvm_mmu.h | 4 ++--
arch/arm/kvm/mmu.c | 4 ++--
arch/arm64/include/asm/kvm_mmu.h | 16 ++++++++++++----
3 files changed, 16 insertions(+), 8 deletions(-)
diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 2d122ad..6d0f3d3 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -116,8 +116,8 @@ static inline void kvm_set_s2pmd_writable(pmd_t *pmd)
struct kvm;
-static inline void coherent_icache_guest_page(struct kvm *kvm, hva_t hva,
- unsigned long size)
+static inline void coherent_cache_guest_page(struct kvm_vcpu *vcpu, hva_t hva,
+ unsigned long size)
{
/*
* If we are going to insert an instruction page and the icache is
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 575d790..2fcd3a3 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -717,7 +717,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
kvm_set_s2pmd_writable(&new_pmd);
kvm_set_pfn_dirty(pfn);
}
- coherent_icache_guest_page(kvm, hva & PMD_MASK, PMD_SIZE);
+ coherent_cache_guest_page(vcpu, hva & PMD_MASK, PMD_SIZE);
ret = stage2_set_pmd_huge(kvm, memcache, fault_ipa, &new_pmd);
} else {
pte_t new_pte = pfn_pte(pfn, PAGE_S2);
@@ -725,7 +725,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
kvm_set_s2pte_writable(&new_pte);
kvm_set_pfn_dirty(pfn);
}
- coherent_icache_guest_page(kvm, hva, PAGE_SIZE);
+ coherent_cache_guest_page(vcpu, hva, PAGE_SIZE);
ret = stage2_set_pte(kvm, memcache, fault_ipa, &new_pte, false);
}
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 7f1f940..6eaf69b 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -106,7 +106,6 @@ static inline bool kvm_is_write_fault(unsigned long esr)
return true;
}
-static inline void kvm_clean_dcache_area(void *addr, size_t size) {}
static inline void kvm_clean_pgd(pgd_t *pgd) {}
static inline void kvm_clean_pmd_entry(pmd_t *pmd) {}
static inline void kvm_clean_pte(pte_t *pte) {}
@@ -124,9 +123,19 @@ static inline void kvm_set_s2pmd_writable(pmd_t *pmd)
struct kvm;
-static inline void coherent_icache_guest_page(struct kvm *kvm, hva_t hva,
- unsigned long size)
+#define kvm_flush_dcache_to_poc(a,l) __flush_dcache_area((a), (l))
+
+static inline bool vcpu_has_cache_enabled(struct kvm_vcpu *vcpu)
{
+ return (vcpu_sys_reg(vcpu, SCTLR_EL1) & 0b101) == 0b101;
+}
+
+static inline void coherent_cache_guest_page(struct kvm_vcpu *vcpu, hva_t hva,
+ unsigned long size)
+{
+ if (!vcpu_has_cache_enabled(vcpu))
+ kvm_flush_dcache_to_poc((void *)hva, size);
+
if (!icache_is_aliasing()) { /* PIPT */
flush_icache_range(hva, hva + size);
} else if (!icache_is_aivivt()) { /* non ASID-tagged VIVT */
@@ -135,7 +144,6 @@ static inline void coherent_icache_guest_page(struct kvm *kvm, hva_t hva,
}
}
-#define kvm_flush_dcache_to_poc(a,l) __flush_dcache_area((a), (l))
#define kvm_virt_to_phys(x) __virt_to_phys((unsigned long)(x))
#endif /* __ASSEMBLY__ */
--
2.1.0
* [PATCH for 3.14.y stable 02/47] arm64: KVM: allows discrimination of AArch32 sysreg access
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
2015-05-04 1:51 ` [PATCH for 3.14.y stable 01/47] arm64: KVM: force cache clean on page fault when caches are off shannon.zhao
@ 2015-05-04 1:51 ` shannon.zhao
2015-05-04 1:51 ` [PATCH for 3.14.y stable 03/47] arm64: KVM: trap VM system registers until MMU and caches are ON shannon.zhao
` (45 subsequent siblings)
47 siblings, 0 replies; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:51 UTC (permalink / raw)
To: stable; +Cc: gregkh, christoffer.dall, shannon.zhao, Marc Zyngier
From: Marc Zyngier <marc.zyngier@arm.com>
commit 2072d29c46b73e39b3c6c56c6027af77086f45fd upstream.
The current handling of AArch32 trapping is slightly less than
perfect, as it is not possible (from a handler point of view)
to distinguish it from an AArch64 access, or to tell a 32bit
access from a 64bit one.
Fix this by introducing two additional flags:
- is_aarch32: true if the access was made in AArch32 mode
- is_32bit: true if is_aarch32 == true and a MCR/MRC instruction
was used to perform the access (as opposed to MCRR/MRRC).
This allows a handler to cover all the possible conditions in which
a system register gets trapped.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm64/kvm/sys_regs.c | 6 ++++++
arch/arm64/kvm/sys_regs.h | 2 ++
2 files changed, 8 insertions(+)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 02e9d09..bf03e0f 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -437,6 +437,8 @@ int kvm_handle_cp15_64(struct kvm_vcpu *vcpu, struct kvm_run *run)
u32 hsr = kvm_vcpu_get_hsr(vcpu);
int Rt2 = (hsr >> 10) & 0xf;
+ params.is_aarch32 = true;
+ params.is_32bit = false;
params.CRm = (hsr >> 1) & 0xf;
params.Rt = (hsr >> 5) & 0xf;
params.is_write = ((hsr & 1) == 0);
@@ -480,6 +482,8 @@ int kvm_handle_cp15_32(struct kvm_vcpu *vcpu, struct kvm_run *run)
struct sys_reg_params params;
u32 hsr = kvm_vcpu_get_hsr(vcpu);
+ params.is_aarch32 = true;
+ params.is_32bit = true;
params.CRm = (hsr >> 1) & 0xf;
params.Rt = (hsr >> 5) & 0xf;
params.is_write = ((hsr & 1) == 0);
@@ -549,6 +553,8 @@ int kvm_handle_sys_reg(struct kvm_vcpu *vcpu, struct kvm_run *run)
struct sys_reg_params params;
unsigned long esr = kvm_vcpu_get_hsr(vcpu);
+ params.is_aarch32 = false;
+ params.is_32bit = false;
params.Op0 = (esr >> 20) & 3;
params.Op1 = (esr >> 14) & 0x7;
params.CRn = (esr >> 10) & 0xf;
diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h
index d50d372..d411e25 100644
--- a/arch/arm64/kvm/sys_regs.h
+++ b/arch/arm64/kvm/sys_regs.h
@@ -30,6 +30,8 @@ struct sys_reg_params {
u8 Op2;
u8 Rt;
bool is_write;
+ bool is_aarch32;
+ bool is_32bit; /* Only valid if is_aarch32 is true */
};
struct sys_reg_desc {
--
2.1.0
* [PATCH for 3.14.y stable 03/47] arm64: KVM: trap VM system registers until MMU and caches are ON
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
2015-05-04 1:51 ` [PATCH for 3.14.y stable 01/47] arm64: KVM: force cache clean on page fault when caches are off shannon.zhao
2015-05-04 1:51 ` [PATCH for 3.14.y stable 02/47] arm64: KVM: allows discrimination of AArch32 sysreg access shannon.zhao
@ 2015-05-04 1:51 ` shannon.zhao
2015-05-04 1:51 ` [PATCH for 3.14.y stable 04/47] ARM: KVM: introduce kvm_p*d_addr_end shannon.zhao
` (44 subsequent siblings)
47 siblings, 0 replies; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:51 UTC (permalink / raw)
To: stable; +Cc: gregkh, christoffer.dall, shannon.zhao, Marc Zyngier
From: Marc Zyngier <marc.zyngier@arm.com>
commit 4d44923b17bff283c002ed961373848284aaff1b upstream.
In order to be able to detect the point where the guest enables
its MMU and caches, trap all the VM related system registers.
Once we see the guest enabling both the MMU and the caches, we
can go back to a saner mode of operation, which is to leave these
registers in complete control of the guest.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm64/include/asm/kvm_arm.h | 3 +-
arch/arm64/include/asm/kvm_asm.h | 3 +-
arch/arm64/kvm/sys_regs.c | 91 ++++++++++++++++++++++++++++++++++------
3 files changed, 83 insertions(+), 14 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 0eb3986..00fbaa7 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -62,6 +62,7 @@
* RW: 64bit by default, can be overriden for 32bit VMs
* TAC: Trap ACTLR
* TSC: Trap SMC
+ * TVM: Trap VM ops (until M+C set in SCTLR_EL1)
* TSW: Trap cache operations by set/way
* TWE: Trap WFE
* TWI: Trap WFI
@@ -74,7 +75,7 @@
* SWIO: Turn set/way invalidates into set/way clean+invalidate
*/
#define HCR_GUEST_FLAGS (HCR_TSC | HCR_TSW | HCR_TWE | HCR_TWI | HCR_VM | \
- HCR_BSU_IS | HCR_FB | HCR_TAC | \
+ HCR_TVM | HCR_BSU_IS | HCR_FB | HCR_TAC | \
HCR_AMO | HCR_IMO | HCR_FMO | \
HCR_SWIO | HCR_TIDCP | HCR_RW)
#define HCR_VIRT_EXCP_MASK (HCR_VA | HCR_VI | HCR_VF)
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index b25763b..9fcd54b 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -79,7 +79,8 @@
#define c13_TID_URW (TPIDR_EL0 * 2) /* Thread ID, User R/W */
#define c13_TID_URO (TPIDRRO_EL0 * 2)/* Thread ID, User R/O */
#define c13_TID_PRIV (TPIDR_EL1 * 2) /* Thread ID, Privileged */
-#define c10_AMAIR (AMAIR_EL1 * 2) /* Aux Memory Attr Indirection Reg */
+#define c10_AMAIR0 (AMAIR_EL1 * 2) /* Aux Memory Attr Indirection Reg */
+#define c10_AMAIR1 (c10_AMAIR0 + 1)/* Aux Memory Attr Indirection Reg */
#define c14_CNTKCTL (CNTKCTL_EL1 * 2) /* Timer Control Register (PL1) */
#define NR_CP15_REGS (NR_SYS_REGS * 2)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index bf03e0f..f9bfd44 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -27,6 +27,7 @@
#include <asm/kvm_host.h>
#include <asm/kvm_emulate.h>
#include <asm/kvm_coproc.h>
+#include <asm/kvm_mmu.h>
#include <asm/cacheflush.h>
#include <asm/cputype.h>
#include <trace/events/kvm.h>
@@ -121,6 +122,46 @@ done:
}
/*
+ * Generic accessor for VM registers. Only called as long as HCR_TVM
+ * is set.
+ */
+static bool access_vm_reg(struct kvm_vcpu *vcpu,
+ const struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ unsigned long val;
+
+ BUG_ON(!p->is_write);
+
+ val = *vcpu_reg(vcpu, p->Rt);
+ if (!p->is_aarch32) {
+ vcpu_sys_reg(vcpu, r->reg) = val;
+ } else {
+ vcpu_cp15(vcpu, r->reg) = val & 0xffffffffUL;
+ if (!p->is_32bit)
+ vcpu_cp15(vcpu, r->reg + 1) = val >> 32;
+ }
+ return true;
+}
+
+/*
+ * SCTLR_EL1 accessor. Only called as long as HCR_TVM is set. If the
+ * guest enables the MMU, we stop trapping the VM sys_regs and leave
+ * it in complete control of the caches.
+ */
+static bool access_sctlr(struct kvm_vcpu *vcpu,
+ const struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ access_vm_reg(vcpu, p, r);
+
+ if (vcpu_has_cache_enabled(vcpu)) /* MMU+Caches enabled? */
+ vcpu->arch.hcr_el2 &= ~HCR_TVM;
+
+ return true;
+}
+
+/*
* We could trap ID_DFR0 and tell the guest we don't support performance
* monitoring. Unfortunately the patch to make the kernel check ID_DFR0 was
* NAKed, so it will read the PMCR anyway.
@@ -185,32 +226,32 @@ static const struct sys_reg_desc sys_reg_descs[] = {
NULL, reset_mpidr, MPIDR_EL1 },
/* SCTLR_EL1 */
{ Op0(0b11), Op1(0b000), CRn(0b0001), CRm(0b0000), Op2(0b000),
- NULL, reset_val, SCTLR_EL1, 0x00C50078 },
+ access_sctlr, reset_val, SCTLR_EL1, 0x00C50078 },
/* CPACR_EL1 */
{ Op0(0b11), Op1(0b000), CRn(0b0001), CRm(0b0000), Op2(0b010),
NULL, reset_val, CPACR_EL1, 0 },
/* TTBR0_EL1 */
{ Op0(0b11), Op1(0b000), CRn(0b0010), CRm(0b0000), Op2(0b000),
- NULL, reset_unknown, TTBR0_EL1 },
+ access_vm_reg, reset_unknown, TTBR0_EL1 },
/* TTBR1_EL1 */
{ Op0(0b11), Op1(0b000), CRn(0b0010), CRm(0b0000), Op2(0b001),
- NULL, reset_unknown, TTBR1_EL1 },
+ access_vm_reg, reset_unknown, TTBR1_EL1 },
/* TCR_EL1 */
{ Op0(0b11), Op1(0b000), CRn(0b0010), CRm(0b0000), Op2(0b010),
- NULL, reset_val, TCR_EL1, 0 },
+ access_vm_reg, reset_val, TCR_EL1, 0 },
/* AFSR0_EL1 */
{ Op0(0b11), Op1(0b000), CRn(0b0101), CRm(0b0001), Op2(0b000),
- NULL, reset_unknown, AFSR0_EL1 },
+ access_vm_reg, reset_unknown, AFSR0_EL1 },
/* AFSR1_EL1 */
{ Op0(0b11), Op1(0b000), CRn(0b0101), CRm(0b0001), Op2(0b001),
- NULL, reset_unknown, AFSR1_EL1 },
+ access_vm_reg, reset_unknown, AFSR1_EL1 },
/* ESR_EL1 */
{ Op0(0b11), Op1(0b000), CRn(0b0101), CRm(0b0010), Op2(0b000),
- NULL, reset_unknown, ESR_EL1 },
+ access_vm_reg, reset_unknown, ESR_EL1 },
/* FAR_EL1 */
{ Op0(0b11), Op1(0b000), CRn(0b0110), CRm(0b0000), Op2(0b000),
- NULL, reset_unknown, FAR_EL1 },
+ access_vm_reg, reset_unknown, FAR_EL1 },
/* PAR_EL1 */
{ Op0(0b11), Op1(0b000), CRn(0b0111), CRm(0b0100), Op2(0b000),
NULL, reset_unknown, PAR_EL1 },
@@ -224,17 +265,17 @@ static const struct sys_reg_desc sys_reg_descs[] = {
/* MAIR_EL1 */
{ Op0(0b11), Op1(0b000), CRn(0b1010), CRm(0b0010), Op2(0b000),
- NULL, reset_unknown, MAIR_EL1 },
+ access_vm_reg, reset_unknown, MAIR_EL1 },
/* AMAIR_EL1 */
{ Op0(0b11), Op1(0b000), CRn(0b1010), CRm(0b0011), Op2(0b000),
- NULL, reset_amair_el1, AMAIR_EL1 },
+ access_vm_reg, reset_amair_el1, AMAIR_EL1 },
/* VBAR_EL1 */
{ Op0(0b11), Op1(0b000), CRn(0b1100), CRm(0b0000), Op2(0b000),
NULL, reset_val, VBAR_EL1, 0 },
/* CONTEXTIDR_EL1 */
{ Op0(0b11), Op1(0b000), CRn(0b1101), CRm(0b0000), Op2(0b001),
- NULL, reset_val, CONTEXTIDR_EL1, 0 },
+ access_vm_reg, reset_val, CONTEXTIDR_EL1, 0 },
/* TPIDR_EL1 */
{ Op0(0b11), Op1(0b000), CRn(0b1101), CRm(0b0000), Op2(0b100),
NULL, reset_unknown, TPIDR_EL1 },
@@ -305,14 +346,32 @@ static const struct sys_reg_desc sys_reg_descs[] = {
NULL, reset_val, FPEXC32_EL2, 0x70 },
};
-/* Trapped cp15 registers */
+/*
+ * Trapped cp15 registers. TTBR0/TTBR1 get a double encoding,
+ * depending on the way they are accessed (as a 32bit or a 64bit
+ * register).
+ */
static const struct sys_reg_desc cp15_regs[] = {
+ { Op1( 0), CRn( 0), CRm( 2), Op2( 0), access_vm_reg, NULL, c2_TTBR0 },
+ { Op1( 0), CRn( 1), CRm( 0), Op2( 0), access_sctlr, NULL, c1_SCTLR },
+ { Op1( 0), CRn( 2), CRm( 0), Op2( 0), access_vm_reg, NULL, c2_TTBR0 },
+ { Op1( 0), CRn( 2), CRm( 0), Op2( 1), access_vm_reg, NULL, c2_TTBR1 },
+ { Op1( 0), CRn( 2), CRm( 0), Op2( 2), access_vm_reg, NULL, c2_TTBCR },
+ { Op1( 0), CRn( 3), CRm( 0), Op2( 0), access_vm_reg, NULL, c3_DACR },
+ { Op1( 0), CRn( 5), CRm( 0), Op2( 0), access_vm_reg, NULL, c5_DFSR },
+ { Op1( 0), CRn( 5), CRm( 0), Op2( 1), access_vm_reg, NULL, c5_IFSR },
+ { Op1( 0), CRn( 5), CRm( 1), Op2( 0), access_vm_reg, NULL, c5_ADFSR },
+ { Op1( 0), CRn( 5), CRm( 1), Op2( 1), access_vm_reg, NULL, c5_AIFSR },
+ { Op1( 0), CRn( 6), CRm( 0), Op2( 0), access_vm_reg, NULL, c6_DFAR },
+ { Op1( 0), CRn( 6), CRm( 0), Op2( 2), access_vm_reg, NULL, c6_IFAR },
+
/*
* DC{C,I,CI}SW operations:
*/
{ Op1( 0), CRn( 7), CRm( 6), Op2( 2), access_dcsw },
{ Op1( 0), CRn( 7), CRm(10), Op2( 2), access_dcsw },
{ Op1( 0), CRn( 7), CRm(14), Op2( 2), access_dcsw },
+
{ Op1( 0), CRn( 9), CRm(12), Op2( 0), pm_fake },
{ Op1( 0), CRn( 9), CRm(12), Op2( 1), pm_fake },
{ Op1( 0), CRn( 9), CRm(12), Op2( 2), pm_fake },
@@ -326,6 +385,14 @@ static const struct sys_reg_desc cp15_regs[] = {
{ Op1( 0), CRn( 9), CRm(14), Op2( 0), pm_fake },
{ Op1( 0), CRn( 9), CRm(14), Op2( 1), pm_fake },
{ Op1( 0), CRn( 9), CRm(14), Op2( 2), pm_fake },
+
+ { Op1( 0), CRn(10), CRm( 2), Op2( 0), access_vm_reg, NULL, c10_PRRR },
+ { Op1( 0), CRn(10), CRm( 2), Op2( 1), access_vm_reg, NULL, c10_NMRR },
+ { Op1( 0), CRn(10), CRm( 3), Op2( 0), access_vm_reg, NULL, c10_AMAIR0 },
+ { Op1( 0), CRn(10), CRm( 3), Op2( 1), access_vm_reg, NULL, c10_AMAIR1 },
+ { Op1( 0), CRn(13), CRm( 0), Op2( 1), access_vm_reg, NULL, c13_CID },
+
+ { Op1( 1), CRn( 0), CRm( 2), Op2( 0), access_vm_reg, NULL, c2_TTBR1 },
};
/* Target specific emulation tables */
--
2.1.0
* [PATCH for 3.14.y stable 04/47] ARM: KVM: introduce kvm_p*d_addr_end
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
` (2 preceding siblings ...)
2015-05-04 1:51 ` [PATCH for 3.14.y stable 03/47] arm64: KVM: trap VM system registers until MMU and caches are ON shannon.zhao
@ 2015-05-04 1:51 ` shannon.zhao
2015-05-04 1:52 ` [PATCH for 3.14.y stable 05/47] arm64: KVM: flush VM pages before letting the guest enable caches shannon.zhao
` (43 subsequent siblings)
47 siblings, 0 replies; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:51 UTC (permalink / raw)
To: stable; +Cc: gregkh, christoffer.dall, shannon.zhao, Marc Zyngier
From: Marc Zyngier <marc.zyngier@arm.com>
commit a3c8bd31af260a17d626514f636849ee1cd1f63e upstream.
The use of p*d_addr_end with stage-2 translation is slightly dodgy,
as the IPA is 40 bits while all the p*d_addr_end helpers take an
unsigned long (arm64 is fine with that, as unsigned long is 64bit
there).
The fix is to introduce 64bit clean versions of the same helpers,
and use them in the stage-2 page table code.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm/include/asm/kvm_mmu.h | 13 +++++++++++++
arch/arm/kvm/mmu.c | 10 +++++-----
arch/arm64/include/asm/kvm_mmu.h | 4 ++++
3 files changed, 22 insertions(+), 5 deletions(-)
diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 6d0f3d3..891afe7 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -114,6 +114,19 @@ static inline void kvm_set_s2pmd_writable(pmd_t *pmd)
pmd_val(*pmd) |= L_PMD_S2_RDWR;
}
+/* Open coded p*d_addr_end that can deal with 64bit addresses */
+#define kvm_pgd_addr_end(addr, end) \
+({ u64 __boundary = ((addr) + PGDIR_SIZE) & PGDIR_MASK; \
+ (__boundary - 1 < (end) - 1)? __boundary: (end); \
+})
+
+#define kvm_pud_addr_end(addr,end) (end)
+
+#define kvm_pmd_addr_end(addr, end) \
+({ u64 __boundary = ((addr) + PMD_SIZE) & PMD_MASK; \
+ (__boundary - 1 < (end) - 1)? __boundary: (end); \
+})
+
struct kvm;
static inline void coherent_cache_guest_page(struct kvm_vcpu *vcpu, hva_t hva,
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 2fcd3a3..04d59f1 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -147,7 +147,7 @@ static void unmap_range(struct kvm *kvm, pgd_t *pgdp,
pgd = pgdp + pgd_index(addr);
pud = pud_offset(pgd, addr);
if (pud_none(*pud)) {
- addr = pud_addr_end(addr, end);
+ addr = kvm_pud_addr_end(addr, end);
continue;
}
@@ -157,13 +157,13 @@ static void unmap_range(struct kvm *kvm, pgd_t *pgdp,
* move on.
*/
clear_pud_entry(kvm, pud, addr);
- addr = pud_addr_end(addr, end);
+ addr = kvm_pud_addr_end(addr, end);
continue;
}
pmd = pmd_offset(pud, addr);
if (pmd_none(*pmd)) {
- addr = pmd_addr_end(addr, end);
+ addr = kvm_pmd_addr_end(addr, end);
continue;
}
@@ -178,10 +178,10 @@ static void unmap_range(struct kvm *kvm, pgd_t *pgdp,
*/
if (kvm_pmd_huge(*pmd) || page_empty(pte)) {
clear_pmd_entry(kvm, pmd, addr);
- next = pmd_addr_end(addr, end);
+ next = kvm_pmd_addr_end(addr, end);
if (page_empty(pmd) && !page_empty(pud)) {
clear_pud_entry(kvm, pud, addr);
- next = pud_addr_end(addr, end);
+ next = kvm_pud_addr_end(addr, end);
}
}
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 6eaf69b..00c0cc8 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -121,6 +121,10 @@ static inline void kvm_set_s2pmd_writable(pmd_t *pmd)
pmd_val(*pmd) |= PMD_S2_RDWR;
}
+#define kvm_pgd_addr_end(addr, end) pgd_addr_end(addr, end)
+#define kvm_pud_addr_end(addr, end) pud_addr_end(addr, end)
+#define kvm_pmd_addr_end(addr, end) pmd_addr_end(addr, end)
+
struct kvm;
#define kvm_flush_dcache_to_poc(a,l) __flush_dcache_area((a), (l))
--
2.1.0
* [PATCH for 3.14.y stable 05/47] arm64: KVM: flush VM pages before letting the guest enable caches
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
` (3 preceding siblings ...)
2015-05-04 1:51 ` [PATCH for 3.14.y stable 04/47] ARM: KVM: introduce kvm_p*d_addr_end shannon.zhao
@ 2015-05-04 1:52 ` shannon.zhao
2015-05-04 1:52 ` [PATCH for 3.14.y stable 06/47] ARM: KVM: force cache clean on page fault when caches are off shannon.zhao
` (42 subsequent siblings)
47 siblings, 0 replies; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:52 UTC (permalink / raw)
To: stable; +Cc: gregkh, christoffer.dall, shannon.zhao, Marc Zyngier
From: Marc Zyngier <marc.zyngier@arm.com>
commit 9d218a1fcf4c6b759d442ef702842fae92e1ea61 upstream.
When the guest runs with caches disabled (in an early boot
sequence, for example), all writes go directly to RAM,
bypassing the caches altogether.
Once the MMU and caches are enabled, whatever sits in the cache
suddenly becomes visible, which isn't what the guest expects.
A way to avoid this potential disaster is to invalidate the cache
when the MMU is being turned on. For this, we hook into the SCTLR_EL1
trapping code, and scan the stage-2 page tables, invalidating the
pages/sections that have already been mapped in.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm/include/asm/kvm_mmu.h | 2 +
arch/arm/kvm/mmu.c | 93 ++++++++++++++++++++++++++++++++++++++++
arch/arm64/include/asm/kvm_mmu.h | 2 +
arch/arm64/kvm/sys_regs.c | 4 +-
4 files changed, 100 insertions(+), 1 deletion(-)
diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 891afe7..eb85b81 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -155,6 +155,8 @@ static inline void coherent_cache_guest_page(struct kvm_vcpu *vcpu, hva_t hva,
#define kvm_flush_dcache_to_poc(a,l) __cpuc_flush_dcache_area((a), (l))
#define kvm_virt_to_phys(x) virt_to_idmap((unsigned long)(x))
+void stage2_flush_vm(struct kvm *kvm);
+
#endif /* !__ASSEMBLY__ */
#endif /* __ARM_KVM_MMU_H__ */
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 04d59f1..c93ef38 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -189,6 +189,99 @@ static void unmap_range(struct kvm *kvm, pgd_t *pgdp,
}
}
+static void stage2_flush_ptes(struct kvm *kvm, pmd_t *pmd,
+ phys_addr_t addr, phys_addr_t end)
+{
+ pte_t *pte;
+
+ pte = pte_offset_kernel(pmd, addr);
+ do {
+ if (!pte_none(*pte)) {
+ hva_t hva = gfn_to_hva(kvm, addr >> PAGE_SHIFT);
+ kvm_flush_dcache_to_poc((void*)hva, PAGE_SIZE);
+ }
+ } while (pte++, addr += PAGE_SIZE, addr != end);
+}
+
+static void stage2_flush_pmds(struct kvm *kvm, pud_t *pud,
+ phys_addr_t addr, phys_addr_t end)
+{
+ pmd_t *pmd;
+ phys_addr_t next;
+
+ pmd = pmd_offset(pud, addr);
+ do {
+ next = kvm_pmd_addr_end(addr, end);
+ if (!pmd_none(*pmd)) {
+ if (kvm_pmd_huge(*pmd)) {
+ hva_t hva = gfn_to_hva(kvm, addr >> PAGE_SHIFT);
+ kvm_flush_dcache_to_poc((void*)hva, PMD_SIZE);
+ } else {
+ stage2_flush_ptes(kvm, pmd, addr, next);
+ }
+ }
+ } while (pmd++, addr = next, addr != end);
+}
+
+static void stage2_flush_puds(struct kvm *kvm, pgd_t *pgd,
+ phys_addr_t addr, phys_addr_t end)
+{
+ pud_t *pud;
+ phys_addr_t next;
+
+ pud = pud_offset(pgd, addr);
+ do {
+ next = kvm_pud_addr_end(addr, end);
+ if (!pud_none(*pud)) {
+ if (pud_huge(*pud)) {
+ hva_t hva = gfn_to_hva(kvm, addr >> PAGE_SHIFT);
+ kvm_flush_dcache_to_poc((void*)hva, PUD_SIZE);
+ } else {
+ stage2_flush_pmds(kvm, pud, addr, next);
+ }
+ }
+ } while (pud++, addr = next, addr != end);
+}
+
+static void stage2_flush_memslot(struct kvm *kvm,
+ struct kvm_memory_slot *memslot)
+{
+ phys_addr_t addr = memslot->base_gfn << PAGE_SHIFT;
+ phys_addr_t end = addr + PAGE_SIZE * memslot->npages;
+ phys_addr_t next;
+ pgd_t *pgd;
+
+ pgd = kvm->arch.pgd + pgd_index(addr);
+ do {
+ next = kvm_pgd_addr_end(addr, end);
+ stage2_flush_puds(kvm, pgd, addr, next);
+ } while (pgd++, addr = next, addr != end);
+}
+
+/**
+ * stage2_flush_vm - Invalidate cache for pages mapped in stage 2
+ * @kvm: The struct kvm pointer
+ *
+ * Go through the stage 2 page tables and invalidate any cache lines
+ * backing memory already mapped to the VM.
+ */
+void stage2_flush_vm(struct kvm *kvm)
+{
+ struct kvm_memslots *slots;
+ struct kvm_memory_slot *memslot;
+ int idx;
+
+ idx = srcu_read_lock(&kvm->srcu);
+ spin_lock(&kvm->mmu_lock);
+
+ slots = kvm_memslots(kvm);
+ kvm_for_each_memslot(memslot, slots)
+ stage2_flush_memslot(kvm, memslot);
+
+ spin_unlock(&kvm->mmu_lock);
+ srcu_read_unlock(&kvm->srcu, idx);
+}
+
/**
* free_boot_hyp_pgd - free HYP boot page tables
*
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 00c0cc8..7d29847 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -150,5 +150,7 @@ static inline void coherent_cache_guest_page(struct kvm_vcpu *vcpu, hva_t hva,
#define kvm_virt_to_phys(x) __virt_to_phys((unsigned long)(x))
+void stage2_flush_vm(struct kvm *kvm);
+
#endif /* __ASSEMBLY__ */
#endif /* __ARM64_KVM_MMU_H__ */
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index f9bfd44..0324458 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -155,8 +155,10 @@ static bool access_sctlr(struct kvm_vcpu *vcpu,
{
access_vm_reg(vcpu, p, r);
- if (vcpu_has_cache_enabled(vcpu)) /* MMU+Caches enabled? */
+ if (vcpu_has_cache_enabled(vcpu)) { /* MMU+Caches enabled? */
vcpu->arch.hcr_el2 &= ~HCR_TVM;
+ stage2_flush_vm(vcpu->kvm);
+ }
return true;
}
--
2.1.0
* [PATCH for 3.14.y stable 06/47] ARM: KVM: force cache clean on page fault when caches are off
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
` (4 preceding siblings ...)
2015-05-04 1:52 ` [PATCH for 3.14.y stable 05/47] arm64: KVM: flush VM pages before letting the guest enable caches shannon.zhao
@ 2015-05-04 1:52 ` shannon.zhao
2015-05-04 1:52 ` [PATCH for 3.14.y stable 07/47] ARM: KVM: fix handling of trapped 64bit coprocessor accesses shannon.zhao
` (41 subsequent siblings)
47 siblings, 0 replies; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:52 UTC (permalink / raw)
To: stable; +Cc: gregkh, christoffer.dall, shannon.zhao, Marc Zyngier
From: Marc Zyngier <marc.zyngier@arm.com>
commit 159793001d7d85af17855630c94f0a176848e16b upstream.
In order for a guest with caches disabled to observe data written
to a given page, we need to make sure that page is
committed to memory, and not just hanging in the cache (as guest
accesses are completely bypassing the cache until it decides to
enable it).
For this purpose, hook into the coherent_cache_guest_page
function and flush the region if the guest SCTLR
register doesn't show the MMU and caches as being enabled.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm/include/asm/kvm_mmu.h | 11 ++++++++++-
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index eb85b81..5c7aa3c 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -129,9 +129,19 @@ static inline void kvm_set_s2pmd_writable(pmd_t *pmd)
struct kvm;
+#define kvm_flush_dcache_to_poc(a,l) __cpuc_flush_dcache_area((a), (l))
+
+static inline bool vcpu_has_cache_enabled(struct kvm_vcpu *vcpu)
+{
+ return (vcpu->arch.cp15[c1_SCTLR] & 0b101) == 0b101;
+}
+
static inline void coherent_cache_guest_page(struct kvm_vcpu *vcpu, hva_t hva,
unsigned long size)
{
+ if (!vcpu_has_cache_enabled(vcpu))
+ kvm_flush_dcache_to_poc((void *)hva, size);
+
/*
* If we are going to insert an instruction page and the icache is
* either VIPT or PIPT, there is a potential problem where the host
@@ -152,7 +162,6 @@ static inline void coherent_cache_guest_page(struct kvm_vcpu *vcpu, hva_t hva,
}
}
-#define kvm_flush_dcache_to_poc(a,l) __cpuc_flush_dcache_area((a), (l))
#define kvm_virt_to_phys(x) virt_to_idmap((unsigned long)(x))
void stage2_flush_vm(struct kvm *kvm);
--
2.1.0
* [PATCH for 3.14.y stable 07/47] ARM: KVM: fix handling of trapped 64bit coprocessor accesses
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
` (5 preceding siblings ...)
2015-05-04 1:52 ` [PATCH for 3.14.y stable 06/47] ARM: KVM: force cache clean on page fault when caches are off shannon.zhao
@ 2015-05-04 1:52 ` shannon.zhao
2015-05-04 1:52 ` [PATCH for 3.14.y stable 08/47] ARM: KVM: fix ordering of " shannon.zhao
` (40 subsequent siblings)
47 siblings, 0 replies; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:52 UTC (permalink / raw)
To: stable; +Cc: gregkh, christoffer.dall, shannon.zhao, Marc Zyngier
From: Marc Zyngier <marc.zyngier@arm.com>
commit 46c214dd595381c880794413facadfa07fba5c95 upstream.
Commit 240e99cbd00a (ARM: KVM: Fix 64-bit coprocessor handling)
changed the way we match the 64bit coprocessor access from
user space, but didn't update the trap handler for the same
set of registers.
The effect is that a trapped 64bit access is never matched, leading
to a fault being injected into the guest. This went unnoticed as we
didn't really trap any 64bit register so far.
Placing the CRm field of the access into the CRn field of the matching
structure fixes the problem. Also update the debug feature to emit the
expected string in case of failing match.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm/kvm/coproc.c | 4 ++--
arch/arm/kvm/coproc.h | 4 ++--
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/arm/kvm/coproc.c b/arch/arm/kvm/coproc.c
index 78c0885..126c90d 100644
--- a/arch/arm/kvm/coproc.c
+++ b/arch/arm/kvm/coproc.c
@@ -443,7 +443,7 @@ int kvm_handle_cp15_64(struct kvm_vcpu *vcpu, struct kvm_run *run)
{
struct coproc_params params;
- params.CRm = (kvm_vcpu_get_hsr(vcpu) >> 1) & 0xf;
+ params.CRn = (kvm_vcpu_get_hsr(vcpu) >> 1) & 0xf;
params.Rt1 = (kvm_vcpu_get_hsr(vcpu) >> 5) & 0xf;
params.is_write = ((kvm_vcpu_get_hsr(vcpu) & 1) == 0);
params.is_64bit = true;
@@ -451,7 +451,7 @@ int kvm_handle_cp15_64(struct kvm_vcpu *vcpu, struct kvm_run *run)
params.Op1 = (kvm_vcpu_get_hsr(vcpu) >> 16) & 0xf;
params.Op2 = 0;
params.Rt2 = (kvm_vcpu_get_hsr(vcpu) >> 10) & 0xf;
- params.CRn = 0;
+ params.CRm = 0;
return emulate_cp15(vcpu, &params);
}
diff --git a/arch/arm/kvm/coproc.h b/arch/arm/kvm/coproc.h
index 0461d5c..c5ad7ff 100644
--- a/arch/arm/kvm/coproc.h
+++ b/arch/arm/kvm/coproc.h
@@ -58,8 +58,8 @@ static inline void print_cp_instr(const struct coproc_params *p)
{
/* Look, we even formatted it for you to paste into the table! */
if (p->is_64bit) {
- kvm_pr_unimpl(" { CRm(%2lu), Op1(%2lu), is64, func_%s },\n",
- p->CRm, p->Op1, p->is_write ? "write" : "read");
+ kvm_pr_unimpl(" { CRm64(%2lu), Op1(%2lu), is64, func_%s },\n",
+ p->CRn, p->Op1, p->is_write ? "write" : "read");
} else {
kvm_pr_unimpl(" { CRn(%2lu), CRm(%2lu), Op1(%2lu), Op2(%2lu), is32,"
" func_%s },\n",
--
2.1.0
* [PATCH for 3.14.y stable 08/47] ARM: KVM: fix ordering of 64bit coprocessor accesses
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
` (6 preceding siblings ...)
2015-05-04 1:52 ` [PATCH for 3.14.y stable 07/47] ARM: KVM: fix handling of trapped 64bit coprocessor accesses shannon.zhao
@ 2015-05-04 1:52 ` shannon.zhao
2015-05-04 1:52 ` [PATCH for 3.14.y stable 09/47] ARM: KVM: introduce per-vcpu HYP Configuration Register shannon.zhao
` (39 subsequent siblings)
47 siblings, 0 replies; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:52 UTC (permalink / raw)
To: stable; +Cc: gregkh, christoffer.dall, shannon.zhao, Marc Zyngier
From: Marc Zyngier <marc.zyngier@arm.com>
commit 547f781378a22b65c2ab468f235c23001b5924da upstream.
Commit 240e99cbd00a (ARM: KVM: Fix 64-bit coprocessor handling)
added an ordering dependency for the 64bit registers.
The order described is: CRn, CRm, Op1, Op2, 64bit-first.
Unfortunately, the implementation is: CRn, 64bit-first, CRm...
Move the 64bit test to be last in order to match the documentation.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm/kvm/coproc.h | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/arm/kvm/coproc.h b/arch/arm/kvm/coproc.h
index c5ad7ff..8dda870 100644
--- a/arch/arm/kvm/coproc.h
+++ b/arch/arm/kvm/coproc.h
@@ -135,13 +135,13 @@ static inline int cmp_reg(const struct coproc_reg *i1,
return -1;
if (i1->CRn != i2->CRn)
return i1->CRn - i2->CRn;
- if (i1->is_64 != i2->is_64)
- return i2->is_64 - i1->is_64;
if (i1->CRm != i2->CRm)
return i1->CRm - i2->CRm;
if (i1->Op1 != i2->Op1)
return i1->Op1 - i2->Op1;
- return i1->Op2 - i2->Op2;
+ if (i1->Op2 != i2->Op2)
+ return i1->Op2 - i2->Op2;
+ return i2->is_64 - i1->is_64;
}
--
2.1.0
* [PATCH for 3.14.y stable 09/47] ARM: KVM: introduce per-vcpu HYP Configuration Register
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
` (7 preceding siblings ...)
2015-05-04 1:52 ` [PATCH for 3.14.y stable 08/47] ARM: KVM: fix ordering of " shannon.zhao
@ 2015-05-04 1:52 ` shannon.zhao
2015-05-04 1:52 ` [PATCH for 3.14.y stable 10/47] ARM: KVM: add world-switch for AMAIR{0,1} shannon.zhao
` (38 subsequent siblings)
47 siblings, 0 replies; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:52 UTC (permalink / raw)
To: stable; +Cc: gregkh, christoffer.dall, shannon.zhao, Marc Zyngier
From: Marc Zyngier <marc.zyngier@arm.com>
commit ac30a11e8e92a03dbe236b285c5cbae0bf563141 upstream.
So far, KVM/ARM used a fixed HCR configuration per guest, except for
the VI/VF/VA bits to control the interrupt in absence of VGIC.
With the upcoming need to dynamically reconfigure trapping, it becomes
necessary to allow the HCR to be changed on a per-vcpu basis.
The fix here is to mimic what KVM/arm64 already does: a per vcpu HCR
field, initialized at setup time.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm/include/asm/kvm_arm.h | 1 -
arch/arm/include/asm/kvm_host.h | 9 ++++++---
arch/arm/kernel/asm-offsets.c | 1 +
arch/arm/kvm/guest.c | 1 +
arch/arm/kvm/interrupts_head.S | 9 +++------
5 files changed, 11 insertions(+), 10 deletions(-)
diff --git a/arch/arm/include/asm/kvm_arm.h b/arch/arm/include/asm/kvm_arm.h
index 1d3153c..a843e74 100644
--- a/arch/arm/include/asm/kvm_arm.h
+++ b/arch/arm/include/asm/kvm_arm.h
@@ -69,7 +69,6 @@
#define HCR_GUEST_MASK (HCR_TSC | HCR_TSW | HCR_TWI | HCR_VM | HCR_BSU_IS | \
HCR_FB | HCR_TAC | HCR_AMO | HCR_IMO | HCR_FMO | \
HCR_TWE | HCR_SWIO | HCR_TIDCP)
-#define HCR_VIRT_EXCP_MASK (HCR_VA | HCR_VI | HCR_VF)
/* System Control Register (SCTLR) bits */
#define SCTLR_TE (1 << 30)
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 098f7dd..09af149 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -101,6 +101,12 @@ struct kvm_vcpu_arch {
/* The CPU type we expose to the VM */
u32 midr;
+ /* HYP trapping configuration */
+ u32 hcr;
+
+ /* Interrupt related fields */
+ u32 irq_lines; /* IRQ and FIQ levels */
+
/* Exception Information */
struct kvm_vcpu_fault_info fault;
@@ -128,9 +134,6 @@ struct kvm_vcpu_arch {
/* IO related fields */
struct kvm_decode mmio_decode;
- /* Interrupt related fields */
- u32 irq_lines; /* IRQ and FIQ levels */
-
/* Cache some mmu pages needed inside spinlock regions */
struct kvm_mmu_memory_cache mmu_page_cache;
diff --git a/arch/arm/kernel/asm-offsets.c b/arch/arm/kernel/asm-offsets.c
index ded0417..85598b5 100644
--- a/arch/arm/kernel/asm-offsets.c
+++ b/arch/arm/kernel/asm-offsets.c
@@ -174,6 +174,7 @@ int main(void)
DEFINE(VCPU_FIQ_REGS, offsetof(struct kvm_vcpu, arch.regs.fiq_regs));
DEFINE(VCPU_PC, offsetof(struct kvm_vcpu, arch.regs.usr_regs.ARM_pc));
DEFINE(VCPU_CPSR, offsetof(struct kvm_vcpu, arch.regs.usr_regs.ARM_cpsr));
+ DEFINE(VCPU_HCR, offsetof(struct kvm_vcpu, arch.hcr));
DEFINE(VCPU_IRQ_LINES, offsetof(struct kvm_vcpu, arch.irq_lines));
DEFINE(VCPU_HSR, offsetof(struct kvm_vcpu, arch.fault.hsr));
DEFINE(VCPU_HxFAR, offsetof(struct kvm_vcpu, arch.fault.hxfar));
diff --git a/arch/arm/kvm/guest.c b/arch/arm/kvm/guest.c
index 2786eae..b23a59c 100644
--- a/arch/arm/kvm/guest.c
+++ b/arch/arm/kvm/guest.c
@@ -38,6 +38,7 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu)
{
+ vcpu->arch.hcr = HCR_GUEST_MASK;
return 0;
}
diff --git a/arch/arm/kvm/interrupts_head.S b/arch/arm/kvm/interrupts_head.S
index 6f18695..a37270d 100644
--- a/arch/arm/kvm/interrupts_head.S
+++ b/arch/arm/kvm/interrupts_head.S
@@ -597,17 +597,14 @@ vcpu .req r0 @ vcpu pointer always in r0
/* Enable/Disable: stage-2 trans., trap interrupts, trap wfi, trap smc */
.macro configure_hyp_role operation
- mrc p15, 4, r2, c1, c1, 0 @ HCR
- bic r2, r2, #HCR_VIRT_EXCP_MASK
- ldr r3, =HCR_GUEST_MASK
.if \operation == vmentry
- orr r2, r2, r3
+ ldr r2, [vcpu, #VCPU_HCR]
ldr r3, [vcpu, #VCPU_IRQ_LINES]
orr r2, r2, r3
.else
- bic r2, r2, r3
+ mov r2, #0
.endif
- mcr p15, 4, r2, c1, c1, 0
+ mcr p15, 4, r2, c1, c1, 0 @ HCR
.endm
.macro load_vcpu
--
2.1.0
* [PATCH for 3.14.y stable 10/47] ARM: KVM: add world-switch for AMAIR{0,1}
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
` (8 preceding siblings ...)
2015-05-04 1:52 ` [PATCH for 3.14.y stable 09/47] ARM: KVM: introduce per-vcpu HYP Configuration Register shannon.zhao
@ 2015-05-04 1:52 ` shannon.zhao
2015-05-04 1:52 ` [PATCH for 3.14.y stable 11/47] ARM: KVM: trap VM system registers until MMU and caches are ON shannon.zhao
` (37 subsequent siblings)
47 siblings, 0 replies; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:52 UTC (permalink / raw)
To: stable; +Cc: gregkh, christoffer.dall, shannon.zhao, Marc Zyngier
From: Marc Zyngier <marc.zyngier@arm.com>
commit af20814ee927ed888288d98917a766b4179c4fe0 upstream.
HCR.TVM traps (among other things) accesses to AMAIR0 and AMAIR1.
In order to minimise the amount of surprise a guest could generate by
trying to access these registers with caches off, add them to the
list of registers we switch/handle.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm/include/asm/kvm_asm.h | 4 +++-
arch/arm/kvm/coproc.c | 23 +++++++++++++++++++++++
arch/arm/kvm/interrupts_head.S | 12 ++++++++++--
3 files changed, 36 insertions(+), 3 deletions(-)
diff --git a/arch/arm/include/asm/kvm_asm.h b/arch/arm/include/asm/kvm_asm.h
index 661da11..53b3c4a 100644
--- a/arch/arm/include/asm/kvm_asm.h
+++ b/arch/arm/include/asm/kvm_asm.h
@@ -48,7 +48,9 @@
#define c13_TID_URO 26 /* Thread ID, User R/O */
#define c13_TID_PRIV 27 /* Thread ID, Privileged */
#define c14_CNTKCTL 28 /* Timer Control Register (PL1) */
-#define NR_CP15_REGS 29 /* Number of regs (incl. invalid) */
+#define c10_AMAIR0 29 /* Auxilary Memory Attribute Indirection Reg0 */
+#define c10_AMAIR1 30 /* Auxilary Memory Attribute Indirection Reg1 */
+#define NR_CP15_REGS 31 /* Number of regs (incl. invalid) */
#define ARM_EXCEPTION_RESET 0
#define ARM_EXCEPTION_UNDEFINED 1
diff --git a/arch/arm/kvm/coproc.c b/arch/arm/kvm/coproc.c
index 126c90d..c6be883 100644
--- a/arch/arm/kvm/coproc.c
+++ b/arch/arm/kvm/coproc.c
@@ -205,6 +205,23 @@ done:
}
/*
+ * Generic accessor for VM registers. Only called as long as HCR_TVM
+ * is set.
+ */
+static bool access_vm_reg(struct kvm_vcpu *vcpu,
+ const struct coproc_params *p,
+ const struct coproc_reg *r)
+{
+ BUG_ON(!p->is_write);
+
+ vcpu->arch.cp15[r->reg] = *vcpu_reg(vcpu, p->Rt1);
+ if (p->is_64bit)
+ vcpu->arch.cp15[r->reg + 1] = *vcpu_reg(vcpu, p->Rt2);
+
+ return true;
+}
+
+/*
* We could trap ID_DFR0 and tell the guest we don't support performance
* monitoring. Unfortunately the patch to make the kernel check ID_DFR0 was
* NAKed, so it will read the PMCR anyway.
@@ -328,6 +345,12 @@ static const struct coproc_reg cp15_regs[] = {
{ CRn(10), CRm( 2), Op1( 0), Op2( 1), is32,
NULL, reset_unknown, c10_NMRR},
+ /* AMAIR0/AMAIR1: swapped by interrupt.S. */
+ { CRn(10), CRm( 3), Op1( 0), Op2( 0), is32,
+ access_vm_reg, reset_unknown, c10_AMAIR0},
+ { CRn(10), CRm( 3), Op1( 0), Op2( 1), is32,
+ access_vm_reg, reset_unknown, c10_AMAIR1},
+
/* VBAR: swapped by interrupt.S. */
{ CRn(12), CRm( 0), Op1( 0), Op2( 0), is32,
NULL, reset_val, c12_VBAR, 0x00000000 },
diff --git a/arch/arm/kvm/interrupts_head.S b/arch/arm/kvm/interrupts_head.S
index a37270d..76af9302 100644
--- a/arch/arm/kvm/interrupts_head.S
+++ b/arch/arm/kvm/interrupts_head.S
@@ -303,13 +303,17 @@ vcpu .req r0 @ vcpu pointer always in r0
mrc p15, 0, r2, c14, c1, 0 @ CNTKCTL
mrrc p15, 0, r4, r5, c7 @ PAR
+ mrc p15, 0, r6, c10, c3, 0 @ AMAIR0
+ mrc p15, 0, r7, c10, c3, 1 @ AMAIR1
.if \store_to_vcpu == 0
- push {r2,r4-r5}
+ push {r2,r4-r7}
.else
str r2, [vcpu, #CP15_OFFSET(c14_CNTKCTL)]
add r12, vcpu, #CP15_OFFSET(c7_PAR)
strd r4, r5, [r12]
+ str r6, [vcpu, #CP15_OFFSET(c10_AMAIR0)]
+ str r7, [vcpu, #CP15_OFFSET(c10_AMAIR1)]
.endif
.endm
@@ -322,15 +326,19 @@ vcpu .req r0 @ vcpu pointer always in r0
*/
.macro write_cp15_state read_from_vcpu
.if \read_from_vcpu == 0
- pop {r2,r4-r5}
+ pop {r2,r4-r7}
.else
ldr r2, [vcpu, #CP15_OFFSET(c14_CNTKCTL)]
add r12, vcpu, #CP15_OFFSET(c7_PAR)
ldrd r4, r5, [r12]
+ ldr r6, [vcpu, #CP15_OFFSET(c10_AMAIR0)]
+ ldr r7, [vcpu, #CP15_OFFSET(c10_AMAIR1)]
.endif
mcr p15, 0, r2, c14, c1, 0 @ CNTKCTL
mcrr p15, 0, r4, r5, c7 @ PAR
+ mcr p15, 0, r6, c10, c3, 0 @ AMAIR0
+ mcr p15, 0, r7, c10, c3, 1 @ AMAIR1
.if \read_from_vcpu == 0
pop {r2-r12}
--
2.1.0
* [PATCH for 3.14.y stable 11/47] ARM: KVM: trap VM system registers until MMU and caches are ON
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
` (9 preceding siblings ...)
2015-05-04 1:52 ` [PATCH for 3.14.y stable 10/47] ARM: KVM: add world-switch for AMAIR{0,1} shannon.zhao
@ 2015-05-04 1:52 ` shannon.zhao
2015-05-04 1:52 ` [PATCH for 3.14.y stable 12/47] KVM: arm/arm64: vgic: fix GICD_ICFGR register accesses shannon.zhao
` (36 subsequent siblings)
47 siblings, 0 replies; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:52 UTC (permalink / raw)
To: stable; +Cc: gregkh, christoffer.dall, shannon.zhao, Marc Zyngier
From: Marc Zyngier <marc.zyngier@arm.com>
commit 8034699a42d68043b495c7e0cfafccd920707ec8 upstream.
In order to be able to detect the point where the guest enables
its MMU and caches, trap all the VM related system registers.
Once we see the guest enabling both the MMU and the caches, we
can go back to a saner mode of operation, which is to leave these
registers in complete control of the guest.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm/include/asm/kvm_arm.h | 3 ++-
arch/arm/kvm/coproc.c | 61 +++++++++++++++++++++++++++++-------------
arch/arm/kvm/coproc.h | 4 +++
arch/arm/kvm/coproc_a15.c | 2 +-
arch/arm/kvm/coproc_a7.c | 2 +-
5 files changed, 51 insertions(+), 21 deletions(-)
diff --git a/arch/arm/include/asm/kvm_arm.h b/arch/arm/include/asm/kvm_arm.h
index a843e74..816db0b 100644
--- a/arch/arm/include/asm/kvm_arm.h
+++ b/arch/arm/include/asm/kvm_arm.h
@@ -55,6 +55,7 @@
* The bits we set in HCR:
* TAC: Trap ACTLR
* TSC: Trap SMC
+ * TVM: Trap VM ops (until MMU and caches are on)
* TSW: Trap cache operations by set/way
* TWI: Trap WFI
* TWE: Trap WFE
@@ -68,7 +69,7 @@
*/
#define HCR_GUEST_MASK (HCR_TSC | HCR_TSW | HCR_TWI | HCR_VM | HCR_BSU_IS | \
HCR_FB | HCR_TAC | HCR_AMO | HCR_IMO | HCR_FMO | \
- HCR_TWE | HCR_SWIO | HCR_TIDCP)
+ HCR_TVM | HCR_TWE | HCR_SWIO | HCR_TIDCP)
/* System Control Register (SCTLR) bits */
#define SCTLR_TE (1 << 30)
diff --git a/arch/arm/kvm/coproc.c b/arch/arm/kvm/coproc.c
index c6be883..c58a351 100644
--- a/arch/arm/kvm/coproc.c
+++ b/arch/arm/kvm/coproc.c
@@ -23,6 +23,7 @@
#include <asm/kvm_host.h>
#include <asm/kvm_emulate.h>
#include <asm/kvm_coproc.h>
+#include <asm/kvm_mmu.h>
#include <asm/cacheflush.h>
#include <asm/cputype.h>
#include <trace/events/kvm.h>
@@ -209,8 +210,8 @@ done:
* is set.
*/
static bool access_vm_reg(struct kvm_vcpu *vcpu,
- const struct coproc_params *p,
- const struct coproc_reg *r)
+ const struct coproc_params *p,
+ const struct coproc_reg *r)
{
BUG_ON(!p->is_write);
@@ -222,6 +223,27 @@ static bool access_vm_reg(struct kvm_vcpu *vcpu,
}
/*
+ * SCTLR accessor. Only called as long as HCR_TVM is set. If the
+ * guest enables the MMU, we stop trapping the VM sys_regs and leave
+ * it in complete control of the caches.
+ *
+ * Used by the cpu-specific code.
+ */
+bool access_sctlr(struct kvm_vcpu *vcpu,
+ const struct coproc_params *p,
+ const struct coproc_reg *r)
+{
+ access_vm_reg(vcpu, p, r);
+
+ if (vcpu_has_cache_enabled(vcpu)) { /* MMU+Caches enabled? */
+ vcpu->arch.hcr &= ~HCR_TVM;
+ stage2_flush_vm(vcpu->kvm);
+ }
+
+ return true;
+}
+
+/*
* We could trap ID_DFR0 and tell the guest we don't support performance
* monitoring. Unfortunately the patch to make the kernel check ID_DFR0 was
* NAKed, so it will read the PMCR anyway.
@@ -278,33 +300,36 @@ static const struct coproc_reg cp15_regs[] = {
{ CRn( 1), CRm( 0), Op1( 0), Op2( 2), is32,
NULL, reset_val, c1_CPACR, 0x00000000 },
- /* TTBR0/TTBR1: swapped by interrupt.S. */
- { CRm64( 2), Op1( 0), is64, NULL, reset_unknown64, c2_TTBR0 },
- { CRm64( 2), Op1( 1), is64, NULL, reset_unknown64, c2_TTBR1 },
-
- /* TTBCR: swapped by interrupt.S. */
+ /* TTBR0/TTBR1/TTBCR: swapped by interrupt.S. */
+ { CRm64( 2), Op1( 0), is64, access_vm_reg, reset_unknown64, c2_TTBR0 },
+ { CRn(2), CRm( 0), Op1( 0), Op2( 0), is32,
+ access_vm_reg, reset_unknown, c2_TTBR0 },
+ { CRn(2), CRm( 0), Op1( 0), Op2( 1), is32,
+ access_vm_reg, reset_unknown, c2_TTBR1 },
{ CRn( 2), CRm( 0), Op1( 0), Op2( 2), is32,
- NULL, reset_val, c2_TTBCR, 0x00000000 },
+ access_vm_reg, reset_val, c2_TTBCR, 0x00000000 },
+ { CRm64( 2), Op1( 1), is64, access_vm_reg, reset_unknown64, c2_TTBR1 },
+
/* DACR: swapped by interrupt.S. */
{ CRn( 3), CRm( 0), Op1( 0), Op2( 0), is32,
- NULL, reset_unknown, c3_DACR },
+ access_vm_reg, reset_unknown, c3_DACR },
/* DFSR/IFSR/ADFSR/AIFSR: swapped by interrupt.S. */
{ CRn( 5), CRm( 0), Op1( 0), Op2( 0), is32,
- NULL, reset_unknown, c5_DFSR },
+ access_vm_reg, reset_unknown, c5_DFSR },
{ CRn( 5), CRm( 0), Op1( 0), Op2( 1), is32,
- NULL, reset_unknown, c5_IFSR },
+ access_vm_reg, reset_unknown, c5_IFSR },
{ CRn( 5), CRm( 1), Op1( 0), Op2( 0), is32,
- NULL, reset_unknown, c5_ADFSR },
+ access_vm_reg, reset_unknown, c5_ADFSR },
{ CRn( 5), CRm( 1), Op1( 0), Op2( 1), is32,
- NULL, reset_unknown, c5_AIFSR },
+ access_vm_reg, reset_unknown, c5_AIFSR },
/* DFAR/IFAR: swapped by interrupt.S. */
{ CRn( 6), CRm( 0), Op1( 0), Op2( 0), is32,
- NULL, reset_unknown, c6_DFAR },
+ access_vm_reg, reset_unknown, c6_DFAR },
{ CRn( 6), CRm( 0), Op1( 0), Op2( 2), is32,
- NULL, reset_unknown, c6_IFAR },
+ access_vm_reg, reset_unknown, c6_IFAR },
/* PAR swapped by interrupt.S */
{ CRm64( 7), Op1( 0), is64, NULL, reset_unknown64, c7_PAR },
@@ -341,9 +366,9 @@ static const struct coproc_reg cp15_regs[] = {
/* PRRR/NMRR (aka MAIR0/MAIR1): swapped by interrupt.S. */
{ CRn(10), CRm( 2), Op1( 0), Op2( 0), is32,
- NULL, reset_unknown, c10_PRRR},
+ access_vm_reg, reset_unknown, c10_PRRR},
{ CRn(10), CRm( 2), Op1( 0), Op2( 1), is32,
- NULL, reset_unknown, c10_NMRR},
+ access_vm_reg, reset_unknown, c10_NMRR},
/* AMAIR0/AMAIR1: swapped by interrupt.S. */
{ CRn(10), CRm( 3), Op1( 0), Op2( 0), is32,
@@ -357,7 +382,7 @@ static const struct coproc_reg cp15_regs[] = {
/* CONTEXTIDR/TPIDRURW/TPIDRURO/TPIDRPRW: swapped by interrupt.S. */
{ CRn(13), CRm( 0), Op1( 0), Op2( 1), is32,
- NULL, reset_val, c13_CID, 0x00000000 },
+ access_vm_reg, reset_val, c13_CID, 0x00000000 },
{ CRn(13), CRm( 0), Op1( 0), Op2( 2), is32,
NULL, reset_unknown, c13_TID_URW },
{ CRn(13), CRm( 0), Op1( 0), Op2( 3), is32,
diff --git a/arch/arm/kvm/coproc.h b/arch/arm/kvm/coproc.h
index 8dda870..1a44bbe 100644
--- a/arch/arm/kvm/coproc.h
+++ b/arch/arm/kvm/coproc.h
@@ -153,4 +153,8 @@ static inline int cmp_reg(const struct coproc_reg *i1,
#define is64 .is_64 = true
#define is32 .is_64 = false
+bool access_sctlr(struct kvm_vcpu *vcpu,
+ const struct coproc_params *p,
+ const struct coproc_reg *r);
+
#endif /* __ARM_KVM_COPROC_LOCAL_H__ */
diff --git a/arch/arm/kvm/coproc_a15.c b/arch/arm/kvm/coproc_a15.c
index bb0cac1..e6f4ae4 100644
--- a/arch/arm/kvm/coproc_a15.c
+++ b/arch/arm/kvm/coproc_a15.c
@@ -34,7 +34,7 @@
static const struct coproc_reg a15_regs[] = {
/* SCTLR: swapped by interrupt.S. */
{ CRn( 1), CRm( 0), Op1( 0), Op2( 0), is32,
- NULL, reset_val, c1_SCTLR, 0x00C50078 },
+ access_sctlr, reset_val, c1_SCTLR, 0x00C50078 },
};
static struct kvm_coproc_target_table a15_target_table = {
diff --git a/arch/arm/kvm/coproc_a7.c b/arch/arm/kvm/coproc_a7.c
index 1df7673..17fc7cd 100644
--- a/arch/arm/kvm/coproc_a7.c
+++ b/arch/arm/kvm/coproc_a7.c
@@ -37,7 +37,7 @@
static const struct coproc_reg a7_regs[] = {
/* SCTLR: swapped by interrupt.S. */
{ CRn( 1), CRm( 0), Op1( 0), Op2( 0), is32,
- NULL, reset_val, c1_SCTLR, 0x00C50878 },
+ access_sctlr, reset_val, c1_SCTLR, 0x00C50878 },
};
static struct kvm_coproc_target_table a7_target_table = {
--
2.1.0
* [PATCH for 3.14.y stable 12/47] KVM: arm/arm64: vgic: fix GICD_ICFGR register accesses
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
` (10 preceding siblings ...)
2015-05-04 1:52 ` [PATCH for 3.14.y stable 11/47] ARM: KVM: trap VM system registers until MMU and caches are ON shannon.zhao
@ 2015-05-04 1:52 ` shannon.zhao
2015-05-04 1:52 ` [PATCH for 3.14.y stable 13/47] KVM: ARM: vgic: Fix the overlap check action about setting the GICD & GICC base address shannon.zhao
` (35 subsequent siblings)
47 siblings, 0 replies; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:52 UTC (permalink / raw)
To: stable; +Cc: gregkh, christoffer.dall, shannon.zhao, Andre Przywara,
Marc Zyngier
From: Andre Przywara <andre.przywara@arm.com>
commit f2ae85b2ab3776b9e4e42e5b6fa090f40d396794 upstream.
Since KVM internally represents the ICFGR registers by stuffing two
of them into one word, the offset for accessing the internal
representation and the one for the MMIO based access are different.
So keep the original offset around, but adjust the internal array
offset by one bit.
Reported-by: Haibin Wang <wanghaibin.wang@huawei.com>
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
virt/kvm/arm/vgic.c | 9 ++++-----
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
index 26954a7..2f8aee5 100644
--- a/virt/kvm/arm/vgic.c
+++ b/virt/kvm/arm/vgic.c
@@ -548,11 +548,10 @@ static bool handle_mmio_cfg_reg(struct kvm_vcpu *vcpu,
u32 val;
u32 *reg;
- offset >>= 1;
reg = vgic_bitmap_get_reg(&vcpu->kvm->arch.vgic.irq_cfg,
- vcpu->vcpu_id, offset);
+ vcpu->vcpu_id, offset >> 1);
- if (offset & 2)
+ if (offset & 4)
val = *reg >> 16;
else
val = *reg & 0xffff;
@@ -561,13 +560,13 @@ static bool handle_mmio_cfg_reg(struct kvm_vcpu *vcpu,
vgic_reg_access(mmio, &val, offset,
ACCESS_READ_VALUE | ACCESS_WRITE_VALUE);
if (mmio->is_write) {
- if (offset < 4) {
+ if (offset < 8) {
*reg = ~0U; /* Force PPIs/SGIs to 1 */
return false;
}
val = vgic_cfg_compress(val);
- if (offset & 2) {
+ if (offset & 4) {
*reg &= 0xffff;
*reg |= val << 16;
} else {
--
2.1.0
* [PATCH for 3.14.y stable 13/47] KVM: ARM: vgic: Fix the overlap check action about setting the GICD & GICC base address.
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
` (11 preceding siblings ...)
2015-05-04 1:52 ` [PATCH for 3.14.y stable 12/47] KVM: arm/arm64: vgic: fix GICD_ICFGR register accesses shannon.zhao
@ 2015-05-04 1:52 ` shannon.zhao
2015-05-04 1:52 ` [PATCH for 3.14.y stable 14/47] arm64: kvm: use inner-shareable barriers for inner-shareable maintenance shannon.zhao
` (34 subsequent siblings)
47 siblings, 0 replies; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:52 UTC (permalink / raw)
To: stable; +Cc: gregkh, christoffer.dall, shannon.zhao, Haibin Wang
From: Haibin Wang <wanghaibin.wang@huawei.com>
commit 30c2117085bc4e05d091cee6eba79f069b41a9cd upstream.
Currently the check below in vgic_ioaddr_overlap always succeeds,
because the vgic dist base and vgic cpu base are still kept UNDEF
after initialization, so this code returns 0 every time:
if (IS_VGIC_ADDR_UNDEF(dist) || IS_VGIC_ADDR_UNDEF(cpu))
return 0;
So the corresponding base address needs to be set before
vgic_ioaddr_overlap is invoked.
Signed-off-by: Haibin Wang <wanghaibin.wang@huawei.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
virt/kvm/arm/vgic.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
index 2f8aee5..4dc45e2 100644
--- a/virt/kvm/arm/vgic.c
+++ b/virt/kvm/arm/vgic.c
@@ -1667,10 +1667,11 @@ static int vgic_ioaddr_assign(struct kvm *kvm, phys_addr_t *ioaddr,
if (addr + size < addr)
return -EINVAL;
+ *ioaddr = addr;
ret = vgic_ioaddr_overlap(kvm);
if (ret)
- return ret;
- *ioaddr = addr;
+ *ioaddr = VGIC_ADDR_UNDEF;
+
return ret;
}
--
2.1.0
* [PATCH for 3.14.y stable 14/47] arm64: kvm: use inner-shareable barriers for inner-shareable maintenance
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
` (12 preceding siblings ...)
2015-05-04 1:52 ` [PATCH for 3.14.y stable 13/47] KVM: ARM: vgic: Fix the overlap check action about setting the GICD & GICC base address shannon.zhao
@ 2015-05-04 1:52 ` shannon.zhao
2015-05-04 1:52 ` [PATCH for 3.14.y stable 15/47] kvm: arm64: vgic: fix hyp panic with 64k pages on juno platform shannon.zhao
` (33 subsequent siblings)
47 siblings, 0 replies; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:52 UTC (permalink / raw)
To: stable; +Cc: gregkh, christoffer.dall, shannon.zhao, Will Deacon,
Catalin Marinas
From: Will Deacon <will.deacon@arm.com>
commit ee9e101c11478680d579bd20bb38a4d3e2514fe3 upstream.
In order to ensure completion of inner-shareable maintenance instructions
(cache and TLB) on AArch64, we can use the -ish suffix to the dsb
instruction.
This patch relaxes our dsb sy instructions to dsb ish where possible.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm64/kvm/hyp.S | 12 +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S
index 2c56012..b0d1512 100644
--- a/arch/arm64/kvm/hyp.S
+++ b/arch/arm64/kvm/hyp.S
@@ -630,9 +630,15 @@ ENTRY(__kvm_tlb_flush_vmid_ipa)
* whole of Stage-1. Weep...
*/
tlbi ipas2e1is, x1
- dsb sy
+ /*
+ * We have to ensure completion of the invalidation at Stage-2,
+ * since a table walk on another CPU could refill a TLB with a
+ * complete (S1 + S2) walk based on the old Stage-2 mapping if
+ * the Stage-1 invalidation happened first.
+ */
+ dsb ish
tlbi vmalle1is
- dsb sy
+ dsb ish
isb
msr vttbr_el2, xzr
@@ -643,7 +649,7 @@ ENTRY(__kvm_flush_vm_context)
dsb ishst
tlbi alle1is
ic ialluis
- dsb sy
+ dsb ish
ret
ENDPROC(__kvm_flush_vm_context)
--
2.1.0
* [PATCH for 3.14.y stable 15/47] kvm: arm64: vgic: fix hyp panic with 64k pages on juno platform
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
` (13 preceding siblings ...)
2015-05-04 1:52 ` [PATCH for 3.14.y stable 14/47] arm64: kvm: use inner-shareable barriers for inner-shareable maintenance shannon.zhao
@ 2015-05-04 1:52 ` shannon.zhao
2015-05-04 1:52 ` [PATCH for 3.14.y stable 16/47] arm/arm64: KVM: Fix and refactor unmap_range shannon.zhao
` (32 subsequent siblings)
47 siblings, 0 replies; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:52 UTC (permalink / raw)
To: stable
Cc: gregkh, christoffer.dall, shannon.zhao, Will Deacon, Marc Zyngier,
Gleb Natapov, Paolo Bonzini, Joel Schopp, Don Dutile
From: Will Deacon <will.deacon@arm.com>
commit 63afbe7a0ac184ef8485dac4914e87b211b5bfaa upstream.
If the physical address of GICV isn't page-aligned, then we end up
creating a stage-2 mapping of the page containing it, which causes us to
map neighbouring memory locations directly into the guest.
As an example, consider a platform with GICV at physical 0x2c02f000
running a 64k-page host kernel. If qemu maps this into the guest at
0x80010000, then guest physical addresses 0x80010000 - 0x8001efff will
map host physical region 0x2c020000 - 0x2c02efff. Accesses to these
physical regions may cause UNPREDICTABLE behaviour, for example, on the
Juno platform this will cause an SError exception to EL3, which brings
down the entire physical CPU resulting in RCU stalls / HYP panics / host
crashing / wasted weeks of debugging.
SBSA recommends that systems alias the 4k GICV across the bounding 64k
region, in which case GICV physical could be described as 0x2c020000 in
the above scenario.
This patch fixes the problem by failing the vgic probe if the physical
base address or the size of GICV aren't page-aligned. Note that this
generated a warning in dmesg about freeing enabled IRQs, so I had to
move the IRQ enabling later in the probe.
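The alignment test the patch adds boils down to masking against the page size. A minimal sketch, with a hypothetical page_aligned() helper standing in for the kernel's PAGE_ALIGNED() and the addresses taken from the Juno example above:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* 64k-page host, as in the Juno scenario from the commit message. */
#define PAGE_SIZE_64K 0x10000ULL
#define PAGE_SIZE_4K  0x1000ULL

/* Illustrative equivalent of the kernel's PAGE_ALIGNED(), parameterised
 * by page size: aligned iff the low bits below the page size are zero. */
static bool page_aligned(uint64_t val, uint64_t page_size)
{
    return (val & (page_size - 1)) == 0;
}
```

With GICV at 0x2c02f000, page_aligned() fails for a 64k page size but passes for 4k, which is exactly why the problem only bites 64k-page hosts.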
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Joel Schopp <joel.schopp@amd.com>
Cc: Don Dutile <ddutile@redhat.com>
Acked-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Joel Schopp <joel.schopp@amd.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
virt/kvm/arm/vgic.c | 24 ++++++++++++++++++++----
1 file changed, 20 insertions(+), 4 deletions(-)
diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
index 4dc45e2..4eec2d4 100644
--- a/virt/kvm/arm/vgic.c
+++ b/virt/kvm/arm/vgic.c
@@ -1526,17 +1526,33 @@ int kvm_vgic_hyp_init(void)
goto out_unmap;
}
- kvm_info("%s@%llx IRQ%d\n", vgic_node->name,
- vctrl_res.start, vgic_maint_irq);
- on_each_cpu(vgic_init_maintenance_interrupt, NULL, 1);
-
if (of_address_to_resource(vgic_node, 3, &vcpu_res)) {
kvm_err("Cannot obtain VCPU resource\n");
ret = -ENXIO;
goto out_unmap;
}
+
+ if (!PAGE_ALIGNED(vcpu_res.start)) {
+ kvm_err("GICV physical address 0x%llx not page aligned\n",
+ (unsigned long long)vcpu_res.start);
+ ret = -ENXIO;
+ goto out_unmap;
+ }
+
+ if (!PAGE_ALIGNED(resource_size(&vcpu_res))) {
+ kvm_err("GICV size 0x%llx not a multiple of page size 0x%lx\n",
+ (unsigned long long)resource_size(&vcpu_res),
+ PAGE_SIZE);
+ ret = -ENXIO;
+ goto out_unmap;
+ }
+
vgic_vcpu_base = vcpu_res.start;
+ kvm_info("%s@%llx IRQ%d\n", vgic_node->name,
+ vctrl_res.start, vgic_maint_irq);
+ on_each_cpu(vgic_init_maintenance_interrupt, NULL, 1);
+
goto out;
out_unmap:
--
2.1.0
* [PATCH for 3.14.y stable 16/47] arm/arm64: KVM: Fix and refactor unmap_range
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
` (14 preceding siblings ...)
2015-05-04 1:52 ` [PATCH for 3.14.y stable 15/47] kvm: arm64: vgic: fix hyp panic with 64k pages on juno platform shannon.zhao
@ 2015-05-04 1:52 ` shannon.zhao
2015-05-04 1:52 ` [PATCH for 3.14.y stable 17/47] ARM: KVM: Unmap IPA on memslot delete/move shannon.zhao
` (31 subsequent siblings)
47 siblings, 0 replies; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:52 UTC (permalink / raw)
To: stable; +Cc: gregkh, christoffer.dall, shannon.zhao
From: Christoffer Dall <christoffer.dall@linaro.org>
commit 4f853a714bf16338ff5261128e6c7ae2569e9505 upstream.
unmap_range() was utterly broken, to quote Marc, and broke in all sorts
of situations. It was also quite complicated to follow and didn't
follow the usual scheme of having a separate iterating function for each
level of page tables.
Address this by refactoring the code and introducing a clear_pgd_entry()
function.
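The refactored walkers below all share one iteration idiom: a do/while that advances the entry pointer and the address together, then lets the caller free a table that ends up empty. A toy single-level sketch of that idiom (the toy_* names are illustrative stand-ins, not kernel API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define TOY_PAGE_SIZE 0x1000ULL
#define TOY_NR_PTES   16

/* Toy stand-in for one level of a stage-2 table: entry i covers
 * [i * TOY_PAGE_SIZE, (i + 1) * TOY_PAGE_SIZE); nonzero means mapped. */
static uint64_t toy_ptes[TOY_NR_PTES];

/* Same role as kvm_pte_table_empty(): the caller frees the table
 * once no entries remain. */
static bool toy_pte_table_empty(void)
{
    for (size_t i = 0; i < TOY_NR_PTES; i++)
        if (toy_ptes[i])
            return false;
    return true;
}

/* Mirrors the iteration idiom of the refactored unmap_ptes(): one
 * do/while over [addr, end), clearing present entries as it goes. */
static void toy_unmap_ptes(uint64_t addr, uint64_t end)
{
    uint64_t *pte = &toy_ptes[addr / TOY_PAGE_SIZE];
    do {
        if (*pte)
            *pte = 0;  /* kvm_set_pte(..., __pte(0)) plus a TLB flush upstream */
    } while (pte++, addr += TOY_PAGE_SIZE, addr != end);
}
```

The real unmap_pmds()/unmap_puds() repeat the same pattern one level up, with kvm_pmd_addr_end()/kvm_pud_addr_end() computing the per-entry `next` boundary.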
Reviewed-by: Jungseok Lee <jays.lee@samsung.com>
Reviewed-by: Mario Smarduch <m.smarduch@samsung.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm/include/asm/kvm_mmu.h | 12 +++
arch/arm/kvm/mmu.c | 156 +++++++++++++++++++++------------------
arch/arm64/include/asm/kvm_mmu.h | 15 ++++
3 files changed, 111 insertions(+), 72 deletions(-)
diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 5c7aa3c..5cc0b0f 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -127,6 +127,18 @@ static inline void kvm_set_s2pmd_writable(pmd_t *pmd)
(__boundary - 1 < (end) - 1)? __boundary: (end); \
})
+static inline bool kvm_page_empty(void *ptr)
+{
+ struct page *ptr_page = virt_to_page(ptr);
+ return page_count(ptr_page) == 1;
+}
+
+
+#define kvm_pte_table_empty(ptep) kvm_page_empty(ptep)
+#define kvm_pmd_table_empty(pmdp) kvm_page_empty(pmdp)
+#define kvm_pud_table_empty(pudp) (0)
+
+
struct kvm;
#define kvm_flush_dcache_to_poc(a,l) __cpuc_flush_dcache_area((a), (l))
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index c93ef38..1cfede7 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -90,103 +90,115 @@ static void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc)
return p;
}
-static bool page_empty(void *ptr)
+static void clear_pgd_entry(struct kvm *kvm, pgd_t *pgd, phys_addr_t addr)
{
- struct page *ptr_page = virt_to_page(ptr);
- return page_count(ptr_page) == 1;
+ pud_t *pud_table __maybe_unused = pud_offset(pgd, 0);
+ pgd_clear(pgd);
+ kvm_tlb_flush_vmid_ipa(kvm, addr);
+ pud_free(NULL, pud_table);
+ put_page(virt_to_page(pgd));
}
static void clear_pud_entry(struct kvm *kvm, pud_t *pud, phys_addr_t addr)
{
- if (pud_huge(*pud)) {
- pud_clear(pud);
- kvm_tlb_flush_vmid_ipa(kvm, addr);
- } else {
- pmd_t *pmd_table = pmd_offset(pud, 0);
- pud_clear(pud);
- kvm_tlb_flush_vmid_ipa(kvm, addr);
- pmd_free(NULL, pmd_table);
- }
+ pmd_t *pmd_table = pmd_offset(pud, 0);
+ VM_BUG_ON(pud_huge(*pud));
+ pud_clear(pud);
+ kvm_tlb_flush_vmid_ipa(kvm, addr);
+ pmd_free(NULL, pmd_table);
put_page(virt_to_page(pud));
}
static void clear_pmd_entry(struct kvm *kvm, pmd_t *pmd, phys_addr_t addr)
{
- if (kvm_pmd_huge(*pmd)) {
- pmd_clear(pmd);
- kvm_tlb_flush_vmid_ipa(kvm, addr);
- } else {
- pte_t *pte_table = pte_offset_kernel(pmd, 0);
- pmd_clear(pmd);
- kvm_tlb_flush_vmid_ipa(kvm, addr);
- pte_free_kernel(NULL, pte_table);
- }
+ pte_t *pte_table = pte_offset_kernel(pmd, 0);
+ VM_BUG_ON(kvm_pmd_huge(*pmd));
+ pmd_clear(pmd);
+ kvm_tlb_flush_vmid_ipa(kvm, addr);
+ pte_free_kernel(NULL, pte_table);
put_page(virt_to_page(pmd));
}
-static void clear_pte_entry(struct kvm *kvm, pte_t *pte, phys_addr_t addr)
+static void unmap_ptes(struct kvm *kvm, pmd_t *pmd,
+ phys_addr_t addr, phys_addr_t end)
{
- if (pte_present(*pte)) {
- kvm_set_pte(pte, __pte(0));
- put_page(virt_to_page(pte));
- kvm_tlb_flush_vmid_ipa(kvm, addr);
+ phys_addr_t start_addr = addr;
+ pte_t *pte, *start_pte;
+
+ start_pte = pte = pte_offset_kernel(pmd, addr);
+ do {
+ if (!pte_none(*pte)) {
+ kvm_set_pte(pte, __pte(0));
+ put_page(virt_to_page(pte));
+ kvm_tlb_flush_vmid_ipa(kvm, addr);
+ }
+ } while (pte++, addr += PAGE_SIZE, addr != end);
+
+ if (kvm_pte_table_empty(start_pte))
+ clear_pmd_entry(kvm, pmd, start_addr);
}
-}
-static void unmap_range(struct kvm *kvm, pgd_t *pgdp,
- unsigned long long start, u64 size)
+static void unmap_pmds(struct kvm *kvm, pud_t *pud,
+ phys_addr_t addr, phys_addr_t end)
{
- pgd_t *pgd;
- pud_t *pud;
- pmd_t *pmd;
- pte_t *pte;
- unsigned long long addr = start, end = start + size;
- u64 next;
+ phys_addr_t next, start_addr = addr;
+ pmd_t *pmd, *start_pmd;
- while (addr < end) {
- pgd = pgdp + pgd_index(addr);
- pud = pud_offset(pgd, addr);
- if (pud_none(*pud)) {
- addr = kvm_pud_addr_end(addr, end);
- continue;
- }
-
- if (pud_huge(*pud)) {
- /*
- * If we are dealing with a huge pud, just clear it and
- * move on.
- */
- clear_pud_entry(kvm, pud, addr);
- addr = kvm_pud_addr_end(addr, end);
- continue;
+ start_pmd = pmd = pmd_offset(pud, addr);
+ do {
+ next = kvm_pmd_addr_end(addr, end);
+ if (!pmd_none(*pmd)) {
+ if (kvm_pmd_huge(*pmd)) {
+ pmd_clear(pmd);
+ kvm_tlb_flush_vmid_ipa(kvm, addr);
+ put_page(virt_to_page(pmd));
+ } else {
+ unmap_ptes(kvm, pmd, addr, next);
+ }
}
+ } while (pmd++, addr = next, addr != end);
- pmd = pmd_offset(pud, addr);
- if (pmd_none(*pmd)) {
- addr = kvm_pmd_addr_end(addr, end);
- continue;
- }
+ if (kvm_pmd_table_empty(start_pmd))
+ clear_pud_entry(kvm, pud, start_addr);
+}
- if (!kvm_pmd_huge(*pmd)) {
- pte = pte_offset_kernel(pmd, addr);
- clear_pte_entry(kvm, pte, addr);
- next = addr + PAGE_SIZE;
- }
+static void unmap_puds(struct kvm *kvm, pgd_t *pgd,
+ phys_addr_t addr, phys_addr_t end)
+{
+ phys_addr_t next, start_addr = addr;
+ pud_t *pud, *start_pud;
- /*
- * If the pmd entry is to be cleared, walk back up the ladder
- */
- if (kvm_pmd_huge(*pmd) || page_empty(pte)) {
- clear_pmd_entry(kvm, pmd, addr);
- next = kvm_pmd_addr_end(addr, end);
- if (page_empty(pmd) && !page_empty(pud)) {
- clear_pud_entry(kvm, pud, addr);
- next = kvm_pud_addr_end(addr, end);
+ start_pud = pud = pud_offset(pgd, addr);
+ do {
+ next = kvm_pud_addr_end(addr, end);
+ if (!pud_none(*pud)) {
+ if (pud_huge(*pud)) {
+ pud_clear(pud);
+ kvm_tlb_flush_vmid_ipa(kvm, addr);
+ put_page(virt_to_page(pud));
+ } else {
+ unmap_pmds(kvm, pud, addr, next);
}
}
+ } while (pud++, addr = next, addr != end);
- addr = next;
- }
+ if (kvm_pud_table_empty(start_pud))
+ clear_pgd_entry(kvm, pgd, start_addr);
+}
+
+
+static void unmap_range(struct kvm *kvm, pgd_t *pgdp,
+ phys_addr_t start, u64 size)
+{
+ pgd_t *pgd;
+ phys_addr_t addr = start, end = start + size;
+ phys_addr_t next;
+
+ pgd = pgdp + pgd_index(addr);
+ do {
+ next = kvm_pgd_addr_end(addr, end);
+ unmap_puds(kvm, pgd, addr, next);
+ } while (pgd++, addr = next, addr != end);
}
static void stage2_flush_ptes(struct kvm *kvm, pmd_t *pmd,
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 7d29847..8e138c7 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -125,6 +125,21 @@ static inline void kvm_set_s2pmd_writable(pmd_t *pmd)
#define kvm_pud_addr_end(addr, end) pud_addr_end(addr, end)
#define kvm_pmd_addr_end(addr, end) pmd_addr_end(addr, end)
+static inline bool kvm_page_empty(void *ptr)
+{
+ struct page *ptr_page = virt_to_page(ptr);
+ return page_count(ptr_page) == 1;
+}
+
+#define kvm_pte_table_empty(ptep) kvm_page_empty(ptep)
+#ifndef CONFIG_ARM64_64K_PAGES
+#define kvm_pmd_table_empty(pmdp) kvm_page_empty(pmdp)
+#else
+#define kvm_pmd_table_empty(pmdp) (0)
+#endif
+#define kvm_pud_table_empty(pudp) (0)
+
+
struct kvm;
#define kvm_flush_dcache_to_poc(a,l) __flush_dcache_area((a), (l))
--
2.1.0
* [PATCH for 3.14.y stable 17/47] ARM: KVM: Unmap IPA on memslot delete/move
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
` (15 preceding siblings ...)
2015-05-04 1:52 ` [PATCH for 3.14.y stable 16/47] arm/arm64: KVM: Fix and refactor unmap_range shannon.zhao
@ 2015-05-04 1:52 ` shannon.zhao
2015-05-04 1:52 ` [PATCH for 3.14.y stable 18/47] ARM: KVM: user_mem_abort: support stage 2 MMIO page mapping shannon.zhao
` (30 subsequent siblings)
47 siblings, 0 replies; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:52 UTC (permalink / raw)
To: stable; +Cc: gregkh, christoffer.dall, shannon.zhao, Eric Auger
From: Eric Auger <eric.auger@linaro.org>
commit df6ce24f2ee485c4f9a5cb610063a5eb60da8267 upstream.
Currently when a KVM region is deleted or moved after
KVM_SET_USER_MEMORY_REGION ioctl, the corresponding
intermediate physical memory is not unmapped.
This patch corrects this and unmaps the region's IPA range
in kvm_arch_commit_memory_region using unmap_stage2_range.
Signed-off-by: Eric Auger <eric.auger@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm/kvm/arm.c | 37 -------------------------------------
arch/arm/kvm/mmu.c | 46 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 46 insertions(+), 37 deletions(-)
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index bd18bb8..f92a7fb 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -155,16 +155,6 @@ int kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf)
return VM_FAULT_SIGBUS;
}
-void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free,
- struct kvm_memory_slot *dont)
-{
-}
-
-int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
- unsigned long npages)
-{
- return 0;
-}
/**
* kvm_arch_destroy_vm - destroy the VM data structure
@@ -224,33 +214,6 @@ long kvm_arch_dev_ioctl(struct file *filp,
return -EINVAL;
}
-void kvm_arch_memslots_updated(struct kvm *kvm)
-{
-}
-
-int kvm_arch_prepare_memory_region(struct kvm *kvm,
- struct kvm_memory_slot *memslot,
- struct kvm_userspace_memory_region *mem,
- enum kvm_mr_change change)
-{
- return 0;
-}
-
-void kvm_arch_commit_memory_region(struct kvm *kvm,
- struct kvm_userspace_memory_region *mem,
- const struct kvm_memory_slot *old,
- enum kvm_mr_change change)
-{
-}
-
-void kvm_arch_flush_shadow_all(struct kvm *kvm)
-{
-}
-
-void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
- struct kvm_memory_slot *slot)
-{
-}
struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm, unsigned int id)
{
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 1cfede7..3d96ece 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -1111,3 +1111,49 @@ out:
free_hyp_pgds();
return err;
}
+
+void kvm_arch_commit_memory_region(struct kvm *kvm,
+ struct kvm_userspace_memory_region *mem,
+ const struct kvm_memory_slot *old,
+ enum kvm_mr_change change)
+{
+ gpa_t gpa = old->base_gfn << PAGE_SHIFT;
+ phys_addr_t size = old->npages << PAGE_SHIFT;
+ if (change == KVM_MR_DELETE || change == KVM_MR_MOVE) {
+ spin_lock(&kvm->mmu_lock);
+ unmap_stage2_range(kvm, gpa, size);
+ spin_unlock(&kvm->mmu_lock);
+ }
+}
+
+int kvm_arch_prepare_memory_region(struct kvm *kvm,
+ struct kvm_memory_slot *memslot,
+ struct kvm_userspace_memory_region *mem,
+ enum kvm_mr_change change)
+{
+ return 0;
+}
+
+void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free,
+ struct kvm_memory_slot *dont)
+{
+}
+
+int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
+ unsigned long npages)
+{
+ return 0;
+}
+
+void kvm_arch_memslots_updated(struct kvm *kvm)
+{
+}
+
+void kvm_arch_flush_shadow_all(struct kvm *kvm)
+{
+}
+
+void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
+ struct kvm_memory_slot *slot)
+{
+}
--
2.1.0
* [PATCH for 3.14.y stable 18/47] ARM: KVM: user_mem_abort: support stage 2 MMIO page mapping
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
` (16 preceding siblings ...)
2015-05-04 1:52 ` [PATCH for 3.14.y stable 17/47] ARM: KVM: Unmap IPA on memslot delete/move shannon.zhao
@ 2015-05-04 1:52 ` shannon.zhao
2015-05-04 1:52 ` [PATCH for 3.14.y stable 19/47] arm64: KVM: export demux regids as KVM_REG_ARM64 shannon.zhao
` (29 subsequent siblings)
47 siblings, 0 replies; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:52 UTC (permalink / raw)
To: stable; +Cc: gregkh, christoffer.dall, shannon.zhao, Kim Phillips,
Marc Zyngier
From: Kim Phillips <kim.phillips@linaro.org>
commit b88657674d39fc2127d62d0de9ca142e166443c8 upstream.
A userspace process can map device MMIO memory via VFIO or /dev/mem,
e.g., for platform device passthrough support in QEMU.
During early development, we found the PAGE_S2 memory type being used
for MMIO mappings. This patch corrects that by using the more strongly
ordered memory type for device MMIO mappings: PAGE_S2_DEVICE.
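The shape of the fix is a default-then-demote selection of the stage-2 memory type. A hedged sketch with toy stand-ins (the real PAGE_S2/PAGE_S2_DEVICE are pgprot attribute encodings, and toy_is_mmio_pfn() is a made-up predicate in place of kvm_is_mmio_pfn()):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy stand-ins for the kernel's stage-2 memory types. */
enum toy_mem_type { TOY_PAGE_S2, TOY_PAGE_S2_DEVICE };

/* Hypothetical predicate in place of kvm_is_mmio_pfn(): pretend this
 * pfn range is a device MMIO window. */
static bool toy_is_mmio_pfn(uint64_t pfn)
{
    return pfn >= 0x2c000ULL && pfn < 0x2d000ULL;
}

/* Mirrors the fix: default to normal stage-2 memory, demote to the
 * more strongly ordered device type when the pfn is MMIO. */
static enum toy_mem_type select_mem_type(uint64_t pfn)
{
    enum toy_mem_type mem_type = TOY_PAGE_S2;

    if (toy_is_mmio_pfn(pfn))
        mem_type = TOY_PAGE_S2_DEVICE;
    return mem_type;
}
```

The diff below also threads the result through to stage2_set_pte(), using `mem_type == PAGE_S2_DEVICE` as the IOMAP flag.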
Signed-off-by: Kim Phillips <kim.phillips@linaro.org>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm/kvm/mmu.c | 11 ++++++++---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 3d96ece..70ed2c1 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -759,6 +759,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
struct vm_area_struct *vma;
pfn_t pfn;
+ pgprot_t mem_type = PAGE_S2;
write_fault = kvm_is_write_fault(kvm_vcpu_get_hsr(vcpu));
if (fault_status == FSC_PERM && !write_fault) {
@@ -809,6 +810,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
if (is_error_pfn(pfn))
return -EFAULT;
+ if (kvm_is_mmio_pfn(pfn))
+ mem_type = PAGE_S2_DEVICE;
+
spin_lock(&kvm->mmu_lock);
if (mmu_notifier_retry(kvm, mmu_seq))
goto out_unlock;
@@ -816,7 +820,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
hugetlb = transparent_hugepage_adjust(&pfn, &fault_ipa);
if (hugetlb) {
- pmd_t new_pmd = pfn_pmd(pfn, PAGE_S2);
+ pmd_t new_pmd = pfn_pmd(pfn, mem_type);
new_pmd = pmd_mkhuge(new_pmd);
if (writable) {
kvm_set_s2pmd_writable(&new_pmd);
@@ -825,13 +829,14 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
coherent_cache_guest_page(vcpu, hva & PMD_MASK, PMD_SIZE);
ret = stage2_set_pmd_huge(kvm, memcache, fault_ipa, &new_pmd);
} else {
- pte_t new_pte = pfn_pte(pfn, PAGE_S2);
+ pte_t new_pte = pfn_pte(pfn, mem_type);
if (writable) {
kvm_set_s2pte_writable(&new_pte);
kvm_set_pfn_dirty(pfn);
}
coherent_cache_guest_page(vcpu, hva, PAGE_SIZE);
- ret = stage2_set_pte(kvm, memcache, fault_ipa, &new_pte, false);
+ ret = stage2_set_pte(kvm, memcache, fault_ipa, &new_pte,
+ mem_type == PAGE_S2_DEVICE);
}
--
2.1.0
* [PATCH for 3.14.y stable 19/47] arm64: KVM: export demux regids as KVM_REG_ARM64
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
` (17 preceding siblings ...)
2015-05-04 1:52 ` [PATCH for 3.14.y stable 18/47] ARM: KVM: user_mem_abort: support stage 2 MMIO page mapping shannon.zhao
@ 2015-05-04 1:52 ` shannon.zhao
2015-05-04 1:52 ` [PATCH for 3.14.y stable 20/47] ARM: virt: fix wrong HSCTLR.EE bit setting shannon.zhao
` (28 subsequent siblings)
47 siblings, 0 replies; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:52 UTC (permalink / raw)
To: stable
Cc: gregkh, christoffer.dall, shannon.zhao, Alex Bennée,
Marc Zyngier
From: Alex Bennée <alex.bennee@linaro.org>
commit efd48ceacea78e4d4656aa0a6bf4c5b92ed22130 upstream.
I suspect this is a -ECUTPASTE fault from the initial implementation. If
we don't declare the register ID to be KVM_REG_ARM64 the KVM_GET_ONE_REG
implementation kvm_arm_get_reg() returns -EINVAL and hilarity ensues.
The kvm/api.txt document describes all arm64 registers as starting with
0x60xx... (i.e KVM_REG_ARM64).
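Building the regid makes the one-bit-pattern difference visible: the architecture field lives in the top byte, so only KVM_REG_ARM64 yields the 0x60 prefix userspace expects. A sketch with the constants copied from the kvm uapi headers (the exact values shown here are an assumption; check the headers):

```c
#include <assert.h>
#include <stdint.h>

/* Values as found in include/uapi/linux/kvm.h and the arm64 uapi
 * headers (copied here for illustration, treat as an assumption). */
#define KVM_REG_ARM       0x4000000000000000ULL
#define KVM_REG_ARM64     0x6000000000000000ULL
#define KVM_REG_SIZE_U32  0x0020000000000000ULL
#define KVM_REG_ARM_DEMUX 0x0000000000110000ULL

/* Mirrors the val computed in write_demux_regids(): arch field in the
 * top byte, size field, then the DEMUX coproc space. */
static uint64_t demux_regid(uint64_t arch)
{
    return arch | KVM_REG_SIZE_U32 | KVM_REG_ARM_DEMUX;
}
```

With KVM_REG_ARM the top byte is 0x40, which is why kvm_arm_get_reg() rejected the ids with -EINVAL.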
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm64/kvm/sys_regs.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 0324458..5ee99e4 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -962,7 +962,7 @@ static unsigned int num_demux_regs(void)
static int write_demux_regids(u64 __user *uindices)
{
- u64 val = KVM_REG_ARM | KVM_REG_SIZE_U32 | KVM_REG_ARM_DEMUX;
+ u64 val = KVM_REG_ARM64 | KVM_REG_SIZE_U32 | KVM_REG_ARM_DEMUX;
unsigned int i;
val |= KVM_REG_ARM_DEMUX_ID_CCSIDR;
--
2.1.0
* [PATCH for 3.14.y stable 20/47] ARM: virt: fix wrong HSCTLR.EE bit setting
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
` (18 preceding siblings ...)
2015-05-04 1:52 ` [PATCH for 3.14.y stable 19/47] arm64: KVM: export demux regids as KVM_REG_ARM64 shannon.zhao
@ 2015-05-04 1:52 ` shannon.zhao
2015-05-04 1:52 ` [PATCH for 3.14.y stable 21/47] ARM64: KVM: store kvm_vcpu_fault_info esr_el2 as word shannon.zhao
` (27 subsequent siblings)
47 siblings, 0 replies; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:52 UTC (permalink / raw)
To: stable; +Cc: gregkh, christoffer.dall, shannon.zhao, Li Liu, Marc Zyngier
From: Li Liu <john.liuli@huawei.com>
commit af92394efc8be73edd2301fc15f9b57fd430cd18 upstream.
HSCTLR.EE is defined as bit[25] in the ARM Architecture Reference
Manual, DDI 0406C.b (p. 1590).
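The one-line nature of the fix is easiest to see as bit masks: the removed code set bit[9], which is not the EE field, while the replacement sets bit[25]. A trivial C illustration:

```c
#include <assert.h>
#include <stdint.h>

/* Per ARM DDI 0406C.b, HSCTLR.EE (exception endianness) is bit[25];
 * the old code's (1 << 9) lands in a different field entirely. */
#define HSCTLR_EE_WRONG (UINT32_C(1) << 9)
#define HSCTLR_EE       (UINT32_C(1) << 25)

/* Mirrors the ARM_BE8(orr r7, r7, #(1 << 25)) in the patch. */
static uint32_t set_hsctlr_ee(uint32_t hsctlr)
{
    return hsctlr | HSCTLR_EE;
}
```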
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Li Liu <john.liuli@huawei.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm/kernel/hyp-stub.S | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/arch/arm/kernel/hyp-stub.S b/arch/arm/kernel/hyp-stub.S
index 797b1a6..7e666cf 100644
--- a/arch/arm/kernel/hyp-stub.S
+++ b/arch/arm/kernel/hyp-stub.S
@@ -134,9 +134,7 @@ ENTRY(__hyp_stub_install_secondary)
mcr p15, 4, r7, c1, c1, 3 @ HSTR
THUMB( orr r7, #(1 << 30) ) @ HSCTLR.TE
-#ifdef CONFIG_CPU_BIG_ENDIAN
- orr r7, #(1 << 9) @ HSCTLR.EE
-#endif
+ARM_BE8(orr r7, r7, #(1 << 25)) @ HSCTLR.EE
mcr p15, 4, r7, c1, c0, 0 @ HSCTLR
mrc p15, 4, r7, c1, c1, 1 @ HDCR
--
2.1.0
* [PATCH for 3.14.y stable 21/47] ARM64: KVM: store kvm_vcpu_fault_info esr_el2 as word
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
` (19 preceding siblings ...)
2015-05-04 1:52 ` [PATCH for 3.14.y stable 20/47] ARM: virt: fix wrong HSCTLR.EE bit setting shannon.zhao
@ 2015-05-04 1:52 ` shannon.zhao
2015-05-04 1:52 ` [PATCH for 3.14.y stable 22/47] KVM: ARM/arm64: fix non-const declaration of function returning const shannon.zhao
` (26 subsequent siblings)
47 siblings, 0 replies; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:52 UTC (permalink / raw)
To: stable
Cc: gregkh, christoffer.dall, shannon.zhao, Victor Kamensky,
Marc Zyngier
From: Victor Kamensky <victor.kamensky@linaro.org>
commit ba083d20d8cfa9e999043cd89c4ebc964ccf8927 upstream.
esr_el2 field of struct kvm_vcpu_fault_info has u32 type.
It should be stored as a word. The current code works in the LE case
because the existing store puts the least significant word of x1 into
esr_el2 and the most significant word into the next field, which is
accidentally OK because that field is updated again by the next
instruction. But the existing code breaks in the BE case.
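The accident described above can be reproduced in plain C by modelling `str x1` as an 8-byte store at the offset of a 4-byte field. The struct layout and helpers below are illustrative stand-ins, not the kernel's kvm_vcpu_fault_info:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Illustrative layout: a 32-bit esr slot followed by more state. */
struct toy_fault_info {
    uint32_t esr_el2;
    uint32_t pad;      /* stands in for whatever follows esr_el2 */
    uint64_t far_el2;
};

/* "str x1": writes all 8 bytes of the register at the field offset,
 * spilling past the 4-byte esr_el2 slot into its neighbour. */
static void store_x(struct toy_fault_info *f, uint64_t x1)
{
    memcpy(&f->esr_el2, &x1, 8);
}

/* "str w1": writes only the least significant word, as the fix does. */
static void store_w(struct toy_fault_info *f, uint64_t x1)
{
    uint32_t w1 = (uint32_t)x1;
    memcpy(&f->esr_el2, &w1, 4);
}
```

The 4-byte store leaves the neighbouring field alone on any endianness; the 8-byte store always clobbers it, and on BE it additionally leaves the *wrong* word in esr_el2.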
Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm64/kvm/hyp.S | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S
index b0d1512..5dfc8331 100644
--- a/arch/arm64/kvm/hyp.S
+++ b/arch/arm64/kvm/hyp.S
@@ -830,7 +830,7 @@ el1_trap:
mrs x2, far_el2
2: mrs x0, tpidr_el2
- str x1, [x0, #VCPU_ESR_EL2]
+ str w1, [x0, #VCPU_ESR_EL2]
str x2, [x0, #VCPU_FAR_EL2]
str x3, [x0, #VCPU_HPFAR_EL2]
--
2.1.0
* [PATCH for 3.14.y stable 22/47] KVM: ARM/arm64: fix non-const declaration of function returning const
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
` (20 preceding siblings ...)
2015-05-04 1:52 ` [PATCH for 3.14.y stable 21/47] ARM64: KVM: store kvm_vcpu_fault_info esr_el2 as word shannon.zhao
@ 2015-05-04 1:52 ` shannon.zhao
2015-05-04 1:52 ` [PATCH for 3.14.y stable 23/47] KVM: ARM/arm64: fix broken __percpu annotation shannon.zhao
` (25 subsequent siblings)
47 siblings, 0 replies; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:52 UTC (permalink / raw)
To: stable; +Cc: gregkh, christoffer.dall, shannon.zhao, Will Deacon, Marc Zyngier
From: Will Deacon <will.deacon@arm.com>
commit 6951e48bff0b55d2a8e825a953fc1f8e3a34bf1c upstream.
Sparse kicks up about a type mismatch for kvm_target_cpu:
arch/arm64/kvm/guest.c:271:25: error: symbol 'kvm_target_cpu' redeclared with different type (originally declared at ./arch/arm64/include/asm/kvm_host.h:45) - different modifiers
so fix this by adding the missing const attribute to the function
declaration.
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm/include/asm/kvm_host.h | 2 +-
arch/arm64/include/asm/kvm_host.h | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 09af149..530f56e 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -42,7 +42,7 @@
struct kvm_vcpu;
u32 *kvm_vcpu_reg(struct kvm_vcpu *vcpu, u8 reg_num, u32 mode);
-int kvm_target_cpu(void);
+int __attribute_const__ kvm_target_cpu(void);
int kvm_reset_vcpu(struct kvm_vcpu *vcpu);
void kvm_reset_coprocs(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 0a1d697..518d000 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -42,7 +42,7 @@
#define KVM_VCPU_MAX_FEATURES 2
struct kvm_vcpu;
-int kvm_target_cpu(void);
+int __attribute_const__ kvm_target_cpu(void);
int kvm_reset_vcpu(struct kvm_vcpu *vcpu);
int kvm_arch_dev_ioctl_check_extension(long ext);
--
2.1.0
* [PATCH for 3.14.y stable 23/47] KVM: ARM/arm64: fix broken __percpu annotation
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
` (21 preceding siblings ...)
2015-05-04 1:52 ` [PATCH for 3.14.y stable 22/47] KVM: ARM/arm64: fix non-const declaration of function returning const shannon.zhao
@ 2015-05-04 1:52 ` shannon.zhao
2015-05-04 1:52 ` [PATCH for 3.14.y stable 24/47] KVM: ARM/arm64: avoid returning negative error code as bool shannon.zhao
` (24 subsequent siblings)
47 siblings, 0 replies; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:52 UTC (permalink / raw)
To: stable; +Cc: gregkh, christoffer.dall, shannon.zhao, Will Deacon, Marc Zyngier
From: Will Deacon <will.deacon@arm.com>
commit 4000be423cb01a8d09de878bb8184511c49d4238 upstream.
Running sparse results in a bunch of noisy address space mismatches
thanks to the broken __percpu annotation on kvm_get_running_vcpus.
This function returns a pcpu pointer to a pointer, not a pointer to a
pcpu pointer. This patch fixes the annotation, which kills the warnings
from sparse.
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm/kvm/arm.c | 2 +-
arch/arm64/include/asm/kvm_host.h | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index f92a7fb..df6e75e 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -82,7 +82,7 @@ struct kvm_vcpu *kvm_arm_get_running_vcpu(void)
/**
* kvm_arm_get_running_vcpus - get the per-CPU array of currently running vcpus.
*/
-struct kvm_vcpu __percpu **kvm_get_running_vcpus(void)
+struct kvm_vcpu * __percpu *kvm_get_running_vcpus(void)
{
return &kvm_arm_running_vcpu;
}
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 518d000..3fb0946 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -177,7 +177,7 @@ static inline int kvm_test_age_hva(struct kvm *kvm, unsigned long hva)
}
struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
-struct kvm_vcpu __percpu **kvm_get_running_vcpus(void);
+struct kvm_vcpu * __percpu *kvm_get_running_vcpus(void);
u64 kvm_call_hyp(void *hypfn, ...);
--
2.1.0
* [PATCH for 3.14.y stable 24/47] KVM: ARM/arm64: avoid returning negative error code as bool
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
` (22 preceding siblings ...)
2015-05-04 1:52 ` [PATCH for 3.14.y stable 23/47] KVM: ARM/arm64: fix broken __percpu annotation shannon.zhao
@ 2015-05-04 1:52 ` shannon.zhao
2015-05-04 1:52 ` [PATCH for 3.14.y stable 25/47] KVM: vgic: return int instead of bool when checking I/O ranges shannon.zhao
` (23 subsequent siblings)
47 siblings, 0 replies; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:52 UTC (permalink / raw)
To: stable; +Cc: gregkh, christoffer.dall, shannon.zhao, Will Deacon, Marc Zyngier
From: Will Deacon <will.deacon@arm.com>
commit 18d457661fb9fa69352822ab98d39331c3d0e571 upstream.
is_valid_cache returns true if the specified cache is valid.
Unfortunately, if the parameter passed is out of range, we return
-ENOENT, which ends up as true, leading to potential hilarity.
This patch returns false on the failure path instead.
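The underlying C rule is that any nonzero value converted to _Bool becomes true, so `return -ENOENT` from a bool-returning function silently reports success. A sketch (the CSSELR_MAX value and the elided validity logic are illustrative, not copied from the kernel):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

#define CSSELR_MAX 12  /* illustrative bound; the real value lives in the KVM sources */

/* The bug: -ENOENT is nonzero, so conversion to bool yields true and
 * an out-of-range cache index looks "valid". */
static bool is_valid_cache_buggy(unsigned int val)
{
    if (val >= CSSELR_MAX)
        return -ENOENT;   /* becomes true! */
    return true;          /* (real validity checks elided) */
}

/* The fix: fail the range check with an honest false. */
static bool is_valid_cache_fixed(unsigned int val)
{
    if (val >= CSSELR_MAX)
        return false;
    return true;          /* (real validity checks elided) */
}
```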
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm/kvm/coproc.c | 2 +-
arch/arm64/kvm/sys_regs.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/arm/kvm/coproc.c b/arch/arm/kvm/coproc.c
index c58a351..7c73290 100644
--- a/arch/arm/kvm/coproc.c
+++ b/arch/arm/kvm/coproc.c
@@ -742,7 +742,7 @@ static bool is_valid_cache(u32 val)
u32 level, ctype;
if (val >= CSSELR_MAX)
- return -ENOENT;
+ return false;
/* Bottom bit is Instruction or Data bit. Next 3 bits are level. */
level = (val >> 1);
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 5ee99e4..7691b25 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -836,7 +836,7 @@ static bool is_valid_cache(u32 val)
u32 level, ctype;
if (val >= CSSELR_MAX)
- return -ENOENT;
+ return false;
/* Bottom bit is Instruction or Data bit. Next 3 bits are level. */
level = (val >> 1);
--
2.1.0
* [PATCH for 3.14.y stable 25/47] KVM: vgic: return int instead of bool when checking I/O ranges
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
` (23 preceding siblings ...)
2015-05-04 1:52 ` [PATCH for 3.14.y stable 24/47] KVM: ARM/arm64: avoid returning negative error code as bool shannon.zhao
@ 2015-05-04 1:52 ` shannon.zhao
2015-05-04 1:52 ` [PATCH for 3.14.y stable 26/47] ARM/arm64: KVM: fix use of WnR bit in kvm_is_write_fault() shannon.zhao
` (22 subsequent siblings)
47 siblings, 0 replies; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:52 UTC (permalink / raw)
To: stable; +Cc: gregkh, christoffer.dall, shannon.zhao, Will Deacon, Marc Zyngier
From: Will Deacon <will.deacon@arm.com>
commit 1fa451bcc67fa921a04c5fac8dbcde7844d54512 upstream.
vgic_ioaddr_overlap claims to return a bool, but in reality it returns
an int. Shut sparse up by fixing the type signature.
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
virt/kvm/arm/vgic.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
index 4eec2d4..1316e55 100644
--- a/virt/kvm/arm/vgic.c
+++ b/virt/kvm/arm/vgic.c
@@ -1654,7 +1654,7 @@ out:
return ret;
}
-static bool vgic_ioaddr_overlap(struct kvm *kvm)
+static int vgic_ioaddr_overlap(struct kvm *kvm)
{
phys_addr_t dist = kvm->arch.vgic.vgic_dist_base;
phys_addr_t cpu = kvm->arch.vgic.vgic_cpu_base;
--
2.1.0
* [PATCH for 3.14.y stable 26/47] ARM/arm64: KVM: fix use of WnR bit in kvm_is_write_fault()
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
` (24 preceding siblings ...)
2015-05-04 1:52 ` [PATCH for 3.14.y stable 25/47] KVM: vgic: return int instead of bool when checking I/O ranges shannon.zhao
@ 2015-05-04 1:52 ` shannon.zhao
2015-05-04 1:52 ` [PATCH for 3.14.y stable 27/47] KVM: ARM: vgic: plug irq injection race shannon.zhao
` (21 subsequent siblings)
47 siblings, 0 replies; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:52 UTC (permalink / raw)
To: stable; +Cc: gregkh, christoffer.dall, shannon.zhao, Ard Biesheuvel,
Marc Zyngier
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
commit a7d079cea2dffb112e26da2566dd84c0ef1fce97 upstream.
The ISS encoding for an exception from a Data Abort has a WnR
bit[6] that indicates whether the Data Abort was caused by a
read or a write instruction. While there are several fields
in the encoding that are only valid if the ISV bit[24] is set,
WnR is not one of them, so we can read it unconditionally.
Instead of fixing both implementations of kvm_is_write_fault()
in place, reimplement it just once using kvm_vcpu_dabt_iswrite(),
which already does the right thing with respect to the WnR bit.
Also fix up the callers to pass 'vcpu' instead of the raw HSR/ESR value.
Acked-by: Laszlo Ersek <lersek@redhat.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm/include/asm/kvm_mmu.h | 11 -----------
arch/arm/kvm/mmu.c | 10 +++++++++-
arch/arm64/include/asm/kvm_mmu.h | 13 -------------
3 files changed, 9 insertions(+), 25 deletions(-)
diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 5cc0b0f..3f688b4 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -78,17 +78,6 @@ static inline void kvm_set_pte(pte_t *pte, pte_t new_pte)
flush_pmd_entry(pte);
}
-static inline bool kvm_is_write_fault(unsigned long hsr)
-{
- unsigned long hsr_ec = hsr >> HSR_EC_SHIFT;
- if (hsr_ec == HSR_EC_IABT)
- return false;
- else if ((hsr & HSR_ISV) && !(hsr & HSR_WNR))
- return false;
- else
- return true;
-}
-
static inline void kvm_clean_pgd(pgd_t *pgd)
{
clean_dcache_area(pgd, PTRS_PER_S2_PGD * sizeof(pgd_t));
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 70ed2c1..049c56e 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -746,6 +746,14 @@ static bool transparent_hugepage_adjust(pfn_t *pfnp, phys_addr_t *ipap)
return false;
}
+static bool kvm_is_write_fault(struct kvm_vcpu *vcpu)
+{
+ if (kvm_vcpu_trap_is_iabt(vcpu))
+ return false;
+
+ return kvm_vcpu_dabt_iswrite(vcpu);
+}
+
static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
struct kvm_memory_slot *memslot,
unsigned long fault_status)
@@ -761,7 +769,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
pfn_t pfn;
pgprot_t mem_type = PAGE_S2;
- write_fault = kvm_is_write_fault(kvm_vcpu_get_hsr(vcpu));
+ write_fault = kvm_is_write_fault(vcpu);
if (fault_status == FSC_PERM && !write_fault) {
kvm_err("Unexpected L2 read permission error\n");
return -EFAULT;
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 8e138c7..737da74 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -93,19 +93,6 @@ void kvm_clear_hyp_idmap(void);
#define kvm_set_pte(ptep, pte) set_pte(ptep, pte)
#define kvm_set_pmd(pmdp, pmd) set_pmd(pmdp, pmd)
-static inline bool kvm_is_write_fault(unsigned long esr)
-{
- unsigned long esr_ec = esr >> ESR_EL2_EC_SHIFT;
-
- if (esr_ec == ESR_EL2_EC_IABT)
- return false;
-
- if ((esr & ESR_EL2_ISV) && !(esr & ESR_EL2_WNR))
- return false;
-
- return true;
-}
-
static inline void kvm_clean_pgd(pgd_t *pgd) {}
static inline void kvm_clean_pmd_entry(pmd_t *pmd) {}
static inline void kvm_clean_pte(pte_t *pte) {}
--
2.1.0
* [PATCH for 3.14.y stable 27/47] KVM: ARM: vgic: plug irq injection race
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
` (25 preceding siblings ...)
2015-05-04 1:52 ` [PATCH for 3.14.y stable 26/47] ARM/arm64: KVM: fix use of WnR bit in kvm_is_write_fault() shannon.zhao
@ 2015-05-04 1:52 ` shannon.zhao
2015-05-04 1:52 ` [PATCH for 3.14.y stable 28/47] arm/arm64: KVM: Fix set_clear_sgi_pend_reg offset shannon.zhao
` (20 subsequent siblings)
47 siblings, 0 replies; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:52 UTC (permalink / raw)
To: stable; +Cc: gregkh, christoffer.dall, shannon.zhao, Marc Zyngier
From: Marc Zyngier <marc.zyngier@arm.com>
commit 71afaba4a2e98bb7bdeba5078370ab43d46e67a1 upstream.
As it stands, nothing prevents userspace from injecting an interrupt
before the guest's GIC is actually initialized.
This goes unnoticed so far (as everything is pretty much statically
allocated), but ends up exploding in a spectacular way once we switch
to a more dynamic allocation (the GIC data structure isn't there yet).
The fix is to test for the "ready" flag in the VGIC distributor before
trying to inject the interrupt. Note that in order to avoid breaking
userspace, we have to ignore what is essentially an error.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
virt/kvm/arm/vgic.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
index 1316e55..2187318 100644
--- a/virt/kvm/arm/vgic.c
+++ b/virt/kvm/arm/vgic.c
@@ -1387,7 +1387,8 @@ out:
int kvm_vgic_inject_irq(struct kvm *kvm, int cpuid, unsigned int irq_num,
bool level)
{
- if (vgic_update_irq_state(kvm, cpuid, irq_num, level))
+ if (likely(vgic_initialized(kvm)) &&
+ vgic_update_irq_state(kvm, cpuid, irq_num, level))
vgic_kick_vcpus(kvm);
return 0;
--
2.1.0
* [PATCH for 3.14.y stable 28/47] arm/arm64: KVM: Fix set_clear_sgi_pend_reg offset
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
` (26 preceding siblings ...)
2015-05-04 1:52 ` [PATCH for 3.14.y stable 27/47] KVM: ARM: vgic: plug irq injection race shannon.zhao
@ 2015-05-04 1:52 ` shannon.zhao
2015-05-04 1:52 ` [PATCH for 3.14.y stable 29/47] arm/arm64: KVM: Fix VTTBR_BADDR_MASK and pgd alloc shannon.zhao
` (19 subsequent siblings)
47 siblings, 0 replies; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:52 UTC (permalink / raw)
To: stable; +Cc: gregkh, christoffer.dall, shannon.zhao
From: Christoffer Dall <christoffer.dall@linaro.org>
commit 0fea6d7628ed6e25a9ee1b67edf7c859718d39e8 upstream.
The sgi values calculated in read_set_clear_sgi_pend_reg() and
write_set_clear_sgi_pend_reg() were incorrectly multiplied by 4, with
catastrophic results: the subfunctions ended up overwriting memory not
allocated for the expected purpose.
This showed up as bugs in kfree() and the kernel complaining a lot if
you turn on memory debugging.
This addresses: http://marc.info/?l=kvm&m=141164910007868&w=2
Reported-by: Shannon Zhao <zhaoshenglong@huawei.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
virt/kvm/arm/vgic.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
index 2187318..5309a1d 100644
--- a/virt/kvm/arm/vgic.c
+++ b/virt/kvm/arm/vgic.c
@@ -674,7 +674,7 @@ static bool read_set_clear_sgi_pend_reg(struct kvm_vcpu *vcpu,
{
struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
int sgi;
- int min_sgi = (offset & ~0x3) * 4;
+ int min_sgi = (offset & ~0x3);
int max_sgi = min_sgi + 3;
int vcpu_id = vcpu->vcpu_id;
u32 reg = 0;
@@ -695,7 +695,7 @@ static bool write_set_clear_sgi_pend_reg(struct kvm_vcpu *vcpu,
{
struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
int sgi;
- int min_sgi = (offset & ~0x3) * 4;
+ int min_sgi = (offset & ~0x3);
int max_sgi = min_sgi + 3;
int vcpu_id = vcpu->vcpu_id;
u32 reg;
--
2.1.0
* [PATCH for 3.14.y stable 29/47] arm/arm64: KVM: Fix VTTBR_BADDR_MASK and pgd alloc
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
` (27 preceding siblings ...)
2015-05-04 1:52 ` [PATCH for 3.14.y stable 28/47] arm/arm64: KVM: Fix set_clear_sgi_pend_reg offset shannon.zhao
@ 2015-05-04 1:52 ` shannon.zhao
2015-05-04 1:52 ` [PATCH for 3.14.y stable 30/47] arm: kvm: fix CPU hotplug shannon.zhao
` (18 subsequent siblings)
47 siblings, 0 replies; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:52 UTC (permalink / raw)
To: stable; +Cc: gregkh, christoffer.dall, shannon.zhao, Joel Schopp
From: Joel Schopp <joel.schopp@amd.com>
commit dbff124e29fa24aff9705b354b5f4648cd96e0bb upstream.
The current aarch64 calculation for VTTBR_BADDR_MASK masks only 39 bits
and not all the bits in the PA range. This is clearly a bug that
manifests itself on systems that allocate memory in the higher address
space range.
[ Modified from Joel's original patch to be based on PHYS_MASK_SHIFT
instead of a hard-coded value and to move the alignment check of the
allocation to mmu.c. Also added a comment explaining why we hardcode
the IPA range and changed the stage-2 pgd allocation to be based on
the 40 bit IPA range instead of the maximum possible 48 bit PA range.
- Christoffer ]
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Joel Schopp <joel.schopp@amd.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm/kvm/arm.c | 4 ++--
arch/arm64/include/asm/kvm_arm.h | 13 ++++++++++++-
arch/arm64/include/asm/kvm_mmu.h | 5 ++---
3 files changed, 16 insertions(+), 6 deletions(-)
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index df6e75e..55c1ebf 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -427,9 +427,9 @@ static void update_vttbr(struct kvm *kvm)
/* update vttbr to be used with the new vmid */
pgd_phys = virt_to_phys(kvm->arch.pgd);
+ BUG_ON(pgd_phys & ~VTTBR_BADDR_MASK);
vmid = ((u64)(kvm->arch.vmid) << VTTBR_VMID_SHIFT) & VTTBR_VMID_MASK;
- kvm->arch.vttbr = pgd_phys & VTTBR_BADDR_MASK;
- kvm->arch.vttbr |= vmid;
+ kvm->arch.vttbr = pgd_phys | vmid;
spin_unlock(&kvm_vmid_lock);
}
diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 00fbaa7..2bc2602 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -122,6 +122,17 @@
#define VTCR_EL2_T0SZ_MASK 0x3f
#define VTCR_EL2_T0SZ_40B 24
+/*
+ * We configure the Stage-2 page tables to always restrict the IPA space to be
+ * 40 bits wide (T0SZ = 24). Systems with a PARange smaller than 40 bits are
+ * not known to exist and will break with this configuration.
+ *
+ * Note that when using 4K pages, we concatenate two first level page tables
+ * together.
+ *
+ * The magic numbers used for VTTBR_X in this patch can be found in Tables
+ * D4-23 and D4-25 in ARM DDI 0487A.b.
+ */
#ifdef CONFIG_ARM64_64K_PAGES
/*
* Stage2 translation configuration:
@@ -151,7 +162,7 @@
#endif
#define VTTBR_BADDR_SHIFT (VTTBR_X - 1)
-#define VTTBR_BADDR_MASK (((1LLU << (40 - VTTBR_X)) - 1) << VTTBR_BADDR_SHIFT)
+#define VTTBR_BADDR_MASK (((1LLU << (PHYS_MASK_SHIFT - VTTBR_X)) - 1) << VTTBR_BADDR_SHIFT)
#define VTTBR_VMID_SHIFT (48LLU)
#define VTTBR_VMID_MASK (0xffLLU << VTTBR_VMID_SHIFT)
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 737da74..a030d16 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -59,10 +59,9 @@
#define KERN_TO_HYP(kva) ((unsigned long)kva - PAGE_OFFSET + HYP_PAGE_OFFSET)
/*
- * Align KVM with the kernel's view of physical memory. Should be
- * 40bit IPA, with PGD being 8kB aligned in the 4KB page configuration.
+ * We currently only support a 40bit IPA.
*/
-#define KVM_PHYS_SHIFT PHYS_MASK_SHIFT
+#define KVM_PHYS_SHIFT (40)
#define KVM_PHYS_SIZE (1UL << KVM_PHYS_SHIFT)
#define KVM_PHYS_MASK (KVM_PHYS_SIZE - 1UL)
--
2.1.0
* [PATCH for 3.14.y stable 30/47] arm: kvm: fix CPU hotplug
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
` (28 preceding siblings ...)
2015-05-04 1:52 ` [PATCH for 3.14.y stable 29/47] arm/arm64: KVM: Fix VTTBR_BADDR_MASK and pgd alloc shannon.zhao
@ 2015-05-04 1:52 ` shannon.zhao
2015-05-04 1:52 ` [PATCH for 3.14.y stable 31/47] arm/arm64: KVM: fix potential NULL dereference in user_mem_abort() shannon.zhao
` (17 subsequent siblings)
47 siblings, 0 replies; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:52 UTC (permalink / raw)
To: stable; +Cc: gregkh, christoffer.dall, shannon.zhao, Vladimir Murzin
From: Vladimir Murzin <vladimir.murzin@arm.com>
commit 37a34ac1d4775aafbc73b9db53c7daebbbc67e6a upstream.
On some platforms with no power management capabilities, the hotplug
implementation is allowed to return from a smp_ops.cpu_die() call as a
function return. Upon a CPU onlining event, the KVM CPU notifier tries
to reinstall the hyp stub, which fails on platform where no reset took
place following a hotplug event, with the message:
CPU1: smp_ops.cpu_die() returned, trying to resuscitate
CPU1: Booted secondary processor
Kernel panic - not syncing: unexpected prefetch abort in Hyp mode at: 0x80409540
unexpected data abort in Hyp mode at: 0x80401fe8
unexpected HVC/SVC trap in Hyp mode at: 0x805c6170
since KVM code is trying to reinstall the stub on a system where it is
already configured.
To prevent this issue, this patch adds a check in the KVM hotplug
notifier that detects if the HYP stub really needs re-installing when a
CPU is onlined and skips the installation call if the stub is already in
place, which means that the CPU has not been reset.
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm/kvm/arm.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 55c1ebf..fb9c291 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -825,7 +825,8 @@ static int hyp_init_cpu_notify(struct notifier_block *self,
switch (action) {
case CPU_STARTING:
case CPU_STARTING_FROZEN:
- cpu_init_hyp_mode(NULL);
+ if (__hyp_get_vectors() == hyp_default_vectors)
+ cpu_init_hyp_mode(NULL);
break;
}
--
2.1.0
* [PATCH for 3.14.y stable 31/47] arm/arm64: KVM: fix potential NULL dereference in user_mem_abort()
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
` (29 preceding siblings ...)
2015-05-04 1:52 ` [PATCH for 3.14.y stable 30/47] arm: kvm: fix CPU hotplug shannon.zhao
@ 2015-05-04 1:52 ` shannon.zhao
2015-05-04 1:52 ` [PATCH for 3.14.y stable 32/47] arm/arm64: KVM: Ensure memslots are within KVM_PHYS_SIZE shannon.zhao
` (16 subsequent siblings)
47 siblings, 0 replies; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:52 UTC (permalink / raw)
To: stable; +Cc: gregkh, christoffer.dall, shannon.zhao, Ard Biesheuvel
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
commit 37b544087ef3f65ca68465ba39291a07195dac26 upstream.
Handle the potential NULL return value of find_vma_intersection()
before dereferencing it.
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm/kvm/mmu.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 049c56e..8cd0387 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -778,6 +778,12 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
/* Let's check if we will get back a huge page backed by hugetlbfs */
down_read(&current->mm->mmap_sem);
vma = find_vma_intersection(current->mm, hva, hva + 1);
+ if (unlikely(!vma)) {
+ kvm_err("Failed to find VMA for hva 0x%lx\n", hva);
+ up_read(&current->mm->mmap_sem);
+ return -EFAULT;
+ }
+
if (is_vm_hugetlb_page(vma)) {
hugetlb = true;
gfn = (fault_ipa & PMD_MASK) >> PAGE_SHIFT;
--
2.1.0
* [PATCH for 3.14.y stable 32/47] arm/arm64: KVM: Ensure memslots are within KVM_PHYS_SIZE
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
` (30 preceding siblings ...)
2015-05-04 1:52 ` [PATCH for 3.14.y stable 31/47] arm/arm64: KVM: fix potential NULL dereference in user_mem_abort() shannon.zhao
@ 2015-05-04 1:52 ` shannon.zhao
2015-05-04 1:52 ` [PATCH for 3.14.y stable 33/47] arm: kvm: STRICT_MM_TYPECHECKS fix for user_mem_abort shannon.zhao
` (15 subsequent siblings)
47 siblings, 0 replies; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:52 UTC (permalink / raw)
To: stable; +Cc: gregkh, christoffer.dall, shannon.zhao
From: Christoffer Dall <christoffer.dall@linaro.org>
commit c3058d5da2222629bc2223c488a4512b59bb4baf upstream.
When creating or moving a memslot, make sure the IPA space is within the
addressable range of the guest. Otherwise, user space can create too
large a memslot and KVM would try to access potentially unallocated page
table entries when inserting entries in the Stage-2 page tables.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm/kvm/mmu.c | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 8cd0387..8a677ae 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -926,6 +926,9 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run)
memslot = gfn_to_memslot(vcpu->kvm, gfn);
+ /* Userspace should not be able to register out-of-bounds IPAs */
+ VM_BUG_ON(fault_ipa >= KVM_PHYS_SIZE);
+
ret = user_mem_abort(vcpu, fault_ipa, memslot, fault_status);
if (ret == 0)
ret = 1;
@@ -1150,6 +1153,14 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
struct kvm_userspace_memory_region *mem,
enum kvm_mr_change change)
{
+ /*
+ * Prevent userspace from creating a memory region outside of the IPA
+ * space addressable by the KVM guest IPA space.
+ */
+ if (memslot->base_gfn + memslot->npages >=
+ (KVM_PHYS_SIZE >> PAGE_SHIFT))
+ return -EFAULT;
+
return 0;
}
--
2.1.0
* [PATCH for 3.14.y stable 33/47] arm: kvm: STRICT_MM_TYPECHECKS fix for user_mem_abort
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
` (31 preceding siblings ...)
2015-05-04 1:52 ` [PATCH for 3.14.y stable 32/47] arm/arm64: KVM: Ensure memslots are within KVM_PHYS_SIZE shannon.zhao
@ 2015-05-04 1:52 ` shannon.zhao
2015-05-04 1:52 ` [PATCH for 3.14.y stable 34/47] arm64: KVM: fix unmapping with 48-bit VAs shannon.zhao
` (14 subsequent siblings)
47 siblings, 0 replies; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:52 UTC (permalink / raw)
To: stable; +Cc: gregkh, christoffer.dall, shannon.zhao, Steve Capper
From: Steve Capper <steve.capper@linaro.org>
commit 3d08c629244257473450a8ba17cb8184b91e68f8 upstream.
Commit:
b886576 ARM: KVM: user_mem_abort: support stage 2 MMIO page mapping
introduced some code in user_mem_abort that failed to compile if
STRICT_MM_TYPECHECKS was enabled.
This patch fixes up the failing comparison.
Signed-off-by: Steve Capper <steve.capper@linaro.org>
Reviewed-by: Kim Phillips <kim.phillips@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm/kvm/mmu.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 8a677ae..8a998e0 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -850,7 +850,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
}
coherent_cache_guest_page(vcpu, hva, PAGE_SIZE);
ret = stage2_set_pte(kvm, memcache, fault_ipa, &new_pte,
- mem_type == PAGE_S2_DEVICE);
+ pgprot_val(mem_type) == pgprot_val(PAGE_S2_DEVICE));
}
--
2.1.0
* [PATCH for 3.14.y stable 34/47] arm64: KVM: fix unmapping with 48-bit VAs
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
` (32 preceding siblings ...)
2015-05-04 1:52 ` [PATCH for 3.14.y stable 33/47] arm: kvm: STRICT_MM_TYPECHECKS fix for user_mem_abort shannon.zhao
@ 2015-05-04 1:52 ` shannon.zhao
2015-05-04 1:52 ` [PATCH for 3.14.y stable 35/47] arm/arm64: KVM: vgic: Fix error code in kvm_vgic_create() shannon.zhao
` (13 subsequent siblings)
47 siblings, 0 replies; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:52 UTC (permalink / raw)
To: stable
Cc: gregkh, christoffer.dall, shannon.zhao, Mark Rutland,
Catalin Marinas, Jungseok Lee, Marc Zyngier, Paolo Bonzini
From: Mark Rutland <mark.rutland@arm.com>
commit 7cbb87d67e38cfc55680290a706fd7517f10050d upstream.
Currently if using a 48-bit VA, tearing down the hyp page tables (which
can happen in the absence of a GICH or GICV resource) results in the
rather nasty splat below, evidently because we access a table that
doesn't actually exist.
Commit 38f791a4e499792e (arm64: KVM: Implement 48 VA support for KVM EL2
and Stage-2) added a pgd_none check to __create_hyp_mappings to account
for the additional level of tables, but didn't add a corresponding check
to unmap_range, and this seems to be the source of the problem.
This patch adds the missing pgd_none check, ensuring we don't try to
access tables that don't exist.
Original splat below:
kvm [1]: Using HYP init bounce page @83fe94a000
kvm [1]: Cannot obtain GICH resource
Unable to handle kernel paging request at virtual address ffff7f7fff000000
pgd = ffff800000770000
[ffff7f7fff000000] *pgd=0000000000000000
Internal error: Oops: 96000004 [#1] PREEMPT SMP
Modules linked in:
CPU: 1 PID: 1 Comm: swapper/0 Not tainted 3.18.0-rc2+ #89
task: ffff8003eb500000 ti: ffff8003eb45c000 task.ti: ffff8003eb45c000
PC is at unmap_range+0x120/0x580
LR is at free_hyp_pgds+0xac/0xe4
pc : [<ffff80000009b768>] lr : [<ffff80000009cad8>] pstate: 80000045
sp : ffff8003eb45fbf0
x29: ffff8003eb45fbf0 x28: ffff800000736000
x27: ffff800000735000 x26: ffff7f7fff000000
x25: 0000000040000000 x24: ffff8000006f5000
x23: 0000000000000000 x22: 0000007fffffffff
x21: 0000800000000000 x20: 0000008000000000
x19: 0000000000000000 x18: ffff800000648000
x17: ffff800000537228 x16: 0000000000000000
x15: 000000000000001f x14: 0000000000000000
x13: 0000000000000001 x12: 0000000000000020
x11: 0000000000000062 x10: 0000000000000006
x9 : 0000000000000000 x8 : 0000000000000063
x7 : 0000000000000018 x6 : 00000003ff000000
x5 : ffff800000744188 x4 : 0000000000000001
x3 : 0000000040000000 x2 : ffff800000000000
x1 : 0000007fffffffff x0 : 000000003fffffff
Process swapper/0 (pid: 1, stack limit = 0xffff8003eb45c058)
Stack: (0xffff8003eb45fbf0 to 0xffff8003eb460000)
fbe0: eb45fcb0 ffff8003 0009cad8 ffff8000
fc00: 00000000 00000080 00736140 ffff8000 00736000 ffff8000 00000000 00007c80
fc20: 00000000 00000080 006f5000 ffff8000 00000000 00000080 00743000 ffff8000
fc40: 00735000 ffff8000 006d3030 ffff8000 006fe7b8 ffff8000 00000000 00000080
fc60: ffffffff 0000007f fdac1000 ffff8003 fd94b000 ffff8003 fda47000 ffff8003
fc80: 00502b40 ffff8000 ff000000 ffff7f7f fdec6000 00008003 fdac1630 ffff8003
fca0: eb45fcb0 ffff8003 ffffffff 0000007f eb45fd00 ffff8003 0009b378 ffff8000
fcc0: ffffffea 00000000 006fe000 ffff8000 00736728 ffff8000 00736120 ffff8000
fce0: 00000040 00000000 00743000 ffff8000 006fe7b8 ffff8000 0050cd48 00000000
fd00: eb45fd60 ffff8003 00096070 ffff8000 006f06e0 ffff8000 006f06e0 ffff8000
fd20: fd948b40 ffff8003 0009a320 ffff8000 00000000 00000000 00000000 00000000
fd40: 00000ae0 00000000 006aa25c ffff8000 eb45fd60 ffff8003 0017ca44 00000002
fd60: eb45fdc0 ffff8003 0009a33c ffff8000 006f06e0 ffff8000 006f06e0 ffff8000
fd80: fd948b40 ffff8003 0009a320 ffff8000 00000000 00000000 00735000 ffff8000
fda0: 006d3090 ffff8000 006aa25c ffff8000 00735000 ffff8000 006d3030 ffff8000
fdc0: eb45fdd0 ffff8003 000814c0 ffff8000 eb45fe50 ffff8003 006aaac4 ffff8000
fde0: 006ddd90 ffff8000 00000006 00000000 006d3000 ffff8000 00000095 00000000
fe00: 006a1e90 ffff8000 00735000 ffff8000 006d3000 ffff8000 006aa25c ffff8000
fe20: 00735000 ffff8000 006d3030 ffff8000 eb45fe50 ffff8003 006fac68 ffff8000
fe40: 00000006 00000006 fe293ee6 ffff8003 eb45feb0 ffff8003 004f8ee8 ffff8000
fe60: 004f8ed4 ffff8000 00735000 ffff8000 00000000 00000000 00000000 00000000
fe80: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
fea0: 00000000 00000000 00000000 00000000 00000000 00000000 000843d0 ffff8000
fec0: 004f8ed4 ffff8000 00000000 00000000 00000000 00000000 00000000 00000000
fee0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
ff00: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
ff20: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
ff40: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
ff60: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
ff80: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
ffa0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
ffc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000005 00000000
ffe0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
Call trace:
[<ffff80000009b768>] unmap_range+0x120/0x580
[<ffff80000009cad4>] free_hyp_pgds+0xa8/0xe4
[<ffff80000009b374>] kvm_arch_init+0x268/0x44c
[<ffff80000009606c>] kvm_init+0x24/0x260
[<ffff80000009a338>] arm_init+0x18/0x24
[<ffff8000000814bc>] do_one_initcall+0x88/0x1a0
[<ffff8000006aaac0>] kernel_init_freeable+0x148/0x1e8
[<ffff8000004f8ee4>] kernel_init+0x10/0xd4
Code: 8b000263 92628479 d1000720 eb01001f (f9400340)
---[ end trace 3bc230562e926fa4 ]---
Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Jungseok Lee <jungseoklee85@gmail.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm/kvm/mmu.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 8a998e0..6b30b1b 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -197,7 +197,8 @@ static void unmap_range(struct kvm *kvm, pgd_t *pgdp,
pgd = pgdp + pgd_index(addr);
do {
next = kvm_pgd_addr_end(addr, end);
- unmap_puds(kvm, pgd, addr, next);
+ if (!pgd_none(*pgd))
+ unmap_puds(kvm, pgd, addr, next);
} while (pgd++, addr = next, addr != end);
}
--
2.1.0
* [PATCH for 3.14.y stable 35/47] arm/arm64: KVM: vgic: Fix error code in kvm_vgic_create()
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
` (33 preceding siblings ...)
2015-05-04 1:52 ` [PATCH for 3.14.y stable 34/47] arm64: KVM: fix unmapping with 48-bit VAs shannon.zhao
@ 2015-05-04 1:52 ` shannon.zhao
2015-05-04 1:52 ` [PATCH for 3.14.y stable 36/47] arm64/kvm: Fix assembler compatibility of macros shannon.zhao
` (12 subsequent siblings)
47 siblings, 0 replies; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:52 UTC (permalink / raw)
To: stable
Cc: gregkh, christoffer.dall, shannon.zhao, Andre Przywara,
Marc Zyngier, Paolo Bonzini
From: Christoffer Dall <christoffer.dall@linaro.org>
commit 6b50f54064a02b77a7b990032b80234fee59bcd6 upstream.
If we detect another vCPU is running we just exit and return 0 as if we
successfully created the VGIC, but the VGIC wouldn't actually be created.
This shouldn't break in-kernel behavior because the kernel will not
observe the failed attempt to create the VGIC, but userspace could
be rightfully confused.
Cc: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
virt/kvm/arm/vgic.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
index 5309a1d..c324a52 100644
--- a/virt/kvm/arm/vgic.c
+++ b/virt/kvm/arm/vgic.c
@@ -1611,7 +1611,7 @@ out:
int kvm_vgic_create(struct kvm *kvm)
{
- int i, vcpu_lock_idx = -1, ret = 0;
+ int i, vcpu_lock_idx = -1, ret;
struct kvm_vcpu *vcpu;
mutex_lock(&kvm->lock);
@@ -1626,6 +1626,7 @@ int kvm_vgic_create(struct kvm *kvm)
* vcpu->mutex. By grabbing the vcpu->mutex of all VCPUs we ensure
* that no other VCPUs are run while we create the vgic.
*/
+ ret = -EBUSY;
kvm_for_each_vcpu(i, vcpu, kvm) {
if (!mutex_trylock(&vcpu->mutex))
goto out_unlock;
@@ -1633,11 +1634,10 @@ int kvm_vgic_create(struct kvm *kvm)
}
kvm_for_each_vcpu(i, vcpu, kvm) {
- if (vcpu->arch.has_run_once) {
- ret = -EBUSY;
+ if (vcpu->arch.has_run_once)
goto out_unlock;
- }
}
+ ret = 0;
spin_lock_init(&kvm->arch.vgic.lock);
kvm->arch.vgic.vctrl_base = vgic_vctrl_base;
--
2.1.0
* [PATCH for 3.14.y stable 36/47] arm64/kvm: Fix assembler compatibility of macros
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
` (34 preceding siblings ...)
2015-05-04 1:52 ` [PATCH for 3.14.y stable 35/47] arm/arm64: KVM: vgic: Fix error code in kvm_vgic_create() shannon.zhao
@ 2015-05-04 1:52 ` shannon.zhao
2015-05-04 1:52 ` [PATCH for 3.14.y stable 37/47] arm/arm64: kvm: drop inappropriate use of kvm_is_mmio_pfn() shannon.zhao
` (11 subsequent siblings)
47 siblings, 0 replies; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:52 UTC (permalink / raw)
To: stable; +Cc: gregkh, christoffer.dall, shannon.zhao, Geoff Levand, Will Deacon
From: Geoff Levand <geoff@infradead.org>
commit 286fb1cc32b11c18da3573a8c8c37a4f9da16e30 upstream.
Some of the macros defined in kvm_arm.h are useful in assembly files, but are
not compatible with the assembler. Change any C language integer constant
definitions using appended U, UL, or ULL to the UL() preprocessor macro. Also,
add a preprocessor include of the asm/memory.h file which defines the UL()
macro.
Fixes build errors like these when using kvm_arm.h in assembly
source files:
Error: unexpected characters following instruction at operand 3 -- `and x0,x1,#((1U<<25)-1)'
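A minimal sketch of how such a dual-mode constant macro works, modeled on the kernel's UL()/_AC() helpers (the real definitions live in asm/memory.h and linux/const.h; this is a simplified stand-in):

```c
#include <assert.h>

/* Under __ASSEMBLY__ the macro expands to the bare constant, which the
 * assembler can parse; in C it appends the UL suffix so shifts are done
 * in 64-bit arithmetic. */
#ifdef __ASSEMBLY__
#define UL(x)  x
#else
#define UL(x)  x##UL
#endif

/* The same definitions then work from both C and .S files: */
#define VTTBR_VMID_SHIFT  (UL(48))
#define VTTBR_VMID_MASK   (UL(0xFF) << VTTBR_VMID_SHIFT)
```

With the plain `48LLU`/`0xffLLU` spellings, the assembler rejects the suffix characters, which is exactly the build error quoted above.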
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Geoff Levand <geoff@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm64/include/asm/kvm_arm.h | 21 +++++++++++----------
1 file changed, 11 insertions(+), 10 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 2bc2602..ea68925 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -18,6 +18,7 @@
#ifndef __ARM64_KVM_ARM_H__
#define __ARM64_KVM_ARM_H__
+#include <asm/memory.h>
#include <asm/types.h>
/* Hyp Configuration Register (HCR) bits */
@@ -162,9 +163,9 @@
#endif
#define VTTBR_BADDR_SHIFT (VTTBR_X - 1)
-#define VTTBR_BADDR_MASK (((1LLU << (PHYS_MASK_SHIFT - VTTBR_X)) - 1) << VTTBR_BADDR_SHIFT)
-#define VTTBR_VMID_SHIFT (48LLU)
-#define VTTBR_VMID_MASK (0xffLLU << VTTBR_VMID_SHIFT)
+#define VTTBR_BADDR_MASK (((UL(1) << (PHYS_MASK_SHIFT - VTTBR_X)) - 1) << VTTBR_BADDR_SHIFT)
+#define VTTBR_VMID_SHIFT (UL(48))
+#define VTTBR_VMID_MASK (UL(0xFF) << VTTBR_VMID_SHIFT)
/* Hyp System Trap Register */
#define HSTR_EL2_TTEE (1 << 16)
@@ -187,13 +188,13 @@
/* Exception Syndrome Register (ESR) bits */
#define ESR_EL2_EC_SHIFT (26)
-#define ESR_EL2_EC (0x3fU << ESR_EL2_EC_SHIFT)
-#define ESR_EL2_IL (1U << 25)
+#define ESR_EL2_EC (UL(0x3f) << ESR_EL2_EC_SHIFT)
+#define ESR_EL2_IL (UL(1) << 25)
#define ESR_EL2_ISS (ESR_EL2_IL - 1)
#define ESR_EL2_ISV_SHIFT (24)
-#define ESR_EL2_ISV (1U << ESR_EL2_ISV_SHIFT)
+#define ESR_EL2_ISV (UL(1) << ESR_EL2_ISV_SHIFT)
#define ESR_EL2_SAS_SHIFT (22)
-#define ESR_EL2_SAS (3U << ESR_EL2_SAS_SHIFT)
+#define ESR_EL2_SAS (UL(3) << ESR_EL2_SAS_SHIFT)
#define ESR_EL2_SSE (1 << 21)
#define ESR_EL2_SRT_SHIFT (16)
#define ESR_EL2_SRT_MASK (0x1f << ESR_EL2_SRT_SHIFT)
@@ -207,16 +208,16 @@
#define ESR_EL2_FSC_TYPE (0x3c)
#define ESR_EL2_CV_SHIFT (24)
-#define ESR_EL2_CV (1U << ESR_EL2_CV_SHIFT)
+#define ESR_EL2_CV (UL(1) << ESR_EL2_CV_SHIFT)
#define ESR_EL2_COND_SHIFT (20)
-#define ESR_EL2_COND (0xfU << ESR_EL2_COND_SHIFT)
+#define ESR_EL2_COND (UL(0xf) << ESR_EL2_COND_SHIFT)
#define FSC_FAULT (0x04)
#define FSC_PERM (0x0c)
/* Hyp Prefetch Fault Address Register (HPFAR/HDFAR) */
-#define HPFAR_MASK (~0xFUL)
+#define HPFAR_MASK (~UL(0xf))
#define ESR_EL2_EC_UNKNOWN (0x00)
#define ESR_EL2_EC_WFI (0x01)
--
2.1.0
* [PATCH for 3.14.y stable 37/47] arm/arm64: kvm: drop inappropriate use of kvm_is_mmio_pfn()
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
` (35 preceding siblings ...)
2015-05-04 1:52 ` [PATCH for 3.14.y stable 36/47] arm64/kvm: Fix assembler compatibility of macros shannon.zhao
@ 2015-05-04 1:52 ` shannon.zhao
2015-05-04 1:52 ` [PATCH for 3.14.y stable 38/47] arm/arm64: KVM: Don't clear the VCPU_POWER_OFF flag shannon.zhao
` (10 subsequent siblings)
47 siblings, 0 replies; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:52 UTC (permalink / raw)
To: stable; +Cc: gregkh, christoffer.dall, shannon.zhao, Ard Biesheuvel,
Marc Zyngier
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
commit 07a9748c78cfc39b54f06125a216b67b9c8f09ed upstream.
Instead of using kvm_is_mmio_pfn() to decide whether a host region
should be stage 2 mapped with device attributes, add a new static
function kvm_is_device_pfn() that disregards RAM pages with the
reserved bit set, as those should usually not be mapped as device
memory.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm/kvm/mmu.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 6b30b1b..f8c231d 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -755,6 +755,11 @@ static bool kvm_is_write_fault(struct kvm_vcpu *vcpu)
return kvm_vcpu_dabt_iswrite(vcpu);
}
+static bool kvm_is_device_pfn(unsigned long pfn)
+{
+ return !pfn_valid(pfn);
+}
+
static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
struct kvm_memory_slot *memslot,
unsigned long fault_status)
@@ -825,7 +830,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
if (is_error_pfn(pfn))
return -EFAULT;
- if (kvm_is_mmio_pfn(pfn))
+ if (kvm_is_device_pfn(pfn))
mem_type = PAGE_S2_DEVICE;
spin_lock(&kvm->mmu_lock);
--
2.1.0
* [PATCH for 3.14.y stable 38/47] arm/arm64: KVM: Don't clear the VCPU_POWER_OFF flag
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
` (36 preceding siblings ...)
2015-05-04 1:52 ` [PATCH for 3.14.y stable 37/47] arm/arm64: kvm: drop inappropriate use of kvm_is_mmio_pfn() shannon.zhao
@ 2015-05-04 1:52 ` shannon.zhao
2015-05-04 1:52 ` [PATCH for 3.14.y stable 39/47] arm/arm64: KVM: Correct KVM_ARM_VCPU_INIT power off option shannon.zhao
` (9 subsequent siblings)
47 siblings, 0 replies; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:52 UTC (permalink / raw)
To: stable; +Cc: gregkh, christoffer.dall, shannon.zhao
From: Christoffer Dall <christoffer.dall@linaro.org>
commit 03f1d4c17edb31b41b14ca3a749ae38d2dd6639d upstream.
If a VCPU was originally started with power off (typically to be brought
up by PSCI in SMP configurations), there is no need to clear the
POWER_OFF flag in the kernel, as this flag is only tested during the
init ioctl itself.
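The distinction matters because __test_and_clear_bit() is a destructive read: a second KVM_ARM_VCPU_INIT would no longer see the flag. A userspace sketch with simplified, non-atomic stand-ins for the kernel bitops:

```c
#include <assert.h>

/* Simplified single-word versions of the kernel's test_bit() and
 * __test_and_clear_bit(), for illustration only. */
static int test_bit(int nr, unsigned long *addr)
{
    return (*addr >> nr) & 1;
}

static int test_and_clear_bit(int nr, unsigned long *addr)
{
    int old = (*addr >> nr) & 1;
    *addr &= ~(1UL << nr);      /* destructive: the flag is gone */
    return old;
}
```

With the non-destructive test_bit(), the POWER_OFF feature bit survives repeated init ioctls, matching the fact that it is only ever consulted there.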
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm/kvm/arm.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index fb9c291..4a7f538 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -678,7 +678,7 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
/*
* Handle the "start in power-off" case by marking the VCPU as paused.
*/
- if (__test_and_clear_bit(KVM_ARM_VCPU_POWER_OFF, vcpu->arch.features))
+ if (test_bit(KVM_ARM_VCPU_POWER_OFF, vcpu->arch.features))
vcpu->arch.pause = true;
return 0;
--
2.1.0
* [PATCH for 3.14.y stable 39/47] arm/arm64: KVM: Correct KVM_ARM_VCPU_INIT power off option
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
` (37 preceding siblings ...)
2015-05-04 1:52 ` [PATCH for 3.14.y stable 38/47] arm/arm64: KVM: Don't clear the VCPU_POWER_OFF flag shannon.zhao
@ 2015-05-04 1:52 ` shannon.zhao
2015-05-04 1:52 ` [PATCH for 3.14.y stable 40/47] arm/arm64: KVM: Reset the HCR on each vcpu when resetting the vcpu shannon.zhao
` (8 subsequent siblings)
47 siblings, 0 replies; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:52 UTC (permalink / raw)
To: stable; +Cc: gregkh, christoffer.dall, shannon.zhao
From: Christoffer Dall <christoffer.dall@linaro.org>
commit 3ad8b3de526a76fbe9466b366059e4958957b88f upstream.
The implementation of KVM_ARM_VCPU_INIT is currently not doing what
userspace expects, namely making sure that a vcpu which may have been
turned off using PSCI is returned to its initial state, which would be
powered on if userspace does not set the KVM_ARM_VCPU_POWER_OFF flag.
Implement the expected functionality and clarify the ABI.
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
Documentation/virtual/kvm/api.txt | 3 ++-
arch/arm/kvm/arm.c | 2 ++
2 files changed, 4 insertions(+), 1 deletion(-)
diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index 6cd63a9..bc6d617 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -2344,7 +2344,8 @@ should be created before this ioctl is invoked.
Possible features:
- KVM_ARM_VCPU_POWER_OFF: Starts the CPU in a power-off state.
- Depends on KVM_CAP_ARM_PSCI.
+ Depends on KVM_CAP_ARM_PSCI. If not set, the CPU will be powered on
+ and execute guest code when KVM_RUN is called.
- KVM_ARM_VCPU_EL1_32BIT: Starts the CPU in a 32bit mode.
Depends on KVM_CAP_ARM_EL1_32BIT (arm64 only).
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 4a7f538..9c58125 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -680,6 +680,8 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
*/
if (test_bit(KVM_ARM_VCPU_POWER_OFF, vcpu->arch.features))
vcpu->arch.pause = true;
+ else
+ vcpu->arch.pause = false;
return 0;
}
--
2.1.0
* [PATCH for 3.14.y stable 40/47] arm/arm64: KVM: Reset the HCR on each vcpu when resetting the vcpu
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
` (38 preceding siblings ...)
2015-05-04 1:52 ` [PATCH for 3.14.y stable 39/47] arm/arm64: KVM: Correct KVM_ARM_VCPU_INIT power off option shannon.zhao
@ 2015-05-04 1:52 ` shannon.zhao
2015-05-04 1:52 ` [PATCH for 3.14.y stable 41/47] arm/arm64: KVM: Introduce stage2_unmap_vm shannon.zhao
` (7 subsequent siblings)
47 siblings, 0 replies; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:52 UTC (permalink / raw)
To: stable; +Cc: gregkh, christoffer.dall, shannon.zhao
From: Christoffer Dall <christoffer.dall@linaro.org>
commit b856a59141b1066d3c896a0d0231f84dabd040af upstream.
When userspace resets the vcpu using KVM_ARM_VCPU_INIT, we should also
reset the HCR, because we now modify the HCR dynamically to
enable/disable trapping of guest accesses to the VM registers.
This is crucial for reboot of VMs working since otherwise we will not be
doing the necessary cache maintenance operations when faulting in pages
with the guest MMU off.
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm/include/asm/kvm_emulate.h | 5 +++++
arch/arm/kvm/arm.c | 2 ++
arch/arm/kvm/guest.c | 1 -
arch/arm64/include/asm/kvm_emulate.h | 5 +++++
arch/arm64/kvm/guest.c | 1 -
5 files changed, 12 insertions(+), 2 deletions(-)
diff --git a/arch/arm/include/asm/kvm_emulate.h b/arch/arm/include/asm/kvm_emulate.h
index 0fa90c9..853e2be 100644
--- a/arch/arm/include/asm/kvm_emulate.h
+++ b/arch/arm/include/asm/kvm_emulate.h
@@ -33,6 +33,11 @@ void kvm_inject_undefined(struct kvm_vcpu *vcpu);
void kvm_inject_dabt(struct kvm_vcpu *vcpu, unsigned long addr);
void kvm_inject_pabt(struct kvm_vcpu *vcpu, unsigned long addr);
+static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
+{
+ vcpu->arch.hcr = HCR_GUEST_MASK;
+}
+
static inline bool vcpu_mode_is_32bit(struct kvm_vcpu *vcpu)
{
return 1;
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 9c58125..077f82d0 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -675,6 +675,8 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
if (ret)
return ret;
+ vcpu_reset_hcr(vcpu);
+
/*
* Handle the "start in power-off" case by marking the VCPU as paused.
*/
diff --git a/arch/arm/kvm/guest.c b/arch/arm/kvm/guest.c
index b23a59c..2786eae 100644
--- a/arch/arm/kvm/guest.c
+++ b/arch/arm/kvm/guest.c
@@ -38,7 +38,6 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu)
{
- vcpu->arch.hcr = HCR_GUEST_MASK;
return 0;
}
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index dd8ecfc3..681cb90 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -38,6 +38,11 @@ void kvm_inject_undefined(struct kvm_vcpu *vcpu);
void kvm_inject_dabt(struct kvm_vcpu *vcpu, unsigned long addr);
void kvm_inject_pabt(struct kvm_vcpu *vcpu, unsigned long addr);
+static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
+{
+ vcpu->arch.hcr_el2 = HCR_GUEST_FLAGS;
+}
+
static inline unsigned long *vcpu_pc(const struct kvm_vcpu *vcpu)
{
return (unsigned long *)&vcpu_gp_regs(vcpu)->regs.pc;
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 0874557..a8d81fa 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -38,7 +38,6 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu)
{
- vcpu->arch.hcr_el2 = HCR_GUEST_FLAGS;
return 0;
}
--
2.1.0
* [PATCH for 3.14.y stable 41/47] arm/arm64: KVM: Introduce stage2_unmap_vm
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
` (39 preceding siblings ...)
2015-05-04 1:52 ` [PATCH for 3.14.y stable 40/47] arm/arm64: KVM: Reset the HCR on each vcpu when resetting the vcpu shannon.zhao
@ 2015-05-04 1:52 ` shannon.zhao
2015-05-04 1:52 ` [PATCH for 3.14.y stable 42/47] arm/arm64: KVM: Don't allow creating VCPUs after vgic_initialized shannon.zhao
` (6 subsequent siblings)
47 siblings, 0 replies; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:52 UTC (permalink / raw)
To: stable; +Cc: gregkh, christoffer.dall, shannon.zhao
From: Christoffer Dall <christoffer.dall@linaro.org>
commit 957db105c99792ae8ef61ffc9ae77d910f6471da upstream.
Introduce a new function to unmap user RAM regions in the stage2 page
tables. This is needed on reboot (or when the guest turns off the MMU)
to ensure we fault in pages again and make the dcache, RAM, and icache
coherent.
Using unmap_stage2_range for the whole guest physical range does not
work, because that unmaps IO regions (such as the GIC) which will not be
recreated or in the best case faulted in on a page-by-page basis.
Call this function on secondary and subsequent calls to the
KVM_ARM_VCPU_INIT ioctl so that a reset VCPU will detect the guest
Stage-1 MMU is off when faulting in pages and make the caches coherent.
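The per-VMA clamping this patch introduces reduces to two min/max operations: intersect each VMA with the memory region, then translate into guest-physical space. A small userspace sketch of that intersection step (names are illustrative, not the kernel's):

```c
#include <assert.h>

typedef unsigned long long addr_t;

static addr_t max_a(addr_t a, addr_t b) { return a > b ? a : b; }
static addr_t min_a(addr_t a, addr_t b) { return a < b ? a : b; }

/* Clamp a VMA [vma_start, vma_end) against the memslot's host range
 * [hva, reg_end), as stage2_unmap_memslot() does before unmapping. */
static void intersect(addr_t hva, addr_t reg_end,
                      addr_t vma_start, addr_t vma_end,
                      addr_t *start, addr_t *end)
{
    *start = max_a(hva, vma_start);
    *end   = min_a(reg_end, vma_end);
}
```

Iterating VMAs this way naturally skips the holes between them and lets VM_PFNMAP (IO) VMAs be excluded before any unmap is issued.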
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm/include/asm/kvm_mmu.h | 1 +
arch/arm/kvm/arm.c | 7 +++++
arch/arm/kvm/mmu.c | 65 ++++++++++++++++++++++++++++++++++++++++
arch/arm64/include/asm/kvm_mmu.h | 1 +
4 files changed, 74 insertions(+)
diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 3f688b4..c02a836 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -47,6 +47,7 @@ int create_hyp_io_mappings(void *from, void *to, phys_addr_t);
void free_boot_hyp_pgd(void);
void free_hyp_pgds(void);
+void stage2_unmap_vm(struct kvm *kvm);
int kvm_alloc_stage2_pgd(struct kvm *kvm);
void kvm_free_stage2_pgd(struct kvm *kvm);
int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 077f82d0..039df03 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -675,6 +675,13 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
if (ret)
return ret;
+ /*
+ * Ensure a rebooted VM will fault in RAM pages and detect if the
+ * guest MMU is turned off and flush the caches as needed.
+ */
+ if (vcpu->arch.has_run_once)
+ stage2_unmap_vm(vcpu->kvm);
+
vcpu_reset_hcr(vcpu);
/*
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index f8c231d..3df0f092 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -556,6 +556,71 @@ static void unmap_stage2_range(struct kvm *kvm, phys_addr_t start, u64 size)
unmap_range(kvm, kvm->arch.pgd, start, size);
}
+static void stage2_unmap_memslot(struct kvm *kvm,
+ struct kvm_memory_slot *memslot)
+{
+ hva_t hva = memslot->userspace_addr;
+ phys_addr_t addr = memslot->base_gfn << PAGE_SHIFT;
+ phys_addr_t size = PAGE_SIZE * memslot->npages;
+ hva_t reg_end = hva + size;
+
+ /*
+ * A memory region could potentially cover multiple VMAs, and any holes
+ * between them, so iterate over all of them to find out if we should
+ * unmap any of them.
+ *
+ * +--------------------------------------------+
+ * +---------------+----------------+ +----------------+
+ * | : VMA 1 | VMA 2 | | VMA 3 : |
+ * +---------------+----------------+ +----------------+
+ * | memory region |
+ * +--------------------------------------------+
+ */
+ do {
+ struct vm_area_struct *vma = find_vma(current->mm, hva);
+ hva_t vm_start, vm_end;
+
+ if (!vma || vma->vm_start >= reg_end)
+ break;
+
+ /*
+ * Take the intersection of this VMA with the memory region
+ */
+ vm_start = max(hva, vma->vm_start);
+ vm_end = min(reg_end, vma->vm_end);
+
+ if (!(vma->vm_flags & VM_PFNMAP)) {
+ gpa_t gpa = addr + (vm_start - memslot->userspace_addr);
+ unmap_stage2_range(kvm, gpa, vm_end - vm_start);
+ }
+ hva = vm_end;
+ } while (hva < reg_end);
+}
+
+/**
+ * stage2_unmap_vm - Unmap Stage-2 RAM mappings
+ * @kvm: The struct kvm pointer
+ *
+ * Go through the memregions and unmap any regular RAM
+ * backing memory already mapped to the VM.
+ */
+void stage2_unmap_vm(struct kvm *kvm)
+{
+ struct kvm_memslots *slots;
+ struct kvm_memory_slot *memslot;
+ int idx;
+
+ idx = srcu_read_lock(&kvm->srcu);
+ spin_lock(&kvm->mmu_lock);
+
+ slots = kvm_memslots(kvm);
+ kvm_for_each_memslot(memslot, slots)
+ stage2_unmap_memslot(kvm, memslot);
+
+ spin_unlock(&kvm->mmu_lock);
+ srcu_read_unlock(&kvm->srcu, idx);
+}
+
/**
* kvm_free_stage2_pgd - free all stage-2 tables
* @kvm: The KVM struct pointer for the VM.
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index a030d16..0d51874 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -74,6 +74,7 @@ int create_hyp_io_mappings(void *from, void *to, phys_addr_t);
void free_boot_hyp_pgd(void);
void free_hyp_pgds(void);
+void stage2_unmap_vm(struct kvm *kvm);
int kvm_alloc_stage2_pgd(struct kvm *kvm);
void kvm_free_stage2_pgd(struct kvm *kvm);
int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
--
2.1.0
* [PATCH for 3.14.y stable 42/47] arm/arm64: KVM: Don't allow creating VCPUs after vgic_initialized
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
` (40 preceding siblings ...)
2015-05-04 1:52 ` [PATCH for 3.14.y stable 41/47] arm/arm64: KVM: Introduce stage2_unmap_vm shannon.zhao
@ 2015-05-04 1:52 ` shannon.zhao
2015-05-04 1:52 ` [PATCH for 3.14.y stable 43/47] arm/arm64: KVM: Require in-kernel vgic for the arch timers shannon.zhao
` (5 subsequent siblings)
47 siblings, 0 replies; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:52 UTC (permalink / raw)
To: stable; +Cc: gregkh, christoffer.dall, shannon.zhao
From: Christoffer Dall <christoffer.dall@linaro.org>
commit 716139df2517fbc3f2306dbe8eba0fa88dca0189 upstream.
When the vgic initializes its internal state it does so based on the
number of VCPUs available at the time. If we allow KVM to create more
VCPUs after the VGIC has been initialized, we are likely to error out in
unfortunate ways later, perform buffer overflows etc.
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Eric Auger <eric.auger@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm/kvm/arm.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 039df03..2e74a61 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -220,6 +220,11 @@ struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm, unsigned int id)
int err;
struct kvm_vcpu *vcpu;
+ if (irqchip_in_kernel(kvm) && vgic_initialized(kvm)) {
+ err = -EBUSY;
+ goto out;
+ }
+
vcpu = kmem_cache_zalloc(kvm_vcpu_cache, GFP_KERNEL);
if (!vcpu) {
err = -ENOMEM;
--
2.1.0
* [PATCH for 3.14.y stable 43/47] arm/arm64: KVM: Require in-kernel vgic for the arch timers
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
` (41 preceding siblings ...)
2015-05-04 1:52 ` [PATCH for 3.14.y stable 42/47] arm/arm64: KVM: Don't allow creating VCPUs after vgic_initialized shannon.zhao
@ 2015-05-04 1:52 ` shannon.zhao
2015-05-04 1:52 ` [PATCH for 3.14.y stable 44/47] arm64: KVM: Fix TLB invalidation by IPA/VMID shannon.zhao
` (4 subsequent siblings)
47 siblings, 0 replies; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:52 UTC (permalink / raw)
To: stable; +Cc: gregkh, christoffer.dall, shannon.zhao
From: Christoffer Dall <christoffer.dall@linaro.org>
commit 05971120fca43e0357789a14b3386bb56eef2201 upstream.
It is currently possible to run a VM with architected timers support
without creating an in-kernel VGIC, which will result in interrupts from
the virtual timer going nowhere.
To address this issue, move the architected timers initialization to the
time when we run a VCPU for the first time, and then only initialize
(and enable) the architected timers if we have a properly created and
initialized in-kernel VGIC.
When injecting interrupts from the virtual timer to the vgic, the
current setup should ensure that this never calls an on-demand init of
the VGIC, which is the only call path that could return an error from
kvm_vgic_inject_irq(), so capture the return value and raise a warning
if there's an error there.
We also change the kvm_timer_init() function from returning an int to be
a void function, since the function always succeeds.
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm/kvm/arm.c | 9 +++++++++
include/kvm/arm_arch_timer.h | 10 ++++------
virt/kvm/arm/arch_timer.c | 30 ++++++++++++++++++++++--------
3 files changed, 35 insertions(+), 14 deletions(-)
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 2e74a61..9b67f6d6 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -441,6 +441,7 @@ static void update_vttbr(struct kvm *kvm)
static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
{
+ struct kvm *kvm = vcpu->kvm;
int ret;
if (likely(vcpu->arch.has_run_once))
@@ -458,6 +459,14 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
return ret;
}
+ /*
+ * Enable the arch timers only if we have an in-kernel VGIC
+ * and it has been properly initialized, since we cannot handle
+ * interrupts from the virtual timer with a userspace gic.
+ */
+ if (irqchip_in_kernel(kvm) && vgic_initialized(kvm))
+ kvm_timer_enable(kvm);
+
return 0;
}
diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
index 6d9aedd..327b155 100644
--- a/include/kvm/arm_arch_timer.h
+++ b/include/kvm/arm_arch_timer.h
@@ -60,7 +60,8 @@ struct arch_timer_cpu {
#ifdef CONFIG_KVM_ARM_TIMER
int kvm_timer_hyp_init(void);
-int kvm_timer_init(struct kvm *kvm);
+void kvm_timer_enable(struct kvm *kvm);
+void kvm_timer_init(struct kvm *kvm);
void kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
const struct kvm_irq_level *irq);
void kvm_timer_vcpu_init(struct kvm_vcpu *vcpu);
@@ -73,11 +74,8 @@ static inline int kvm_timer_hyp_init(void)
return 0;
};
-static inline int kvm_timer_init(struct kvm *kvm)
-{
- return 0;
-}
-
+static inline void kvm_timer_enable(struct kvm *kvm) {}
+static inline void kvm_timer_init(struct kvm *kvm) {}
static inline void kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
const struct kvm_irq_level *irq) {}
static inline void kvm_timer_vcpu_init(struct kvm_vcpu *vcpu) {}
diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
index 5081e80..c6fe405 100644
--- a/virt/kvm/arm/arch_timer.c
+++ b/virt/kvm/arm/arch_timer.c
@@ -61,12 +61,14 @@ static void timer_disarm(struct arch_timer_cpu *timer)
static void kvm_timer_inject_irq(struct kvm_vcpu *vcpu)
{
+ int ret;
struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
timer->cntv_ctl |= ARCH_TIMER_CTRL_IT_MASK;
- kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
- timer->irq->irq,
- timer->irq->level);
+ ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
+ timer->irq->irq,
+ timer->irq->level);
+ WARN_ON(ret);
}
static irqreturn_t kvm_arch_timer_handler(int irq, void *dev_id)
@@ -307,12 +309,24 @@ void kvm_timer_vcpu_terminate(struct kvm_vcpu *vcpu)
timer_disarm(timer);
}
-int kvm_timer_init(struct kvm *kvm)
+void kvm_timer_enable(struct kvm *kvm)
{
- if (timecounter && wqueue) {
- kvm->arch.timer.cntvoff = kvm_phys_timer_read();
+ if (kvm->arch.timer.enabled)
+ return;
+
+ /*
+ * There is a potential race here between VCPUs starting for the first
+ * time, which may be enabling the timer multiple times. That doesn't
+ * hurt though, because we're just setting a variable to the same
+ * variable that it already was. The important thing is that all
+ * VCPUs have the enabled variable set, before entering the guest, if
+ * the arch timers are enabled.
+ */
+ if (timecounter && wqueue)
kvm->arch.timer.enabled = 1;
- }
+}
- return 0;
+void kvm_timer_init(struct kvm *kvm)
+{
+ kvm->arch.timer.cntvoff = kvm_phys_timer_read();
}
--
2.1.0
* [PATCH for 3.14.y stable 44/47] arm64: KVM: Fix TLB invalidation by IPA/VMID
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
` (42 preceding siblings ...)
2015-05-04 1:52 ` [PATCH for 3.14.y stable 43/47] arm/arm64: KVM: Require in-kernel vgic for the arch timers shannon.zhao
@ 2015-05-04 1:52 ` shannon.zhao
2015-05-04 1:52 ` [PATCH for 3.14.y stable 45/47] arm64: KVM: Fix HCR setting for 32bit guests shannon.zhao
` (3 subsequent siblings)
47 siblings, 0 replies; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:52 UTC (permalink / raw)
To: stable; +Cc: gregkh, christoffer.dall, shannon.zhao, Marc Zyngier,
Paolo Bonzini
From: Marc Zyngier <marc.zyngier@arm.com>
commit 55e858b75808347378e5117c3c2339f46cc03575 upstream.
It took about two years for someone to notice that the IPA passed
to TLBI IPAS2E1IS must be shifted by 12 bits. Clearly our reviewing
is not as good as it should be...
Paper bag time for me.
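In other words, the register operand of TLBI IPAS2E1IS carries the IPA's page number (the IPA shifted right by the 12-bit 4kB page offset), not the raw byte address — which is exactly what the added lsr performs before the tlbi. A trivial sketch:

```c
#include <assert.h>

/* Compute the value that must be placed in the register operand of
 * TLBI IPAS2E1IS: bits [51:12] of the IPA, i.e. IPA >> 12. */
static unsigned long long tlbi_operand(unsigned long long ipa)
{
    return ipa >> 12;
}
```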
Reported-by: Mario Smarduch <m.smarduch@samsung.com>
Tested-by: Mario Smarduch <m.smarduch@samsung.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm64/kvm/hyp.S | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S
index 5dfc8331..3aaf3bc 100644
--- a/arch/arm64/kvm/hyp.S
+++ b/arch/arm64/kvm/hyp.S
@@ -629,6 +629,7 @@ ENTRY(__kvm_tlb_flush_vmid_ipa)
* Instead, we invalidate Stage-2 for this IPA, and the
* whole of Stage-1. Weep...
*/
+ lsr x1, x1, #12
tlbi ipas2e1is, x1
/*
* We have to ensure completion of the invalidation at Stage-2,
--
2.1.0
* [PATCH for 3.14.y stable 45/47] arm64: KVM: Fix HCR setting for 32bit guests
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
` (43 preceding siblings ...)
2015-05-04 1:52 ` [PATCH for 3.14.y stable 44/47] arm64: KVM: Fix TLB invalidation by IPA/VMID shannon.zhao
@ 2015-05-04 1:52 ` shannon.zhao
2015-05-04 1:52 ` [PATCH for 3.14.y stable 46/47] arm64: KVM: Do not use pgd_index to index stage-2 pgd shannon.zhao
` (2 subsequent siblings)
47 siblings, 0 replies; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:52 UTC (permalink / raw)
To: stable; +Cc: gregkh, christoffer.dall, shannon.zhao, Marc Zyngier,
Paolo Bonzini
From: Marc Zyngier <marc.zyngier@arm.com>
commit 801f6772cecea6cfc7da61aa197716ab64db5f9e upstream.
Commit b856a59141b1 (arm/arm64: KVM: Reset the HCR on each vcpu
when resetting the vcpu) moved the init of the HCR register to
happen later in the init of a vcpu, but left out the fixup
done in kvm_reset_vcpu when preparing for a 32bit guest.
As a result, the 32bit guest is run as a 64bit guest, but the
rest of the kernel still manages it as a 32bit guest. Fun follows.
Moving the fixup to vcpu_reset_hcr solves the problem for good.
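The effect of the moved fixup can be sketched in userspace. HCR_EL2.RW is bit 31 on arm64, but the baseline guest-flags value below is made up purely for illustration:

```c
#include <assert.h>

#define HCR_RW       (1ULL << 31)            /* EL1 execution state: 1 = AArch64 */
#define GUEST_FLAGS  (HCR_RW | 0x3ULL)       /* hypothetical default incl. RW */

/* Mirrors vcpu_reset_hcr() after this patch: start from the default
 * flags, then clear RW when the vcpu was created with EL1_32BIT. */
static unsigned long long reset_hcr(int el1_is_32bit)
{
    unsigned long long hcr = GUEST_FLAGS;
    if (el1_is_32bit)
        hcr &= ~HCR_RW;                      /* RW clear => EL1 is AArch32 */
    return hcr;
}
```

Since every vcpu reset now goes through this one function, the 32bit fixup can no longer be lost by a later HCR re-initialization.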
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm64/include/asm/kvm_emulate.h | 2 ++
arch/arm64/kvm/reset.c | 1 -
2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 681cb90..91f33c2 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -41,6 +41,8 @@ void kvm_inject_pabt(struct kvm_vcpu *vcpu, unsigned long addr);
static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
{
vcpu->arch.hcr_el2 = HCR_GUEST_FLAGS;
+ if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features))
+ vcpu->arch.hcr_el2 &= ~HCR_RW;
}
static inline unsigned long *vcpu_pc(const struct kvm_vcpu *vcpu)
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index 70a7816..0b43265 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -90,7 +90,6 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
if (!cpu_has_32bit_el1())
return -EINVAL;
cpu_reset = &default_regs_reset32;
- vcpu->arch.hcr_el2 &= ~HCR_RW;
} else {
cpu_reset = &default_regs_reset;
}
--
2.1.0
* [PATCH for 3.14.y stable 46/47] arm64: KVM: Do not use pgd_index to index stage-2 pgd
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
` (44 preceding siblings ...)
2015-05-04 1:52 ` [PATCH for 3.14.y stable 45/47] arm64: KVM: Fix HCR setting for 32bit guests shannon.zhao
@ 2015-05-04 1:52 ` shannon.zhao
2015-05-04 1:52 ` [PATCH for 3.14.y stable 47/47] arm/arm64: KVM: Keep elrsr/aisr in sync with software model shannon.zhao
2015-05-11 9:39 ` [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel Shannon Zhao
47 siblings, 0 replies; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:52 UTC (permalink / raw)
To: stable; +Cc: gregkh, christoffer.dall, shannon.zhao, Marc Zyngier
From: Marc Zyngier <marc.zyngier@arm.com>
commit 04b8dc85bf4a64517e3cf20e409eeaa503b15cc1 upstream.
The kernel's pgd_index macro is designed to index a normal, page
sized array. KVM is a bit different, as we can use concatenated
pages to have a bigger address space (for example, a 40bit IPA with
4kB pages gives us an 8kB PGD).
In the above case, the use of pgd_index will always return an index
inside the first 4kB, which makes a guest that has memory above
0x8000000000 rather unhappy, as it spins forever in a page fault,
whilst the host happily corrupts the lower pgd.
The obvious fix is to get our own kvm_pgd_index that does the right
thing(tm).
Tested on X-Gene with a hacked kvmtool that put memory at a stupidly
high address.
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
arch/arm/include/asm/kvm_mmu.h | 3 ++-
arch/arm/kvm/mmu.c | 6 +++---
arch/arm64/include/asm/kvm_mmu.h | 2 ++
3 files changed, 7 insertions(+), 4 deletions(-)
diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index c02a836..d352f1a5 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -117,13 +117,14 @@ static inline void kvm_set_s2pmd_writable(pmd_t *pmd)
(__boundary - 1 < (end) - 1)? __boundary: (end); \
})
+#define kvm_pgd_index(addr) pgd_index(addr)
+
static inline bool kvm_page_empty(void *ptr)
{
struct page *ptr_page = virt_to_page(ptr);
return page_count(ptr_page) == 1;
}
-
#define kvm_pte_table_empty(ptep) kvm_page_empty(ptep)
#define kvm_pmd_table_empty(pmdp) kvm_page_empty(pmdp)
#define kvm_pud_table_empty(pudp) (0)
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 3df0f092..03ab5cc 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -194,7 +194,7 @@ static void unmap_range(struct kvm *kvm, pgd_t *pgdp,
phys_addr_t addr = start, end = start + size;
phys_addr_t next;
- pgd = pgdp + pgd_index(addr);
+ pgd = pgdp + kvm_pgd_index(addr);
do {
next = kvm_pgd_addr_end(addr, end);
if (!pgd_none(*pgd))
@@ -264,7 +264,7 @@ static void stage2_flush_memslot(struct kvm *kvm,
phys_addr_t next;
pgd_t *pgd;
- pgd = kvm->arch.pgd + pgd_index(addr);
+ pgd = kvm->arch.pgd + kvm_pgd_index(addr);
do {
next = kvm_pgd_addr_end(addr, end);
stage2_flush_puds(kvm, pgd, addr, next);
@@ -649,7 +649,7 @@ static pmd_t *stage2_get_pmd(struct kvm *kvm, struct kvm_mmu_memory_cache *cache
pud_t *pud;
pmd_t *pmd;
- pgd = kvm->arch.pgd + pgd_index(addr);
+ pgd = kvm->arch.pgd + kvm_pgd_index(addr);
pud = pud_offset(pgd, addr);
if (pud_none(*pud)) {
if (!cache)
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 0d51874..15a8a86 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -69,6 +69,8 @@
#define PTRS_PER_S2_PGD (1 << (KVM_PHYS_SHIFT - PGDIR_SHIFT))
#define S2_PGD_ORDER get_order(PTRS_PER_S2_PGD * sizeof(pgd_t))
+#define kvm_pgd_index(addr) (((addr) >> PGDIR_SHIFT) & (PTRS_PER_S2_PGD - 1))
+
int create_hyp_mappings(void *from, void *to);
int create_hyp_io_mappings(void *from, void *to, phys_addr_t);
void free_boot_hyp_pgd(void);
--
2.1.0
* [PATCH for 3.14.y stable 47/47] arm/arm64: KVM: Keep elrsr/aisr in sync with software model
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
` (45 preceding siblings ...)
2015-05-04 1:52 ` [PATCH for 3.14.y stable 46/47] arm64: KVM: Do not use pgd_index to index stage-2 pgd shannon.zhao
@ 2015-05-04 1:52 ` shannon.zhao
2015-05-11 15:17 ` Greg KH
2015-05-11 9:39 ` [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel Shannon Zhao
47 siblings, 1 reply; 58+ messages in thread
From: shannon.zhao @ 2015-05-04 1:52 UTC (permalink / raw)
To: stable
Cc: gregkh, christoffer.dall, shannon.zhao, Marc Zyngier,
Alex Bennée
From: Christoffer Dall <christoffer.dall@linaro.org>
commit ae705930fca6322600690df9dc1c7d0516145a93 upstream.
There is an interesting bug in the vgic code, which manifests itself
when the KVM run loop has a signal pending or needs a vmid generation
rollover after having disabled interrupts but before actually switching
to the guest.
In this case, we flush the vgic as usual, but we sync back the vgic
state and exit to userspace before entering the guest. The consequence
is that we will be syncing the list registers back to the software model
using the GICH_ELRSR and GICH_EISR from the last execution of the guest,
potentially overwriting a list register containing an interrupt.
This showed up during migration testing where we would capture a state
where the VM has masked the arch timer but there were no interrupts,
resulting in a hung test.
Cc: Marc Zyngier <marc.zyngier@arm.com>
Reported-by: Alex Bennee <alex.bennee@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
virt/kvm/arm/vgic.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
index c324a52..152ec76 100644
--- a/virt/kvm/arm/vgic.c
+++ b/virt/kvm/arm/vgic.c
@@ -1042,6 +1042,7 @@ static bool vgic_queue_irq(struct kvm_vcpu *vcpu, u8 sgi_source_id, int irq)
lr, irq, vgic_cpu->vgic_lr[lr]);
BUG_ON(!test_bit(lr, vgic_cpu->lr_used));
vgic_cpu->vgic_lr[lr] |= GICH_LR_PENDING_BIT;
+ __clear_bit(lr, (unsigned long *)vgic_cpu->vgic_elrsr);
return true;
}
@@ -1055,6 +1056,7 @@ static bool vgic_queue_irq(struct kvm_vcpu *vcpu, u8 sgi_source_id, int irq)
vgic_cpu->vgic_lr[lr] = MK_LR_PEND(sgi_source_id, irq);
vgic_cpu->vgic_irq_lr_map[irq] = lr;
set_bit(lr, vgic_cpu->lr_used);
+ __clear_bit(lr, (unsigned long *)vgic_cpu->vgic_elrsr);
if (!vgic_irq_is_edge(vcpu, irq))
vgic_cpu->vgic_lr[lr] |= GICH_LR_EOI;
@@ -1209,6 +1211,14 @@ static bool vgic_process_maintenance(struct kvm_vcpu *vcpu)
if (vgic_cpu->vgic_misr & GICH_MISR_U)
vgic_cpu->vgic_hcr &= ~GICH_HCR_UIE;
+ /*
+ * In the next iterations of the vcpu loop, if we sync the vgic state
+ * after flushing it, but before entering the guest (this happens for
+ * pending signals and vmid rollovers), then make sure we don't pick
+ * up any old maintenance interrupts here.
+ */
+ memset(vgic_cpu->vgic_eisr, 0, sizeof(vgic_cpu->vgic_eisr[0]) * 2);
+
return level_pending;
}
--
2.1.0
* Re: [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel
2015-05-04 1:51 [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel shannon.zhao
` (46 preceding siblings ...)
2015-05-04 1:52 ` [PATCH for 3.14.y stable 47/47] arm/arm64: KVM: Keep elrsr/aisr in sync with software model shannon.zhao
@ 2015-05-11 9:39 ` Shannon Zhao
2015-05-11 12:21 ` Greg KH
47 siblings, 1 reply; 58+ messages in thread
From: Shannon Zhao @ 2015-05-11 9:39 UTC (permalink / raw)
To: stable; +Cc: gregkh, christoffer.dall
Ping?
On 2015/5/4 9:51, shannon.zhao@linaro.org wrote:
> From: Shannon Zhao <shannon.zhao@linaro.org>
>
> For KVM/ARM there are many fixes which have been applied upstream while
> not committed to stable kernels. Here we backport the important fixes
> to 3.14.y stable kernel.
>
> We have compile-tested each patch on arm/arm64/x86 to make sure the
> series are bisectable and have booted the resulting kernel on Fastmodel
> and started 2 VMs for arm/arm64, and have boot-tested on TC2 and
> started a guest.
>
> These patches are applied on the top of 3.14.40. They can be fetched
> from following address:
> https://git.linaro.org/people/shannon.zhao/linux-stable.git linux-3.14.y
>
> Thanks,
> Shannon
>
> Alex Bennée (1):
> arm64: KVM: export demux regids as KVM_REG_ARM64
>
> Andre Przywara (1):
> KVM: arm/arm64: vgic: fix GICD_ICFGR register accesses
>
> Ard Biesheuvel (3):
> ARM/arm64: KVM: fix use of WnR bit in kvm_is_write_fault()
> arm/arm64: KVM: fix potential NULL dereference in user_mem_abort()
> arm/arm64: kvm: drop inappropriate use of kvm_is_mmio_pfn()
>
> Christoffer Dall (11):
> arm/arm64: KVM: Fix and refactor unmap_range
> arm/arm64: KVM: Fix set_clear_sgi_pend_reg offset
> arm/arm64: KVM: Ensure memslots are within KVM_PHYS_SIZE
> arm/arm64: KVM: vgic: Fix error code in kvm_vgic_create()
> arm/arm64: KVM: Don't clear the VCPU_POWER_OFF flag
> arm/arm64: KVM: Correct KVM_ARM_VCPU_INIT power off option
> arm/arm64: KVM: Reset the HCR on each vcpu when resetting the vcpu
> arm/arm64: KVM: Introduce stage2_unmap_vm
> arm/arm64: KVM: Don't allow creating VCPUs after vgic_initialized
> arm/arm64: KVM: Require in-kernel vgic for the arch timers
> arm/arm64: KVM: Keep elrsr/aisr in sync with software model
>
> Eric Auger (1):
> ARM: KVM: Unmap IPA on memslot delete/move
>
> Geoff Levand (1):
> arm64/kvm: Fix assembler compatibility of macros
>
> Haibin Wang (1):
> KVM: ARM: vgic: Fix the overlap check action about setting the GICD &
> GICC base address.
>
> Joel Schopp (1):
> arm/arm64: KVM: Fix VTTBR_BADDR_MASK and pgd alloc
>
> Kim Phillips (1):
> ARM: KVM: user_mem_abort: support stage 2 MMIO page mapping
>
> Li Liu (1):
> ARM: virt: fix wrong HSCTLR.EE bit setting
>
> Marc Zyngier (15):
> arm64: KVM: force cache clean on page fault when caches are off
> arm64: KVM: allows discrimination of AArch32 sysreg access
> arm64: KVM: trap VM system registers until MMU and caches are ON
> ARM: KVM: introduce kvm_p*d_addr_end
> arm64: KVM: flush VM pages before letting the guest enable caches
> ARM: KVM: force cache clean on page fault when caches are off
> ARM: KVM: fix handling of trapped 64bit coprocessor accesses
> ARM: KVM: fix ordering of 64bit coprocessor accesses
> ARM: KVM: introduce per-vcpu HYP Configuration Register
> ARM: KVM: add world-switch for AMAIR{0,1}
> ARM: KVM: trap VM system registers until MMU and caches are ON
> KVM: ARM: vgic: plug irq injection race
> arm64: KVM: Fix TLB invalidation by IPA/VMID
> arm64: KVM: Fix HCR setting for 32bit guests
> arm64: KVM: Do not use pgd_index to index stage-2 pgd
>
> Mark Rutland (1):
> arm64: KVM: fix unmapping with 48-bit VAs
>
> Steve Capper (1):
> arm: kvm: STRICT_MM_TYPECHECKS fix for user_mem_abort
>
> Victor Kamensky (1):
> ARM64: KVM: store kvm_vcpu_fault_info est_el2 as word
>
> Vladimir Murzin (1):
> arm: kvm: fix CPU hotplug
>
> Will Deacon (6):
> arm64: kvm: use inner-shareable barriers for inner-shareable
> maintenance
> kvm: arm64: vgic: fix hyp panic with 64k pages on juno platform
> KVM: ARM/arm64: fix non-const declaration of function returning const
> KVM: ARM/arm64: fix broken __percpu annotation
> KVM: ARM/arm64: avoid returning negative error code as bool
> KVM: vgic: return int instead of bool when checking I/O ranges
>
> Documentation/virtual/kvm/api.txt | 3 +-
> arch/arm/include/asm/kvm_arm.h | 4 +-
> arch/arm/include/asm/kvm_asm.h | 4 +-
> arch/arm/include/asm/kvm_emulate.h | 5 +
> arch/arm/include/asm/kvm_host.h | 11 +-
> arch/arm/include/asm/kvm_mmu.h | 55 +++--
> arch/arm/kernel/asm-offsets.c | 1 +
> arch/arm/kernel/hyp-stub.S | 4 +-
> arch/arm/kvm/arm.c | 73 +++----
> arch/arm/kvm/coproc.c | 86 ++++++--
> arch/arm/kvm/coproc.h | 14 +-
> arch/arm/kvm/coproc_a15.c | 2 +-
> arch/arm/kvm/coproc_a7.c | 2 +-
> arch/arm/kvm/interrupts_head.S | 21 +-
> arch/arm/kvm/mmu.c | 398 ++++++++++++++++++++++++++++-------
> arch/arm64/include/asm/kvm_arm.h | 35 ++-
> arch/arm64/include/asm/kvm_asm.h | 3 +-
> arch/arm64/include/asm/kvm_emulate.h | 7 +
> arch/arm64/include/asm/kvm_host.h | 4 +-
> arch/arm64/include/asm/kvm_mmu.h | 58 +++--
> arch/arm64/kvm/guest.c | 1 -
> arch/arm64/kvm/hyp.S | 15 +-
> arch/arm64/kvm/reset.c | 1 -
> arch/arm64/kvm/sys_regs.c | 103 +++++++--
> arch/arm64/kvm/sys_regs.h | 2 +
> include/kvm/arm_arch_timer.h | 10 +-
> virt/kvm/arm/arch_timer.c | 30 ++-
> virt/kvm/arm/vgic.c | 65 ++++--
> 28 files changed, 756 insertions(+), 261 deletions(-)
>
--
Shannon
* Re: [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel
2015-05-11 9:39 ` [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel Shannon Zhao
@ 2015-05-11 12:21 ` Greg KH
2015-05-11 15:03 ` Shannon Zhao
2015-05-11 15:20 ` Greg KH
0 siblings, 2 replies; 58+ messages in thread
From: Greg KH @ 2015-05-11 12:21 UTC (permalink / raw)
To: Shannon Zhao; +Cc: stable, christoffer.dall
On Mon, May 11, 2015 at 05:39:44PM +0800, Shannon Zhao wrote:
> Ping?
Large series of backports usually take me a while to get to as they are
outside of my "normal" workflow. Usually they take a few months to get
into the tree, waiting for a "slack time". I still have a number of
other series that have yet to be merged that were sent a long time
before yours. Please be patient.
greg k-h
* Re: [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel
2015-05-11 12:21 ` Greg KH
@ 2015-05-11 15:03 ` Shannon Zhao
2015-05-11 15:20 ` Greg KH
1 sibling, 0 replies; 58+ messages in thread
From: Shannon Zhao @ 2015-05-11 15:03 UTC (permalink / raw)
To: Greg KH; +Cc: stable, christoffer.dall
On 2015/5/11 20:21, Greg KH wrote:
> On Mon, May 11, 2015 at 05:39:44PM +0800, Shannon Zhao wrote:
>> >Ping?
> Large series of backports usually take me a while to get to as they are
> outside of my "normal" workflow. Usually they take a few months to get
> into the tree, waiting for a "slack time". I still have a number of
> other series that have yet to be merged that were sent a long time
> before yours. Please be patient.
Hi Greg,
Thanks for your reply. I just want to check whether these patches are on
your list.
Thanks,
Shannon
* Re: [PATCH for 3.14.y stable 47/47] arm/arm64: KVM: Keep elrsr/aisr in sync with software model
2015-05-04 1:52 ` [PATCH for 3.14.y stable 47/47] arm/arm64: KVM: Keep elrsr/aisr in sync with software model shannon.zhao
@ 2015-05-11 15:17 ` Greg KH
2015-05-11 18:44 ` Christoffer Dall
0 siblings, 1 reply; 58+ messages in thread
From: Greg KH @ 2015-05-11 15:17 UTC (permalink / raw)
To: shannon.zhao; +Cc: stable, christoffer.dall, Marc Zyngier, Alex Bennée
On Mon, May 04, 2015 at 09:52:42AM +0800, shannon.zhao@linaro.org wrote:
> From: Christoffer Dall <christoffer.dall@linaro.org>
>
> commit ae705930fca6322600690df9dc1c7d0516145a93 upstream.
No, that's not what the patch below really is.
Do I have to go back by hand and verify each one of these really is the
patch you say it is? That's a major pain...
* Re: [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel
2015-05-11 12:21 ` Greg KH
2015-05-11 15:03 ` Shannon Zhao
@ 2015-05-11 15:20 ` Greg KH
2015-05-11 18:48 ` Christoffer Dall
1 sibling, 1 reply; 58+ messages in thread
From: Greg KH @ 2015-05-11 15:20 UTC (permalink / raw)
To: Shannon Zhao; +Cc: stable, christoffer.dall
On Mon, May 11, 2015 at 05:21:51AM -0700, Greg KH wrote:
> On Mon, May 11, 2015 at 05:39:44PM +0800, Shannon Zhao wrote:
> > Ping?
>
> Large series of backports usually take me a while to get to as they are
> outside of my "normal" workflow. Usually they take a few months to get
> into the tree, waiting for a "slack time". I still have a number of
> other series that have yet to be merged that were sent a long time
> before yours. Please be patient.
Actually, why aren't these being marked for -stable in the first place?
Going back and adding them "by hand" like this is a big pain, especially
when I have to hand-verify each git commit id, as the first one I looked
at is incorrect and now I don't trust any of them in the series.
Please work with the "normal" stable kernel workflow and mark the
patches properly so that you don't have to do this extra work, and I
don't either.
Right now I'm going to just dump this whole series from my queue.
Please just give me a series of git commit ids that should be applied to
the 3.14-stable kernel tree, and in what order they should be applied
in. If any need to be backported differently, please send those as a
separate series, and I will get to them at a different time, as that is
a lot more work having to hand-verify everything.
greg k-h
* Re: [PATCH for 3.14.y stable 47/47] arm/arm64: KVM: Keep elrsr/aisr in sync with software model
2015-05-11 15:17 ` Greg KH
@ 2015-05-11 18:44 ` Christoffer Dall
2015-05-11 18:47 ` Greg KH
0 siblings, 1 reply; 58+ messages in thread
From: Christoffer Dall @ 2015-05-11 18:44 UTC (permalink / raw)
To: Greg KH; +Cc: shannon.zhao, stable, Marc Zyngier, Alex Bennée
Hi Greg,
On Mon, May 11, 2015 at 08:17:07AM -0700, Greg KH wrote:
> On Mon, May 04, 2015 at 09:52:42AM +0800, shannon.zhao@linaro.org wrote:
> > From: Christoffer Dall <christoffer.dall@linaro.org>
> >
> > commit ae705930fca6322600690df9dc1c7d0516145a93 upstream.
>
> No, that's not what the patch below really is.
>
> Do I have to go back by hand and verify each one of these really is the
> patch you say it is? That's a major pain...
>
This is a backport of the referenced commit, but it couldn't be applied
directly because of the churn in the vgic code.
I believed that the "commit X upstream" notation would indicate the
equivalent fix upstream, not the *exact* commit for the relevant stable
kernel.
Apologies if that was an incorrect assumption. I believe this is the
only patch which was significantly rewritten because of the churn in the
vgic code and enough users are seeing this in various distro kernels
that I figured it was important to refactor and backport.
What would you like me to do with this patch?
Note that I didn't understand that this is wrong from reading
Documentation/stable_kernel_rules.txt, which may just be because I'm
being stupid. Is the procedure that we're violating documented
somewhere?
Thanks,
-Christoffer
* Re: [PATCH for 3.14.y stable 47/47] arm/arm64: KVM: Keep elrsr/aisr in sync with software model
2015-05-11 18:44 ` Christoffer Dall
@ 2015-05-11 18:47 ` Greg KH
0 siblings, 0 replies; 58+ messages in thread
From: Greg KH @ 2015-05-11 18:47 UTC (permalink / raw)
To: Christoffer Dall; +Cc: shannon.zhao, stable, Marc Zyngier, Alex Bennée
On Mon, May 11, 2015 at 08:44:13PM +0200, Christoffer Dall wrote:
> Hi Greg,
>
> On Mon, May 11, 2015 at 08:17:07AM -0700, Greg KH wrote:
> > On Mon, May 04, 2015 at 09:52:42AM +0800, shannon.zhao@linaro.org wrote:
> > > From: Christoffer Dall <christoffer.dall@linaro.org>
> > >
> > > commit ae705930fca6322600690df9dc1c7d0516145a93 upstream.
> >
> > No, that's not what the patch below really is.
> >
> > Do I have to go back by hand and verify each one of these really is the
> > patch you say it is? That's a major pain...
> >
> This is a backport of the referenced commit, but it couldn't be applied
> directly because of the churn in the vgic code.
>
> I believed that the commit X upstream notation would indicate the
> equivalent fix upstream, not the *exact* commit for the relevant stable
> kernel.
>
> Apologies if that was an incorrect assumption. I believe this is the
> only patch which was significantly rewritten because of the churn in the
> vgic code and enough users are seeing this in various distro kernels
> that I figured it was important to refactor and backport.
>
> What would you like me to do with this patch?
You need to document the heck out of the fact that this looks very
different from the git commit id that you are referencing here. Not
saying anything makes me assume something went wrong.
thanks,
greg k-h
* Re: [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel
2015-05-11 15:20 ` Greg KH
@ 2015-05-11 18:48 ` Christoffer Dall
2015-05-11 20:07 ` Greg KH
0 siblings, 1 reply; 58+ messages in thread
From: Christoffer Dall @ 2015-05-11 18:48 UTC (permalink / raw)
To: Greg KH; +Cc: Shannon Zhao, stable
On Mon, May 11, 2015 at 08:20:25AM -0700, Greg KH wrote:
> On Mon, May 11, 2015 at 05:21:51AM -0700, Greg KH wrote:
> > On Mon, May 11, 2015 at 05:39:44PM +0800, Shannon Zhao wrote:
> > > Ping?
> >
> > Large series of backports usually take me a while to get to as they are
> > outside of my "normal" workflow. Usually they take a few months to get
> > into the tree, waiting for a "slack time". I still have a number of
> > other series that have yet to be merged that were sent a long time
> > before yours. Please be patient.
>
> Actually, why aren't these being marked for -stable in the first place?
It's mostly my fault for not recognizing that; because I knew many of
these wouldn't apply cleanly to stable trees, I didn't add them to
-stable. That was probably a mistake on my part, apologies.
> Going back and adding them "by hand" like this is a big pain, especially
> when I have to hand-verify each git commit id, as the first one I looked
> at is incorrect and now I don't trust any of them in the series.
You should be able to trust all of them (Shannon, speak up if that's not
true). This was the *only* one that I modified heavily.
>
> Please work with the "normal" stable kernel workflow and mark the
> patches properly so that you don't have to do this extra work, and I
> don't either.
Yes, that is indeed the intention. It has been intense lately and
cc'ing -stable was under-prioritized. As part of realizing we need to
be better at this, I went back and tried to rectify our mistakes.
Again, apologies.
>
> Right now I'm going to just dump this whole series from my queue.
> Please just give me a series of git commit ids that should be applied to
> the 3.14-stable kernel tree, and in what order they should be applied
> in. If any need to be backported differently, please send those as a
> separate series, and I will get to them at a different time, as that is
> a lot more work having to hand-verify everything.
>
Really? I would think you would prefer this series given the above
info. If you still prefer a list of commit IDs, then we'll provide
those instead.
We tried to make things easier for you guys, not the other way around.
We really appreciate the work you're doing!
Thanks,
-Christoffer
* Re: [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel
2015-05-11 18:48 ` Christoffer Dall
@ 2015-05-11 20:07 ` Greg KH
2015-05-11 20:17 ` Christoffer Dall
0 siblings, 1 reply; 58+ messages in thread
From: Greg KH @ 2015-05-11 20:07 UTC (permalink / raw)
To: Christoffer Dall; +Cc: Shannon Zhao, stable
On Mon, May 11, 2015 at 08:48:06PM +0200, Christoffer Dall wrote:
> On Mon, May 11, 2015 at 08:20:25AM -0700, Greg KH wrote:
> > On Mon, May 11, 2015 at 05:21:51AM -0700, Greg KH wrote:
> > > On Mon, May 11, 2015 at 05:39:44PM +0800, Shannon Zhao wrote:
> > > > Ping?
> > >
> > > Large series of backports usually take me a while to get to as they are
> > > outside of my "normal" workflow. Usually they take a few months to get
> > > into the tree, waiting for a "slack time". I still have a number of
> > > other series that have yet to be merged that were sent a long time
> > > before yours. Please be patient.
> >
> > Actually, why aren't these being marked for -stable in the first place?
>
> It's mostly my fault for not recognizing that and because I knew many of
> these wouldn't apply cleanly to stable trees, I didn't add them to
> -stable. That was probably a mistake on my part, apologies.
>
> > Going back and adding them "by hand" like this is a big pain, especially
> > when I have to hand-verify each git commit id, as the first one I looked
> > at is incorrect and now I don't trust any of them in the series.
>
> You should be able to trust all of them (Shannon, speak up if that's not
> true). This was the *only* one that I modified heavily.
>
> >
> > Please work with the "normal" stable kernel workflow and mark the
> > patches properly so that you don't have to do this extra work, and I
> > don't either.
>
> Yes, that is indeed the intention. It has been intense lately and
> cc'ing -stable was under-prioritized. As part of realizing we need to
> be better at this, I went back and tried to rectify our mistakes.
> Again, apologies.
>
> >
> > Right now I'm going to just dump this whole series from my queue.
> > Please just give me a series of git commit ids that should be applied to
> > the 3.14-stable kernel tree, and in what order they should be applied
> > in. If any need to be backported differently, please send those as a
> > separate series, and I will get to them at a different time, as that is
> > a lot more work having to hand-verify everything.
> >
> Really? I would think you would prefer this series given the above
> info. If you still prefer a list of commit IDs, then we'll provide
> those instead.
My scripts handle a git commit id directly, it's trivial for me to take
that. If I have to deal with an email, I have to manually compare it to
the git commit id, see why it's different, write an angry email
complaining about the differences, etc. :)
Remember, we work using quilt patch series, not git patches for the
stable stuff, so I can't take a pull request here, sorry.
> We tried to make things easier for you guys, not the other way around.
Then tag things in the original patches please, that would be the
easiest thing for everyone involved.
thanks,
greg k-h
* Re: [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel
2015-05-11 20:07 ` Greg KH
@ 2015-05-11 20:17 ` Christoffer Dall
0 siblings, 0 replies; 58+ messages in thread
From: Christoffer Dall @ 2015-05-11 20:17 UTC (permalink / raw)
To: Greg KH; +Cc: Shannon Zhao, Stable
On Mon, May 11, 2015 at 10:07 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
> On Mon, May 11, 2015 at 08:48:06PM +0200, Christoffer Dall wrote:
>> On Mon, May 11, 2015 at 08:20:25AM -0700, Greg KH wrote:
>> > On Mon, May 11, 2015 at 05:21:51AM -0700, Greg KH wrote:
>> > > On Mon, May 11, 2015 at 05:39:44PM +0800, Shannon Zhao wrote:
>> > > > Ping?
>> > >
>> > > Large series of backports usually take me a while to get to as they are
>> > > outside of my "normal" workflow. Usually they take a few months to get
>> > > into the tree, waiting for a "slack time". I still have a number of
>> > > other series that have yet to be merged that were sent a long time
>> > > before yours. Please be patient.
>> >
>> > Actually, why aren't these being marked for -stable in the first place?
>>
>> It's mostly my fault for not recognizing that and because I knew many of
>> these wouldn't apply cleanly to stable trees, I didn't add them to
>> -stable. That was probably a mistake on my part, apologies.
>>
>> > Going back and adding them "by hand" like this is a big pain, especially
>> > when I have to hand-verify each git commit id, as the first one I looked
>> > at is incorrect and now I don't trust any of them in the series.
>>
>> You should be able to trust all of them (Shannon, speak up if that's not
>> true). This was the *only* one that I modified heavily.
>>
>> >
>> > Please work with the "normal" stable kernel workflow and mark the
>> > patches properly so that you don't have to do this extra work, and I
>> > don't either.
>>
>> Yes, that is indeed the intention. It has been intense lately and
>> cc'ing -stable was under-prioritized. As part of realizing we need to
>> be better at this, I went back and tried to rectify our mistakes.
>> Again, apologies.
>>
>> >
>> > Right now I'm going to just dump this whole series from my queue.
>> > Please just give me a series of git commit ids that should be applied to
>> > the 3.14-stable kernel tree, and in what order they should be applied
>> > in. If any need to be backported differently, please send those as a
>> > separate series, and I will get to them at a different time, as that is
>> > a lot more work having to hand-verify everything.
>> >
>> Really? I would think you would prefer this series given the above
>> info. If you still prefer a list of commit IDs, then we'll provide
>> those instead.
>
> My scripts handle a git commit id directly, it's trivial for me to take
> that. If I have to deal with an email, I have to manually compare it to
> the git commit id, see why it's different, write an angry email
> complaining about the differences, etc. :)
>
> Remember, we work using quilt patch series, not git patches for the
> stable stuff, so I can't take a pull request here, sorry.
>
ok, Shannon will send you a list of commit IDs and I'll re-send the
backported patch with a big fat comment in the commit text.
>> We tried to make things easier for you guys, not the other way around.
>
> Then tag things in the original patches please, that would be the
> easiest thing for everyone involved.
>
Will do for the future, thanks.
-Christoffer
2015-05-04 1:52 ` [PATCH for 3.14.y stable 45/47] arm64: KVM: Fix HCR setting for 32bit guests shannon.zhao
2015-05-04 1:52 ` [PATCH for 3.14.y stable 46/47] arm64: KVM: Do not use pgd_index to index stage-2 pgd shannon.zhao
2015-05-04 1:52 ` [PATCH for 3.14.y stable 47/47] arm/arm64: KVM: Keep elrsr/aisr in sync with software model shannon.zhao
2015-05-11 15:17 ` Greg KH
2015-05-11 18:44 ` Christoffer Dall
2015-05-11 18:47 ` Greg KH
2015-05-11 9:39 ` [PATCH for 3.14.y stable 00/47] Backport fixes of KVM/ARM to 3.14.y stable kernel Shannon Zhao
2015-05-11 12:21 ` Greg KH
2015-05-11 15:03 ` Shannon Zhao
2015-05-11 15:20 ` Greg KH
2015-05-11 18:48 ` Christoffer Dall
2015-05-11 20:07 ` Greg KH
2015-05-11 20:17 ` Christoffer Dall