* [PATCH 0/5] kvm_arch_vcpu_runnable related improvements
@ 2017-09-29 11:30 Andrew Jones
2017-09-29 11:30 ` [PATCH 1/5] KVM: arm/arm64: tidy 'should sleep' conditions Andrew Jones
` (5 more replies)
0 siblings, 6 replies; 26+ messages in thread
From: Andrew Jones @ 2017-09-29 11:30 UTC (permalink / raw)
To: kvmarm; +Cc: marc.zyngier, cdall
Only patch 5/5 fixes anything, and that's just a theoretical bug,
but, IMHO, this series future-proofs and improves the maintainability
of the code. See https://www.spinics.net/lists/kvm-arm/msg27254.html
for a TL;DR analysis that led to this series.
Thanks,
drew
Andrew Jones (5):
KVM: arm/arm64: tidy 'should sleep' conditions
KVM: arm/arm64: replace power_off with mp_state=STOPPED
KVM: arm/arm64: factor out common wfe/wfi emulation code
KVM: arm/arm64: improve kvm_arch_vcpu_runnable
KVM: arm/arm64: kvm_arch_vcpu_runnable: don't miss injected irqs
Documentation/virtual/kvm/api.txt | 10 +++---
arch/arm/include/asm/kvm_host.h | 8 +++--
arch/arm/kvm/handle_exit.c | 14 +++------
arch/arm64/include/asm/kvm_host.h | 8 +++--
arch/arm64/kvm/handle_exit.c | 14 +++------
virt/kvm/arm/arm.c | 64 +++++++++++++++++++++++++++------------
virt/kvm/arm/psci.c | 32 ++++++++------------
7 files changed, 84 insertions(+), 66 deletions(-)
--
2.13.5
^ permalink raw reply [flat|nested] 26+ messages in thread
* [PATCH 1/5] KVM: arm/arm64: tidy 'should sleep' conditions
2017-09-29 11:30 [PATCH 0/5] kvm_arch_vcpu_runnable related improvements Andrew Jones
@ 2017-09-29 11:30 ` Andrew Jones
2017-10-05 8:13 ` Marc Zyngier
2017-09-29 11:30 ` [PATCH 2/5] KVM: arm/arm64: replace power_off with mp_state=STOPPED Andrew Jones
` (4 subsequent siblings)
5 siblings, 1 reply; 26+ messages in thread
From: Andrew Jones @ 2017-09-29 11:30 UTC (permalink / raw)
To: kvmarm; +Cc: marc.zyngier, cdall
Introduce vcpu_should_sleep() in order to tidy up the several places
where the compound 'power_off || pause' condition is used.
Signed-off-by: Andrew Jones <drjones@redhat.com>
---
virt/kvm/arm/arm.c | 18 +++++++++++-------
1 file changed, 11 insertions(+), 7 deletions(-)
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index b9f68e4add71..e4bec508ee5b 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -403,17 +403,22 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
return 0;
}
+static bool vcpu_should_sleep(struct kvm_vcpu *vcpu)
+{
+ return vcpu->arch.power_off || vcpu->arch.pause;
+}
+
/**
* kvm_arch_vcpu_runnable - determine if the vcpu can be scheduled
- * @v: The VCPU pointer
+ * @vcpu: The VCPU pointer
*
* If the guest CPU is not waiting for interrupts or an interrupt line is
* asserted, the CPU is by definition runnable.
*/
-int kvm_arch_vcpu_runnable(struct kvm_vcpu *v)
+int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
{
- return ((!!v->arch.irq_lines || kvm_vgic_vcpu_pending_irq(v))
- && !v->arch.power_off && !v->arch.pause);
+ return (!!vcpu->arch.irq_lines || kvm_vgic_vcpu_pending_irq(vcpu))
+ && !vcpu_should_sleep(vcpu);
}
bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
@@ -570,10 +575,9 @@ static void vcpu_req_sleep(struct kvm_vcpu *vcpu)
{
struct swait_queue_head *wq = kvm_arch_vcpu_wq(vcpu);
- swait_event_interruptible(*wq, ((!vcpu->arch.power_off) &&
- (!vcpu->arch.pause)));
+ swait_event_interruptible(*wq, !vcpu_should_sleep(vcpu));
- if (vcpu->arch.power_off || vcpu->arch.pause) {
+ if (vcpu_should_sleep(vcpu)) {
/* Awaken to handle a signal, request we sleep again later. */
kvm_make_request(KVM_REQ_SLEEP, vcpu);
}
--
2.13.5
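The consolidation above can be sketched in user space as a small predicate
over mocked VCPU state. Note this is only an illustration: struct mock_vcpu
and its fields are invented stand-ins for the real kvm_vcpu layout, not the
kernel's types.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for the relevant kvm_vcpu_arch fields. */
struct mock_vcpu {
	bool power_off;   /* set by PSCI CPU_OFF or KVM_SET_MP_STATE */
	bool pause;       /* internal "don't run the guest" flag */
	bool irq_pending; /* stand-in for irq_lines/vgic pending state */
};

/* Mirrors vcpu_should_sleep(): the compound condition lives in one place. */
bool vcpu_should_sleep(const struct mock_vcpu *vcpu)
{
	return vcpu->power_off || vcpu->pause;
}

/* Mirrors kvm_arch_vcpu_runnable() after this patch. */
bool vcpu_runnable(const struct mock_vcpu *vcpu)
{
	return vcpu->irq_pending && !vcpu_should_sleep(vcpu);
}
```

Both kvm_arch_vcpu_runnable() and vcpu_req_sleep() then call the same
helper, so a future change to the sleep condition only touches one site.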
* [PATCH 2/5] KVM: arm/arm64: replace power_off with mp_state=STOPPED
2017-09-29 11:30 [PATCH 0/5] kvm_arch_vcpu_runnable related improvements Andrew Jones
2017-09-29 11:30 ` [PATCH 1/5] KVM: arm/arm64: tidy 'should sleep' conditions Andrew Jones
@ 2017-09-29 11:30 ` Andrew Jones
2017-10-05 8:32 ` Marc Zyngier
2017-10-14 19:12 ` Christoffer Dall
2017-09-29 11:30 ` [PATCH 3/5] KVM: arm/arm64: factor out common wfe/wfi emulation code Andrew Jones
` (3 subsequent siblings)
5 siblings, 2 replies; 26+ messages in thread
From: Andrew Jones @ 2017-09-29 11:30 UTC (permalink / raw)
To: kvmarm; +Cc: marc.zyngier, cdall
This prepares for more MP states to be used.
Signed-off-by: Andrew Jones <drjones@redhat.com>
---
arch/arm/include/asm/kvm_host.h | 6 ++++--
arch/arm64/include/asm/kvm_host.h | 6 ++++--
virt/kvm/arm/arm.c | 26 ++++++++++++++------------
virt/kvm/arm/psci.c | 17 +++++------------
4 files changed, 27 insertions(+), 28 deletions(-)
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 4a879f6ff13b..85f3c20b9759 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -171,8 +171,8 @@ struct kvm_vcpu_arch {
* here.
*/
- /* vcpu power-off state */
- bool power_off;
+ /* Current VCPU MP state */
+ u32 mp_state;
/* Don't run the guest (internal implementation need) */
bool pause;
@@ -227,6 +227,8 @@ int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
struct kvm_vcpu __percpu **kvm_get_running_vcpus(void);
+void kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu);
+void kvm_arm_vcpu_power_on(struct kvm_vcpu *vcpu);
void kvm_arm_halt_guest(struct kvm *kvm);
void kvm_arm_resume_guest(struct kvm *kvm);
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index e923b58606e2..25545a87de11 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -255,8 +255,8 @@ struct kvm_vcpu_arch {
u32 mdscr_el1;
} guest_debug_preserved;
- /* vcpu power-off state */
- bool power_off;
+ /* Current VCPU MP state */
+ u32 mp_state;
/* Don't run the guest (internal implementation need) */
bool pause;
@@ -328,6 +328,8 @@ int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
struct kvm_vcpu * __percpu *kvm_get_running_vcpus(void);
+void kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu);
+void kvm_arm_vcpu_power_on(struct kvm_vcpu *vcpu);
void kvm_arm_halt_guest(struct kvm *kvm);
void kvm_arm_resume_guest(struct kvm *kvm);
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index e4bec508ee5b..954e77608d29 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -368,21 +368,23 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
kvm_timer_vcpu_put(vcpu);
}
-static void vcpu_power_off(struct kvm_vcpu *vcpu)
+void kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu)
{
- vcpu->arch.power_off = true;
+ vcpu->arch.mp_state = KVM_MP_STATE_STOPPED;
kvm_make_request(KVM_REQ_SLEEP, vcpu);
kvm_vcpu_kick(vcpu);
}
+void kvm_arm_vcpu_power_on(struct kvm_vcpu *vcpu)
+{
+ vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
+ kvm_vcpu_kick(vcpu);
+}
+
int kvm_arch_vcpu_ioctl_get_mpstate(struct kvm_vcpu *vcpu,
struct kvm_mp_state *mp_state)
{
- if (vcpu->arch.power_off)
- mp_state->mp_state = KVM_MP_STATE_STOPPED;
- else
- mp_state->mp_state = KVM_MP_STATE_RUNNABLE;
-
+ mp_state->mp_state = vcpu->arch.mp_state;
return 0;
}
@@ -391,10 +393,10 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
{
switch (mp_state->mp_state) {
case KVM_MP_STATE_RUNNABLE:
- vcpu->arch.power_off = false;
+ kvm_arm_vcpu_power_on(vcpu);
break;
case KVM_MP_STATE_STOPPED:
- vcpu_power_off(vcpu);
+ kvm_arm_vcpu_power_off(vcpu);
break;
default:
return -EINVAL;
@@ -405,7 +407,7 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
static bool vcpu_should_sleep(struct kvm_vcpu *vcpu)
{
- return vcpu->arch.power_off || vcpu->arch.pause;
+ return vcpu->arch.mp_state == KVM_MP_STATE_STOPPED || vcpu->arch.pause;
}
/**
@@ -919,9 +921,9 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
* Handle the "start in power-off" case.
*/
if (test_bit(KVM_ARM_VCPU_POWER_OFF, vcpu->arch.features))
- vcpu_power_off(vcpu);
+ kvm_arm_vcpu_power_off(vcpu);
else
- vcpu->arch.power_off = false;
+ kvm_arm_vcpu_power_on(vcpu);
return 0;
}
diff --git a/virt/kvm/arm/psci.c b/virt/kvm/arm/psci.c
index f1e363bab5e8..a7bf152f1247 100644
--- a/virt/kvm/arm/psci.c
+++ b/virt/kvm/arm/psci.c
@@ -64,16 +64,13 @@ static unsigned long kvm_psci_vcpu_suspend(struct kvm_vcpu *vcpu)
static void kvm_psci_vcpu_off(struct kvm_vcpu *vcpu)
{
- vcpu->arch.power_off = true;
- kvm_make_request(KVM_REQ_SLEEP, vcpu);
- kvm_vcpu_kick(vcpu);
+ kvm_arm_vcpu_power_off(vcpu);
}
static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu *source_vcpu)
{
struct kvm *kvm = source_vcpu->kvm;
struct kvm_vcpu *vcpu = NULL;
- struct swait_queue_head *wq;
unsigned long cpu_id;
unsigned long context_id;
phys_addr_t target_pc;
@@ -90,7 +87,7 @@ static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu *source_vcpu)
*/
if (!vcpu)
return PSCI_RET_INVALID_PARAMS;
- if (!vcpu->arch.power_off) {
+ if (vcpu->arch.mp_state != KVM_MP_STATE_STOPPED) {
if (kvm_psci_version(source_vcpu) != KVM_ARM_PSCI_0_1)
return PSCI_RET_ALREADY_ON;
else
@@ -118,11 +115,7 @@ static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu *source_vcpu)
* the general puspose registers are undefined upon CPU_ON.
*/
vcpu_set_reg(vcpu, 0, context_id);
- vcpu->arch.power_off = false;
- smp_mb(); /* Make sure the above is visible */
-
- wq = kvm_arch_vcpu_wq(vcpu);
- swake_up(wq);
+ kvm_arm_vcpu_power_on(vcpu);
return PSCI_RET_SUCCESS;
}
@@ -156,7 +149,7 @@ static unsigned long kvm_psci_vcpu_affinity_info(struct kvm_vcpu *vcpu)
mpidr = kvm_vcpu_get_mpidr_aff(tmp);
if ((mpidr & target_affinity_mask) == target_affinity) {
matching_cpus++;
- if (!tmp->arch.power_off)
+ if (tmp->arch.mp_state != KVM_MP_STATE_STOPPED)
return PSCI_0_2_AFFINITY_LEVEL_ON;
}
}
@@ -182,7 +175,7 @@ static void kvm_prepare_system_event(struct kvm_vcpu *vcpu, u32 type)
* re-initialized.
*/
kvm_for_each_vcpu(i, tmp, vcpu->kvm)
- tmp->arch.power_off = true;
+ tmp->arch.mp_state = KVM_MP_STATE_STOPPED;
kvm_make_all_cpus_request(vcpu->kvm, KVM_REQ_SLEEP);
memset(&vcpu->run->system_event, 0, sizeof(vcpu->run->system_event));
--
2.13.5
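The bool-to-mp_state conversion above can be sketched as follows. This is a
minimal user-space model, not kernel code: the enum values mirror the uapi
kvm_mp_state constants from linux/kvm.h, and struct mock_vcpu and the helper
names are invented for illustration.

```c
#include <assert.h>
#include <stdbool.h>

/* Values mirror the uapi KVM_MP_STATE_* constants (linux/kvm.h). */
enum { MP_STATE_RUNNABLE = 0, MP_STATE_STOPPED = 5 };

struct mock_vcpu {
	unsigned int mp_state; /* replaces the old bool power_off */
	bool pause;
};

/* After the patch, power-off/on are just mp_state transitions. */
void vcpu_power_off(struct mock_vcpu *vcpu)
{
	vcpu->mp_state = MP_STATE_STOPPED;
	/* the real helper also makes KVM_REQ_SLEEP and kicks the vcpu */
}

void vcpu_power_on(struct mock_vcpu *vcpu)
{
	vcpu->mp_state = MP_STATE_RUNNABLE;
}

/* GET_MP_STATE becomes a plain read instead of translating a bool. */
unsigned int get_mp_state(const struct mock_vcpu *vcpu)
{
	return vcpu->mp_state;
}

bool vcpu_should_sleep(const struct mock_vcpu *vcpu)
{
	return vcpu->mp_state == MP_STATE_STOPPED || vcpu->pause;
}
```

Because mp_state is already the uapi representation, adding further states
later (the point of this patch) only extends the enum and the switch in
KVM_SET_MP_STATE.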
* [PATCH 3/5] KVM: arm/arm64: factor out common wfe/wfi emulation code
2017-09-29 11:30 [PATCH 0/5] kvm_arch_vcpu_runnable related improvements Andrew Jones
2017-09-29 11:30 ` [PATCH 1/5] KVM: arm/arm64: tidy 'should sleep' conditions Andrew Jones
2017-09-29 11:30 ` [PATCH 2/5] KVM: arm/arm64: replace power_off with mp_state=STOPPED Andrew Jones
@ 2017-09-29 11:30 ` Andrew Jones
2017-10-05 8:36 ` Marc Zyngier
2017-10-14 19:13 ` Christoffer Dall
2017-09-29 11:30 ` [PATCH 4/5] KVM: arm/arm64: improve kvm_arch_vcpu_runnable Andrew Jones
` (2 subsequent siblings)
5 siblings, 2 replies; 26+ messages in thread
From: Andrew Jones @ 2017-09-29 11:30 UTC (permalink / raw)
To: kvmarm; +Cc: marc.zyngier, cdall
Before we add more common code to the wfi emulation, let's first
factor out everything common.
Signed-off-by: Andrew Jones <drjones@redhat.com>
---
arch/arm/include/asm/kvm_host.h | 2 ++
arch/arm/kvm/handle_exit.c | 14 +++++---------
arch/arm64/include/asm/kvm_host.h | 2 ++
arch/arm64/kvm/handle_exit.c | 14 +++++---------
virt/kvm/arm/arm.c | 13 +++++++++++++
virt/kvm/arm/psci.c | 15 +++++++--------
6 files changed, 34 insertions(+), 26 deletions(-)
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 85f3c20b9759..964320825372 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -231,6 +231,8 @@ void kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu);
void kvm_arm_vcpu_power_on(struct kvm_vcpu *vcpu);
void kvm_arm_halt_guest(struct kvm *kvm);
void kvm_arm_resume_guest(struct kvm *kvm);
+void kvm_arm_emulate_wfe(struct kvm_vcpu *vcpu);
+void kvm_arm_emulate_wfi(struct kvm_vcpu *vcpu);
int kvm_arm_copy_coproc_indices(struct kvm_vcpu *vcpu, u64 __user *uindices);
unsigned long kvm_arm_num_coproc_regs(struct kvm_vcpu *vcpu);
diff --git a/arch/arm/kvm/handle_exit.c b/arch/arm/kvm/handle_exit.c
index cf8bf6bf87c4..e40466577c87 100644
--- a/arch/arm/kvm/handle_exit.c
+++ b/arch/arm/kvm/handle_exit.c
@@ -57,22 +57,18 @@ static int handle_smc(struct kvm_vcpu *vcpu, struct kvm_run *run)
* @run: the kvm_run structure pointer
*
* WFE: Yield the CPU and come back to this vcpu when the scheduler
- * decides to.
- * WFI: Simply call kvm_vcpu_block(), which will halt execution of
- * world-switches and schedule other host processes until there is an
- * incoming IRQ or FIQ to the VM.
+ * decides to.
+ * WFI: Halt execution of world-switches and schedule other host
+ * processes until there is an incoming IRQ or FIQ to the VM.
*/
static int kvm_handle_wfx(struct kvm_vcpu *vcpu, struct kvm_run *run)
{
if (kvm_vcpu_get_hsr(vcpu) & HSR_WFI_IS_WFE) {
trace_kvm_wfx(*vcpu_pc(vcpu), true);
- vcpu->stat.wfe_exit_stat++;
- kvm_vcpu_on_spin(vcpu, vcpu_mode_priv(vcpu));
+ kvm_arm_emulate_wfe(vcpu);
} else {
trace_kvm_wfx(*vcpu_pc(vcpu), false);
- vcpu->stat.wfi_exit_stat++;
- kvm_vcpu_block(vcpu);
- kvm_clear_request(KVM_REQ_UNHALT, vcpu);
+ kvm_arm_emulate_wfi(vcpu);
}
kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 25545a87de11..a774f6b30474 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -332,6 +332,8 @@ void kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu);
void kvm_arm_vcpu_power_on(struct kvm_vcpu *vcpu);
void kvm_arm_halt_guest(struct kvm *kvm);
void kvm_arm_resume_guest(struct kvm *kvm);
+void kvm_arm_emulate_wfe(struct kvm_vcpu *vcpu);
+void kvm_arm_emulate_wfi(struct kvm_vcpu *vcpu);
u64 __kvm_call_hyp(void *hypfn, ...);
#define kvm_call_hyp(f, ...) __kvm_call_hyp(kvm_ksym_ref(f), ##__VA_ARGS__)
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 7debb74843a0..7ba50a296d10 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -74,22 +74,18 @@ static int handle_no_fpsimd(struct kvm_vcpu *vcpu, struct kvm_run *run)
* @vcpu: the vcpu pointer
*
* WFE: Yield the CPU and come back to this vcpu when the scheduler
- * decides to.
- * WFI: Simply call kvm_vcpu_block(), which will halt execution of
- * world-switches and schedule other host processes until there is an
- * incoming IRQ or FIQ to the VM.
+ * decides to.
+ * WFI: Halt execution of world-switches and schedule other host
+ * processes until there is an incoming IRQ or FIQ to the VM.
*/
static int kvm_handle_wfx(struct kvm_vcpu *vcpu, struct kvm_run *run)
{
if (kvm_vcpu_get_hsr(vcpu) & ESR_ELx_WFx_ISS_WFE) {
trace_kvm_wfx_arm64(*vcpu_pc(vcpu), true);
- vcpu->stat.wfe_exit_stat++;
- kvm_vcpu_on_spin(vcpu, vcpu_mode_priv(vcpu));
+ kvm_arm_emulate_wfe(vcpu);
} else {
trace_kvm_wfx_arm64(*vcpu_pc(vcpu), false);
- vcpu->stat.wfi_exit_stat++;
- kvm_vcpu_block(vcpu);
- kvm_clear_request(KVM_REQ_UNHALT, vcpu);
+ kvm_arm_emulate_wfi(vcpu);
}
kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 954e77608d29..220a3dbda76c 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -573,6 +573,19 @@ void kvm_arm_resume_guest(struct kvm *kvm)
}
}
+void kvm_arm_emulate_wfe(struct kvm_vcpu *vcpu)
+{
+ vcpu->stat.wfe_exit_stat++;
+ kvm_vcpu_on_spin(vcpu, vcpu_mode_priv(vcpu));
+}
+
+void kvm_arm_emulate_wfi(struct kvm_vcpu *vcpu)
+{
+ vcpu->stat.wfi_exit_stat++;
+ kvm_vcpu_block(vcpu);
+ kvm_clear_request(KVM_REQ_UNHALT, vcpu);
+}
+
static void vcpu_req_sleep(struct kvm_vcpu *vcpu)
{
struct swait_queue_head *wq = kvm_arch_vcpu_wq(vcpu);
diff --git a/virt/kvm/arm/psci.c b/virt/kvm/arm/psci.c
index a7bf152f1247..755c415304ea 100644
--- a/virt/kvm/arm/psci.c
+++ b/virt/kvm/arm/psci.c
@@ -50,20 +50,19 @@ static unsigned long kvm_psci_vcpu_suspend(struct kvm_vcpu *vcpu)
* This means for KVM the wakeup events are interrupts and
* this is consistent with intended use of StateID as described
* in section 5.4.1 of PSCI v0.2 specification (ARM DEN 0022A).
- *
- * Further, we also treat power-down request to be same as
- * stand-by request as-per section 5.4.2 clause 3 of PSCI v0.2
- * specification (ARM DEN 0022A). This means all suspend states
- * for KVM will preserve the register state.
*/
- kvm_vcpu_block(vcpu);
- kvm_clear_request(KVM_REQ_UNHALT, vcpu);
-
+ kvm_arm_emulate_wfi(vcpu);
return PSCI_RET_SUCCESS;
}
static void kvm_psci_vcpu_off(struct kvm_vcpu *vcpu)
{
+ /*
+ * We treat a power-off request the same as a stand-by request,
+ * as-per section 5.4.2 clause 3 of PSCI v0.2* specification
+ * (ARM DEN 0022A). This means all suspend states for KVM will
+ * preserve the register state.
+ */
kvm_arm_vcpu_power_off(vcpu);
}
--
2.13.5
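The deduplication above can be sketched like this: one shared helper per
instruction, called from a single branch in the per-arch trap handler. Again
a user-space model with invented names; the real helpers wrap
kvm_vcpu_on_spin() and kvm_vcpu_block().

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for the vcpu stats and blocking state. */
struct mock_vcpu {
	unsigned long wfe_exits, wfi_exits;
	bool blocked;
};

/* Shared emulation helpers; both arm and arm64 handlers call these. */
void emulate_wfe(struct mock_vcpu *vcpu)
{
	vcpu->wfe_exits++;       /* stat accounting moves in with the emulation */
	/* kvm_vcpu_on_spin() would yield the CPU here */
}

void emulate_wfi(struct mock_vcpu *vcpu)
{
	vcpu->wfi_exits++;
	vcpu->blocked = true;    /* kvm_vcpu_block() until an IRQ/FIQ arrives */
}

/* Stand-in for an arch kvm_handle_wfx(): one branch, no duplicated bodies. */
void handle_wfx(struct mock_vcpu *vcpu, bool is_wfe)
{
	if (is_wfe)
		emulate_wfe(vcpu);
	else
		emulate_wfi(vcpu);
}
```

With the bodies shared, the mp_state bookkeeping added in the next patch
only has to be written once.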
* [PATCH 4/5] KVM: arm/arm64: improve kvm_arch_vcpu_runnable
2017-09-29 11:30 [PATCH 0/5] kvm_arch_vcpu_runnable related improvements Andrew Jones
` (2 preceding siblings ...)
2017-09-29 11:30 ` [PATCH 3/5] KVM: arm/arm64: factor out common wfe/wfi emulation code Andrew Jones
@ 2017-09-29 11:30 ` Andrew Jones
2017-10-05 9:19 ` Marc Zyngier
2017-10-14 19:13 ` Christoffer Dall
2017-09-29 11:30 ` [PATCH 5/5] KVM: arm/arm64: kvm_arch_vcpu_runnable: don't miss injected irqs Andrew Jones
2017-10-02 8:31 ` [PATCH 6/5] KVM: arm/arm64: make kvm_vgic_vcpu_pending_irq static Andrew Jones
5 siblings, 2 replies; 26+ messages in thread
From: Andrew Jones @ 2017-09-29 11:30 UTC (permalink / raw)
To: kvmarm; +Cc: marc.zyngier, cdall
Conceptually, kvm_arch_vcpu_runnable() should be "not waiting, or
waiting for interrupts and interrupts are pending",
!waiting-uninterruptibly &&
(!waiting-for-interrupts || interrupts-pending)
but the implementation was only
!waiting-uninterruptibly && interrupts-pending
Thanks to the context of its two callers this wasn't an issue; however,
to future-proof the function, this patch implements the conceptual
condition, using mp_state to distinguish waiting-uninterruptibly from
waiting-for-interrupts.
Signed-off-by: Andrew Jones <drjones@redhat.com>
---
Documentation/virtual/kvm/api.txt | 10 ++++++----
virt/kvm/arm/arm.c | 13 +++++++++----
2 files changed, 15 insertions(+), 8 deletions(-)
diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index e63a35fafef0..7c0bb1ae10a2 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -1071,7 +1071,7 @@ Possible values are:
- KVM_MP_STATE_INIT_RECEIVED: the vcpu has received an INIT signal, and is
now ready for a SIPI [x86]
- KVM_MP_STATE_HALTED: the vcpu has executed a HLT instruction and
- is waiting for an interrupt [x86]
+ is waiting for an interrupt [x86,arm/arm64]
- KVM_MP_STATE_SIPI_RECEIVED: the vcpu has just received a SIPI (vector
accessible via KVM_GET_VCPU_EVENTS) [x86]
- KVM_MP_STATE_STOPPED: the vcpu is stopped [s390,arm/arm64]
@@ -1087,8 +1087,9 @@ these architectures.
For arm/arm64:
-The only states that are valid are KVM_MP_STATE_STOPPED and
-KVM_MP_STATE_RUNNABLE which reflect if the vcpu is paused or not.
+The only states that are valid are KVM_MP_STATE_STOPPED, KVM_MP_STATE_HALTED,
+and KVM_MP_STATE_RUNNABLE which reflect if the vcpu is powered-off, waiting
+for interrupts, or powered-on and not waiting for interrupts.
4.39 KVM_SET_MP_STATE
@@ -1108,7 +1109,8 @@ these architectures.
For arm/arm64:
The only states that are valid are KVM_MP_STATE_STOPPED and
-KVM_MP_STATE_RUNNABLE which reflect if the vcpu should be paused or not.
+KVM_MP_STATE_RUNNABLE which reflect if the vcpu should be
+powered-off or not.
4.40 KVM_SET_IDENTITY_MAP_ADDR
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 220a3dbda76c..5bc9b0d2fd0f 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -414,13 +414,16 @@ static bool vcpu_should_sleep(struct kvm_vcpu *vcpu)
* kvm_arch_vcpu_runnable - determine if the vcpu can be scheduled
* @vcpu: The VCPU pointer
*
- * If the guest CPU is not waiting for interrupts or an interrupt line is
- * asserted, the CPU is by definition runnable.
+ * If the VCPU is not waiting at all (including sleeping, which is waiting
+ * uninterruptably), or it's waiting for interrupts but interrupts are
+ * pending, then it is by definition runnable.
*/
int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
{
- return (!!vcpu->arch.irq_lines || kvm_vgic_vcpu_pending_irq(vcpu))
- && !vcpu_should_sleep(vcpu);
+ return !vcpu_should_sleep(vcpu) &&
+ (vcpu->arch.mp_state != KVM_MP_STATE_HALTED ||
+ (!!vcpu->arch.irq_lines ||
+ kvm_vgic_vcpu_pending_irq(vcpu)));
}
bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
@@ -582,7 +585,9 @@ void kvm_arm_emulate_wfe(struct kvm_vcpu *vcpu)
void kvm_arm_emulate_wfi(struct kvm_vcpu *vcpu)
{
vcpu->stat.wfi_exit_stat++;
+ vcpu->arch.mp_state = KVM_MP_STATE_HALTED;
kvm_vcpu_block(vcpu);
+ vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
kvm_clear_request(KVM_REQ_UNHALT, vcpu);
}
--
2.13.5
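The conceptual condition from the commit message can be checked against a
small truth-table model. As before, this is an illustrative user-space
sketch with invented struct fields; the enum values mirror the uapi
KVM_MP_STATE_* constants.

```c
#include <assert.h>
#include <stdbool.h>

enum { MP_STATE_RUNNABLE = 0, MP_STATE_HALTED = 3, MP_STATE_STOPPED = 5 };

struct mock_vcpu {
	unsigned int mp_state;
	bool pause;
	bool irq_pending; /* stand-in for irq_lines/vgic pending state */
};

bool vcpu_should_sleep(const struct mock_vcpu *vcpu)
{
	return vcpu->mp_state == MP_STATE_STOPPED || vcpu->pause;
}

/*
 * "not waiting at all, or waiting for interrupts with interrupts
 * pending" — mirrors kvm_arch_vcpu_runnable() after this patch.
 */
bool vcpu_runnable(const struct mock_vcpu *vcpu)
{
	return !vcpu_should_sleep(vcpu) &&
	       (vcpu->mp_state != MP_STATE_HALTED || vcpu->irq_pending);
}
```

The behavioral difference is the first case below: a RUNNABLE vcpu with no
pending interrupts is now reported runnable, which the old
interrupts-pending-only implementation would have denied.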
* [PATCH 5/5] KVM: arm/arm64: kvm_arch_vcpu_runnable: don't miss injected irqs
2017-09-29 11:30 [PATCH 0/5] kvm_arch_vcpu_runnable related improvements Andrew Jones
` (3 preceding siblings ...)
2017-09-29 11:30 ` [PATCH 4/5] KVM: arm/arm64: improve kvm_arch_vcpu_runnable Andrew Jones
@ 2017-09-29 11:30 ` Andrew Jones
2017-10-05 9:37 ` Marc Zyngier
2017-10-14 19:13 ` Christoffer Dall
2017-10-02 8:31 ` [PATCH 6/5] KVM: arm/arm64: make kvm_vgic_vcpu_pending_irq static Andrew Jones
5 siblings, 2 replies; 26+ messages in thread
From: Andrew Jones @ 2017-09-29 11:30 UTC (permalink / raw)
To: kvmarm; +Cc: marc.zyngier, cdall
When the vPMU is in use, if a VCPU's perf event overflow handler fires
after the VCPU has started waiting, then the wake-up done by the
kvm_vcpu_kick() call in the handler does nothing, as no "pmu overflow"
state is checked in kvm_arch_vcpu_runnable().
Fix this by checking the IRQ_PENDING VCPU request in runnable().
Checking the request also sufficiently covers all the cases that
kvm_vgic_vcpu_pending_irq() covers, so we can simply replace that call.
Signed-off-by: Andrew Jones <drjones@redhat.com>
---
virt/kvm/arm/arm.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 5bc9b0d2fd0f..725527f491e4 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -423,7 +423,7 @@ int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
return !vcpu_should_sleep(vcpu) &&
(vcpu->arch.mp_state != KVM_MP_STATE_HALTED ||
(!!vcpu->arch.irq_lines ||
- kvm_vgic_vcpu_pending_irq(vcpu)));
+ kvm_test_request(KVM_REQ_IRQ_PENDING, vcpu)));
}
bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
--
2.13.5
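The race and its fix can be sketched as a request-bitmap model. This is a
hypothetical user-space illustration: REQ_IRQ_PENDING and the helpers stand
in for KVM_REQ_IRQ_PENDING and kvm_make_request()/kvm_test_request(); it is
not the kernel's implementation.

```c
#include <assert.h>
#include <stdbool.h>

#define REQ_IRQ_PENDING (1u << 0)

struct mock_vcpu {
	unsigned int requests;   /* stand-in for the vcpu request bitmap */
	bool vgic_irq_pending;   /* what the old code checked */
};

/* Every kick path (vgic injection, PMU overflow, ...) sets the request. */
void make_request(struct mock_vcpu *vcpu, unsigned int req)
{
	vcpu->requests |= req;
}

bool test_request(const struct mock_vcpu *vcpu, unsigned int req)
{
	return (vcpu->requests & req) != 0;
}

/* After the patch: one request check covers vgic irqs and PMU overflow. */
bool halted_vcpu_runnable(const struct mock_vcpu *vcpu)
{
	return test_request(vcpu, REQ_IRQ_PENDING);
}
```

The PMU overflow scenario is the interesting one: nothing is pending in the
vgic yet, so the old kvm_vgic_vcpu_pending_irq() check would report the
vcpu not runnable, while the request bit set by the overflow handler's kick
is seen.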
* [PATCH 6/5] KVM: arm/arm64: make kvm_vgic_vcpu_pending_irq static
2017-09-29 11:30 [PATCH 0/5] kvm_arch_vcpu_runnable related improvements Andrew Jones
` (4 preceding siblings ...)
2017-09-29 11:30 ` [PATCH 5/5] KVM: arm/arm64: kvm_arch_vcpu_runnable: don't miss injected irqs Andrew Jones
@ 2017-10-02 8:31 ` Andrew Jones
2017-10-05 9:37 ` Marc Zyngier
5 siblings, 1 reply; 26+ messages in thread
From: Andrew Jones @ 2017-10-02 8:31 UTC (permalink / raw)
To: kvmarm; +Cc: marc.zyngier, cdall
Now that the only caller of kvm_vgic_vcpu_pending_irq() is in the
vgic code, let's make it private in order to discourage its use
outside the vgic, as checking the IRQ_PENDING VCPU request is likely
the right thing for non-vgic code to do. Also, remove a vgic prototype
that was mistakenly introduced with commit 0919e84c0fc1 and has been
along for the ride ever since.
Signed-off-by: Andrew Jones <drjones@redhat.com>
---
include/kvm/arm_vgic.h | 3 ---
virt/kvm/arm/vgic/vgic.c | 2 +-
2 files changed, 1 insertion(+), 4 deletions(-)
diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h
index 34dba516ef24..aade132d2a30 100644
--- a/include/kvm/arm_vgic.h
+++ b/include/kvm/arm_vgic.h
@@ -311,8 +311,6 @@ int kvm_vgic_map_phys_irq(struct kvm_vcpu *vcpu, u32 virt_irq, u32 phys_irq);
int kvm_vgic_unmap_phys_irq(struct kvm_vcpu *vcpu, unsigned int virt_irq);
bool kvm_vgic_map_is_active(struct kvm_vcpu *vcpu, unsigned int virt_irq);
-int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu);
-
void kvm_vgic_load(struct kvm_vcpu *vcpu);
void kvm_vgic_put(struct kvm_vcpu *vcpu);
@@ -322,7 +320,6 @@ void kvm_vgic_put(struct kvm_vcpu *vcpu);
#define vgic_valid_spi(k, i) (((i) >= VGIC_NR_PRIVATE_IRQS) && \
((i) < (k)->arch.vgic.nr_spis + VGIC_NR_PRIVATE_IRQS))
-bool kvm_vcpu_has_pending_irqs(struct kvm_vcpu *vcpu);
void kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu);
void kvm_vgic_flush_hwstate(struct kvm_vcpu *vcpu);
diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
index fed717e07938..72a16259beec 100644
--- a/virt/kvm/arm/vgic/vgic.c
+++ b/virt/kvm/arm/vgic/vgic.c
@@ -730,7 +730,7 @@ void kvm_vgic_put(struct kvm_vcpu *vcpu)
vgic_v3_put(vcpu);
}
-int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu)
+static int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu)
{
struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
struct vgic_irq *irq;
--
2.13.5
* Re: [PATCH 1/5] KVM: arm/arm64: tidy 'should sleep' conditions
2017-09-29 11:30 ` [PATCH 1/5] KVM: arm/arm64: tidy 'should sleep' conditions Andrew Jones
@ 2017-10-05 8:13 ` Marc Zyngier
0 siblings, 0 replies; 26+ messages in thread
From: Marc Zyngier @ 2017-10-05 8:13 UTC (permalink / raw)
To: Andrew Jones, kvmarm; +Cc: cdall
On 29/09/17 12:30, Andrew Jones wrote:
> Introduce vcpu_should_sleep() in order to tidy up the several places
> where the compound 'power_off || pause' condition is used.
>
> Signed-off-by: Andrew Jones <drjones@redhat.com>
> ---
> virt/kvm/arm/arm.c | 18 +++++++++++-------
> 1 file changed, 11 insertions(+), 7 deletions(-)
>
> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> index b9f68e4add71..e4bec508ee5b 100644
> --- a/virt/kvm/arm/arm.c
> +++ b/virt/kvm/arm/arm.c
> @@ -403,17 +403,22 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
> return 0;
> }
>
> +static bool vcpu_should_sleep(struct kvm_vcpu *vcpu)
> +{
> + return vcpu->arch.power_off || vcpu->arch.pause;
> +}
> +
> /**
> * kvm_arch_vcpu_runnable - determine if the vcpu can be scheduled
> - * @v: The VCPU pointer
> + * @vcpu: The VCPU pointer
> *
> * If the guest CPU is not waiting for interrupts or an interrupt line is
> * asserted, the CPU is by definition runnable.
> */
> -int kvm_arch_vcpu_runnable(struct kvm_vcpu *v)
> +int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
> {
> - return ((!!v->arch.irq_lines || kvm_vgic_vcpu_pending_irq(v))
> - && !v->arch.power_off && !v->arch.pause);
> + return (!!vcpu->arch.irq_lines || kvm_vgic_vcpu_pending_irq(vcpu))
> + && !vcpu_should_sleep(vcpu);
> }
>
> bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
> @@ -570,10 +575,9 @@ static void vcpu_req_sleep(struct kvm_vcpu *vcpu)
> {
> struct swait_queue_head *wq = kvm_arch_vcpu_wq(vcpu);
>
> - swait_event_interruptible(*wq, ((!vcpu->arch.power_off) &&
> - (!vcpu->arch.pause)));
> + swait_event_interruptible(*wq, !vcpu_should_sleep(vcpu));
>
> - if (vcpu->arch.power_off || vcpu->arch.pause) {
> + if (vcpu_should_sleep(vcpu)) {
> /* Awaken to handle a signal, request we sleep again later. */
> kvm_make_request(KVM_REQ_SLEEP, vcpu);
> }
>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
M.
--
Jazz is not dead. It just smells funny...
* Re: [PATCH 2/5] KVM: arm/arm64: replace power_off with mp_state=STOPPED
2017-09-29 11:30 ` [PATCH 2/5] KVM: arm/arm64: replace power_off with mp_state=STOPPED Andrew Jones
@ 2017-10-05 8:32 ` Marc Zyngier
2017-10-10 13:26 ` Andrew Jones
2017-10-14 19:12 ` Christoffer Dall
1 sibling, 1 reply; 26+ messages in thread
From: Marc Zyngier @ 2017-10-05 8:32 UTC (permalink / raw)
To: Andrew Jones, kvmarm; +Cc: cdall
On 29/09/17 12:30, Andrew Jones wrote:
> This prepares for more MP states to be used.
>
> Signed-off-by: Andrew Jones <drjones@redhat.com>
> ---
> arch/arm/include/asm/kvm_host.h | 6 ++++--
> arch/arm64/include/asm/kvm_host.h | 6 ++++--
> virt/kvm/arm/arm.c | 26 ++++++++++++++------------
> virt/kvm/arm/psci.c | 17 +++++------------
> 4 files changed, 27 insertions(+), 28 deletions(-)
>
> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
> index 4a879f6ff13b..85f3c20b9759 100644
> --- a/arch/arm/include/asm/kvm_host.h
> +++ b/arch/arm/include/asm/kvm_host.h
> @@ -171,8 +171,8 @@ struct kvm_vcpu_arch {
> * here.
> */
>
> - /* vcpu power-off state */
> - bool power_off;
> + /* Current VCPU MP state */
> + u32 mp_state;
What is the rationale for not using a struct kvm_mp_state directly here?
It is the same thing (a u32), but I certainly find it more explicit than
a bare u32.
>
> /* Don't run the guest (internal implementation need) */
> bool pause;
> @@ -227,6 +227,8 @@ int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
>
> struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
> struct kvm_vcpu __percpu **kvm_get_running_vcpus(void);
> +void kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu);
> +void kvm_arm_vcpu_power_on(struct kvm_vcpu *vcpu);
> void kvm_arm_halt_guest(struct kvm *kvm);
> void kvm_arm_resume_guest(struct kvm *kvm);
>
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index e923b58606e2..25545a87de11 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -255,8 +255,8 @@ struct kvm_vcpu_arch {
> u32 mdscr_el1;
> } guest_debug_preserved;
>
> - /* vcpu power-off state */
> - bool power_off;
> + /* Current VCPU MP state */
> + u32 mp_state;
>
> /* Don't run the guest (internal implementation need) */
> bool pause;
> @@ -328,6 +328,8 @@ int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
>
> struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
> struct kvm_vcpu * __percpu *kvm_get_running_vcpus(void);
> +void kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu);
> +void kvm_arm_vcpu_power_on(struct kvm_vcpu *vcpu);
> void kvm_arm_halt_guest(struct kvm *kvm);
> void kvm_arm_resume_guest(struct kvm *kvm);
>
> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> index e4bec508ee5b..954e77608d29 100644
> --- a/virt/kvm/arm/arm.c
> +++ b/virt/kvm/arm/arm.c
> @@ -368,21 +368,23 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
> kvm_timer_vcpu_put(vcpu);
> }
>
> -static void vcpu_power_off(struct kvm_vcpu *vcpu)
> +void kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu)
> {
> - vcpu->arch.power_off = true;
> + vcpu->arch.mp_state = KVM_MP_STATE_STOPPED;
> kvm_make_request(KVM_REQ_SLEEP, vcpu);
> kvm_vcpu_kick(vcpu);
> }
>
> +void kvm_arm_vcpu_power_on(struct kvm_vcpu *vcpu)
> +{
> + vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
> + kvm_vcpu_kick(vcpu);
> +}
> +
> int kvm_arch_vcpu_ioctl_get_mpstate(struct kvm_vcpu *vcpu,
> struct kvm_mp_state *mp_state)
> {
> - if (vcpu->arch.power_off)
> - mp_state->mp_state = KVM_MP_STATE_STOPPED;
> - else
> - mp_state->mp_state = KVM_MP_STATE_RUNNABLE;
> -
> + mp_state->mp_state = vcpu->arch.mp_state;
> return 0;
> }
>
> @@ -391,10 +393,10 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
> {
> switch (mp_state->mp_state) {
> case KVM_MP_STATE_RUNNABLE:
> - vcpu->arch.power_off = false;
> + kvm_arm_vcpu_power_on(vcpu);
> break;
> case KVM_MP_STATE_STOPPED:
> - vcpu_power_off(vcpu);
> + kvm_arm_vcpu_power_off(vcpu);
> break;
> default:
> return -EINVAL;
> @@ -405,7 +407,7 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
>
> static bool vcpu_should_sleep(struct kvm_vcpu *vcpu)
> {
> - return vcpu->arch.power_off || vcpu->arch.pause;
> + return vcpu->arch.mp_state == KVM_MP_STATE_STOPPED || vcpu->arch.pause;
> }
>
> /**
> @@ -919,9 +921,9 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
> * Handle the "start in power-off" case.
> */
> if (test_bit(KVM_ARM_VCPU_POWER_OFF, vcpu->arch.features))
> - vcpu_power_off(vcpu);
> + kvm_arm_vcpu_power_off(vcpu);
> else
> - vcpu->arch.power_off = false;
> + kvm_arm_vcpu_power_on(vcpu);
>
> return 0;
> }
> diff --git a/virt/kvm/arm/psci.c b/virt/kvm/arm/psci.c
> index f1e363bab5e8..a7bf152f1247 100644
> --- a/virt/kvm/arm/psci.c
> +++ b/virt/kvm/arm/psci.c
> @@ -64,16 +64,13 @@ static unsigned long kvm_psci_vcpu_suspend(struct kvm_vcpu *vcpu)
>
> static void kvm_psci_vcpu_off(struct kvm_vcpu *vcpu)
> {
> - vcpu->arch.power_off = true;
> - kvm_make_request(KVM_REQ_SLEEP, vcpu);
> - kvm_vcpu_kick(vcpu);
> + kvm_arm_vcpu_power_off(vcpu);
> }
>
> static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu *source_vcpu)
> {
> struct kvm *kvm = source_vcpu->kvm;
> struct kvm_vcpu *vcpu = NULL;
> - struct swait_queue_head *wq;
> unsigned long cpu_id;
> unsigned long context_id;
> phys_addr_t target_pc;
> @@ -90,7 +87,7 @@ static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu *source_vcpu)
> */
> if (!vcpu)
> return PSCI_RET_INVALID_PARAMS;
> - if (!vcpu->arch.power_off) {
> + if (vcpu->arch.mp_state != KVM_MP_STATE_STOPPED) {
Would it make sense to have a new helper to wrap that condition (also
used below)?
> if (kvm_psci_version(source_vcpu) != KVM_ARM_PSCI_0_1)
> return PSCI_RET_ALREADY_ON;
> else
> @@ -118,11 +115,7 @@ static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu *source_vcpu)
> * the general puspose registers are undefined upon CPU_ON.
> */
> vcpu_set_reg(vcpu, 0, context_id);
> - vcpu->arch.power_off = false;
> - smp_mb(); /* Make sure the above is visible */
> -
> - wq = kvm_arch_vcpu_wq(vcpu);
> - swake_up(wq);
> + kvm_arm_vcpu_power_on(vcpu);
What guarantees that the mp_state update is now visible to other agents?
Or was the smp_mb() here superfluous? This should anyhow deserve a
mention in the commit message.
>
> return PSCI_RET_SUCCESS;
> }
> @@ -156,7 +149,7 @@ static unsigned long kvm_psci_vcpu_affinity_info(struct kvm_vcpu *vcpu)
> mpidr = kvm_vcpu_get_mpidr_aff(tmp);
> if ((mpidr & target_affinity_mask) == target_affinity) {
> matching_cpus++;
> - if (!tmp->arch.power_off)
> + if (tmp->arch.mp_state != KVM_MP_STATE_STOPPED)
> return PSCI_0_2_AFFINITY_LEVEL_ON;
> }
> }
> @@ -182,7 +175,7 @@ static void kvm_prepare_system_event(struct kvm_vcpu *vcpu, u32 type)
> * re-initialized.
> */
> kvm_for_each_vcpu(i, tmp, vcpu->kvm)
> - tmp->arch.power_off = true;
> + tmp->arch.mp_state = KVM_MP_STATE_STOPPED;
Ah, shame we can't get rid of this with a power_off...
> kvm_make_all_cpus_request(vcpu->kvm, KVM_REQ_SLEEP);
>
> memset(&vcpu->run->system_event, 0, sizeof(vcpu->run->system_event));
>
Thanks,
M.
--
Jazz is not dead. It just smells funny...
* Re: [PATCH 3/5] KVM: arm/arm64: factor out common wfe/wfi emulation code
2017-09-29 11:30 ` [PATCH 3/5] KVM: arm/arm64: factor out common wfe/wfi emulation code Andrew Jones
@ 2017-10-05 8:36 ` Marc Zyngier
2017-10-14 19:13 ` Christoffer Dall
1 sibling, 0 replies; 26+ messages in thread
From: Marc Zyngier @ 2017-10-05 8:36 UTC (permalink / raw)
To: Andrew Jones, kvmarm; +Cc: cdall
On 29/09/17 12:30, Andrew Jones wrote:
> Before we add more common code to the wfi emulation, let's first
> factor out everything common.
>
> Signed-off-by: Andrew Jones <drjones@redhat.com>
> ---
> arch/arm/include/asm/kvm_host.h | 2 ++
> arch/arm/kvm/handle_exit.c | 14 +++++---------
> arch/arm64/include/asm/kvm_host.h | 2 ++
> arch/arm64/kvm/handle_exit.c | 14 +++++---------
> virt/kvm/arm/arm.c | 13 +++++++++++++
> virt/kvm/arm/psci.c | 15 +++++++--------
> 6 files changed, 34 insertions(+), 26 deletions(-)
>
> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
> index 85f3c20b9759..964320825372 100644
> --- a/arch/arm/include/asm/kvm_host.h
> +++ b/arch/arm/include/asm/kvm_host.h
> @@ -231,6 +231,8 @@ void kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu);
> void kvm_arm_vcpu_power_on(struct kvm_vcpu *vcpu);
> void kvm_arm_halt_guest(struct kvm *kvm);
> void kvm_arm_resume_guest(struct kvm *kvm);
> +void kvm_arm_emulate_wfe(struct kvm_vcpu *vcpu);
> +void kvm_arm_emulate_wfi(struct kvm_vcpu *vcpu);
>
> int kvm_arm_copy_coproc_indices(struct kvm_vcpu *vcpu, u64 __user *uindices);
> unsigned long kvm_arm_num_coproc_regs(struct kvm_vcpu *vcpu);
> diff --git a/arch/arm/kvm/handle_exit.c b/arch/arm/kvm/handle_exit.c
> index cf8bf6bf87c4..e40466577c87 100644
> --- a/arch/arm/kvm/handle_exit.c
> +++ b/arch/arm/kvm/handle_exit.c
> @@ -57,22 +57,18 @@ static int handle_smc(struct kvm_vcpu *vcpu, struct kvm_run *run)
> * @run: the kvm_run structure pointer
> *
> * WFE: Yield the CPU and come back to this vcpu when the scheduler
> - * decides to.
> - * WFI: Simply call kvm_vcpu_block(), which will halt execution of
> - * world-switches and schedule other host processes until there is an
> - * incoming IRQ or FIQ to the VM.
> + * decides to.
> + * WFI: Halt execution of world-switches and schedule other host
> + * processes until there is an incoming IRQ or FIQ to the VM.
> */
> static int kvm_handle_wfx(struct kvm_vcpu *vcpu, struct kvm_run *run)
> {
> if (kvm_vcpu_get_hsr(vcpu) & HSR_WFI_IS_WFE) {
> trace_kvm_wfx(*vcpu_pc(vcpu), true);
> - vcpu->stat.wfe_exit_stat++;
> - kvm_vcpu_on_spin(vcpu, vcpu_mode_priv(vcpu));
> + kvm_arm_emulate_wfe(vcpu);
> } else {
> trace_kvm_wfx(*vcpu_pc(vcpu), false);
> - vcpu->stat.wfi_exit_stat++;
> - kvm_vcpu_block(vcpu);
> - kvm_clear_request(KVM_REQ_UNHALT, vcpu);
> + kvm_arm_emulate_wfi(vcpu);
> }
>
> kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 25545a87de11..a774f6b30474 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -332,6 +332,8 @@ void kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu);
> void kvm_arm_vcpu_power_on(struct kvm_vcpu *vcpu);
> void kvm_arm_halt_guest(struct kvm *kvm);
> void kvm_arm_resume_guest(struct kvm *kvm);
> +void kvm_arm_emulate_wfe(struct kvm_vcpu *vcpu);
> +void kvm_arm_emulate_wfi(struct kvm_vcpu *vcpu);
>
> u64 __kvm_call_hyp(void *hypfn, ...);
> #define kvm_call_hyp(f, ...) __kvm_call_hyp(kvm_ksym_ref(f), ##__VA_ARGS__)
> diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
> index 7debb74843a0..7ba50a296d10 100644
> --- a/arch/arm64/kvm/handle_exit.c
> +++ b/arch/arm64/kvm/handle_exit.c
> @@ -74,22 +74,18 @@ static int handle_no_fpsimd(struct kvm_vcpu *vcpu, struct kvm_run *run)
> * @vcpu: the vcpu pointer
> *
> * WFE: Yield the CPU and come back to this vcpu when the scheduler
> - * decides to.
> - * WFI: Simply call kvm_vcpu_block(), which will halt execution of
> - * world-switches and schedule other host processes until there is an
> - * incoming IRQ or FIQ to the VM.
> + * decides to.
> + * WFI: Halt execution of world-switches and schedule other host
> + * processes until there is an incoming IRQ or FIQ to the VM.
> */
> static int kvm_handle_wfx(struct kvm_vcpu *vcpu, struct kvm_run *run)
> {
> if (kvm_vcpu_get_hsr(vcpu) & ESR_ELx_WFx_ISS_WFE) {
> trace_kvm_wfx_arm64(*vcpu_pc(vcpu), true);
> - vcpu->stat.wfe_exit_stat++;
> - kvm_vcpu_on_spin(vcpu, vcpu_mode_priv(vcpu));
> + kvm_arm_emulate_wfe(vcpu);
> } else {
> trace_kvm_wfx_arm64(*vcpu_pc(vcpu), false);
> - vcpu->stat.wfi_exit_stat++;
> - kvm_vcpu_block(vcpu);
> - kvm_clear_request(KVM_REQ_UNHALT, vcpu);
> + kvm_arm_emulate_wfi(vcpu);
> }
>
> kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> index 954e77608d29..220a3dbda76c 100644
> --- a/virt/kvm/arm/arm.c
> +++ b/virt/kvm/arm/arm.c
> @@ -573,6 +573,19 @@ void kvm_arm_resume_guest(struct kvm *kvm)
> }
> }
>
> +void kvm_arm_emulate_wfe(struct kvm_vcpu *vcpu)
> +{
> + vcpu->stat.wfe_exit_stat++;
> + kvm_vcpu_on_spin(vcpu, vcpu_mode_priv(vcpu));
> +}
> +
> +void kvm_arm_emulate_wfi(struct kvm_vcpu *vcpu)
> +{
> + vcpu->stat.wfi_exit_stat++;
> + kvm_vcpu_block(vcpu);
> + kvm_clear_request(KVM_REQ_UNHALT, vcpu);
> +}
> +
> static void vcpu_req_sleep(struct kvm_vcpu *vcpu)
> {
> struct swait_queue_head *wq = kvm_arch_vcpu_wq(vcpu);
> diff --git a/virt/kvm/arm/psci.c b/virt/kvm/arm/psci.c
> index a7bf152f1247..755c415304ea 100644
> --- a/virt/kvm/arm/psci.c
> +++ b/virt/kvm/arm/psci.c
> @@ -50,20 +50,19 @@ static unsigned long kvm_psci_vcpu_suspend(struct kvm_vcpu *vcpu)
> * This means for KVM the wakeup events are interrupts and
> * this is consistent with intended use of StateID as described
> * in section 5.4.1 of PSCI v0.2 specification (ARM DEN 0022A).
> - *
> - * Further, we also treat power-down request to be same as
> - * stand-by request as-per section 5.4.2 clause 3 of PSCI v0.2
> - * specification (ARM DEN 0022A). This means all suspend states
> - * for KVM will preserve the register state.
> */
> - kvm_vcpu_block(vcpu);
> - kvm_clear_request(KVM_REQ_UNHALT, vcpu);
> -
> + kvm_arm_emulate_wfi(vcpu);
> return PSCI_RET_SUCCESS;
> }
>
> static void kvm_psci_vcpu_off(struct kvm_vcpu *vcpu)
> {
> + /*
> + * We treat a power-off request the same as a stand-by request,
> > + * as-per section 5.4.2 clause 3 of PSCI v0.2 specification
> + * (ARM DEN 0022A). This means all suspend states for KVM will
> + * preserve the register state.
> + */
> kvm_arm_vcpu_power_off(vcpu);
> }
>
>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
M.
--
Jazz is not dead. It just smells funny...
* Re: [PATCH 4/5] KVM: arm/arm64: improve kvm_arch_vcpu_runnable
2017-09-29 11:30 ` [PATCH 4/5] KVM: arm/arm64: improve kvm_arch_vcpu_runnable Andrew Jones
@ 2017-10-05 9:19 ` Marc Zyngier
2017-10-14 19:13 ` Christoffer Dall
1 sibling, 0 replies; 26+ messages in thread
From: Marc Zyngier @ 2017-10-05 9:19 UTC (permalink / raw)
To: Andrew Jones, kvmarm; +Cc: cdall
On 29/09/17 12:30, Andrew Jones wrote:
> Conceptually, kvm_arch_vcpu_runnable() should be "not waiting,
> or waiting for interrupts and interrupts are pending",
>
> !waiting-uninterruptible &&
> (!waiting-for-interrupts || interrupts-pending)
>
> but the implementation was only
>
> !waiting-uninterruptible && interrupts-pending
>
> Thanks to the context of the two callers there wasn't an issue;
> however, to future-proof the function, this patch implements the
> conceptual condition by applying mp_state to track waiting-
> uninterruptible vs. waiting-for-interrupts.
>
> Signed-off-by: Andrew Jones <drjones@redhat.com>
> ---
> Documentation/virtual/kvm/api.txt | 10 ++++++----
> virt/kvm/arm/arm.c | 13 +++++++++----
> 2 files changed, 15 insertions(+), 8 deletions(-)
>
> diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
> index e63a35fafef0..7c0bb1ae10a2 100644
> --- a/Documentation/virtual/kvm/api.txt
> +++ b/Documentation/virtual/kvm/api.txt
> @@ -1071,7 +1071,7 @@ Possible values are:
> - KVM_MP_STATE_INIT_RECEIVED: the vcpu has received an INIT signal, and is
> now ready for a SIPI [x86]
> - KVM_MP_STATE_HALTED: the vcpu has executed a HLT instruction and
> - is waiting for an interrupt [x86]
> + is waiting for an interrupt [x86,arm/arm64]
> - KVM_MP_STATE_SIPI_RECEIVED: the vcpu has just received a SIPI (vector
> accessible via KVM_GET_VCPU_EVENTS) [x86]
> - KVM_MP_STATE_STOPPED: the vcpu is stopped [s390,arm/arm64]
> @@ -1087,8 +1087,9 @@ these architectures.
>
> For arm/arm64:
>
> -The only states that are valid are KVM_MP_STATE_STOPPED and
> -KVM_MP_STATE_RUNNABLE which reflect if the vcpu is paused or not.
> +The only states that are valid are KVM_MP_STATE_STOPPED, KVM_MP_STATE_HALTED,
> +and KVM_MP_STATE_RUNNABLE which reflect if the vcpu is powered-off, waiting
> +for interrupts, or powered-on and not waiting for interrupts.
>
> 4.39 KVM_SET_MP_STATE
>
> @@ -1108,7 +1109,8 @@ these architectures.
> For arm/arm64:
>
> The only states that are valid are KVM_MP_STATE_STOPPED and
> -KVM_MP_STATE_RUNNABLE which reflect if the vcpu should be paused or not.
> +KVM_MP_STATE_RUNNABLE which reflect if the vcpu should be
> +powered-off or not.
>
> 4.40 KVM_SET_IDENTITY_MAP_ADDR
>
> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> index 220a3dbda76c..5bc9b0d2fd0f 100644
> --- a/virt/kvm/arm/arm.c
> +++ b/virt/kvm/arm/arm.c
> @@ -414,13 +414,16 @@ static bool vcpu_should_sleep(struct kvm_vcpu *vcpu)
> * kvm_arch_vcpu_runnable - determine if the vcpu can be scheduled
> * @vcpu: The VCPU pointer
> *
> - * If the guest CPU is not waiting for interrupts or an interrupt line is
> - * asserted, the CPU is by definition runnable.
> + * If the VCPU is not waiting at all (including sleeping, which is waiting
> > + * uninterruptibly), or it's waiting for interrupts but interrupts are
> + * pending, then it is by definition runnable.
> */
> int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
> {
> - return (!!vcpu->arch.irq_lines || kvm_vgic_vcpu_pending_irq(vcpu))
> - && !vcpu_should_sleep(vcpu);
> + return !vcpu_should_sleep(vcpu) &&
> + (vcpu->arch.mp_state != KVM_MP_STATE_HALTED ||
> + (!!vcpu->arch.irq_lines ||
> + kvm_vgic_vcpu_pending_irq(vcpu)));
> }
>
> bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
> @@ -582,7 +585,9 @@ void kvm_arm_emulate_wfe(struct kvm_vcpu *vcpu)
> void kvm_arm_emulate_wfi(struct kvm_vcpu *vcpu)
> {
> vcpu->stat.wfi_exit_stat++;
> + vcpu->arch.mp_state = KVM_MP_STATE_HALTED;
> kvm_vcpu_block(vcpu);
> + vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
> kvm_clear_request(KVM_REQ_UNHALT, vcpu);
> }
>
>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
M.
--
Jazz is not dead. It just smells funny...
* Re: [PATCH 5/5] KVM: arm/arm64: kvm_arch_vcpu_runnable: don't miss injected irqs
2017-09-29 11:30 ` [PATCH 5/5] KVM: arm/arm64: kvm_arch_vcpu_runnable: don't miss injected irqs Andrew Jones
@ 2017-10-05 9:37 ` Marc Zyngier
2017-10-10 13:28 ` Andrew Jones
2017-10-14 19:13 ` Christoffer Dall
1 sibling, 1 reply; 26+ messages in thread
From: Marc Zyngier @ 2017-10-05 9:37 UTC (permalink / raw)
To: Andrew Jones, kvmarm; +Cc: cdall
On 29/09/17 12:30, Andrew Jones wrote:
> When the vPMU is in use, if a VCPU's perf event overflow handler
> were to fire after the VCPU started waiting, then the wake up
> done by the kvm_vcpu_kick() call in the handler would do nothing,
> as no "pmu overflow" state is checked in kvm_arch_vcpu_runnable().
> Fix this by checking the IRQ_PENDING VCPU request in runnable().
> Checking the request also sufficiently covers all the cases that
> kvm_vgic_vcpu_pending_irq() covers, so we can just replace that.
>
> Signed-off-by: Andrew Jones <drjones@redhat.com>
> ---
> virt/kvm/arm/arm.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> index 5bc9b0d2fd0f..725527f491e4 100644
> --- a/virt/kvm/arm/arm.c
> +++ b/virt/kvm/arm/arm.c
> @@ -423,7 +423,7 @@ int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
> return !vcpu_should_sleep(vcpu) &&
> (vcpu->arch.mp_state != KVM_MP_STATE_HALTED ||
> (!!vcpu->arch.irq_lines ||
> - kvm_vgic_vcpu_pending_irq(vcpu)));
> + kvm_test_request(KVM_REQ_IRQ_PENDING, vcpu)));
> }
>
> bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
On a side note, I just had a look at our usage of KVM_REQ_IRQ_PENDING,
and we always seem to have a make_request/kick pair (which definitely
makes sense). Maybe there is room for a bit of consolidation there too.
Thanks,
M.
--
Jazz is not dead. It just smells funny...
* Re: [PATCH 6/5] KVM: arm/arm64: make kvm_vgic_vcpu_pending_irq static
2017-10-02 8:31 ` [PATCH 6/5] KVM: arm/arm64: make kvm_vgic_vcpu_pending_irq static Andrew Jones
@ 2017-10-05 9:37 ` Marc Zyngier
0 siblings, 0 replies; 26+ messages in thread
From: Marc Zyngier @ 2017-10-05 9:37 UTC (permalink / raw)
To: Andrew Jones, kvmarm; +Cc: cdall
On 02/10/17 09:31, Andrew Jones wrote:
> Now that the only caller of kvm_vgic_vcpu_pending_irq() is in the
> vgic code, let's make it private in order to discourage its use
> outside, as checking the IRQ_PENDING VCPU request is likely the
> right thing for non-vgic code to do. Also, remove a vgic prototype
> that was mistakenly introduced with 0919e84c0fc1, and along for the
> ride ever since.
>
> Signed-off-by: Andrew Jones <drjones@redhat.com>
> ---
> include/kvm/arm_vgic.h | 3 ---
> virt/kvm/arm/vgic/vgic.c | 2 +-
> 2 files changed, 1 insertion(+), 4 deletions(-)
>
> diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h
> index 34dba516ef24..aade132d2a30 100644
> --- a/include/kvm/arm_vgic.h
> +++ b/include/kvm/arm_vgic.h
> @@ -311,8 +311,6 @@ int kvm_vgic_map_phys_irq(struct kvm_vcpu *vcpu, u32 virt_irq, u32 phys_irq);
> int kvm_vgic_unmap_phys_irq(struct kvm_vcpu *vcpu, unsigned int virt_irq);
> bool kvm_vgic_map_is_active(struct kvm_vcpu *vcpu, unsigned int virt_irq);
>
> -int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu);
> -
> void kvm_vgic_load(struct kvm_vcpu *vcpu);
> void kvm_vgic_put(struct kvm_vcpu *vcpu);
>
> @@ -322,7 +320,6 @@ void kvm_vgic_put(struct kvm_vcpu *vcpu);
> #define vgic_valid_spi(k, i) (((i) >= VGIC_NR_PRIVATE_IRQS) && \
> ((i) < (k)->arch.vgic.nr_spis + VGIC_NR_PRIVATE_IRQS))
>
> -bool kvm_vcpu_has_pending_irqs(struct kvm_vcpu *vcpu);
> void kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu);
> void kvm_vgic_flush_hwstate(struct kvm_vcpu *vcpu);
>
> diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
> index fed717e07938..72a16259beec 100644
> --- a/virt/kvm/arm/vgic/vgic.c
> +++ b/virt/kvm/arm/vgic/vgic.c
> @@ -730,7 +730,7 @@ void kvm_vgic_put(struct kvm_vcpu *vcpu)
> vgic_v3_put(vcpu);
> }
>
> -int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu)
> +static int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu)
> {
> struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
> struct vgic_irq *irq;
>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
M.
--
Jazz is not dead. It just smells funny...
* Re: [PATCH 2/5] KVM: arm/arm64: replace power_off with mp_state=STOPPED
2017-10-05 8:32 ` Marc Zyngier
@ 2017-10-10 13:26 ` Andrew Jones
0 siblings, 0 replies; 26+ messages in thread
From: Andrew Jones @ 2017-10-10 13:26 UTC (permalink / raw)
To: Marc Zyngier; +Cc: cdall, kvmarm
On Thu, Oct 05, 2017 at 09:32:02AM +0100, Marc Zyngier wrote:
> On 29/09/17 12:30, Andrew Jones wrote:
> > This prepares for more MP states to be used.
> >
> > Signed-off-by: Andrew Jones <drjones@redhat.com>
> > ---
> > arch/arm/include/asm/kvm_host.h | 6 ++++--
> > arch/arm64/include/asm/kvm_host.h | 6 ++++--
> > virt/kvm/arm/arm.c | 26 ++++++++++++++------------
> > virt/kvm/arm/psci.c | 17 +++++------------
> > 4 files changed, 27 insertions(+), 28 deletions(-)
> >
> > diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
> > index 4a879f6ff13b..85f3c20b9759 100644
> > --- a/arch/arm/include/asm/kvm_host.h
> > +++ b/arch/arm/include/asm/kvm_host.h
> > @@ -171,8 +171,8 @@ struct kvm_vcpu_arch {
> > * here.
> > */
> >
> > - /* vcpu power-off state */
> > - bool power_off;
> > + /* Current VCPU MP state */
> > + u32 mp_state;
>
> What is the rational for not using a struct kvm_mp_state directly here?
> It is the same thing (a u32), but I certainly find it more explicit than
> a bare u32.
No good rationale, only inspiration from reading x86 code. I agree using
the struct makes more sense, though, so I'll change it.
>
> >
> > /* Don't run the guest (internal implementation need) */
> > bool pause;
> > @@ -227,6 +227,8 @@ int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
> >
> > struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
> > struct kvm_vcpu __percpu **kvm_get_running_vcpus(void);
> > +void kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu);
> > +void kvm_arm_vcpu_power_on(struct kvm_vcpu *vcpu);
> > void kvm_arm_halt_guest(struct kvm *kvm);
> > void kvm_arm_resume_guest(struct kvm *kvm);
> >
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index e923b58606e2..25545a87de11 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -255,8 +255,8 @@ struct kvm_vcpu_arch {
> > u32 mdscr_el1;
> > } guest_debug_preserved;
> >
> > - /* vcpu power-off state */
> > - bool power_off;
> > + /* Current VCPU MP state */
> > + u32 mp_state;
> >
> > /* Don't run the guest (internal implementation need) */
> > bool pause;
> > @@ -328,6 +328,8 @@ int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
> >
> > struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
> > struct kvm_vcpu * __percpu *kvm_get_running_vcpus(void);
> > +void kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu);
> > +void kvm_arm_vcpu_power_on(struct kvm_vcpu *vcpu);
> > void kvm_arm_halt_guest(struct kvm *kvm);
> > void kvm_arm_resume_guest(struct kvm *kvm);
> >
> > diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> > index e4bec508ee5b..954e77608d29 100644
> > --- a/virt/kvm/arm/arm.c
> > +++ b/virt/kvm/arm/arm.c
> > @@ -368,21 +368,23 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
> > kvm_timer_vcpu_put(vcpu);
> > }
> >
> > -static void vcpu_power_off(struct kvm_vcpu *vcpu)
> > +void kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu)
> > {
> > - vcpu->arch.power_off = true;
> > + vcpu->arch.mp_state = KVM_MP_STATE_STOPPED;
> > kvm_make_request(KVM_REQ_SLEEP, vcpu);
> > kvm_vcpu_kick(vcpu);
> > }
> >
> > +void kvm_arm_vcpu_power_on(struct kvm_vcpu *vcpu)
> > +{
> > + vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
> > + kvm_vcpu_kick(vcpu);
> > +}
> > +
> > int kvm_arch_vcpu_ioctl_get_mpstate(struct kvm_vcpu *vcpu,
> > struct kvm_mp_state *mp_state)
> > {
> > - if (vcpu->arch.power_off)
> > - mp_state->mp_state = KVM_MP_STATE_STOPPED;
> > - else
> > - mp_state->mp_state = KVM_MP_STATE_RUNNABLE;
> > -
> > + mp_state->mp_state = vcpu->arch.mp_state;
> > return 0;
> > }
> >
> > @@ -391,10 +393,10 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
> > {
> > switch (mp_state->mp_state) {
> > case KVM_MP_STATE_RUNNABLE:
> > - vcpu->arch.power_off = false;
> > + kvm_arm_vcpu_power_on(vcpu);
> > break;
> > case KVM_MP_STATE_STOPPED:
> > - vcpu_power_off(vcpu);
> > + kvm_arm_vcpu_power_off(vcpu);
> > break;
> > default:
> > return -EINVAL;
> > @@ -405,7 +407,7 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
> >
> > static bool vcpu_should_sleep(struct kvm_vcpu *vcpu)
> > {
> > - return vcpu->arch.power_off || vcpu->arch.pause;
> > + return vcpu->arch.mp_state == KVM_MP_STATE_STOPPED || vcpu->arch.pause;
> > }
> >
> > /**
> > @@ -919,9 +921,9 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
> > * Handle the "start in power-off" case.
> > */
> > if (test_bit(KVM_ARM_VCPU_POWER_OFF, vcpu->arch.features))
> > - vcpu_power_off(vcpu);
> > + kvm_arm_vcpu_power_off(vcpu);
> > else
> > - vcpu->arch.power_off = false;
> > + kvm_arm_vcpu_power_on(vcpu);
> >
> > return 0;
> > }
> > diff --git a/virt/kvm/arm/psci.c b/virt/kvm/arm/psci.c
> > index f1e363bab5e8..a7bf152f1247 100644
> > --- a/virt/kvm/arm/psci.c
> > +++ b/virt/kvm/arm/psci.c
> > @@ -64,16 +64,13 @@ static unsigned long kvm_psci_vcpu_suspend(struct kvm_vcpu *vcpu)
> >
> > static void kvm_psci_vcpu_off(struct kvm_vcpu *vcpu)
> > {
> > - vcpu->arch.power_off = true;
> > - kvm_make_request(KVM_REQ_SLEEP, vcpu);
> > - kvm_vcpu_kick(vcpu);
> > + kvm_arm_vcpu_power_off(vcpu);
> > }
> >
> > static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu *source_vcpu)
> > {
> > struct kvm *kvm = source_vcpu->kvm;
> > struct kvm_vcpu *vcpu = NULL;
> > - struct swait_queue_head *wq;
> > unsigned long cpu_id;
> > unsigned long context_id;
> > phys_addr_t target_pc;
> > @@ -90,7 +87,7 @@ static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu *source_vcpu)
> > */
> > if (!vcpu)
> > return PSCI_RET_INVALID_PARAMS;
> > - if (!vcpu->arch.power_off) {
> > + if (vcpu->arch.mp_state != KVM_MP_STATE_STOPPED) {
>
> Would it make sense to have a new helper to wrap that condition (also
> used below)?
I'll see how it looks.
>
> > if (kvm_psci_version(source_vcpu) != KVM_ARM_PSCI_0_1)
> > return PSCI_RET_ALREADY_ON;
> > else
> > @@ -118,11 +115,7 @@ static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu *source_vcpu)
> > * the general puspose registers are undefined upon CPU_ON.
> > */
> > vcpu_set_reg(vcpu, 0, context_id);
> > - vcpu->arch.power_off = false;
> > - smp_mb(); /* Make sure the above is visible */
> > -
> > - wq = kvm_arch_vcpu_wq(vcpu);
> > - swake_up(wq);
> > + kvm_arm_vcpu_power_on(vcpu);
>
> What guarantees that the mp_state update is now visible to other agents?
> Or was the smp_mb() here superfluous? This should anyhow deserve a
> mention in the commit message.
The smp_mb() was superfluous, as this has the same pattern as other
uses of prepare_to_swait/swake_up. I'll update the commit message
with more detail.
>
> >
> > return PSCI_RET_SUCCESS;
> > }
> > @@ -156,7 +149,7 @@ static unsigned long kvm_psci_vcpu_affinity_info(struct kvm_vcpu *vcpu)
> > mpidr = kvm_vcpu_get_mpidr_aff(tmp);
> > if ((mpidr & target_affinity_mask) == target_affinity) {
> > matching_cpus++;
> > - if (!tmp->arch.power_off)
> > + if (tmp->arch.mp_state != KVM_MP_STATE_STOPPED)
> > return PSCI_0_2_AFFINITY_LEVEL_ON;
> > }
> > }
> > @@ -182,7 +175,7 @@ static void kvm_prepare_system_event(struct kvm_vcpu *vcpu, u32 type)
> > * re-initialized.
> > */
> > kvm_for_each_vcpu(i, tmp, vcpu->kvm)
> > - tmp->arch.power_off = true;
> > + tmp->arch.mp_state = KVM_MP_STATE_STOPPED;
>
> Ah, shame we can't get rid of this with a power_off...
>
> > kvm_make_all_cpus_request(vcpu->kvm, KVM_REQ_SLEEP);
> >
> > memset(&vcpu->run->system_event, 0, sizeof(vcpu->run->system_event));
> >
>
> Thanks,
>
> M.
> --
> Jazz is not dead. It just smells funny...
Thanks for the review!
drew
* Re: [PATCH 5/5] KVM: arm/arm64: kvm_arch_vcpu_runnable: don't miss injected irqs
2017-10-05 9:37 ` Marc Zyngier
@ 2017-10-10 13:28 ` Andrew Jones
0 siblings, 0 replies; 26+ messages in thread
From: Andrew Jones @ 2017-10-10 13:28 UTC (permalink / raw)
To: Marc Zyngier; +Cc: cdall, kvmarm
On Thu, Oct 05, 2017 at 10:37:32AM +0100, Marc Zyngier wrote:
> On 29/09/17 12:30, Andrew Jones wrote:
> > When the vPMU is in use, if a VCPU's perf event overflow handler
> > were to fire after the VCPU started waiting, then the wake up
> > done by the kvm_vcpu_kick() call in the handler would do nothing,
> > as no "pmu overflow" state is checked in kvm_arch_vcpu_runnable().
> > Fix this by checking the IRQ_PENDING VCPU request in runnable().
> > Checking the request also sufficiently covers all the cases that
> > kvm_vgic_vcpu_pending_irq() covers, so we can just replace that.
> >
> > Signed-off-by: Andrew Jones <drjones@redhat.com>
> > ---
> > virt/kvm/arm/arm.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> > index 5bc9b0d2fd0f..725527f491e4 100644
> > --- a/virt/kvm/arm/arm.c
> > +++ b/virt/kvm/arm/arm.c
> > @@ -423,7 +423,7 @@ int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
> > return !vcpu_should_sleep(vcpu) &&
> > (vcpu->arch.mp_state != KVM_MP_STATE_HALTED ||
> > (!!vcpu->arch.irq_lines ||
> > - kvm_vgic_vcpu_pending_irq(vcpu)));
> > + kvm_test_request(KVM_REQ_IRQ_PENDING, vcpu)));
> > }
> >
> > bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
> >
>
> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
>
> On a side note, I just had a look at our usage of KVM_REQ_IRQ_PENDING,
> and we always seem to have a make_request/kick pair (which definitely
> makes sense). Maybe there is room for a bit of consolidation there too.
I think Radim had a plan to do that with some more VCPU request cleanups.
Thanks,
drew
>
> Thanks,
>
> M.
> --
> Jazz is not dead. It just smells funny...
* Re: [PATCH 2/5] KVM: arm/arm64: replace power_off with mp_state=STOPPED
2017-09-29 11:30 ` [PATCH 2/5] KVM: arm/arm64: replace power_off with mp_state=STOPPED Andrew Jones
2017-10-05 8:32 ` Marc Zyngier
@ 2017-10-14 19:12 ` Christoffer Dall
2017-10-18 12:04 ` Andrew Jones
1 sibling, 1 reply; 26+ messages in thread
From: Christoffer Dall @ 2017-10-14 19:12 UTC (permalink / raw)
To: Andrew Jones; +Cc: marc.zyngier, kvmarm
On Fri, Sep 29, 2017 at 01:30:38PM +0200, Andrew Jones wrote:
> This prepares for more MP states to be used.
>
> Signed-off-by: Andrew Jones <drjones@redhat.com>
> ---
> arch/arm/include/asm/kvm_host.h | 6 ++++--
> arch/arm64/include/asm/kvm_host.h | 6 ++++--
> virt/kvm/arm/arm.c | 26 ++++++++++++++------------
> virt/kvm/arm/psci.c | 17 +++++------------
> 4 files changed, 27 insertions(+), 28 deletions(-)
>
> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
> index 4a879f6ff13b..85f3c20b9759 100644
> --- a/arch/arm/include/asm/kvm_host.h
> +++ b/arch/arm/include/asm/kvm_host.h
> @@ -171,8 +171,8 @@ struct kvm_vcpu_arch {
> * here.
> */
>
> - /* vcpu power-off state */
> - bool power_off;
> + /* Current VCPU MP state */
> + u32 mp_state;
>
> /* Don't run the guest (internal implementation need) */
> bool pause;
> @@ -227,6 +227,8 @@ int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
>
> struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
> struct kvm_vcpu __percpu **kvm_get_running_vcpus(void);
> +void kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu);
> +void kvm_arm_vcpu_power_on(struct kvm_vcpu *vcpu);
> void kvm_arm_halt_guest(struct kvm *kvm);
> void kvm_arm_resume_guest(struct kvm *kvm);
>
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index e923b58606e2..25545a87de11 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -255,8 +255,8 @@ struct kvm_vcpu_arch {
> u32 mdscr_el1;
> } guest_debug_preserved;
>
> - /* vcpu power-off state */
> - bool power_off;
> + /* Current VCPU MP state */
> + u32 mp_state;
>
> /* Don't run the guest (internal implementation need) */
> bool pause;
> @@ -328,6 +328,8 @@ int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
>
> struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
> struct kvm_vcpu * __percpu *kvm_get_running_vcpus(void);
> +void kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu);
> +void kvm_arm_vcpu_power_on(struct kvm_vcpu *vcpu);
> void kvm_arm_halt_guest(struct kvm *kvm);
> void kvm_arm_resume_guest(struct kvm *kvm);
>
> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> index e4bec508ee5b..954e77608d29 100644
> --- a/virt/kvm/arm/arm.c
> +++ b/virt/kvm/arm/arm.c
> @@ -368,21 +368,23 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
> kvm_timer_vcpu_put(vcpu);
> }
>
> -static void vcpu_power_off(struct kvm_vcpu *vcpu)
> +void kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu)
> {
> - vcpu->arch.power_off = true;
> + vcpu->arch.mp_state = KVM_MP_STATE_STOPPED;
> kvm_make_request(KVM_REQ_SLEEP, vcpu);
> kvm_vcpu_kick(vcpu);
> }
>
> +void kvm_arm_vcpu_power_on(struct kvm_vcpu *vcpu)
> +{
> + vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
> + kvm_vcpu_kick(vcpu);
Given your rant about request-less kicks, shouldn't this be a
kvm_vcpu_wake_up()?
> +}
> +
> int kvm_arch_vcpu_ioctl_get_mpstate(struct kvm_vcpu *vcpu,
> struct kvm_mp_state *mp_state)
> {
> - if (vcpu->arch.power_off)
> - mp_state->mp_state = KVM_MP_STATE_STOPPED;
> - else
> - mp_state->mp_state = KVM_MP_STATE_RUNNABLE;
> -
> + mp_state->mp_state = vcpu->arch.mp_state;
> return 0;
> }
>
> @@ -391,10 +393,10 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
> {
> switch (mp_state->mp_state) {
> case KVM_MP_STATE_RUNNABLE:
> - vcpu->arch.power_off = false;
> + kvm_arm_vcpu_power_on(vcpu);
> break;
> case KVM_MP_STATE_STOPPED:
> - vcpu_power_off(vcpu);
> + kvm_arm_vcpu_power_off(vcpu);
> break;
> default:
> return -EINVAL;
> @@ -405,7 +407,7 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
>
> static bool vcpu_should_sleep(struct kvm_vcpu *vcpu)
> {
> - return vcpu->arch.power_off || vcpu->arch.pause;
> + return vcpu->arch.mp_state == KVM_MP_STATE_STOPPED || vcpu->arch.pause;
> }
>
> /**
> @@ -919,9 +921,9 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
> * Handle the "start in power-off" case.
> */
> if (test_bit(KVM_ARM_VCPU_POWER_OFF, vcpu->arch.features))
> - vcpu_power_off(vcpu);
> + kvm_arm_vcpu_power_off(vcpu);
> else
> - vcpu->arch.power_off = false;
> + kvm_arm_vcpu_power_on(vcpu);
>
> return 0;
> }
> diff --git a/virt/kvm/arm/psci.c b/virt/kvm/arm/psci.c
> index f1e363bab5e8..a7bf152f1247 100644
> --- a/virt/kvm/arm/psci.c
> +++ b/virt/kvm/arm/psci.c
> @@ -64,16 +64,13 @@ static unsigned long kvm_psci_vcpu_suspend(struct kvm_vcpu *vcpu)
>
> static void kvm_psci_vcpu_off(struct kvm_vcpu *vcpu)
> {
> - vcpu->arch.power_off = true;
> - kvm_make_request(KVM_REQ_SLEEP, vcpu);
> - kvm_vcpu_kick(vcpu);
> + kvm_arm_vcpu_power_off(vcpu);
> }
>
> static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu *source_vcpu)
> {
> struct kvm *kvm = source_vcpu->kvm;
> struct kvm_vcpu *vcpu = NULL;
> - struct swait_queue_head *wq;
> unsigned long cpu_id;
> unsigned long context_id;
> phys_addr_t target_pc;
> @@ -90,7 +87,7 @@ static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu *source_vcpu)
> */
> if (!vcpu)
> return PSCI_RET_INVALID_PARAMS;
> - if (!vcpu->arch.power_off) {
> + if (vcpu->arch.mp_state != KVM_MP_STATE_STOPPED) {
> if (kvm_psci_version(source_vcpu) != KVM_ARM_PSCI_0_1)
> return PSCI_RET_ALREADY_ON;
> else
> @@ -118,11 +115,7 @@ static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu *source_vcpu)
> * the general puspose registers are undefined upon CPU_ON.
> */
> vcpu_set_reg(vcpu, 0, context_id);
> - vcpu->arch.power_off = false;
> - smp_mb(); /* Make sure the above is visible */
> -
> - wq = kvm_arch_vcpu_wq(vcpu);
> - swake_up(wq);
> + kvm_arm_vcpu_power_on(vcpu);
>
> return PSCI_RET_SUCCESS;
> }
> @@ -156,7 +149,7 @@ static unsigned long kvm_psci_vcpu_affinity_info(struct kvm_vcpu *vcpu)
> mpidr = kvm_vcpu_get_mpidr_aff(tmp);
> if ((mpidr & target_affinity_mask) == target_affinity) {
> matching_cpus++;
> - if (!tmp->arch.power_off)
> + if (tmp->arch.mp_state != KVM_MP_STATE_STOPPED)
> return PSCI_0_2_AFFINITY_LEVEL_ON;
> }
> }
> @@ -182,7 +175,7 @@ static void kvm_prepare_system_event(struct kvm_vcpu *vcpu, u32 type)
> * re-initialized.
> */
> kvm_for_each_vcpu(i, tmp, vcpu->kvm)
> - tmp->arch.power_off = true;
> + tmp->arch.mp_state = KVM_MP_STATE_STOPPED;
> kvm_make_all_cpus_request(vcpu->kvm, KVM_REQ_SLEEP);
>
> memset(&vcpu->run->system_event, 0, sizeof(vcpu->run->system_event));
> --
> 2.13.5
>
Otherwise looks good to me:
Reviewed-by: Christoffer Dall <cdall@linaro.org>
* Re: [PATCH 3/5] KVM: arm/arm64: factor out common wfe/wfi emulation code
2017-09-29 11:30 ` [PATCH 3/5] KVM: arm/arm64: factor out common wfe/wfi emulation code Andrew Jones
2017-10-05 8:36 ` Marc Zyngier
@ 2017-10-14 19:13 ` Christoffer Dall
2017-10-18 12:06 ` Andrew Jones
1 sibling, 1 reply; 26+ messages in thread
From: Christoffer Dall @ 2017-10-14 19:13 UTC (permalink / raw)
To: Andrew Jones; +Cc: marc.zyngier, kvmarm
On Fri, Sep 29, 2017 at 01:30:39PM +0200, Andrew Jones wrote:
> Before we add more common code to the wfi emulation, let's first
> factor out everything common.
>
> Signed-off-by: Andrew Jones <drjones@redhat.com>
> ---
> arch/arm/include/asm/kvm_host.h | 2 ++
> arch/arm/kvm/handle_exit.c | 14 +++++---------
> arch/arm64/include/asm/kvm_host.h | 2 ++
> arch/arm64/kvm/handle_exit.c | 14 +++++---------
> virt/kvm/arm/arm.c | 13 +++++++++++++
> virt/kvm/arm/psci.c | 15 +++++++--------
> 6 files changed, 34 insertions(+), 26 deletions(-)
>
> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
> index 85f3c20b9759..964320825372 100644
> --- a/arch/arm/include/asm/kvm_host.h
> +++ b/arch/arm/include/asm/kvm_host.h
> @@ -231,6 +231,8 @@ void kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu);
> void kvm_arm_vcpu_power_on(struct kvm_vcpu *vcpu);
> void kvm_arm_halt_guest(struct kvm *kvm);
> void kvm_arm_resume_guest(struct kvm *kvm);
> +void kvm_arm_emulate_wfe(struct kvm_vcpu *vcpu);
> +void kvm_arm_emulate_wfi(struct kvm_vcpu *vcpu);
>
> int kvm_arm_copy_coproc_indices(struct kvm_vcpu *vcpu, u64 __user *uindices);
> unsigned long kvm_arm_num_coproc_regs(struct kvm_vcpu *vcpu);
> diff --git a/arch/arm/kvm/handle_exit.c b/arch/arm/kvm/handle_exit.c
> index cf8bf6bf87c4..e40466577c87 100644
> --- a/arch/arm/kvm/handle_exit.c
> +++ b/arch/arm/kvm/handle_exit.c
> @@ -57,22 +57,18 @@ static int handle_smc(struct kvm_vcpu *vcpu, struct kvm_run *run)
> * @run: the kvm_run structure pointer
> *
> * WFE: Yield the CPU and come back to this vcpu when the scheduler
> - * decides to.
> - * WFI: Simply call kvm_vcpu_block(), which will halt execution of
> - * world-switches and schedule other host processes until there is an
> - * incoming IRQ or FIQ to the VM.
> + * decides to.
> + * WFI: Halt execution of world-switches and schedule other host
> + * processes until there is an incoming IRQ or FIQ to the VM.
s/VM/VCPU/
> */
> static int kvm_handle_wfx(struct kvm_vcpu *vcpu, struct kvm_run *run)
> {
> if (kvm_vcpu_get_hsr(vcpu) & HSR_WFI_IS_WFE) {
> trace_kvm_wfx(*vcpu_pc(vcpu), true);
> - vcpu->stat.wfe_exit_stat++;
> - kvm_vcpu_on_spin(vcpu, vcpu_mode_priv(vcpu));
> + kvm_arm_emulate_wfe(vcpu);
> } else {
> trace_kvm_wfx(*vcpu_pc(vcpu), false);
> - vcpu->stat.wfi_exit_stat++;
> - kvm_vcpu_block(vcpu);
> - kvm_clear_request(KVM_REQ_UNHALT, vcpu);
> + kvm_arm_emulate_wfi(vcpu);
> }
>
> kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 25545a87de11..a774f6b30474 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -332,6 +332,8 @@ void kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu);
> void kvm_arm_vcpu_power_on(struct kvm_vcpu *vcpu);
> void kvm_arm_halt_guest(struct kvm *kvm);
> void kvm_arm_resume_guest(struct kvm *kvm);
> +void kvm_arm_emulate_wfe(struct kvm_vcpu *vcpu);
> +void kvm_arm_emulate_wfi(struct kvm_vcpu *vcpu);
>
> u64 __kvm_call_hyp(void *hypfn, ...);
> #define kvm_call_hyp(f, ...) __kvm_call_hyp(kvm_ksym_ref(f), ##__VA_ARGS__)
> diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
> index 7debb74843a0..7ba50a296d10 100644
> --- a/arch/arm64/kvm/handle_exit.c
> +++ b/arch/arm64/kvm/handle_exit.c
> @@ -74,22 +74,18 @@ static int handle_no_fpsimd(struct kvm_vcpu *vcpu, struct kvm_run *run)
> * @vcpu: the vcpu pointer
> *
> * WFE: Yield the CPU and come back to this vcpu when the scheduler
> - * decides to.
> - * WFI: Simply call kvm_vcpu_block(), which will halt execution of
> - * world-switches and schedule other host processes until there is an
> - * incoming IRQ or FIQ to the VM.
> + * decides to.
> + * WFI: Halt execution of world-switches and schedule other host
> + * processes until there is an incoming IRQ or FIQ to the VM.
> */
> static int kvm_handle_wfx(struct kvm_vcpu *vcpu, struct kvm_run *run)
> {
> if (kvm_vcpu_get_hsr(vcpu) & ESR_ELx_WFx_ISS_WFE) {
> trace_kvm_wfx_arm64(*vcpu_pc(vcpu), true);
> - vcpu->stat.wfe_exit_stat++;
> - kvm_vcpu_on_spin(vcpu, vcpu_mode_priv(vcpu));
> + kvm_arm_emulate_wfe(vcpu);
> } else {
> trace_kvm_wfx_arm64(*vcpu_pc(vcpu), false);
> - vcpu->stat.wfi_exit_stat++;
> - kvm_vcpu_block(vcpu);
> - kvm_clear_request(KVM_REQ_UNHALT, vcpu);
> + kvm_arm_emulate_wfi(vcpu);
> }
>
> kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> index 954e77608d29..220a3dbda76c 100644
> --- a/virt/kvm/arm/arm.c
> +++ b/virt/kvm/arm/arm.c
> @@ -573,6 +573,19 @@ void kvm_arm_resume_guest(struct kvm *kvm)
> }
> }
>
> +void kvm_arm_emulate_wfe(struct kvm_vcpu *vcpu)
> +{
> + vcpu->stat.wfe_exit_stat++;
> + kvm_vcpu_on_spin(vcpu, vcpu_mode_priv(vcpu));
> +}
> +
> +void kvm_arm_emulate_wfi(struct kvm_vcpu *vcpu)
> +{
> + vcpu->stat.wfi_exit_stat++;
> + kvm_vcpu_block(vcpu);
> + kvm_clear_request(KVM_REQ_UNHALT, vcpu);
> +}
> +
> static void vcpu_req_sleep(struct kvm_vcpu *vcpu)
> {
> struct swait_queue_head *wq = kvm_arch_vcpu_wq(vcpu);
> diff --git a/virt/kvm/arm/psci.c b/virt/kvm/arm/psci.c
> index a7bf152f1247..755c415304ea 100644
> --- a/virt/kvm/arm/psci.c
> +++ b/virt/kvm/arm/psci.c
> @@ -50,20 +50,19 @@ static unsigned long kvm_psci_vcpu_suspend(struct kvm_vcpu *vcpu)
> * This means for KVM the wakeup events are interrupts and
> * this is consistent with intended use of StateID as described
> * in section 5.4.1 of PSCI v0.2 specification (ARM DEN 0022A).
> - *
> - * Further, we also treat power-down request to be same as
> - * stand-by request as-per section 5.4.2 clause 3 of PSCI v0.2
> - * specification (ARM DEN 0022A). This means all suspend states
> - * for KVM will preserve the register state.
> */
> - kvm_vcpu_block(vcpu);
> - kvm_clear_request(KVM_REQ_UNHALT, vcpu);
> -
> + kvm_arm_emulate_wfi(vcpu);
> return PSCI_RET_SUCCESS;
> }
>
> static void kvm_psci_vcpu_off(struct kvm_vcpu *vcpu)
> {
> + /*
> + * We treat a power-off request the same as a stand-by request,
> + * as-per section 5.4.2 clause 3 of PSCI v0.2* specification
> + * (ARM DEN 0022A). This means all suspend states for KVM will
> + * preserve the register state.
> + */
I'm not actually convinced that this part of the comment was about
power-off. I think what it was trying to say was simply that a suspend
operation should preserve the register state, and therefore the
commentary doesn't belong in this function. I agree that the comment is
potentially more confusing than it helps (I lost the exact version of
the document referred to in the original comment, so I can't be sure
whether there was something unclear in the original document or the
comment is just vaguely written). Therefore, I think you should either
(a) keep the comment as it is, (b) rewrite it to make sense, or (c) just
delete it.
> kvm_arm_vcpu_power_off(vcpu);
> }
>
> --
> 2.13.5
>
Thanks,
-Christoffer
* Re: [PATCH 4/5] KVM: arm/arm64: improve kvm_arch_vcpu_runnable
2017-09-29 11:30 ` [PATCH 4/5] KVM: arm/arm64: improve kvm_arch_vcpu_runnable Andrew Jones
2017-10-05 9:19 ` Marc Zyngier
@ 2017-10-14 19:13 ` Christoffer Dall
2017-10-18 12:09 ` Andrew Jones
1 sibling, 1 reply; 26+ messages in thread
From: Christoffer Dall @ 2017-10-14 19:13 UTC (permalink / raw)
To: Andrew Jones; +Cc: marc.zyngier, kvmarm
On Fri, Sep 29, 2017 at 01:30:40PM +0200, Andrew Jones wrote:
> Conceptually, kvm_arch_vcpu_runnable() should be "not waiting,
> or waiting for interrupts and interrupts are pending",
>
> !waiting-uninterruptable &&
> (!waiting-for-interrupts || interrupts-pending)
>
> but the implementation was only
>
> !waiting-uninterruptable && interrupts-pending
>
> Thanks to the context of the two callers there wasn't an issue;
> however, to future-proof the function, this patch implements the
> conceptual condition by applying mp_state to track waiting-
> uninterruptable vs. waiting-for-interrupts.
>
> Signed-off-by: Andrew Jones <drjones@redhat.com>
> ---
> Documentation/virtual/kvm/api.txt | 10 ++++++----
> virt/kvm/arm/arm.c | 13 +++++++++----
> 2 files changed, 15 insertions(+), 8 deletions(-)
>
> diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
> index e63a35fafef0..7c0bb1ae10a2 100644
> --- a/Documentation/virtual/kvm/api.txt
> +++ b/Documentation/virtual/kvm/api.txt
> @@ -1071,7 +1071,7 @@ Possible values are:
> - KVM_MP_STATE_INIT_RECEIVED: the vcpu has received an INIT signal, and is
> now ready for a SIPI [x86]
> - KVM_MP_STATE_HALTED: the vcpu has executed a HLT instruction and
> - is waiting for an interrupt [x86]
> + is waiting for an interrupt [x86,arm/arm64]
> - KVM_MP_STATE_SIPI_RECEIVED: the vcpu has just received a SIPI (vector
> accessible via KVM_GET_VCPU_EVENTS) [x86]
> - KVM_MP_STATE_STOPPED: the vcpu is stopped [s390,arm/arm64]
> @@ -1087,8 +1087,9 @@ these architectures.
>
> For arm/arm64:
>
> -The only states that are valid are KVM_MP_STATE_STOPPED and
> -KVM_MP_STATE_RUNNABLE which reflect if the vcpu is paused or not.
> +The only states that are valid are KVM_MP_STATE_STOPPED, KVM_MP_STATE_HALTED,
> +and KVM_MP_STATE_RUNNABLE which reflect if the vcpu is powered-off, waiting
> +for interrupts, or powered-on and not waiting for interrupts.
>
Is it valid to introduce another value after we've publicly declared
that it's not valid? What if userspace has an assert(state ==
KVM_MP_STATE_STOPPED || state == KVM_MP_STATE_RUNNABLE) ?
At least we should make sure that is not the case.
> 4.39 KVM_SET_MP_STATE
>
> @@ -1108,7 +1109,8 @@ these architectures.
> For arm/arm64:
>
> The only states that are valid are KVM_MP_STATE_STOPPED and
> -KVM_MP_STATE_RUNNABLE which reflect if the vcpu should be paused or not.
> +KVM_MP_STATE_RUNNABLE which reflect if the vcpu should be
> +powered-off or not.
>
It looks pretty dodgy to me that we can return an MP state which we
cannot set again. Feels like this could break migration.
> 4.40 KVM_SET_IDENTITY_MAP_ADDR
>
> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> index 220a3dbda76c..5bc9b0d2fd0f 100644
> --- a/virt/kvm/arm/arm.c
> +++ b/virt/kvm/arm/arm.c
> @@ -414,13 +414,16 @@ static bool vcpu_should_sleep(struct kvm_vcpu *vcpu)
> * kvm_arch_vcpu_runnable - determine if the vcpu can be scheduled
> * @vcpu: The VCPU pointer
> *
> - * If the guest CPU is not waiting for interrupts or an interrupt line is
> - * asserted, the CPU is by definition runnable.
> + * If the VCPU is not waiting at all (including sleeping, which is waiting
> + * uninterruptably), or it's waiting for interrupts but interrupts are
> + * pending, then it is by definition runnable.
> */
> int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
> {
> - return (!!vcpu->arch.irq_lines || kvm_vgic_vcpu_pending_irq(vcpu))
> - && !vcpu_should_sleep(vcpu);
> + return !vcpu_should_sleep(vcpu) &&
> + (vcpu->arch.mp_state != KVM_MP_STATE_HALTED ||
> + (!!vcpu->arch.irq_lines ||
> + kvm_vgic_vcpu_pending_irq(vcpu)));
This is hard to read. How about:
bool irq_pending = !!vcpu->arch.irq_lines ||
                   kvm_vgic_vcpu_pending_irq(vcpu);

if (vcpu_should_sleep(vcpu))
        return false;
else if (vcpu->arch.mp_state == KVM_MP_STATE_HALTED && !irq_pending)
        return false;

return true;
> }
>
> bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
> @@ -582,7 +585,9 @@ void kvm_arm_emulate_wfe(struct kvm_vcpu *vcpu)
> void kvm_arm_emulate_wfi(struct kvm_vcpu *vcpu)
> {
> vcpu->stat.wfi_exit_stat++;
> + vcpu->arch.mp_state = KVM_MP_STATE_HALTED;
> kvm_vcpu_block(vcpu);
> + vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
> kvm_clear_request(KVM_REQ_UNHALT, vcpu);
> }
>
> --
> 2.13.5
>
Thanks,
-Christoffer
* Re: [PATCH 5/5] KVM: arm/arm64: kvm_arch_vcpu_runnable: don't miss injected irqs
2017-09-29 11:30 ` [PATCH 5/5] KVM: arm/arm64: kvm_arch_vcpu_runnable: don't miss injected irqs Andrew Jones
2017-10-05 9:37 ` Marc Zyngier
@ 2017-10-14 19:13 ` Christoffer Dall
2017-10-18 12:13 ` Andrew Jones
1 sibling, 1 reply; 26+ messages in thread
From: Christoffer Dall @ 2017-10-14 19:13 UTC (permalink / raw)
To: Andrew Jones; +Cc: marc.zyngier, kvmarm
On Fri, Sep 29, 2017 at 01:30:41PM +0200, Andrew Jones wrote:
> When the vPMU is in use if a VCPU's perf event overflow handler
> were to fire after the VCPU started waiting, then the wake up
> done by the kvm_vcpu_kick() call in the handler would do nothing,
> as no "pmu overflow" state is checked in kvm_arch_vcpu_runnable().
> Fix this by checking the IRQ_PENDING VCPU request in runnable().
> Checking the request also sufficiently covers all the cases that
> kvm_vgic_vcpu_pending_irq() cover, so we can just replace that.
>
> Signed-off-by: Andrew Jones <drjones@redhat.com>
> ---
> virt/kvm/arm/arm.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> index 5bc9b0d2fd0f..725527f491e4 100644
> --- a/virt/kvm/arm/arm.c
> +++ b/virt/kvm/arm/arm.c
> @@ -423,7 +423,7 @@ int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
> return !vcpu_should_sleep(vcpu) &&
> (vcpu->arch.mp_state != KVM_MP_STATE_HALTED ||
> (!!vcpu->arch.irq_lines ||
> - kvm_vgic_vcpu_pending_irq(vcpu)));
> + kvm_test_request(KVM_REQ_IRQ_PENDING, vcpu)));
So what if a VCPU blocks, a device raises an IRQ, the VCPU loops around,
clears the KVM_REQ_IRQ_PENDING flag, enters the VM again, which does
another WFI (for fun), and you end up here again with a pending IRQ but
no KVM_REQ_IRQ_PENDING flag anymore. Doesn't this end up incorrectly
stalling the VCPU?
I don't think that a transient flag will work for a persistent binary
state here.
> }
>
> bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
> --
> 2.13.5
>
Thanks,
-Christoffer
* Re: [PATCH 2/5] KVM: arm/arm64: replace power_off with mp_state=STOPPED
2017-10-14 19:12 ` Christoffer Dall
@ 2017-10-18 12:04 ` Andrew Jones
0 siblings, 0 replies; 26+ messages in thread
From: Andrew Jones @ 2017-10-18 12:04 UTC (permalink / raw)
To: Christoffer Dall; +Cc: marc.zyngier, kvmarm
On Sat, Oct 14, 2017 at 09:12:54PM +0200, Christoffer Dall wrote:
> On Fri, Sep 29, 2017 at 01:30:38PM +0200, Andrew Jones wrote:
> > This prepares for more MP states to be used.
> >
> > Signed-off-by: Andrew Jones <drjones@redhat.com>
> > ---
> > arch/arm/include/asm/kvm_host.h | 6 ++++--
> > arch/arm64/include/asm/kvm_host.h | 6 ++++--
> > virt/kvm/arm/arm.c | 26 ++++++++++++++------------
> > virt/kvm/arm/psci.c | 17 +++++------------
> > 4 files changed, 27 insertions(+), 28 deletions(-)
> >
> > diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
> > index 4a879f6ff13b..85f3c20b9759 100644
> > --- a/arch/arm/include/asm/kvm_host.h
> > +++ b/arch/arm/include/asm/kvm_host.h
> > @@ -171,8 +171,8 @@ struct kvm_vcpu_arch {
> > * here.
> > */
> >
> > - /* vcpu power-off state */
> > - bool power_off;
> > + /* Current VCPU MP state */
> > + u32 mp_state;
> >
> > /* Don't run the guest (internal implementation need) */
> > bool pause;
> > @@ -227,6 +227,8 @@ int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
> >
> > struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
> > struct kvm_vcpu __percpu **kvm_get_running_vcpus(void);
> > +void kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu);
> > +void kvm_arm_vcpu_power_on(struct kvm_vcpu *vcpu);
> > void kvm_arm_halt_guest(struct kvm *kvm);
> > void kvm_arm_resume_guest(struct kvm *kvm);
> >
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index e923b58606e2..25545a87de11 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -255,8 +255,8 @@ struct kvm_vcpu_arch {
> > u32 mdscr_el1;
> > } guest_debug_preserved;
> >
> > - /* vcpu power-off state */
> > - bool power_off;
> > + /* Current VCPU MP state */
> > + u32 mp_state;
> >
> > /* Don't run the guest (internal implementation need) */
> > bool pause;
> > @@ -328,6 +328,8 @@ int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
> >
> > struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
> > struct kvm_vcpu * __percpu *kvm_get_running_vcpus(void);
> > +void kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu);
> > +void kvm_arm_vcpu_power_on(struct kvm_vcpu *vcpu);
> > void kvm_arm_halt_guest(struct kvm *kvm);
> > void kvm_arm_resume_guest(struct kvm *kvm);
> >
> > diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> > index e4bec508ee5b..954e77608d29 100644
> > --- a/virt/kvm/arm/arm.c
> > +++ b/virt/kvm/arm/arm.c
> > @@ -368,21 +368,23 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
> > kvm_timer_vcpu_put(vcpu);
> > }
> >
> > -static void vcpu_power_off(struct kvm_vcpu *vcpu)
> > +void kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu)
> > {
> > - vcpu->arch.power_off = true;
> > + vcpu->arch.mp_state = KVM_MP_STATE_STOPPED;
> > kvm_make_request(KVM_REQ_SLEEP, vcpu);
> > kvm_vcpu_kick(vcpu);
> > }
> >
> > +void kvm_arm_vcpu_power_on(struct kvm_vcpu *vcpu)
> > +{
> > + vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
> > + kvm_vcpu_kick(vcpu);
>
> Given your rant about request-less kicks, shouldn't this be a
> kvm_vcpu_wake_up() ?
Wow, people actually listen to my rants? :-) Yes, I agree that
kvm_vcpu_wake_up() is more appropriate. I'll change that.
>
> > +}
> > +
> > int kvm_arch_vcpu_ioctl_get_mpstate(struct kvm_vcpu *vcpu,
> > struct kvm_mp_state *mp_state)
> > {
> > - if (vcpu->arch.power_off)
> > - mp_state->mp_state = KVM_MP_STATE_STOPPED;
> > - else
> > - mp_state->mp_state = KVM_MP_STATE_RUNNABLE;
> > -
> > + mp_state->mp_state = vcpu->arch.mp_state;
> > return 0;
> > }
> >
> > @@ -391,10 +393,10 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
> > {
> > switch (mp_state->mp_state) {
> > case KVM_MP_STATE_RUNNABLE:
> > - vcpu->arch.power_off = false;
> > + kvm_arm_vcpu_power_on(vcpu);
> > break;
> > case KVM_MP_STATE_STOPPED:
> > - vcpu_power_off(vcpu);
> > + kvm_arm_vcpu_power_off(vcpu);
> > break;
> > default:
> > return -EINVAL;
> > @@ -405,7 +407,7 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
> >
> > static bool vcpu_should_sleep(struct kvm_vcpu *vcpu)
> > {
> > - return vcpu->arch.power_off || vcpu->arch.pause;
> > + return vcpu->arch.mp_state == KVM_MP_STATE_STOPPED || vcpu->arch.pause;
> > }
> >
> > /**
> > @@ -919,9 +921,9 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
> > * Handle the "start in power-off" case.
> > */
> > if (test_bit(KVM_ARM_VCPU_POWER_OFF, vcpu->arch.features))
> > - vcpu_power_off(vcpu);
> > + kvm_arm_vcpu_power_off(vcpu);
> > else
> > - vcpu->arch.power_off = false;
> > + kvm_arm_vcpu_power_on(vcpu);
> >
> > return 0;
> > }
> > diff --git a/virt/kvm/arm/psci.c b/virt/kvm/arm/psci.c
> > index f1e363bab5e8..a7bf152f1247 100644
> > --- a/virt/kvm/arm/psci.c
> > +++ b/virt/kvm/arm/psci.c
> > @@ -64,16 +64,13 @@ static unsigned long kvm_psci_vcpu_suspend(struct kvm_vcpu *vcpu)
> >
> > static void kvm_psci_vcpu_off(struct kvm_vcpu *vcpu)
> > {
> > - vcpu->arch.power_off = true;
> > - kvm_make_request(KVM_REQ_SLEEP, vcpu);
> > - kvm_vcpu_kick(vcpu);
> > + kvm_arm_vcpu_power_off(vcpu);
> > }
> >
> > static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu *source_vcpu)
> > {
> > struct kvm *kvm = source_vcpu->kvm;
> > struct kvm_vcpu *vcpu = NULL;
> > - struct swait_queue_head *wq;
> > unsigned long cpu_id;
> > unsigned long context_id;
> > phys_addr_t target_pc;
> > @@ -90,7 +87,7 @@ static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu *source_vcpu)
> > */
> > if (!vcpu)
> > return PSCI_RET_INVALID_PARAMS;
> > - if (!vcpu->arch.power_off) {
> > + if (vcpu->arch.mp_state != KVM_MP_STATE_STOPPED) {
> > if (kvm_psci_version(source_vcpu) != KVM_ARM_PSCI_0_1)
> > return PSCI_RET_ALREADY_ON;
> > else
> > @@ -118,11 +115,7 @@ static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu *source_vcpu)
> > * the general puspose registers are undefined upon CPU_ON.
> > */
> > vcpu_set_reg(vcpu, 0, context_id);
> > - vcpu->arch.power_off = false;
> > - smp_mb(); /* Make sure the above is visible */
> > -
> > - wq = kvm_arch_vcpu_wq(vcpu);
> > - swake_up(wq);
> > + kvm_arm_vcpu_power_on(vcpu);
> >
> > return PSCI_RET_SUCCESS;
> > }
> > @@ -156,7 +149,7 @@ static unsigned long kvm_psci_vcpu_affinity_info(struct kvm_vcpu *vcpu)
> > mpidr = kvm_vcpu_get_mpidr_aff(tmp);
> > if ((mpidr & target_affinity_mask) == target_affinity) {
> > matching_cpus++;
> > - if (!tmp->arch.power_off)
> > + if (tmp->arch.mp_state != KVM_MP_STATE_STOPPED)
> > return PSCI_0_2_AFFINITY_LEVEL_ON;
> > }
> > }
> > @@ -182,7 +175,7 @@ static void kvm_prepare_system_event(struct kvm_vcpu *vcpu, u32 type)
> > * re-initialized.
> > */
> > kvm_for_each_vcpu(i, tmp, vcpu->kvm)
> > - tmp->arch.power_off = true;
> > + tmp->arch.mp_state = KVM_MP_STATE_STOPPED;
> > kvm_make_all_cpus_request(vcpu->kvm, KVM_REQ_SLEEP);
> >
> > memset(&vcpu->run->system_event, 0, sizeof(vcpu->run->system_event));
> > --
> > 2.13.5
> >
>
> Otherwise looks good to me:
>
> Reviewed-by: Christoffer Dall <cdall@linaro.org>
Thanks,
drew
* Re: [PATCH 3/5] KVM: arm/arm64: factor out common wfe/wfi emulation code
2017-10-14 19:13 ` Christoffer Dall
@ 2017-10-18 12:06 ` Andrew Jones
0 siblings, 0 replies; 26+ messages in thread
From: Andrew Jones @ 2017-10-18 12:06 UTC (permalink / raw)
To: Christoffer Dall; +Cc: marc.zyngier, kvmarm
On Sat, Oct 14, 2017 at 09:13:08PM +0200, Christoffer Dall wrote:
> On Fri, Sep 29, 2017 at 01:30:39PM +0200, Andrew Jones wrote:
> > Before we add more common code to the wfi emulation, let's first
> > factor out everything common.
> >
> > Signed-off-by: Andrew Jones <drjones@redhat.com>
> > ---
> > arch/arm/include/asm/kvm_host.h | 2 ++
> > arch/arm/kvm/handle_exit.c | 14 +++++---------
> > arch/arm64/include/asm/kvm_host.h | 2 ++
> > arch/arm64/kvm/handle_exit.c | 14 +++++---------
> > virt/kvm/arm/arm.c | 13 +++++++++++++
> > virt/kvm/arm/psci.c | 15 +++++++--------
> > 6 files changed, 34 insertions(+), 26 deletions(-)
> >
> > diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
> > index 85f3c20b9759..964320825372 100644
> > --- a/arch/arm/include/asm/kvm_host.h
> > +++ b/arch/arm/include/asm/kvm_host.h
> > @@ -231,6 +231,8 @@ void kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu);
> > void kvm_arm_vcpu_power_on(struct kvm_vcpu *vcpu);
> > void kvm_arm_halt_guest(struct kvm *kvm);
> > void kvm_arm_resume_guest(struct kvm *kvm);
> > +void kvm_arm_emulate_wfe(struct kvm_vcpu *vcpu);
> > +void kvm_arm_emulate_wfi(struct kvm_vcpu *vcpu);
> >
> > int kvm_arm_copy_coproc_indices(struct kvm_vcpu *vcpu, u64 __user *uindices);
> > unsigned long kvm_arm_num_coproc_regs(struct kvm_vcpu *vcpu);
> > diff --git a/arch/arm/kvm/handle_exit.c b/arch/arm/kvm/handle_exit.c
> > index cf8bf6bf87c4..e40466577c87 100644
> > --- a/arch/arm/kvm/handle_exit.c
> > +++ b/arch/arm/kvm/handle_exit.c
> > @@ -57,22 +57,18 @@ static int handle_smc(struct kvm_vcpu *vcpu, struct kvm_run *run)
> > * @run: the kvm_run structure pointer
> > *
> > * WFE: Yield the CPU and come back to this vcpu when the scheduler
> > - * decides to.
> > - * WFI: Simply call kvm_vcpu_block(), which will halt execution of
> > - * world-switches and schedule other host processes until there is an
> > - * incoming IRQ or FIQ to the VM.
> > + * decides to.
> > + * WFI: Halt execution of world-switches and schedule other host
> > + * processes until there is an incoming IRQ or FIQ to the VM.
>
> s/VM/VCPU/
>
> > */
> > static int kvm_handle_wfx(struct kvm_vcpu *vcpu, struct kvm_run *run)
> > {
> > if (kvm_vcpu_get_hsr(vcpu) & HSR_WFI_IS_WFE) {
> > trace_kvm_wfx(*vcpu_pc(vcpu), true);
> > - vcpu->stat.wfe_exit_stat++;
> > - kvm_vcpu_on_spin(vcpu, vcpu_mode_priv(vcpu));
> > + kvm_arm_emulate_wfe(vcpu);
> > } else {
> > trace_kvm_wfx(*vcpu_pc(vcpu), false);
> > - vcpu->stat.wfi_exit_stat++;
> > - kvm_vcpu_block(vcpu);
> > - kvm_clear_request(KVM_REQ_UNHALT, vcpu);
> > + kvm_arm_emulate_wfi(vcpu);
> > }
> >
> > kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index 25545a87de11..a774f6b30474 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -332,6 +332,8 @@ void kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu);
> > void kvm_arm_vcpu_power_on(struct kvm_vcpu *vcpu);
> > void kvm_arm_halt_guest(struct kvm *kvm);
> > void kvm_arm_resume_guest(struct kvm *kvm);
> > +void kvm_arm_emulate_wfe(struct kvm_vcpu *vcpu);
> > +void kvm_arm_emulate_wfi(struct kvm_vcpu *vcpu);
> >
> > u64 __kvm_call_hyp(void *hypfn, ...);
> > #define kvm_call_hyp(f, ...) __kvm_call_hyp(kvm_ksym_ref(f), ##__VA_ARGS__)
> > diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
> > index 7debb74843a0..7ba50a296d10 100644
> > --- a/arch/arm64/kvm/handle_exit.c
> > +++ b/arch/arm64/kvm/handle_exit.c
> > @@ -74,22 +74,18 @@ static int handle_no_fpsimd(struct kvm_vcpu *vcpu, struct kvm_run *run)
> > * @vcpu: the vcpu pointer
> > *
> > * WFE: Yield the CPU and come back to this vcpu when the scheduler
> > - * decides to.
> > - * WFI: Simply call kvm_vcpu_block(), which will halt execution of
> > - * world-switches and schedule other host processes until there is an
> > - * incoming IRQ or FIQ to the VM.
> > + * decides to.
> > + * WFI: Halt execution of world-switches and schedule other host
> > + * processes until there is an incoming IRQ or FIQ to the VM.
> > */
> > static int kvm_handle_wfx(struct kvm_vcpu *vcpu, struct kvm_run *run)
> > {
> > if (kvm_vcpu_get_hsr(vcpu) & ESR_ELx_WFx_ISS_WFE) {
> > trace_kvm_wfx_arm64(*vcpu_pc(vcpu), true);
> > - vcpu->stat.wfe_exit_stat++;
> > - kvm_vcpu_on_spin(vcpu, vcpu_mode_priv(vcpu));
> > + kvm_arm_emulate_wfe(vcpu);
> > } else {
> > trace_kvm_wfx_arm64(*vcpu_pc(vcpu), false);
> > - vcpu->stat.wfi_exit_stat++;
> > - kvm_vcpu_block(vcpu);
> > - kvm_clear_request(KVM_REQ_UNHALT, vcpu);
> > + kvm_arm_emulate_wfi(vcpu);
> > }
> >
> > kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
> > diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> > index 954e77608d29..220a3dbda76c 100644
> > --- a/virt/kvm/arm/arm.c
> > +++ b/virt/kvm/arm/arm.c
> > @@ -573,6 +573,19 @@ void kvm_arm_resume_guest(struct kvm *kvm)
> > }
> > }
> >
> > +void kvm_arm_emulate_wfe(struct kvm_vcpu *vcpu)
> > +{
> > + vcpu->stat.wfe_exit_stat++;
> > + kvm_vcpu_on_spin(vcpu, vcpu_mode_priv(vcpu));
> > +}
> > +
> > +void kvm_arm_emulate_wfi(struct kvm_vcpu *vcpu)
> > +{
> > + vcpu->stat.wfi_exit_stat++;
> > + kvm_vcpu_block(vcpu);
> > + kvm_clear_request(KVM_REQ_UNHALT, vcpu);
> > +}
> > +
> > static void vcpu_req_sleep(struct kvm_vcpu *vcpu)
> > {
> > struct swait_queue_head *wq = kvm_arch_vcpu_wq(vcpu);
> > diff --git a/virt/kvm/arm/psci.c b/virt/kvm/arm/psci.c
> > index a7bf152f1247..755c415304ea 100644
> > --- a/virt/kvm/arm/psci.c
> > +++ b/virt/kvm/arm/psci.c
> > @@ -50,20 +50,19 @@ static unsigned long kvm_psci_vcpu_suspend(struct kvm_vcpu *vcpu)
> > * This means for KVM the wakeup events are interrupts and
> > * this is consistent with intended use of StateID as described
> > * in section 5.4.1 of PSCI v0.2 specification (ARM DEN 0022A).
> > - *
> > - * Further, we also treat power-down request to be same as
> > - * stand-by request as-per section 5.4.2 clause 3 of PSCI v0.2
> > - * specification (ARM DEN 0022A). This means all suspend states
> > - * for KVM will preserve the register state.
> > */
> > - kvm_vcpu_block(vcpu);
> > - kvm_clear_request(KVM_REQ_UNHALT, vcpu);
> > -
> > + kvm_arm_emulate_wfi(vcpu);
> > return PSCI_RET_SUCCESS;
> > }
> >
> > static void kvm_psci_vcpu_off(struct kvm_vcpu *vcpu)
> > {
> > + /*
> > + * We treat a power-off request the same as a stand-by request,
> > + * as-per section 5.4.2 clause 3 of PSCI v0.2* specification
> > + * (ARM DEN 0022A). This means all suspend states for KVM will
> > + * preserve the register state.
> > + */
>
> I'm not actually convinced that this part of the comment was about
> power-off. I think what it was trying to say was simply that a suspend
> operation should preserve the register state, and therefore the
> commentary doesn't belong in this function. I agree that the comment is
> potentially more confusing than it helps (I lost the exact version of
> the document referred to in the original comment, so I can't be sure
> whether there was something unclear in the original document or the
> comment is just vaguely written). Therefore, I think you should either
> (a) keep the comment as it is, (b) rewrite it to make sense, or (c) just
> delete it.
I can't find version A either. I read over C just now to try and think of
a way to improve the comment, but I wasn't overly enlightened, so I'll
probably just opt to remove it.
>
> > kvm_arm_vcpu_power_off(vcpu);
> > }
> >
> > --
> > 2.13.5
> >
>
> Thanks,
> -Christoffer
Thanks,
drew
* Re: [PATCH 4/5] KVM: arm/arm64: improve kvm_arch_vcpu_runnable
2017-10-14 19:13 ` Christoffer Dall
@ 2017-10-18 12:09 ` Andrew Jones
0 siblings, 0 replies; 26+ messages in thread
From: Andrew Jones @ 2017-10-18 12:09 UTC (permalink / raw)
To: Christoffer Dall; +Cc: marc.zyngier, kvmarm
On Sat, Oct 14, 2017 at 09:13:25PM +0200, Christoffer Dall wrote:
> On Fri, Sep 29, 2017 at 01:30:40PM +0200, Andrew Jones wrote:
> > Conceptually, kvm_arch_vcpu_runnable() should be "not waiting,
> > or waiting for interrupts and interrupts are pending",
> >
> > !waiting-uninterruptible &&
> > (!waiting-for-interrupts || interrupts-pending)
> >
> > but the implementation was only
> >
> > !waiting-uninterruptible && interrupts-pending
> >
> > Thanks to the context of the two callers there wasn't an issue;
> > however, to future-proof the function, this patch implements the
> > conceptual condition by using mp_state to track waiting-
> > uninterruptible vs. waiting-for-interrupts.
> >
> > Signed-off-by: Andrew Jones <drjones@redhat.com>
> > ---
> > Documentation/virtual/kvm/api.txt | 10 ++++++----
> > virt/kvm/arm/arm.c | 13 +++++++++----
> > 2 files changed, 15 insertions(+), 8 deletions(-)
> >
> > diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
> > index e63a35fafef0..7c0bb1ae10a2 100644
> > --- a/Documentation/virtual/kvm/api.txt
> > +++ b/Documentation/virtual/kvm/api.txt
> > @@ -1071,7 +1071,7 @@ Possible values are:
> > - KVM_MP_STATE_INIT_RECEIVED: the vcpu has received an INIT signal, and is
> > now ready for a SIPI [x86]
> > - KVM_MP_STATE_HALTED: the vcpu has executed a HLT instruction and
> > - is waiting for an interrupt [x86]
> > + is waiting for an interrupt [x86,arm/arm64]
> > - KVM_MP_STATE_SIPI_RECEIVED: the vcpu has just received a SIPI (vector
> > accessible via KVM_GET_VCPU_EVENTS) [x86]
> > - KVM_MP_STATE_STOPPED: the vcpu is stopped [s390,arm/arm64]
> > @@ -1087,8 +1087,9 @@ these architectures.
> >
> > For arm/arm64:
> >
> > -The only states that are valid are KVM_MP_STATE_STOPPED and
> > -KVM_MP_STATE_RUNNABLE which reflect if the vcpu is paused or not.
> > +The only states that are valid are KVM_MP_STATE_STOPPED, KVM_MP_STATE_HALTED,
> > +and KVM_MP_STATE_RUNNABLE which reflect if the vcpu is powered-off, waiting
> > +for interrupts, or powered-on and not waiting for interrupts.
> >
>
> Is it valid to introduce another value after we've publicly declared
> that it's not valid? What if userspace has an assert(state ==
> KVM_MP_STATE_STOPPED || state == KVM_MP_STATE_RUNNABLE) ?
>
> At least we should make sure that is not the case.
The thought had crossed my mind that we might need a new VCPU feature, but
it didn't seem necessary after checking QEMU code, and I didn't like the
idea of adding one. But... if we consider some theoretical KVM userspace,
then maybe we need it?
>
> > 4.39 KVM_SET_MP_STATE
> >
> > @@ -1108,7 +1109,8 @@ these architectures.
> > For arm/arm64:
> >
> > The only states that are valid are KVM_MP_STATE_STOPPED and
> > -KVM_MP_STATE_RUNNABLE which reflect if the vcpu should be paused or not.
> > +KVM_MP_STATE_RUNNABLE which reflect if the vcpu should be
> > +powered-off or not.
> >
>
> It looks pretty dodgy to me that we can return an MP state which we
> cannot set again. Feels like this could break migration.
While respinning for Marc's changes, I had some second thoughts about this
too. I was going to do some testing with QEMU, and maybe change both KVM
and QEMU so that whatever the state is, including KVM_MP_STATE_HALTED,
gets saved/restored.
I'll do that checking and experimenting.
>
> > 4.40 KVM_SET_IDENTITY_MAP_ADDR
> >
> > diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> > index 220a3dbda76c..5bc9b0d2fd0f 100644
> > --- a/virt/kvm/arm/arm.c
> > +++ b/virt/kvm/arm/arm.c
> > @@ -414,13 +414,16 @@ static bool vcpu_should_sleep(struct kvm_vcpu *vcpu)
> > * kvm_arch_vcpu_runnable - determine if the vcpu can be scheduled
> > * @vcpu: The VCPU pointer
> > *
> > - * If the guest CPU is not waiting for interrupts or an interrupt line is
> > - * asserted, the CPU is by definition runnable.
> > + * If the VCPU is not waiting at all (including sleeping, which is waiting
> > + * uninterruptibly), or it's waiting for interrupts but interrupts are
> > + * pending, then it is by definition runnable.
> > */
> > int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
> > {
> > - return (!!vcpu->arch.irq_lines || kvm_vgic_vcpu_pending_irq(vcpu))
> > - && !vcpu_should_sleep(vcpu);
> > + return !vcpu_should_sleep(vcpu) &&
> > + (vcpu->arch.mp_state != KVM_MP_STATE_HALTED ||
> > + (!!vcpu->arch.irq_lines ||
> > + kvm_vgic_vcpu_pending_irq(vcpu)));
>
> This is hard to read. How about:
>
> bool irq_pending = !!vcpu->arch.irq_lines ||
> kvm_vgic_vcpu_pending_irq(vcpu);
>
> if (vcpu_should_sleep(vcpu)
> return false;
> else if (vcpu->arch.mp_state == KVM_MP_STATE_HALTED && !irq_pending)
> return false;
> return true;
Using local variables does improve this, but I think I prefer
bool halted = vcpu->arch.mp_state == KVM_MP_STATE_HALTED;
bool irq_pending = !!vcpu->arch.irq_lines ||
kvm_vgic_vcpu_pending_irq(vcpu);
return !vcpu_should_sleep(vcpu) && (!halted || irq_pending);
>
> > }
> >
> > bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
> > @@ -582,7 +585,9 @@ void kvm_arm_emulate_wfe(struct kvm_vcpu *vcpu)
> > void kvm_arm_emulate_wfi(struct kvm_vcpu *vcpu)
> > {
> > vcpu->stat.wfi_exit_stat++;
> > + vcpu->arch.mp_state = KVM_MP_STATE_HALTED;
> > kvm_vcpu_block(vcpu);
> > + vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
> > kvm_clear_request(KVM_REQ_UNHALT, vcpu);
> > }
> >
> > --
> > 2.13.5
> >
>
> Thanks,
> -Christoffer
Thanks,
drew
* Re: [PATCH 5/5] KVM: arm/arm64: kvm_arch_vcpu_runnable: don't miss injected irqs
2017-10-14 19:13 ` Christoffer Dall
@ 2017-10-18 12:13 ` Andrew Jones
2017-10-18 13:18 ` Christoffer Dall
0 siblings, 1 reply; 26+ messages in thread
From: Andrew Jones @ 2017-10-18 12:13 UTC (permalink / raw)
To: Christoffer Dall; +Cc: marc.zyngier, kvmarm
On Sat, Oct 14, 2017 at 09:13:35PM +0200, Christoffer Dall wrote:
> On Fri, Sep 29, 2017 at 01:30:41PM +0200, Andrew Jones wrote:
> > When the vPMU is in use, if a VCPU's perf event overflow handler
> > were to fire after the VCPU started waiting, then the wake up
> > done by the kvm_vcpu_kick() call in the handler would do nothing,
> > as no "pmu overflow" state is checked in kvm_arch_vcpu_runnable().
> > Fix this by checking the IRQ_PENDING VCPU request in runnable().
> > Checking the request also sufficiently covers all the cases that
> > kvm_vgic_vcpu_pending_irq() covers, so we can just replace that.
> >
> > Signed-off-by: Andrew Jones <drjones@redhat.com>
> > ---
> > virt/kvm/arm/arm.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> > index 5bc9b0d2fd0f..725527f491e4 100644
> > --- a/virt/kvm/arm/arm.c
> > +++ b/virt/kvm/arm/arm.c
> > @@ -423,7 +423,7 @@ int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
> > return !vcpu_should_sleep(vcpu) &&
> > (vcpu->arch.mp_state != KVM_MP_STATE_HALTED ||
> > (!!vcpu->arch.irq_lines ||
> > - kvm_vgic_vcpu_pending_irq(vcpu)));
> > + kvm_test_request(KVM_REQ_IRQ_PENDING, vcpu)));
>
> So what if a VCPU blocks, a device raises an IRQ, the VCPU loops around,
> clears the KVM_REQ_IRQ_PENDING flag, enters the VM again, which does
> another WFI (for fun), and you end up here again with a pending IRQ but
> no KVM_REQ_IRQ_PENDING flag anymore. Doesn't this end up incorrectly
> stalling the VCPU?
Hmm, I see what you mean. I'm sorry I missed that.
>
> I don't think that a transient flag will work for a persistent binary
> state here.
I think we can fix it by adding
if (kvm_vgic_vcpu_pending_irq(vcpu))
kvm_make_request(KVM_REQ_IRQ_PENDING, vcpu);
to kvm_vgic_sync_hwstate(), if the additional overhead would be
acceptable. Otherwise we need to find some other way to ensure
vPMU irqs unblock the VCPU.
>
>
> > }
> >
> > bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
> > --
> > 2.13.5
> >
>
>
> Thanks,
> -Christoffer
Thanks,
drew
* Re: [PATCH 5/5] KVM: arm/arm64: kvm_arch_vcpu_runnable: don't miss injected irqs
2017-10-18 12:13 ` Andrew Jones
@ 2017-10-18 13:18 ` Christoffer Dall
2017-10-18 13:55 ` Andrew Jones
0 siblings, 1 reply; 26+ messages in thread
From: Christoffer Dall @ 2017-10-18 13:18 UTC (permalink / raw)
To: Andrew Jones; +Cc: marc.zyngier, kvmarm
On Wed, Oct 18, 2017 at 02:13:05PM +0200, Andrew Jones wrote:
> On Sat, Oct 14, 2017 at 09:13:35PM +0200, Christoffer Dall wrote:
> > On Fri, Sep 29, 2017 at 01:30:41PM +0200, Andrew Jones wrote:
> > > When the vPMU is in use if a VCPU's perf event overflow handler
> > > were to fire after the VCPU started waiting, then the wake up
> > > done by the kvm_vcpu_kick() call in the handler would do nothing,
> > > as no "pmu overflow" state is checked in kvm_arch_vcpu_runnable().
> > > Fix this by checking the IRQ_PENDING VCPU request in runnable().
> > > Checking the request also sufficiently covers all the cases that
> > > kvm_vgic_vcpu_pending_irq() cover, so we can just replace that.
> > >
> > > Signed-off-by: Andrew Jones <drjones@redhat.com>
> > > ---
> > > virt/kvm/arm/arm.c | 2 +-
> > > 1 file changed, 1 insertion(+), 1 deletion(-)
> > >
> > > diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> > > index 5bc9b0d2fd0f..725527f491e4 100644
> > > --- a/virt/kvm/arm/arm.c
> > > +++ b/virt/kvm/arm/arm.c
> > > @@ -423,7 +423,7 @@ int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
> > > return !vcpu_should_sleep(vcpu) &&
> > > (vcpu->arch.mp_state != KVM_MP_STATE_HALTED ||
> > > (!!vcpu->arch.irq_lines ||
> > > - kvm_vgic_vcpu_pending_irq(vcpu)));
> > > + kvm_test_request(KVM_REQ_IRQ_PENDING, vcpu)));
> >
> > So what if a VCPU blocks, a device raises an IRQ, the VCPU loops around,
> > clears the KVM_REQ_IRQ_PENDING flag, enters the VM again, which does
> > another WFI (for fun), and you end up here again with a pending IRQ but
> > no KVM_REQ_IRQ_PENDING flag anymore. Doesn't this end up incorrectly
> > stalling the VCPU?
>
> Hmm, I see what you mean. I'm sorry I missed that.
>
> >
> > I don't think that a transient flag will work for a persistent binary
> > state here.
>
> I think we can fix it by adding
>
> if (kvm_vgic_vcpu_pending_irq(vcpu))
> kvm_make_request(KVM_REQ_IRQ_PENDING, vcpu);
>
> to kvm_vgic_sync_hwstate(), if the additional overhead would be
> acceptable. Otherwise we need to find some other way to ensure
> vPMU irqs unblock the VCPU.
>
I think you just need to check
kvm_vgic_vcpu_pending_irq() ||
kvm_test_request(KVM_REQ_IRQ_PENDING, vcpu)
Wouldn't that fix it?
Thanks,
-Christoffer
* Re: [PATCH 5/5] KVM: arm/arm64: kvm_arch_vcpu_runnable: don't miss injected irqs
2017-10-18 13:18 ` Christoffer Dall
@ 2017-10-18 13:55 ` Andrew Jones
2017-10-18 14:14 ` Christoffer Dall
0 siblings, 1 reply; 26+ messages in thread
From: Andrew Jones @ 2017-10-18 13:55 UTC (permalink / raw)
To: Christoffer Dall; +Cc: marc.zyngier, kvmarm
On Wed, Oct 18, 2017 at 03:18:43PM +0200, Christoffer Dall wrote:
> On Wed, Oct 18, 2017 at 02:13:05PM +0200, Andrew Jones wrote:
> > On Sat, Oct 14, 2017 at 09:13:35PM +0200, Christoffer Dall wrote:
> > > On Fri, Sep 29, 2017 at 01:30:41PM +0200, Andrew Jones wrote:
> > > > When the vPMU is in use if a VCPU's perf event overflow handler
> > > > were to fire after the VCPU started waiting, then the wake up
> > > > done by the kvm_vcpu_kick() call in the handler would do nothing,
> > > > as no "pmu overflow" state is checked in kvm_arch_vcpu_runnable().
> > > > Fix this by checking the IRQ_PENDING VCPU request in runnable().
> > > > Checking the request also sufficiently covers all the cases that
> > > > kvm_vgic_vcpu_pending_irq() cover, so we can just replace that.
> > > >
> > > > Signed-off-by: Andrew Jones <drjones@redhat.com>
> > > > ---
> > > > virt/kvm/arm/arm.c | 2 +-
> > > > 1 file changed, 1 insertion(+), 1 deletion(-)
> > > >
> > > > diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> > > > index 5bc9b0d2fd0f..725527f491e4 100644
> > > > --- a/virt/kvm/arm/arm.c
> > > > +++ b/virt/kvm/arm/arm.c
> > > > @@ -423,7 +423,7 @@ int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
> > > > return !vcpu_should_sleep(vcpu) &&
> > > > (vcpu->arch.mp_state != KVM_MP_STATE_HALTED ||
> > > > (!!vcpu->arch.irq_lines ||
> > > > - kvm_vgic_vcpu_pending_irq(vcpu)));
> > > > + kvm_test_request(KVM_REQ_IRQ_PENDING, vcpu)));
> > >
> > > So what if a VCPU blocks, a device raises an IRQ, the VCPU loops around,
> > > clears the KVM_REQ_IRQ_PENDING flag, enters the VM again, which does
> > > another WFI (for fun), and you end up here again with a pending IRQ but
> > > no KVM_REQ_IRQ_PENDING flag anymore. Doesn't this end up incorrectly
> > > stalling the VCPU?
> >
> > Hmm, I see what you mean. I'm sorry I missed that.
> >
> > >
> > > I don't think that a transient flag will work for a persistent binary
> > > state here.
> >
> > I think we can fix it by adding
> >
> > if (kvm_vgic_vcpu_pending_irq(vcpu))
> > kvm_make_request(KVM_REQ_IRQ_PENDING, vcpu);
> >
> > to kvm_vgic_sync_hwstate(), if the additional overhead would be
> > acceptable. Otherwise we need to find some other way to ensure
> > vPMU irqs unblock the VCPU.
> >
>
> I think you just need to check
> kvm_vgic_vcpu_pending_irq() ||
> kvm_test_request(KVM_REQ_IRQ_PENDING, vcpu)
>
> Wouldn't that fix it?
It would. I was just hoping to eliminate the lock-taking
kvm_vgic_vcpu_pending_irq() call from kvm_arch_vcpu_runnable()
though, as kvm_arch_vcpu_runnable() may be called repeatedly
during halt polling.
Thanks,
drew
* Re: [PATCH 5/5] KVM: arm/arm64: kvm_arch_vcpu_runnable: don't miss injected irqs
2017-10-18 13:55 ` Andrew Jones
@ 2017-10-18 14:14 ` Christoffer Dall
0 siblings, 0 replies; 26+ messages in thread
From: Christoffer Dall @ 2017-10-18 14:14 UTC (permalink / raw)
To: Andrew Jones; +Cc: marc.zyngier@arm.com, kvmarm@lists.cs.columbia.edu
On Wed, Oct 18, 2017 at 3:55 PM, Andrew Jones <drjones@redhat.com> wrote:
> On Wed, Oct 18, 2017 at 03:18:43PM +0200, Christoffer Dall wrote:
>> On Wed, Oct 18, 2017 at 02:13:05PM +0200, Andrew Jones wrote:
>> > On Sat, Oct 14, 2017 at 09:13:35PM +0200, Christoffer Dall wrote:
>> > > On Fri, Sep 29, 2017 at 01:30:41PM +0200, Andrew Jones wrote:
>> > > > When the vPMU is in use if a VCPU's perf event overflow handler
>> > > > were to fire after the VCPU started waiting, then the wake up
>> > > > done by the kvm_vcpu_kick() call in the handler would do nothing,
>> > > > as no "pmu overflow" state is checked in kvm_arch_vcpu_runnable().
>> > > > Fix this by checking the IRQ_PENDING VCPU request in runnable().
>> > > > Checking the request also sufficiently covers all the cases that
>> > > > kvm_vgic_vcpu_pending_irq() cover, so we can just replace that.
>> > > >
>> > > > Signed-off-by: Andrew Jones <drjones@redhat.com>
>> > > > ---
>> > > > virt/kvm/arm/arm.c | 2 +-
>> > > > 1 file changed, 1 insertion(+), 1 deletion(-)
>> > > >
>> > > > diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
>> > > > index 5bc9b0d2fd0f..725527f491e4 100644
>> > > > --- a/virt/kvm/arm/arm.c
>> > > > +++ b/virt/kvm/arm/arm.c
>> > > > @@ -423,7 +423,7 @@ int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
>> > > > return !vcpu_should_sleep(vcpu) &&
>> > > > (vcpu->arch.mp_state != KVM_MP_STATE_HALTED ||
>> > > > (!!vcpu->arch.irq_lines ||
>> > > > - kvm_vgic_vcpu_pending_irq(vcpu)));
>> > > > + kvm_test_request(KVM_REQ_IRQ_PENDING, vcpu)));
>> > >
>> > > So what if a VCPU blocks, a device raises an IRQ, the VCPU loops around,
>> > > clears the KVM_REQ_IRQ_PENDING flag, enters the VM again, which does
>> > > another WFI (for fun), and you end up here again with a pending IRQ but
>> > > no KVM_REQ_IRQ_PENDING flag anymore. Doesn't this end up incorrectly
>> > > stalling the VCPU?
>> >
>> > Hmm, I see what you mean. I'm sorry I missed that.
>> >
>> > >
>> > > I don't think that a transient flag will work for a persistent binary
>> > > state here.
>> >
>> > I think we can fix it by adding
>> >
>> > if (kvm_vgic_vcpu_pending_irq(vcpu))
>> > kvm_make_request(KVM_REQ_IRQ_PENDING, vcpu);
>> >
>> > to kvm_vgic_sync_hwstate(), if the additional overhead would be
>> > acceptable. Otherwise we need to find some other way to ensure
>> > vPMU irqs unblock the VCPU.
>> >
>>
>> I think you just need to check
>> kvm_vgic_vcpu_pending_irq() ||
>> kvm_test_request(KVM_REQ_IRQ_PENDING, vcpu)
>>
>> Wouldn't that fix it?
>
> It would. I was just hoping to eliminate the lock-taking
> kvm_vgic_vcpu_pending_irq() call from kvm_arch_vcpu_runnable()
> though, as kvm_arch_vcpu_runnable() may be called repeatedly
> during halt polling.
>
Could we make it so that the KVM_REQ_IRQ_PENDING request only gets
cleared when there really are no more pending IRQs for the VCPU?
Getting rid of the lock would be potentially good from a latency point
of view, but then again, it's slow to wake up from WFI anyway, and the
lock is per-VCPU, so it's not likely to be contended, is it?
Thanks,
-Christoffer
end of thread, other threads:[~2017-10-18 14:13 UTC | newest]
Thread overview: 26+ messages
2017-09-29 11:30 [PATCH 0/5] kvm_arch_vcpu_runnable related improvements Andrew Jones
2017-09-29 11:30 ` [PATCH 1/5] KVM: arm/arm64: tidy 'should sleep' conditions Andrew Jones
2017-10-05 8:13 ` Marc Zyngier
2017-09-29 11:30 ` [PATCH 2/5] KVM: arm/arm64: replace power_off with mp_state=STOPPED Andrew Jones
2017-10-05 8:32 ` Marc Zyngier
2017-10-10 13:26 ` Andrew Jones
2017-10-14 19:12 ` Christoffer Dall
2017-10-18 12:04 ` Andrew Jones
2017-09-29 11:30 ` [PATCH 3/5] KVM: arm/arm64: factor out common wfe/wfi emulation code Andrew Jones
2017-10-05 8:36 ` Marc Zyngier
2017-10-14 19:13 ` Christoffer Dall
2017-10-18 12:06 ` Andrew Jones
2017-09-29 11:30 ` [PATCH 4/5] KVM: arm/arm64: improve kvm_arch_vcpu_runnable Andrew Jones
2017-10-05 9:19 ` Marc Zyngier
2017-10-14 19:13 ` Christoffer Dall
2017-10-18 12:09 ` Andrew Jones
2017-09-29 11:30 ` [PATCH 5/5] KVM: arm/arm64: kvm_arch_vcpu_runnable: don't miss injected irqs Andrew Jones
2017-10-05 9:37 ` Marc Zyngier
2017-10-10 13:28 ` Andrew Jones
2017-10-14 19:13 ` Christoffer Dall
2017-10-18 12:13 ` Andrew Jones
2017-10-18 13:18 ` Christoffer Dall
2017-10-18 13:55 ` Andrew Jones
2017-10-18 14:14 ` Christoffer Dall
2017-10-02 8:31 ` [PATCH 6/5] KVM: arm/arm64: make kvm_vgic_vcpu_pending_irq static Andrew Jones
2017-10-05 9:37 ` Marc Zyngier