From: Marc Zyngier <maz@kernel.org>
To: "Paolo Bonzini" <pbonzini@redhat.com>,
"Radim Krčmář" <rkrcmar@redhat.com>
Cc: Mark Rutland <mark.rutland@arm.com>,
Andrew Jones <drjones@redhat.com>,
kvm@vger.kernel.org, Eric Auger <eric.auger@redhat.com>,
Heinrich Schuchardt <xypron.glpk@gmx.de>,
Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
Suzuki K Poulose <suzuki.poulose@arm.com>,
Christoffer Dall <christoffer.dall@arm.com>,
Steven Price <steven.price@arm.com>,
Christian Borntraeger <borntraeger@de.ibm.com>,
Julien Grall <julien.grall@arm.com>,
Alexander Graf <graf@amazon.com>,
linux-arm-kernel@lists.infradead.org,
Zenghui Yu <yuzenghui@huawei.com>,
James Morse <james.morse@arm.com>,
Thomas Gleixner <tglx@linutronix.de>,
Will Deacon <will@kernel.org>,
kvmarm@lists.cs.columbia.edu,
Julien Thierry <julien.thierry.kdev@gmail.com>
Subject: [PATCH 07/22] KVM: arm64: Support stolen time reporting via shared structure
Date: Wed, 20 Nov 2019 16:42:21 +0000
Message-ID: <20191120164236.29359-8-maz@kernel.org>
In-Reply-To: <20191120164236.29359-1-maz@kernel.org>
From: Steven Price <steven.price@arm.com>
Implement the service call for configuring a shared structure between a
VCPU and the hypervisor in which the hypervisor can write the time
stolen from the VCPU's execution time by other tasks on the host.
User space allocates memory which is placed at an IPA also chosen by user
space. The hypervisor then updates the shared structure using
kvm_put_guest() to ensure single-copy atomicity of the 64-bit value
reporting the stolen time in nanoseconds.
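For reference, the shared record written via kvm_put_guest() follows the
layout below; this is only a sketch of the 64-byte structure defined in
asm/pvclock-abi.h by an earlier patch in this series (this patch updates
only the stolen_time field):

  struct pvclock_vcpu_stolen_time {
          __le32 revision;
          __le32 attributes;
          __le64 stolen_time;     /* stolen time in nanoseconds, little-endian */
          u8 padding[48];         /* pad the record to 64 bytes */
  } __packed;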
Whenever stolen time is enabled by the guest, the stolen time counter is
reset.
The stolen time itself is retrieved from the sched_info structure
maintained by the Linux scheduler code. We enable SCHEDSTATS when
selecting KVM in Kconfig to ensure this value is meaningful.
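For illustration only (not part of this patch), a guest could discover and
read its per-VCPU record roughly as sketched below. This assumes the
arm_smccc_1_1_invoke() wrapper and the ARM_SMCCC_HV_PV_TIME_ST function ID
provided elsewhere in this series; the real guest-side support is added by
a later patch.

  #include <linux/arm-smccc.h>
  #include <linux/io.h>
  #include <asm/pvclock-abi.h>

  static u64 pv_read_stolen_time(void)
  {
          struct pvclock_vcpu_stolen_time *rec;
          struct arm_smccc_res res;
          u64 stolen;

          /* Ask the hypervisor for the IPA of this VCPU's record */
          arm_smccc_1_1_invoke(ARM_SMCCC_HV_PV_TIME_ST, &res);
          if ((long)res.a0 < 0)
                  return 0;       /* hypervisor returned an error */

          /* Map the shared record at the IPA returned above */
          rec = memremap(res.a0, sizeof(*rec), MEMREMAP_WB);
          if (!rec)
                  return 0;

          /* 64-bit little-endian value, written atomically by the host */
          stolen = le64_to_cpu(READ_ONCE(rec->stolen_time));
          memunmap(rec);

          return stolen;
  }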
Signed-off-by: Steven Price <steven.price@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm/include/asm/kvm_host.h | 19 +++++++++++
arch/arm64/include/asm/kvm_host.h | 20 ++++++++++++
arch/arm64/kvm/Kconfig | 1 +
include/linux/kvm_types.h | 2 ++
virt/kvm/arm/arm.c | 11 +++++++
virt/kvm/arm/hypercalls.c | 6 ++++
virt/kvm/arm/pvtime.c | 52 +++++++++++++++++++++++++++++++
7 files changed, 111 insertions(+)
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 5a0c3569ebde..5a077f85813f 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -39,6 +39,7 @@
KVM_ARCH_REQ_FLAGS(0, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
#define KVM_REQ_IRQ_PENDING KVM_ARCH_REQ(1)
#define KVM_REQ_VCPU_RESET KVM_ARCH_REQ(2)
+#define KVM_REQ_RECORD_STEAL KVM_ARCH_REQ(3)
DECLARE_STATIC_KEY_FALSE(userspace_irqchip_in_use);
@@ -329,6 +330,24 @@ static inline long kvm_hypercall_pv_features(struct kvm_vcpu *vcpu)
return SMCCC_RET_NOT_SUPPORTED;
}
+static inline gpa_t kvm_init_stolen_time(struct kvm_vcpu *vcpu)
+{
+ return GPA_INVALID;
+}
+
+static inline void kvm_update_stolen_time(struct kvm_vcpu *vcpu)
+{
+}
+
+static inline void kvm_arm_pvtime_vcpu_init(struct kvm_vcpu_arch *vcpu_arch)
+{
+}
+
+static inline bool kvm_arm_is_pvtime_enabled(struct kvm_vcpu_arch *vcpu_arch)
+{
+ return false;
+}
+
void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot);
struct kvm_vcpu *kvm_mpidr_to_vcpu(struct kvm *kvm, unsigned long mpidr);
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 93b46d9526d0..75ef37f79633 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -44,6 +44,7 @@
KVM_ARCH_REQ_FLAGS(0, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
#define KVM_REQ_IRQ_PENDING KVM_ARCH_REQ(1)
#define KVM_REQ_VCPU_RESET KVM_ARCH_REQ(2)
+#define KVM_REQ_RECORD_STEAL KVM_ARCH_REQ(3)
DECLARE_STATIC_KEY_FALSE(userspace_irqchip_in_use);
@@ -338,6 +339,13 @@ struct kvm_vcpu_arch {
/* True when deferrable sysregs are loaded on the physical CPU,
* see kvm_vcpu_load_sysregs and kvm_vcpu_put_sysregs. */
bool sysregs_loaded_on_cpu;
+
+ /* Guest PV state */
+ struct {
+ u64 steal;
+ u64 last_steal;
+ gpa_t base;
+ } steal;
};
/* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
@@ -479,6 +487,18 @@ int kvm_perf_init(void);
int kvm_perf_teardown(void);
long kvm_hypercall_pv_features(struct kvm_vcpu *vcpu);
+gpa_t kvm_init_stolen_time(struct kvm_vcpu *vcpu);
+void kvm_update_stolen_time(struct kvm_vcpu *vcpu);
+
+static inline void kvm_arm_pvtime_vcpu_init(struct kvm_vcpu_arch *vcpu_arch)
+{
+ vcpu_arch->steal.base = GPA_INVALID;
+}
+
+static inline bool kvm_arm_is_pvtime_enabled(struct kvm_vcpu_arch *vcpu_arch)
+{
+ return (vcpu_arch->steal.base != GPA_INVALID);
+}
void kvm_set_sei_esr(struct kvm_vcpu *vcpu, u64 syndrome);
diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index a67121d419a2..d8b88e40d223 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -39,6 +39,7 @@ config KVM
select IRQ_BYPASS_MANAGER
select HAVE_KVM_IRQ_BYPASS
select HAVE_KVM_VCPU_RUN_PID_CHANGE
+ select SCHEDSTATS
---help---
Support hosting virtualized guest machines.
We don't support KVM with 16K page tables yet, due to the multiple
diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h
index bde5374ae021..1c88e69db3d9 100644
--- a/include/linux/kvm_types.h
+++ b/include/linux/kvm_types.h
@@ -35,6 +35,8 @@ typedef unsigned long gva_t;
typedef u64 gpa_t;
typedef u64 gfn_t;
+#define GPA_INVALID (~(gpa_t)0)
+
typedef unsigned long hva_t;
typedef u64 hpa_t;
typedef u64 hfn_t;
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 86c6aa1cb58e..2aba375dfd13 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -40,6 +40,10 @@
#include <asm/kvm_coproc.h>
#include <asm/sections.h>
+#include <kvm/arm_hypercalls.h>
+#include <kvm/arm_pmu.h>
+#include <kvm/arm_psci.h>
+
#ifdef REQUIRES_VIRT
__asm__(".arch_extension virt");
#endif
@@ -351,6 +355,8 @@ int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
kvm_arm_reset_debug_ptr(vcpu);
+ kvm_arm_pvtime_vcpu_init(&vcpu->arch);
+
return kvm_vgic_vcpu_init(vcpu);
}
@@ -380,6 +386,8 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
kvm_vcpu_load_sysregs(vcpu);
kvm_arch_vcpu_load_fp(vcpu);
kvm_vcpu_pmu_restore_guest(vcpu);
+ if (kvm_arm_is_pvtime_enabled(&vcpu->arch))
+ kvm_make_request(KVM_REQ_RECORD_STEAL, vcpu);
if (single_task_running())
vcpu_clear_wfe_traps(vcpu);
@@ -645,6 +653,9 @@ static void check_vcpu_requests(struct kvm_vcpu *vcpu)
* that a VCPU sees new virtual interrupts.
*/
kvm_check_request(KVM_REQ_IRQ_PENDING, vcpu);
+
+ if (kvm_check_request(KVM_REQ_RECORD_STEAL, vcpu))
+ kvm_update_stolen_time(vcpu);
}
}
diff --git a/virt/kvm/arm/hypercalls.c b/virt/kvm/arm/hypercalls.c
index 97ea8b133e77..550dfa3e53cd 100644
--- a/virt/kvm/arm/hypercalls.c
+++ b/virt/kvm/arm/hypercalls.c
@@ -14,6 +14,7 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
u32 func_id = smccc_get_function(vcpu);
long val = SMCCC_RET_NOT_SUPPORTED;
u32 feature;
+ gpa_t gpa;
switch (func_id) {
case ARM_SMCCC_VERSION_FUNC_ID:
@@ -56,6 +57,11 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
case ARM_SMCCC_HV_PV_TIME_FEATURES:
val = kvm_hypercall_pv_features(vcpu);
break;
+ case ARM_SMCCC_HV_PV_TIME_ST:
+ gpa = kvm_init_stolen_time(vcpu);
+ if (gpa != GPA_INVALID)
+ val = gpa;
+ break;
default:
return kvm_psci_call(vcpu);
}
diff --git a/virt/kvm/arm/pvtime.c b/virt/kvm/arm/pvtime.c
index 9fc69fc2d683..b90b3a7bea85 100644
--- a/virt/kvm/arm/pvtime.c
+++ b/virt/kvm/arm/pvtime.c
@@ -3,8 +3,35 @@
#include <linux/arm-smccc.h>
+#include <asm/pvclock-abi.h>
+
#include <kvm/arm_hypercalls.h>
+void kvm_update_stolen_time(struct kvm_vcpu *vcpu)
+{
+ struct kvm *kvm = vcpu->kvm;
+ u64 steal;
+ __le64 steal_le;
+ u64 offset;
+ int idx;
+ u64 base = vcpu->arch.steal.base;
+
+ if (base == GPA_INVALID)
+ return;
+
+ /* Let's do the local bookkeeping */
+ steal = vcpu->arch.steal.steal;
+ steal += current->sched_info.run_delay - vcpu->arch.steal.last_steal;
+ vcpu->arch.steal.last_steal = current->sched_info.run_delay;
+ vcpu->arch.steal.steal = steal;
+
+ steal_le = cpu_to_le64(steal);
+ idx = srcu_read_lock(&kvm->srcu);
+ offset = offsetof(struct pvclock_vcpu_stolen_time, stolen_time);
+ kvm_put_guest(kvm, base + offset, steal_le, u64);
+ srcu_read_unlock(&kvm->srcu, idx);
+}
+
long kvm_hypercall_pv_features(struct kvm_vcpu *vcpu)
{
u32 feature = smccc_get_arg1(vcpu);
@@ -12,9 +39,34 @@ long kvm_hypercall_pv_features(struct kvm_vcpu *vcpu)
switch (feature) {
case ARM_SMCCC_HV_PV_TIME_FEATURES:
+ case ARM_SMCCC_HV_PV_TIME_ST:
val = SMCCC_RET_SUCCESS;
break;
}
return val;
}
+
+gpa_t kvm_init_stolen_time(struct kvm_vcpu *vcpu)
+{
+ struct pvclock_vcpu_stolen_time init_values = {};
+ struct kvm *kvm = vcpu->kvm;
+ u64 base = vcpu->arch.steal.base;
+ int idx;
+
+ if (base == GPA_INVALID)
+ return base;
+
+ /*
+ * Start counting stolen time from the time the guest requests
+ * the feature enabled.
+ */
+ vcpu->arch.steal.steal = 0;
+ vcpu->arch.steal.last_steal = current->sched_info.run_delay;
+
+ idx = srcu_read_lock(&kvm->srcu);
+ kvm_write_guest(kvm, base, &init_values, sizeof(init_values));
+ srcu_read_unlock(&kvm->srcu, idx);
+
+ return base;
+}
--
2.20.1
Thread overview: 25+ messages
2019-11-20 16:42 [GIT PULL] KVM/arm updates for 5.5 Marc Zyngier
2019-11-20 16:42 ` [PATCH 01/22] KVM: arm/arm64: Allow reporting non-ISV data aborts to userspace Marc Zyngier
2019-11-20 16:42 ` [PATCH 02/22] KVM: arm/arm64: Allow user injection of external data aborts Marc Zyngier
2019-11-20 16:42 ` [PATCH 03/22] KVM: arm64: Document PV-time interface Marc Zyngier
2019-11-20 16:42 ` [PATCH 04/22] KVM: arm/arm64: Factor out hypercall handling from PSCI code Marc Zyngier
2019-11-20 16:42 ` [PATCH 05/22] KVM: arm64: Implement PV_TIME_FEATURES call Marc Zyngier
2019-11-20 16:42 ` [PATCH 06/22] KVM: Implement kvm_put_guest() Marc Zyngier
2019-11-20 16:42 ` Marc Zyngier [this message]
2019-11-20 16:42 ` [PATCH 08/22] KVM: Allow kvm_device_ops to be const Marc Zyngier
2019-11-20 16:42 ` [PATCH 09/22] KVM: arm64: Provide VCPU attributes for stolen time Marc Zyngier
2019-11-20 16:42 ` [PATCH 10/22] arm/arm64: Provide a wrapper for SMCCC 1.1 calls Marc Zyngier
2019-11-20 16:42 ` [PATCH 11/22] arm/arm64: Make use of the SMCCC 1.1 wrapper Marc Zyngier
2019-11-20 16:42 ` [PATCH 12/22] arm64: Retrieve stolen time as paravirtualized guest Marc Zyngier
2019-11-20 16:42 ` [PATCH 13/22] KVM: arm64: Select TASK_DELAY_ACCT+TASKSTATS rather than SCHEDSTATS Marc Zyngier
2019-11-20 16:42 ` [PATCH 14/22] KVM: arm/arm64: Show halt poll counters in debugfs Marc Zyngier
2019-11-20 16:42 ` [PATCH 15/22] KVM: arm64: Don't set HCR_EL2.TVM when S2FWB is supported Marc Zyngier
2019-11-20 16:42 ` [PATCH 16/22] KVM: arm64: vgic-v4: Move the GICv4 residency flow to be driven by vcpu_load/put Marc Zyngier
2019-11-20 16:42 ` [PATCH 17/22] KVM: arm/arm64: vgic: Remove the declaration of kvm_send_userspace_msi() Marc Zyngier
2019-11-20 16:42 ` [PATCH 18/22] KVM: arm/arm64: vgic: Fix some comments typo Marc Zyngier
2019-11-20 16:42 ` [PATCH 19/22] KVM: arm/arm64: vgic: Don't rely on the wrong pending table Marc Zyngier
2019-11-20 16:42 ` [PATCH 20/22] KVM: arm/arm64: Let the timer expire in hardirq context on RT Marc Zyngier
2019-11-20 16:42 ` [PATCH 21/22] KVM: vgic-v4: Track the number of VLPIs per vcpu Marc Zyngier
2019-11-20 16:42 ` [PATCH 22/22] KVM: arm64: Opportunistically turn off WFI trapping when using direct LPI injection Marc Zyngier
2019-11-21 8:58 ` [GIT PULL] KVM/arm updates for 5.5 Paolo Bonzini
2019-11-21 9:06 ` Marc Zyngier