* [PATCH v2 1/6] KVM: Add a flag to track if a loaded vCPU is scheduled out
From: Sean Christopherson @ 2024-05-22 1:40 UTC
To: Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao, Huacai Chen,
Michael Ellerman, Anup Patel, Paul Walmsley, Palmer Dabbelt,
Albert Ou, Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
Sean Christopherson, Paolo Bonzini
Cc: linux-arm-kernel, kvmarm, kvm, loongarch, linux-mips,
linuxppc-dev, kvm-riscv, linux-riscv, linux-kernel
Add a kvm_vcpu.scheduled_out flag to track if a vCPU is in the process of
being scheduled out (vCPU put path), or if the vCPU is being reloaded
after being scheduled out (vCPU load path). In the short term, this will
allow dropping kvm_arch_sched_in(), as arch code can query scheduled_out
during kvm_arch_vcpu_load().
Longer term, scheduled_out opens up other potential optimizations, without
creating subtle/brittle dependencies. E.g. it allows KVM to keep guest
state (that is managed via kvm_arch_vcpu_{load,put}()) loaded across
kvm_sched_{out,in}(), if KVM knows the state isn't accessed by the host
kernel. Forcing arch code to coordinate between kvm_arch_sched_{in,out}()
and kvm_arch_vcpu_{load,put}() is awkward, not reusable, and relies on the
exact ordering of calls into arch code.
Adding scheduled_out also obviates the need for a kvm_arch_sched_out()
hook, e.g. if arch code needs to do something novel when putting vCPU
state.
And even if KVM never uses scheduled_out for anything beyond dropping
kvm_arch_sched_in(), just being able to remove all of the arch stubs makes
it worth adding the flag.
Link: https://lore.kernel.org/all/20240430224431.490139-1-seanjc@google.com
Cc: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
include/linux/kvm_host.h | 1 +
virt/kvm/kvm_main.c | 4 ++++
2 files changed, 5 insertions(+)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 7b57878c8c18..bde69f74b031 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -380,6 +380,7 @@ struct kvm_vcpu {
#endif
bool preempted;
bool ready;
+ bool scheduled_out;
struct kvm_vcpu_arch arch;
struct kvm_vcpu_stat stat;
char stats_id[KVM_STATS_NAME_SIZE];
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index a1756d5077ee..7ecea573d121 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -6288,6 +6288,8 @@ static void kvm_sched_in(struct preempt_notifier *pn, int cpu)
__this_cpu_write(kvm_running_vcpu, vcpu);
kvm_arch_sched_in(vcpu, cpu);
kvm_arch_vcpu_load(vcpu, cpu);
+
+ WRITE_ONCE(vcpu->scheduled_out, false);
}
static void kvm_sched_out(struct preempt_notifier *pn,
@@ -6295,6 +6297,8 @@ static void kvm_sched_out(struct preempt_notifier *pn,
{
struct kvm_vcpu *vcpu = preempt_notifier_to_vcpu(pn);
+ WRITE_ONCE(vcpu->scheduled_out, true);
+
if (current->on_rq) {
WRITE_ONCE(vcpu->preempted, true);
WRITE_ONCE(vcpu->ready, true);
--
2.45.0.215.g3402c0e53f-goog
* Re: [PATCH v2 1/6] KVM: Add a flag to track if a loaded vCPU is scheduled out
From: Oliver Upton @ 2024-05-22 15:16 UTC
To: Sean Christopherson
Cc: Marc Zyngier, Tianrui Zhao, Bibo Mao, Huacai Chen,
Michael Ellerman, Anup Patel, Paul Walmsley, Palmer Dabbelt,
Albert Ou, Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
Paolo Bonzini, linux-arm-kernel, kvmarm, kvm, loongarch,
linux-mips, linuxppc-dev, kvm-riscv, linux-riscv, linux-kernel
On Tue, May 21, 2024 at 06:40:08PM -0700, Sean Christopherson wrote:
> Add a kvm_vcpu.scheduled_out flag to track if a vCPU is in the process of
> being scheduled out (vCPU put path), or if the vCPU is being reloaded
> after being scheduled out (vCPU load path). In the short term, this will
> allow dropping kvm_arch_sched_in(), as arch code can query scheduled_out
> during kvm_arch_vcpu_load().
>
> Longer term, scheduled_out opens up other potential optimizations, without
> creating subtle/brittle dependencies. E.g. it allows KVM to keep guest
> state (that is managed via kvm_arch_vcpu_{load,put}()) loaded across
> kvm_sched_{out,in}(), if KVM knows the state isn't accessed by the host
> kernel. Forcing arch code to coordinate between kvm_arch_sched_{in,out}()
> and kvm_arch_vcpu_{load,put}() is awkward, not reusable, and relies on the
> exact ordering of calls into arch code.
>
> Adding scheduled_out also obviates the need for a kvm_arch_sched_out()
> hook, e.g. if arch code needs to do something novel when putting vCPU
> state.
>
> And even if KVM never uses scheduled_out for anything beyond dropping
> kvm_arch_sched_in(), just being able to remove all of the arch stubs makes
> it worth adding the flag.
>
> Link: https://lore.kernel.org/all/20240430224431.490139-1-seanjc@google.com
> Cc: Oliver Upton <oliver.upton@linux.dev>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
--
Thanks,
Oliver
* [PATCH v2 2/6] KVM: VMX: Move PLE grow/shrink helpers above vmx_vcpu_load()
From: Sean Christopherson @ 2024-05-22 1:40 UTC
To: Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao, Huacai Chen,
Michael Ellerman, Anup Patel, Paul Walmsley, Palmer Dabbelt,
Albert Ou, Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
Sean Christopherson, Paolo Bonzini
Cc: linux-arm-kernel, kvmarm, kvm, loongarch, linux-mips,
linuxppc-dev, kvm-riscv, linux-riscv, linux-kernel
Move VMX's {grow,shrink}_ple_window() above vmx_vcpu_load() in preparation
of moving the sched_in logic, which handles shrinking the PLE window, into
vmx_vcpu_load().
No functional change intended.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/vmx/vmx.c | 64 +++++++++++++++++++++---------------------
1 file changed, 32 insertions(+), 32 deletions(-)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 51b2cd13250a..07a4d6a3a43e 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1410,6 +1410,38 @@ static void vmx_write_guest_kernel_gs_base(struct vcpu_vmx *vmx, u64 data)
}
#endif
+static void grow_ple_window(struct kvm_vcpu *vcpu)
+{
+ struct vcpu_vmx *vmx = to_vmx(vcpu);
+ unsigned int old = vmx->ple_window;
+
+ vmx->ple_window = __grow_ple_window(old, ple_window,
+ ple_window_grow,
+ ple_window_max);
+
+ if (vmx->ple_window != old) {
+ vmx->ple_window_dirty = true;
+ trace_kvm_ple_window_update(vcpu->vcpu_id,
+ vmx->ple_window, old);
+ }
+}
+
+static void shrink_ple_window(struct kvm_vcpu *vcpu)
+{
+ struct vcpu_vmx *vmx = to_vmx(vcpu);
+ unsigned int old = vmx->ple_window;
+
+ vmx->ple_window = __shrink_ple_window(old, ple_window,
+ ple_window_shrink,
+ ple_window);
+
+ if (vmx->ple_window != old) {
+ vmx->ple_window_dirty = true;
+ trace_kvm_ple_window_update(vcpu->vcpu_id,
+ vmx->ple_window, old);
+ }
+}
+
void vmx_vcpu_load_vmcs(struct kvm_vcpu *vcpu, int cpu,
struct loaded_vmcs *buddy)
{
@@ -5889,38 +5921,6 @@ int vmx_vcpu_pre_run(struct kvm_vcpu *vcpu)
return 1;
}
-static void grow_ple_window(struct kvm_vcpu *vcpu)
-{
- struct vcpu_vmx *vmx = to_vmx(vcpu);
- unsigned int old = vmx->ple_window;
-
- vmx->ple_window = __grow_ple_window(old, ple_window,
- ple_window_grow,
- ple_window_max);
-
- if (vmx->ple_window != old) {
- vmx->ple_window_dirty = true;
- trace_kvm_ple_window_update(vcpu->vcpu_id,
- vmx->ple_window, old);
- }
-}
-
-static void shrink_ple_window(struct kvm_vcpu *vcpu)
-{
- struct vcpu_vmx *vmx = to_vmx(vcpu);
- unsigned int old = vmx->ple_window;
-
- vmx->ple_window = __shrink_ple_window(old, ple_window,
- ple_window_shrink,
- ple_window);
-
- if (vmx->ple_window != old) {
- vmx->ple_window_dirty = true;
- trace_kvm_ple_window_update(vcpu->vcpu_id,
- vmx->ple_window, old);
- }
-}
-
/*
* Indicate a busy-waiting vcpu in spinlock. We do not enable the PAUSE
* exiting, so only get here on cpu with PAUSE-Loop-Exiting.
--
2.45.0.215.g3402c0e53f-goog
* [PATCH v2 3/6] KVM: x86: Fold kvm_arch_sched_in() into kvm_arch_vcpu_load()
From: Sean Christopherson @ 2024-05-22 1:40 UTC
To: Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao, Huacai Chen,
Michael Ellerman, Anup Patel, Paul Walmsley, Palmer Dabbelt,
Albert Ou, Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
Sean Christopherson, Paolo Bonzini
Cc: linux-arm-kernel, kvmarm, kvm, loongarch, linux-mips,
linuxppc-dev, kvm-riscv, linux-riscv, linux-kernel
Fold the guts of kvm_arch_sched_in() into kvm_arch_vcpu_load(), keying
off the recently added kvm_vcpu.scheduled_out as appropriate.
Note, there is a very slight functional change, as PLE shrink updates will
now happen after blasting WBINVD, but that is quite uninteresting as the
two operations do not interact in any way.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/include/asm/kvm-x86-ops.h | 1 -
arch/x86/include/asm/kvm_host.h | 2 --
arch/x86/kvm/svm/svm.c | 11 +++--------
arch/x86/kvm/vmx/main.c | 2 --
arch/x86/kvm/vmx/vmx.c | 9 +++------
arch/x86/kvm/vmx/x86_ops.h | 1 -
arch/x86/kvm/x86.c | 17 ++++++++++-------
7 files changed, 16 insertions(+), 27 deletions(-)
diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 566d19b02483..5a8b74c2e6c4 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -103,7 +103,6 @@ KVM_X86_OP(write_tsc_multiplier)
KVM_X86_OP(get_exit_info)
KVM_X86_OP(check_intercept)
KVM_X86_OP(handle_exit_irqoff)
-KVM_X86_OP(sched_in)
KVM_X86_OP_OPTIONAL(update_cpu_dirty_logging)
KVM_X86_OP_OPTIONAL(vcpu_blocking)
KVM_X86_OP_OPTIONAL(vcpu_unblocking)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index aabf1648a56a..0df4d14db896 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1750,8 +1750,6 @@ struct kvm_x86_ops {
struct x86_exception *exception);
void (*handle_exit_irqoff)(struct kvm_vcpu *vcpu);
- void (*sched_in)(struct kvm_vcpu *vcpu, int cpu);
-
/*
* Size of the CPU's dirty log buffer, i.e. VMX's PML buffer. A zero
* value indicates CPU dirty logging is unsupported or disabled.
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 3d0549ca246f..51a5eb31aee5 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1548,6 +1548,9 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
struct vcpu_svm *svm = to_svm(vcpu);
struct svm_cpu_data *sd = per_cpu_ptr(&svm_data, cpu);
+ if (vcpu->scheduled_out && !kvm_pause_in_guest(vcpu->kvm))
+ shrink_ple_window(vcpu);
+
if (sd->current_vmcb != svm->vmcb) {
sd->current_vmcb = svm->vmcb;
@@ -4572,12 +4575,6 @@ static void svm_handle_exit_irqoff(struct kvm_vcpu *vcpu)
vcpu->arch.at_instruction_boundary = true;
}
-static void svm_sched_in(struct kvm_vcpu *vcpu, int cpu)
-{
- if (!kvm_pause_in_guest(vcpu->kvm))
- shrink_ple_window(vcpu);
-}
-
static void svm_setup_mce(struct kvm_vcpu *vcpu)
{
/* [63:9] are reserved. */
@@ -5046,8 +5043,6 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
.check_intercept = svm_check_intercept,
.handle_exit_irqoff = svm_handle_exit_irqoff,
- .sched_in = svm_sched_in,
-
.nested_ops = &svm_nested_ops,
.deliver_interrupt = svm_deliver_interrupt,
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 7c546ad3e4c9..4fee9a8cc5a1 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -121,8 +121,6 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
.check_intercept = vmx_check_intercept,
.handle_exit_irqoff = vmx_handle_exit_irqoff,
- .sched_in = vmx_sched_in,
-
.cpu_dirty_log_size = PML_ENTITY_NUM,
.update_cpu_dirty_logging = vmx_update_cpu_dirty_logging,
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 07a4d6a3a43e..da2f95385a12 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1517,6 +1517,9 @@ void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
{
struct vcpu_vmx *vmx = to_vmx(vcpu);
+ if (vcpu->scheduled_out && !kvm_pause_in_guest(vcpu->kvm))
+ shrink_ple_window(vcpu);
+
vmx_vcpu_load_vmcs(vcpu, cpu, NULL);
vmx_vcpu_pi_load(vcpu, cpu);
@@ -8171,12 +8174,6 @@ void vmx_cancel_hv_timer(struct kvm_vcpu *vcpu)
}
#endif
-void vmx_sched_in(struct kvm_vcpu *vcpu, int cpu)
-{
- if (!kvm_pause_in_guest(vcpu->kvm))
- shrink_ple_window(vcpu);
-}
-
void vmx_update_cpu_dirty_logging(struct kvm_vcpu *vcpu)
{
struct vcpu_vmx *vmx = to_vmx(vcpu);
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index 502704596c83..3cb0be94e779 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -112,7 +112,6 @@ u64 vmx_get_l2_tsc_multiplier(struct kvm_vcpu *vcpu);
void vmx_write_tsc_offset(struct kvm_vcpu *vcpu);
void vmx_write_tsc_multiplier(struct kvm_vcpu *vcpu);
void vmx_request_immediate_exit(struct kvm_vcpu *vcpu);
-void vmx_sched_in(struct kvm_vcpu *vcpu, int cpu);
void vmx_update_cpu_dirty_logging(struct kvm_vcpu *vcpu);
#ifdef CONFIG_X86_64
int vmx_set_hv_timer(struct kvm_vcpu *vcpu, u64 guest_deadline_tsc,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index d750546ec934..e924d1c51e31 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5004,6 +5004,16 @@ static bool need_emulate_wbinvd(struct kvm_vcpu *vcpu)
void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
{
+ struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+
+ if (vcpu->scheduled_out) {
+ vcpu->arch.l1tf_flush_l1d = true;
+ if (pmu->version && unlikely(pmu->event_count)) {
+ pmu->need_cleanup = true;
+ kvm_make_request(KVM_REQ_PMU, vcpu);
+ }
+ }
+
/* Address WBINVD may be executed by guest */
if (need_emulate_wbinvd(vcpu)) {
if (static_call(kvm_x86_has_wbinvd_exit)())
@@ -12578,14 +12588,7 @@ bool kvm_vcpu_is_bsp(struct kvm_vcpu *vcpu)
void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu)
{
- struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
- vcpu->arch.l1tf_flush_l1d = true;
- if (pmu->version && unlikely(pmu->event_count)) {
- pmu->need_cleanup = true;
- kvm_make_request(KVM_REQ_PMU, vcpu);
- }
- static_call(kvm_x86_sched_in)(vcpu, cpu);
}
void kvm_arch_free_vm(struct kvm *kvm)
--
2.45.0.215.g3402c0e53f-goog
* Re: [PATCH v2 3/6] KVM: x86: Fold kvm_arch_sched_in() into kvm_arch_vcpu_load()
From: Huang, Kai @ 2024-05-23 22:47 UTC
To: Sean Christopherson, Marc Zyngier, Oliver Upton, Tianrui Zhao,
Bibo Mao, Huacai Chen, Michael Ellerman, Anup Patel,
Paul Walmsley, Palmer Dabbelt, Albert Ou, Christian Borntraeger,
Janosch Frank, Claudio Imbrenda, Paolo Bonzini
Cc: linux-arm-kernel, kvmarm, kvm, loongarch, linux-mips,
linuxppc-dev, kvm-riscv, linux-riscv, linux-kernel
On 22/05/2024 1:40 pm, Sean Christopherson wrote:
> Fold the guts of kvm_arch_sched_in() into kvm_arch_vcpu_load(), keying
> off the recently added kvm_vcpu.scheduled_out as appropriate.
>
> Note, there is a very slight functional change, as PLE shrink updates will
> now happen after blasting WBINVD, but that is quite uninteresting as the
> two operations do not interact in any way.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
Acked-by: Kai Huang <kai.huang@intel.com>
[...]
> @@ -1548,6 +1548,9 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
> struct vcpu_svm *svm = to_svm(vcpu);
> struct svm_cpu_data *sd = per_cpu_ptr(&svm_data, cpu);
>
> + if (vcpu->scheduled_out && !kvm_pause_in_guest(vcpu->kvm))
> + shrink_ple_window(vcpu);
> +
[...]
> @@ -1517,6 +1517,9 @@ void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
> {
> struct vcpu_vmx *vmx = to_vmx(vcpu);
>
> + if (vcpu->scheduled_out && !kvm_pause_in_guest(vcpu->kvm))
> + shrink_ple_window(vcpu);
> +
Nit: Perhaps we need a kvm_x86_ops::shrink_ple_window()? :-)
* Re: [PATCH v2 3/6] KVM: x86: Fold kvm_arch_sched_in() into kvm_arch_vcpu_load()
From: Sean Christopherson @ 2024-05-28 19:16 UTC
To: Kai Huang
Cc: Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao, Huacai Chen,
Michael Ellerman, Anup Patel, Paul Walmsley, Palmer Dabbelt,
Albert Ou, Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
Paolo Bonzini, linux-arm-kernel, kvmarm, kvm, loongarch,
linux-mips, linuxppc-dev, kvm-riscv, linux-riscv, linux-kernel
On Fri, May 24, 2024, Kai Huang wrote:
> > @@ -1548,6 +1548,9 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
> > struct vcpu_svm *svm = to_svm(vcpu);
> > struct svm_cpu_data *sd = per_cpu_ptr(&svm_data, cpu);
> > + if (vcpu->scheduled_out && !kvm_pause_in_guest(vcpu->kvm))
> > + shrink_ple_window(vcpu);
> > +
>
> [...]
>
> > @@ -1517,6 +1517,9 @@ void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
> > {
> > struct vcpu_vmx *vmx = to_vmx(vcpu);
> > + if (vcpu->scheduled_out && !kvm_pause_in_guest(vcpu->kvm))
> > + shrink_ple_window(vcpu);
> > +
>
> Nit: Perhaps we need a kvm_x86_ops::shrink_ple_window()? :-)
Heh, that duplicate code annoys me too. The problem is the "old" window value
comes from the VMCS/VMCB, so either we'd end up with multiple kvm_x86_ops, or
we'd only be able to consolidate the scheduled_out + kvm_pause_in_guest() code,
which isn't all that interesting.
Aha! Actually, VMX already open codes the functionality provided by VCPU_EXREG_*,
e.g. has vmx->ple_window_dirty. If we add VCPU_EXREG_PLE_WINDOW, then the info
can be made available to common x86 code without having to add new hooks. And
that would also allow moving the guts of handle_pause()/pause_interception() to
common code, i.e. will also allow deduplicating the "grow" side of things.
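
For the sake of discussion, a rough sketch of where that could land,
assuming the PLE window moves into common per-vCPU state
(kvm_shrink_ple_window() is a hypothetical common helper, not an
existing KVM function):

	void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
	{
		/*
		 * Hypothetical common-code path; today this logic is
		 * duplicated in vmx_vcpu_load() and svm_vcpu_load().
		 */
		if (vcpu->scheduled_out && !kvm_pause_in_guest(vcpu->kvm))
			kvm_shrink_ple_window(vcpu);

		/* ... rest of kvm_arch_vcpu_load() ... */
	}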
* Re: [PATCH v2 3/6] KVM: x86: Fold kvm_arch_sched_in() into kvm_arch_vcpu_load()
From: Huang, Kai @ 2024-05-29 10:50 UTC
To: seanjc@google.com
Cc: chenhuacai@kernel.org, kvm@vger.kernel.org, maz@kernel.org,
frankja@linux.ibm.com, borntraeger@linux.ibm.com,
mpe@ellerman.id.au, aou@eecs.berkeley.edu, palmer@dabbelt.com,
linux-kernel@vger.kernel.org, maobibo@loongson.cn,
pbonzini@redhat.com, loongarch@lists.linux.dev,
paul.walmsley@sifive.com, kvmarm@lists.linux.dev,
imbrenda@linux.ibm.com, kvm-riscv@lists.infradead.org,
zhaotianrui@loongson.cn, linuxppc-dev@lists.ozlabs.org,
linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org,
anup@brainfault.org, oliver.upton@linux.dev,
linux-riscv@lists.infradead.org
On Tue, 2024-05-28 at 12:16 -0700, Sean Christopherson wrote:
> On Fri, May 24, 2024, Kai Huang wrote:
> > > @@ -1548,6 +1548,9 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
> > > struct vcpu_svm *svm = to_svm(vcpu);
> > > struct svm_cpu_data *sd = per_cpu_ptr(&svm_data, cpu);
> > > + if (vcpu->scheduled_out && !kvm_pause_in_guest(vcpu->kvm))
> > > + shrink_ple_window(vcpu);
> > > +
> >
> > [...]
> >
> > > @@ -1517,6 +1517,9 @@ void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
> > > {
> > > struct vcpu_vmx *vmx = to_vmx(vcpu);
> > > + if (vcpu->scheduled_out && !kvm_pause_in_guest(vcpu->kvm))
> > > + shrink_ple_window(vcpu);
> > > +
> >
> > Nit: Perhaps we need a kvm_x86_ops::shrink_ple_window()? :-)
>
> Heh, that duplicate code annoys me too. The problem is the "old" window value
> comes from the VMCS/VMCB, so either we'd end up with multiple kvm_x86_ops, or
> we'd only be able to consolidate the scheduled_out + kvm_pause_in_guest() code,
> which isn't all that interesting.
Agreed, only consolidating scheduled_out + kvm_pause_in_guest() isn't quite
interesting.
>
> Aha! Actually, VMX already open codes the functionality provided by VCPU_EXREG_*,
> e.g. has vmx->ple_window_dirty. If we add VCPU_EXREG_PLE_WINDOW, then the info
> can be made available to common x86 code without having to add new hooks. And
> that would also allow moving the guts of handle_pause()/pause_interception() to
> common code, i.e. will also allow deduplicating the "grow" side of things.
Sounds feasible. I am not sure whether we should use
VCPU_EXREG_PLE_WINDOW, though. We can just have a "ple_window" +
"ple_window_dirty" pair in the vcpu:

	vcpu->ple_window;
	vcpu->ple_window_dirty;

I.e., kinda make the current VMX version of {grow|shrink}_ple_window()
the common code.
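
E.g., a minimal sketch, reusing VMX's __shrink_ple_window() and module
params purely for illustration (the vcpu->ple_window* fields are the
hypothetical common state, not existing KVM code):

	static void kvm_shrink_ple_window(struct kvm_vcpu *vcpu)
	{
		unsigned int old = vcpu->ple_window;

		vcpu->ple_window = __shrink_ple_window(old, ple_window,
						       ple_window_shrink,
						       ple_window);
		if (vcpu->ple_window != old) {
			/*
			 * Vendor code consumes the dirty flag and programs
			 * the VMCS/VMCB field on the next load/run.
			 */
			vcpu->ple_window_dirty = true;
			trace_kvm_ple_window_update(vcpu->vcpu_id,
						    vcpu->ple_window, old);
		}
	}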
I am not familiar with SVM, but it seems the relevant parts are:

	control->pause_filter_count;
	vmcb_mark_dirty(svm->vmcb, VMCB_INTERCEPTS);

And it seems they are directly related to programming the hardware, i.e.,
they get automatically loaded into hardware during VMRUN. They would need
to be updated in the SVM-specific code when @ple_window_dirty is true in
the relevant code path.

Anyway, even if it is feasible and worth doing, we should do it in a
separate patchset.
* Re: [PATCH v2 3/6] KVM: x86: Fold kvm_arch_sched_in() into kvm_arch_vcpu_load()
From: Sean Christopherson @ 2024-05-29 12:54 UTC
To: Kai Huang
Cc: chenhuacai@kernel.org, kvm@vger.kernel.org, maz@kernel.org,
frankja@linux.ibm.com, borntraeger@linux.ibm.com,
mpe@ellerman.id.au, aou@eecs.berkeley.edu, palmer@dabbelt.com,
linux-kernel@vger.kernel.org, maobibo@loongson.cn,
pbonzini@redhat.com, loongarch@lists.linux.dev,
paul.walmsley@sifive.com, kvmarm@lists.linux.dev,
imbrenda@linux.ibm.com, kvm-riscv@lists.infradead.org,
zhaotianrui@loongson.cn, linuxppc-dev@lists.ozlabs.org,
linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org,
anup@brainfault.org, oliver.upton@linux.dev,
linux-riscv@lists.infradead.org
On Wed, May 29, 2024, Kai Huang wrote:
> I am not familiar with SVM, but it seems the relevant parts are:
>
> control->pause_filter_count;
> vmcb_mark_dirty(svm->vmcb, VMCB_INTERCEPTS);
>
> And it seems they are directly related to programming the hardware, i.e.,
> they got automatically loaded to hardware during VMRUN.
"control" is the control area of the VMCB, i.e. the above pause_filter_count is
equivalent to a VMCS field.
> They need to be updated in the SVM specific code when @ple_window_dirty is
> true in the relevant code path.
>
> Anyway, even it is feasible and worth to do, we should do in a separate
> patchset.
Ya.
* [PATCH v2 4/6] KVM: Delete the now unused kvm_arch_sched_in()
From: Sean Christopherson @ 2024-05-22 1:40 UTC
To: Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao, Huacai Chen,
Michael Ellerman, Anup Patel, Paul Walmsley, Palmer Dabbelt,
Albert Ou, Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
Sean Christopherson, Paolo Bonzini
Cc: linux-arm-kernel, kvmarm, kvm, loongarch, linux-mips,
linuxppc-dev, kvm-riscv, linux-riscv, linux-kernel
Delete kvm_arch_sched_in() now that all implementations are nops.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/arm64/include/asm/kvm_host.h | 1 -
arch/loongarch/include/asm/kvm_host.h | 1 -
arch/mips/include/asm/kvm_host.h | 1 -
arch/powerpc/include/asm/kvm_host.h | 1 -
arch/riscv/include/asm/kvm_host.h | 1 -
arch/s390/include/asm/kvm_host.h | 1 -
arch/x86/kvm/pmu.c | 6 +++---
arch/x86/kvm/x86.c | 5 -----
include/linux/kvm_host.h | 2 --
virt/kvm/kvm_main.c | 1 -
10 files changed, 3 insertions(+), 17 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 8170c04fde91..615e7a2e5590 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1225,7 +1225,6 @@ static inline bool kvm_system_needs_idmapped_vectors(void)
}
static inline void kvm_arch_sync_events(struct kvm *kvm) {}
-static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
void kvm_arm_init_debug(void);
void kvm_arm_vcpu_init_debug(struct kvm_vcpu *vcpu);
diff --git a/arch/loongarch/include/asm/kvm_host.h b/arch/loongarch/include/asm/kvm_host.h
index c87b6ea0ec47..4162a252cdf6 100644
--- a/arch/loongarch/include/asm/kvm_host.h
+++ b/arch/loongarch/include/asm/kvm_host.h
@@ -261,7 +261,6 @@ static inline bool kvm_is_ifetch_fault(struct kvm_vcpu_arch *arch)
static inline void kvm_arch_hardware_unsetup(void) {}
static inline void kvm_arch_sync_events(struct kvm *kvm) {}
static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {}
-static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {}
static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {}
static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
index 179f320cc231..6743a57c1ab4 100644
--- a/arch/mips/include/asm/kvm_host.h
+++ b/arch/mips/include/asm/kvm_host.h
@@ -890,7 +890,6 @@ static inline void kvm_arch_sync_events(struct kvm *kvm) {}
static inline void kvm_arch_free_memslot(struct kvm *kvm,
struct kvm_memory_slot *slot) {}
static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {}
-static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {}
static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {}
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 8abac532146e..c4fb6a27fb92 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -897,7 +897,6 @@ struct kvm_vcpu_arch {
static inline void kvm_arch_sync_events(struct kvm *kvm) {}
static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {}
static inline void kvm_arch_flush_shadow_all(struct kvm *kvm) {}
-static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {}
static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {}
diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
index d96281278586..dd77c2db6819 100644
--- a/arch/riscv/include/asm/kvm_host.h
+++ b/arch/riscv/include/asm/kvm_host.h
@@ -286,7 +286,6 @@ struct kvm_vcpu_arch {
};
static inline void kvm_arch_sync_events(struct kvm *kvm) {}
-static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
#define KVM_RISCV_GSTAGE_TLB_MIN_ORDER 12
diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
index 95990461888f..e9fcaf4607a6 100644
--- a/arch/s390/include/asm/kvm_host.h
+++ b/arch/s390/include/asm/kvm_host.h
@@ -1045,7 +1045,6 @@ extern int kvm_s390_gisc_register(struct kvm *kvm, u32 gisc);
extern int kvm_s390_gisc_unregister(struct kvm *kvm, u32 gisc);
static inline void kvm_arch_sync_events(struct kvm *kvm) {}
-static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
static inline void kvm_arch_free_memslot(struct kvm *kvm,
struct kvm_memory_slot *slot) {}
static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {}
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index a593b03c9aed..f9149c9fc275 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -521,9 +521,9 @@ void kvm_pmu_handle_event(struct kvm_vcpu *vcpu)
}
/*
- * Unused perf_events are only released if the corresponding MSRs
- * weren't accessed during the last vCPU time slice. kvm_arch_sched_in
- * triggers KVM_REQ_PMU if cleanup is needed.
+ * Release unused perf_events if the corresponding guest MSRs weren't
+ * accessed during the last vCPU time slice (need_cleanup is set when
+ * the vCPU is scheduled back in).
*/
if (unlikely(pmu->need_cleanup))
kvm_pmu_cleanup(vcpu);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index e924d1c51e31..59aa772af755 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -12586,11 +12586,6 @@ bool kvm_vcpu_is_bsp(struct kvm_vcpu *vcpu)
return (vcpu->arch.apic_base & MSR_IA32_APICBASE_BSP) != 0;
}
-void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu)
-{
-
-}
-
void kvm_arch_free_vm(struct kvm *kvm)
{
#if IS_ENABLED(CONFIG_HYPERV)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index bde69f74b031..c404c428a866 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1495,8 +1495,6 @@ int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu,
struct kvm_guest_debug *dbg);
int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu);
-void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu);
-
void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu);
void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu);
int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 7ecea573d121..b312d0cbe60b 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -6286,7 +6286,6 @@ static void kvm_sched_in(struct preempt_notifier *pn, int cpu)
WRITE_ONCE(vcpu->ready, false);
__this_cpu_write(kvm_running_vcpu, vcpu);
- kvm_arch_sched_in(vcpu, cpu);
kvm_arch_vcpu_load(vcpu, cpu);
WRITE_ONCE(vcpu->scheduled_out, false);
--
2.45.0.215.g3402c0e53f-goog
* Re: [PATCH v2 4/6] KVM: Delete the now unused kvm_arch_sched_in()
From: maobibo @ 2024-05-24 1:50 UTC
To: Sean Christopherson, Marc Zyngier, Oliver Upton, Tianrui Zhao,
Huacai Chen, Michael Ellerman, Anup Patel, Paul Walmsley,
Palmer Dabbelt, Albert Ou, Christian Borntraeger, Janosch Frank,
Claudio Imbrenda, Paolo Bonzini
Cc: linux-arm-kernel, kvmarm, kvm, loongarch, linux-mips,
linuxppc-dev, kvm-riscv, linux-riscv, linux-kernel
On 2024/5/22 9:40 AM, Sean Christopherson wrote:
> Delete kvm_arch_sched_in() now that all implementations are nops.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
> arch/arm64/include/asm/kvm_host.h | 1 -
> arch/loongarch/include/asm/kvm_host.h | 1 -
> arch/mips/include/asm/kvm_host.h | 1 -
> arch/powerpc/include/asm/kvm_host.h | 1 -
> arch/riscv/include/asm/kvm_host.h | 1 -
> arch/s390/include/asm/kvm_host.h | 1 -
> arch/x86/kvm/pmu.c | 6 +++---
> arch/x86/kvm/x86.c | 5 -----
> include/linux/kvm_host.h | 2 --
> virt/kvm/kvm_main.c | 1 -
> 10 files changed, 3 insertions(+), 17 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 8170c04fde91..615e7a2e5590 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -1225,7 +1225,6 @@ static inline bool kvm_system_needs_idmapped_vectors(void)
> }
>
> static inline void kvm_arch_sync_events(struct kvm *kvm) {}
> -static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
>
> void kvm_arm_init_debug(void);
> void kvm_arm_vcpu_init_debug(struct kvm_vcpu *vcpu);
> diff --git a/arch/loongarch/include/asm/kvm_host.h b/arch/loongarch/include/asm/kvm_host.h
> index c87b6ea0ec47..4162a252cdf6 100644
> --- a/arch/loongarch/include/asm/kvm_host.h
> +++ b/arch/loongarch/include/asm/kvm_host.h
> @@ -261,7 +261,6 @@ static inline bool kvm_is_ifetch_fault(struct kvm_vcpu_arch *arch)
> static inline void kvm_arch_hardware_unsetup(void) {}
> static inline void kvm_arch_sync_events(struct kvm *kvm) {}
> static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {}
> -static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
> static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {}
> static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {}
> static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
> diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
> index 179f320cc231..6743a57c1ab4 100644
> --- a/arch/mips/include/asm/kvm_host.h
> +++ b/arch/mips/include/asm/kvm_host.h
> @@ -890,7 +890,6 @@ static inline void kvm_arch_sync_events(struct kvm *kvm) {}
> static inline void kvm_arch_free_memslot(struct kvm *kvm,
> struct kvm_memory_slot *slot) {}
> static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {}
> -static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
> static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {}
> static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {}
>
> diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
> index 8abac532146e..c4fb6a27fb92 100644
> --- a/arch/powerpc/include/asm/kvm_host.h
> +++ b/arch/powerpc/include/asm/kvm_host.h
> @@ -897,7 +897,6 @@ struct kvm_vcpu_arch {
> static inline void kvm_arch_sync_events(struct kvm *kvm) {}
> static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {}
> static inline void kvm_arch_flush_shadow_all(struct kvm *kvm) {}
> -static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
> static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {}
> static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {}
>
> diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
> index d96281278586..dd77c2db6819 100644
> --- a/arch/riscv/include/asm/kvm_host.h
> +++ b/arch/riscv/include/asm/kvm_host.h
> @@ -286,7 +286,6 @@ struct kvm_vcpu_arch {
> };
>
> static inline void kvm_arch_sync_events(struct kvm *kvm) {}
> -static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
>
> #define KVM_RISCV_GSTAGE_TLB_MIN_ORDER 12
>
> diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
> index 95990461888f..e9fcaf4607a6 100644
> --- a/arch/s390/include/asm/kvm_host.h
> +++ b/arch/s390/include/asm/kvm_host.h
> @@ -1045,7 +1045,6 @@ extern int kvm_s390_gisc_register(struct kvm *kvm, u32 gisc);
> extern int kvm_s390_gisc_unregister(struct kvm *kvm, u32 gisc);
>
> static inline void kvm_arch_sync_events(struct kvm *kvm) {}
> -static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
> static inline void kvm_arch_free_memslot(struct kvm *kvm,
> struct kvm_memory_slot *slot) {}
> static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {}
> diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
> index a593b03c9aed..f9149c9fc275 100644
> --- a/arch/x86/kvm/pmu.c
> +++ b/arch/x86/kvm/pmu.c
> @@ -521,9 +521,9 @@ void kvm_pmu_handle_event(struct kvm_vcpu *vcpu)
> }
>
> /*
> - * Unused perf_events are only released if the corresponding MSRs
> - * weren't accessed during the last vCPU time slice. kvm_arch_sched_in
> - * triggers KVM_REQ_PMU if cleanup is needed.
> + * Release unused perf_events if the corresponding guest MSRs weren't
> + * accessed during the last vCPU time slice (need_cleanup is set when
> + * the vCPU is scheduled back in).
> */
> if (unlikely(pmu->need_cleanup))
> kvm_pmu_cleanup(vcpu);
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index e924d1c51e31..59aa772af755 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -12586,11 +12586,6 @@ bool kvm_vcpu_is_bsp(struct kvm_vcpu *vcpu)
> return (vcpu->arch.apic_base & MSR_IA32_APICBASE_BSP) != 0;
> }
>
> -void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu)
> -{
> -
> -}
> -
> void kvm_arch_free_vm(struct kvm *kvm)
> {
> #if IS_ENABLED(CONFIG_HYPERV)
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index bde69f74b031..c404c428a866 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -1495,8 +1495,6 @@ int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu,
> struct kvm_guest_debug *dbg);
> int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu);
>
> -void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu);
> -
> void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu);
> void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu);
> int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id);
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 7ecea573d121..b312d0cbe60b 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -6286,7 +6286,6 @@ static void kvm_sched_in(struct preempt_notifier *pn, int cpu)
> WRITE_ONCE(vcpu->ready, false);
>
> __this_cpu_write(kvm_running_vcpu, vcpu);
> - kvm_arch_sched_in(vcpu, cpu);
> kvm_arch_vcpu_load(vcpu, cpu);
>
> WRITE_ONCE(vcpu->scheduled_out, false);
>
Reviewed-by: Bibo Mao <maobibo@loongson.cn>
* [PATCH v2 5/6] KVM: x86: Unconditionally set l1tf_flush_l1d during vCPU load
From: Sean Christopherson @ 2024-05-22 1:40 UTC
To: Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao, Huacai Chen,
Michael Ellerman, Anup Patel, Paul Walmsley, Palmer Dabbelt,
Albert Ou, Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
Sean Christopherson, Paolo Bonzini
Cc: linux-arm-kernel, kvmarm, kvm, loongarch, linux-mips,
linuxppc-dev, kvm-riscv, linux-riscv, linux-kernel
Always set l1tf_flush_l1d during kvm_arch_vcpu_load() instead of setting
it only when the vCPU is being scheduled back in. The flag is processed
only when VM-Enter is imminent, and KVM obviously needs to load the vCPU
before VM-Enter, so attempting to precisely set l1tf_flush_l1d provides no
meaningful value. I.e. the flag _will_ be set either way, it's simply a
matter of when.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/x86.c | 11 +++++------
1 file changed, 5 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 59aa772af755..60fea297f91f 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5006,12 +5006,11 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
{
struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
- if (vcpu->scheduled_out) {
- vcpu->arch.l1tf_flush_l1d = true;
- if (pmu->version && unlikely(pmu->event_count)) {
- pmu->need_cleanup = true;
- kvm_make_request(KVM_REQ_PMU, vcpu);
- }
+ vcpu->arch.l1tf_flush_l1d = true;
+
+ if (vcpu->scheduled_out && pmu->version && pmu->event_count) {
+ pmu->need_cleanup = true;
+ kvm_make_request(KVM_REQ_PMU, vcpu);
}
/* Address WBINVD may be executed by guest */
--
2.45.0.215.g3402c0e53f-goog
* Re: [PATCH v2 5/6] KVM: x86: Unconditionally set l1tf_flush_l1d during vCPU load
From: Huang, Kai @ 2024-05-23 22:42 UTC
To: Sean Christopherson, Marc Zyngier, Oliver Upton, Tianrui Zhao,
Bibo Mao, Huacai Chen, Michael Ellerman, Anup Patel,
Paul Walmsley, Palmer Dabbelt, Albert Ou, Christian Borntraeger,
Janosch Frank, Claudio Imbrenda, Paolo Bonzini
Cc: linux-arm-kernel, kvmarm, kvm, loongarch, linux-mips,
linuxppc-dev, kvm-riscv, linux-riscv, linux-kernel
On 22/05/2024 1:40 pm, Sean Christopherson wrote:
> Always set l1tf_flush_l1d during kvm_arch_vcpu_load() instead of setting
> it only when the vCPU is being scheduled back in. The flag is processed
> only when VM-Enter is imminent, and KVM obviously needs to load the vCPU
> before VM-Enter, so attempting to precisely set l1tf_flush_l1d provides no
> meaningful value. I.e. the flag _will_ be set either way, it's simply a
> matter of when.
Seems reasonable.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
Acked-by: Kai Huang <kai.huang@intel.com>
> arch/x86/kvm/x86.c | 11 +++++------
> 1 file changed, 5 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 59aa772af755..60fea297f91f 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -5006,12 +5006,11 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
> {
> struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
>
> - if (vcpu->scheduled_out) {
> - vcpu->arch.l1tf_flush_l1d = true;
> - if (pmu->version && unlikely(pmu->event_count)) {
> - pmu->need_cleanup = true;
> - kvm_make_request(KVM_REQ_PMU, vcpu);
> - }
> + vcpu->arch.l1tf_flush_l1d = true;
> +
> + if (vcpu->scheduled_out && pmu->version && pmu->event_count) {
> + pmu->need_cleanup = true;
> + kvm_make_request(KVM_REQ_PMU, vcpu);
> }
Nit, the unlikely() is lost, but I guess it is OK?
* [PATCH v2 6/6] KVM: x86: Drop now-superfluous setting of l1tf_flush_l1d in vcpu_run()
From: Sean Christopherson @ 2024-05-22 1:40 UTC
To: Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao, Huacai Chen,
Michael Ellerman, Anup Patel, Paul Walmsley, Palmer Dabbelt,
Albert Ou, Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
Sean Christopherson, Paolo Bonzini
Cc: linux-arm-kernel, kvmarm, kvm, loongarch, linux-mips,
linuxppc-dev, kvm-riscv, linux-riscv, linux-kernel
Now that KVM unconditionally sets l1tf_flush_l1d in kvm_arch_vcpu_load(),
drop the redundant store from vcpu_run(). The flag is cleared only when
VM-Enter is imminent, deep below vcpu_run(), i.e. barring a KVM bug, it's
impossible for l1tf_flush_l1d to be cleared between loading the vCPU and
calling vcpu_run().
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/vmx/vmx.c | 7 ++++---
arch/x86/kvm/x86.c | 1 -
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index da2f95385a12..552b6a9887a5 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6672,9 +6672,10 @@ static noinstr void vmx_l1d_flush(struct kvm_vcpu *vcpu)
bool flush_l1d;
/*
- * Clear the per-vcpu flush bit, it gets set again
- * either from vcpu_run() or from one of the unsafe
- * VMEXIT handlers.
+ * Clear the per-vcpu flush bit, it gets set again if the vCPU
+ * is reloaded, i.e. if the vCPU is scheduled out or if KVM
+ * exits to userspace, or if KVM reaches one of the unsafe
+ * VMEXIT handlers, e.g. if KVM calls into the emulator.
*/
flush_l1d = vcpu->arch.l1tf_flush_l1d;
vcpu->arch.l1tf_flush_l1d = false;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 60fea297f91f..86ae7392cc59 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -11264,7 +11264,6 @@ static int vcpu_run(struct kvm_vcpu *vcpu)
int r;
vcpu->run->exit_reason = KVM_EXIT_UNKNOWN;
- vcpu->arch.l1tf_flush_l1d = true;
for (;;) {
/*
--
2.45.0.215.g3402c0e53f-goog
* Re: [PATCH v2 0/6] KVM: Fold kvm_arch_sched_in() into kvm_arch_vcpu_load()
From: Huang, Kai @ 2024-05-23 22:48 UTC
To: Sean Christopherson, Marc Zyngier, Oliver Upton, Tianrui Zhao,
Bibo Mao, Huacai Chen, Michael Ellerman, Anup Patel,
Paul Walmsley, Palmer Dabbelt, Albert Ou, Christian Borntraeger,
Janosch Frank, Claudio Imbrenda, Paolo Bonzini
Cc: linux-arm-kernel, kvmarm, kvm, loongarch, linux-mips,
linuxppc-dev, kvm-riscv, linux-riscv, linux-kernel
On 22/05/2024 1:40 pm, Sean Christopherson wrote:
> Drop kvm_arch_sched_in() and instead add and use kvm_vcpu.scheduled_out
> to communicate to kvm_arch_vcpu_load() that the vCPU is being scheduling
> back in.
>
For this series,
Acked-by: Kai Huang <kai.huang@intel.com>
* Re: [PATCH v2 0/6] KVM: Fold kvm_arch_sched_in() into kvm_arch_vcpu_load()
From: Sean Christopherson @ 2024-06-12 1:18 UTC
To: Sean Christopherson, Marc Zyngier, Oliver Upton, Tianrui Zhao,
Bibo Mao, Huacai Chen, Michael Ellerman, Anup Patel,
Paul Walmsley, Palmer Dabbelt, Albert Ou, Christian Borntraeger,
Janosch Frank, Claudio Imbrenda, Paolo Bonzini
Cc: linux-arm-kernel, kvmarm, kvm, loongarch, linux-mips,
linuxppc-dev, kvm-riscv, linux-riscv, linux-kernel
On Tue, 21 May 2024 18:40:07 -0700, Sean Christopherson wrote:
> Drop kvm_arch_sched_in() and instead add and use kvm_vcpu.scheduled_out
> to communicate to kvm_arch_vcpu_load() that the vCPU is being scheduling
> back in.
>
> While fiddling with an idea for optimizing state management on AMD CPUs,
> I wanted to skip re-saving certain host state when a vCPU is scheduled back
> in, as the state (theoretically) shouldn't change for the task while it's
> scheduled out. Actually doing that was annoying and unnecessarily brittle
> due to having a separate API for the kvm_sched_in() case (the state save
> needed to be in kvm_arch_vcpu_load() for the common path).
>
> [...]
Applied to kvm-x86 generic, thanks!
[1/6] KVM: Add a flag to track if a loaded vCPU is scheduled out
https://github.com/kvm-x86/linux/commit/d1ae567fb8b5
[2/6] KVM: VMX: Move PLE grow/shrink helpers above vmx_vcpu_load()
https://github.com/kvm-x86/linux/commit/5d9c07febb86
[3/6] KVM: x86: Fold kvm_arch_sched_in() into kvm_arch_vcpu_load()
https://github.com/kvm-x86/linux/commit/8fbb696a8f53
[4/6] KVM: Delete the now unused kvm_arch_sched_in()
https://github.com/kvm-x86/linux/commit/2a27c4314007
[5/6] KVM: x86: Unconditionally set l1tf_flush_l1d during vCPU load
https://github.com/kvm-x86/linux/commit/ef2e18ef3750
[6/6] KVM: x86: Drop now-superfluous setting of l1tf_flush_l1d in vcpu_run()
https://github.com/kvm-x86/linux/commit/3dee3b187499
--
https://github.com/kvm-x86/linux/tree/next