* [PATCH v4 00/19] x86/reboot: KVM: Clean up "emergency" virt code
@ 2023-07-21 20:18 Sean Christopherson
2023-07-21 20:18 ` [PATCH v4 01/19] x86/reboot: VMCLEAR active VMCSes before emergency reboot Sean Christopherson
` (19 more replies)
0 siblings, 20 replies; 36+ messages in thread
From: Sean Christopherson @ 2023-07-21 20:18 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
Sean Christopherson, Paolo Bonzini
Cc: linux-kernel, kvm, Andrew Cooper, Kai Huang, Chao Gao
If there are no objections, my plan is to take this through the KVM tree
for 6.6.
Instead of having the reboot code blindly try to disable virtualization
during an emergency, use the existing callback into KVM to disable virt
as "needed". In quotes because KVM still somewhat blindly attempts to
disable virt, e.g. if KVM is loaded but doesn't have active VMs and thus
hasn't enabled hardware. That could theoretically be "fixed", but due to
the callback being invoked from NMI context, I'm not convinced it would
be worth the complexity. E.g. false positives would still be possible,
and KVM would have to play games with the per-CPU hardware_enabled flag
to ensure there are no false negatives.
The callback is currently used only to VMCLEAR the per-CPU list of VMCSes,
but leaving virt enabled instead of disabling it via the callback was never
intentional. Arguably, a
callback should have been used in the initial "disable virt" code added by
commit d176720d34c7 ("x86: disable VMX on all CPUs on reboot"). And the
kexec logic added (much later) by commit f23d1f4a1160 ("x86/kexec: VMCLEAR
VMCSs loaded on all cpus if necessary") simply missed the opportunity to
use the callback for all virtualization needs.
Once KVM handles disabling virt, move all of the helpers provided by
virtext.h into KVM proper.
There's one outlier patch, "Make KVM_AMD depend on CPU_SUP_AMD or
CPU_SUP_HYGON", that I included here because it felt weird to pull in the
"must be AMD or Hygon" check without KVM demanding that at build time.
v4:
- Collect reviews. [Kai]
- Skip VMCLEAR during reboot if CR4.VMXE=0. [Kai]
- Call out that disabling virtualization iff there's a callback also
avoids an unnecessary NMI shootdown. [Kai]
- Move "Disable virtualization during reboot iff callback is
registered" patch after "Hoist "disable virt" helpers above "emergency
reboot"" patch to fix an intermediate build error.
v3:
- https://lore.kernel.org/all/20230512235026.808058-1-seanjc@google.com
- Massage changelogs to avoid talking about out-of-tree hypervisors. [Kai]
- Move #ifdef "KVM" addition later. [Kai]
v2:
- https://lore.kernel.org/all/20230310214232.806108-1-seanjc@google.com
- Disable task migration when probing basic SVM and VMX support to avoid
logging misleading info (wrong CPU) if probing fails.
v1: https://lore.kernel.org/all/20221201232655.290720-1-seanjc@google.com
Sean Christopherson (19):
x86/reboot: VMCLEAR active VMCSes before emergency reboot
x86/reboot: Harden virtualization hooks for emergency reboot
x86/reboot: KVM: Handle VMXOFF in KVM's reboot callback
x86/reboot: KVM: Disable SVM during reboot via virt/KVM reboot
callback
x86/reboot: Assert that IRQs are disabled when turning off
virtualization
x86/reboot: Hoist "disable virt" helpers above "emergency reboot" path
x86/reboot: Disable virtualization during reboot iff callback is
registered
x86/reboot: Expose VMCS crash hooks if and only if KVM_{INTEL,AMD} is
enabled
x86/virt: KVM: Open code cpu_has_vmx() in KVM VMX
x86/virt: KVM: Move VMXOFF helpers into KVM VMX
KVM: SVM: Make KVM_AMD depend on CPU_SUP_AMD or CPU_SUP_HYGON
x86/virt: Drop unnecessary check on extended CPUID level in
cpu_has_svm()
x86/virt: KVM: Open code cpu_has_svm() into kvm_is_svm_supported()
KVM: SVM: Check that the current CPU supports SVM in
kvm_is_svm_supported()
KVM: VMX: Ensure CPU is stable when probing basic VMX support
x86/virt: KVM: Move "disable SVM" helper into KVM SVM
KVM: x86: Force kvm_rebooting=true during emergency reboot/crash
KVM: SVM: Use "standard" stgi() helper when disabling SVM
KVM: VMX: Skip VMCLEAR logic during emergency reboots if CR4.VMXE=0
arch/x86/include/asm/kexec.h | 2 -
arch/x86/include/asm/reboot.h | 7 ++
arch/x86/include/asm/virtext.h | 154 ---------------------------------
arch/x86/kernel/crash.c | 31 -------
arch/x86/kernel/reboot.c | 66 ++++++++++----
arch/x86/kvm/Kconfig | 2 +-
arch/x86/kvm/svm/svm.c | 71 ++++++++++++---
arch/x86/kvm/vmx/vmx.c | 76 ++++++++++++----
8 files changed, 176 insertions(+), 233 deletions(-)
delete mode 100644 arch/x86/include/asm/virtext.h
base-commit: fdf0eaf11452d72945af31804e2a1048ee1b574c
--
2.41.0.487.g6d72f3e995-goog
^ permalink raw reply [flat|nested] 36+ messages in thread
* [PATCH v4 01/19] x86/reboot: VMCLEAR active VMCSes before emergency reboot
2023-07-21 20:18 [PATCH v4 00/19] x86/reboot: KVM: Clean up "emergency" virt code Sean Christopherson
@ 2023-07-21 20:18 ` Sean Christopherson
2023-07-21 20:18 ` [PATCH v4 02/19] x86/reboot: Harden virtualization hooks for " Sean Christopherson
` (18 subsequent siblings)
19 siblings, 0 replies; 36+ messages in thread
From: Sean Christopherson @ 2023-07-21 20:18 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
Sean Christopherson, Paolo Bonzini
Cc: linux-kernel, kvm, Andrew Cooper, Kai Huang, Chao Gao
VMCLEAR active VMCSes before any emergency reboot, not just if the kernel
may kexec into a new kernel after a crash. Per Intel's SDM, the VMX
architecture doesn't require the CPU to flush the VMCS cache on INIT. If
an emergency reboot doesn't RESET CPUs, cached VMCSes could theoretically
be kept and only be written back to memory after the new kernel is booted,
i.e. could effectively corrupt memory after reboot.
Opportunistically remove the setting of the global pointer to NULL to make
checkpatch happy.
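For readers unfamiliar with the flush being relocated here, it amounts to
walking this CPU's list of loaded VMCSes and clearing each one. A rough
userspace sketch (hypothetical types and names; `vmcs_clear()` is modeled by
setting a flag, and the per-CPU list by a plain linked list):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for the kernel's per-CPU VMCS tracking. */
struct loaded_vmcs {
	int cleared;              /* models the CPU's cached VMCS state */
	struct loaded_vmcs *next; /* models loaded_vmcss_on_cpu_link */
};

static struct loaded_vmcs *loaded_vmcss_on_this_cpu;

/*
 * Models crash_vmclear_local_loaded_vmcss(): flush every VMCS active on
 * this CPU so that no stale cached VMCS can be written back to memory
 * after the new kernel boots.
 */
static void crash_vmclear_local_loaded_vmcss(void)
{
	struct loaded_vmcs *v;

	for (v = loaded_vmcss_on_this_cpu; v; v = v->next)
		v->cleared = 1;   /* stands in for vmcs_clear(v->vmcs) */
}
```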
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/include/asm/kexec.h | 2 --
arch/x86/include/asm/reboot.h | 2 ++
arch/x86/kernel/crash.c | 31 -------------------------------
arch/x86/kernel/reboot.c | 22 ++++++++++++++++++++++
arch/x86/kvm/vmx/vmx.c | 10 +++-------
5 files changed, 27 insertions(+), 40 deletions(-)
diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
index 5b77bbc28f96..819046974b99 100644
--- a/arch/x86/include/asm/kexec.h
+++ b/arch/x86/include/asm/kexec.h
@@ -205,8 +205,6 @@ int arch_kimage_file_post_load_cleanup(struct kimage *image);
#endif
#endif
-typedef void crash_vmclear_fn(void);
-extern crash_vmclear_fn __rcu *crash_vmclear_loaded_vmcss;
extern void kdump_nmi_shootdown_cpus(void);
#endif /* __ASSEMBLY__ */
diff --git a/arch/x86/include/asm/reboot.h b/arch/x86/include/asm/reboot.h
index 9177b4354c3f..dc201724a643 100644
--- a/arch/x86/include/asm/reboot.h
+++ b/arch/x86/include/asm/reboot.h
@@ -25,6 +25,8 @@ void __noreturn machine_real_restart(unsigned int type);
#define MRR_BIOS 0
#define MRR_APM 1
+typedef void crash_vmclear_fn(void);
+extern crash_vmclear_fn __rcu *crash_vmclear_loaded_vmcss;
void cpu_emergency_disable_virtualization(void);
typedef void (*nmi_shootdown_cb)(int, struct pt_regs*);
diff --git a/arch/x86/kernel/crash.c b/arch/x86/kernel/crash.c
index cdd92ab43cda..54cd959cb316 100644
--- a/arch/x86/kernel/crash.c
+++ b/arch/x86/kernel/crash.c
@@ -48,38 +48,12 @@ struct crash_memmap_data {
unsigned int type;
};
-/*
- * This is used to VMCLEAR all VMCSs loaded on the
- * processor. And when loading kvm_intel module, the
- * callback function pointer will be assigned.
- *
- * protected by rcu.
- */
-crash_vmclear_fn __rcu *crash_vmclear_loaded_vmcss = NULL;
-EXPORT_SYMBOL_GPL(crash_vmclear_loaded_vmcss);
-
-static inline void cpu_crash_vmclear_loaded_vmcss(void)
-{
- crash_vmclear_fn *do_vmclear_operation = NULL;
-
- rcu_read_lock();
- do_vmclear_operation = rcu_dereference(crash_vmclear_loaded_vmcss);
- if (do_vmclear_operation)
- do_vmclear_operation();
- rcu_read_unlock();
-}
-
#if defined(CONFIG_SMP) && defined(CONFIG_X86_LOCAL_APIC)
static void kdump_nmi_callback(int cpu, struct pt_regs *regs)
{
crash_save_cpu(regs, cpu);
- /*
- * VMCLEAR VMCSs loaded on all cpus if needed.
- */
- cpu_crash_vmclear_loaded_vmcss();
-
/*
* Disable Intel PT to stop its logging
*/
@@ -133,11 +107,6 @@ void native_machine_crash_shutdown(struct pt_regs *regs)
crash_smp_send_stop();
- /*
- * VMCLEAR VMCSs loaded on this cpu if needed.
- */
- cpu_crash_vmclear_loaded_vmcss();
-
cpu_emergency_disable_virtualization();
/*
diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
index 3adbe97015c1..3fa4c6717a1d 100644
--- a/arch/x86/kernel/reboot.c
+++ b/arch/x86/kernel/reboot.c
@@ -787,6 +787,26 @@ void machine_crash_shutdown(struct pt_regs *regs)
}
#endif
+/*
+ * This is used to VMCLEAR all VMCSs loaded on the
+ * processor. And when loading kvm_intel module, the
+ * callback function pointer will be assigned.
+ *
+ * protected by rcu.
+ */
+crash_vmclear_fn __rcu *crash_vmclear_loaded_vmcss;
+EXPORT_SYMBOL_GPL(crash_vmclear_loaded_vmcss);
+
+static inline void cpu_crash_vmclear_loaded_vmcss(void)
+{
+ crash_vmclear_fn *do_vmclear_operation = NULL;
+
+ rcu_read_lock();
+ do_vmclear_operation = rcu_dereference(crash_vmclear_loaded_vmcss);
+ if (do_vmclear_operation)
+ do_vmclear_operation();
+ rcu_read_unlock();
+}
/* This is the CPU performing the emergency shutdown work. */
int crashing_cpu = -1;
@@ -798,6 +818,8 @@ int crashing_cpu = -1;
*/
void cpu_emergency_disable_virtualization(void)
{
+ cpu_crash_vmclear_loaded_vmcss();
+
cpu_emergency_vmxoff();
cpu_emergency_svm_disable();
}
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 0ecf4be2c6af..7f692d97a821 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -41,7 +41,7 @@
#include <asm/idtentry.h>
#include <asm/io.h>
#include <asm/irq_remapping.h>
-#include <asm/kexec.h>
+#include <asm/reboot.h>
#include <asm/perf_event.h>
#include <asm/mmu_context.h>
#include <asm/mshyperv.h>
@@ -744,7 +744,6 @@ static int vmx_set_guest_uret_msr(struct vcpu_vmx *vmx,
return ret;
}
-#ifdef CONFIG_KEXEC_CORE
static void crash_vmclear_local_loaded_vmcss(void)
{
int cpu = raw_smp_processor_id();
@@ -754,7 +753,6 @@ static void crash_vmclear_local_loaded_vmcss(void)
loaded_vmcss_on_cpu_link)
vmcs_clear(v->vmcs);
}
-#endif /* CONFIG_KEXEC_CORE */
static void __loaded_vmcs_clear(void *arg)
{
@@ -8592,10 +8590,9 @@ static void __vmx_exit(void)
{
allow_smaller_maxphyaddr = false;
-#ifdef CONFIG_KEXEC_CORE
RCU_INIT_POINTER(crash_vmclear_loaded_vmcss, NULL);
synchronize_rcu();
-#endif
+
vmx_cleanup_l1d_flush();
}
@@ -8644,10 +8641,9 @@ static int __init vmx_init(void)
pi_init_cpu(cpu);
}
-#ifdef CONFIG_KEXEC_CORE
rcu_assign_pointer(crash_vmclear_loaded_vmcss,
crash_vmclear_local_loaded_vmcss);
-#endif
+
vmx_check_vmcs12_offsets();
/*
--
2.41.0.487.g6d72f3e995-goog

* [PATCH v4 02/19] x86/reboot: Harden virtualization hooks for emergency reboot
2023-07-21 20:18 [PATCH v4 00/19] x86/reboot: KVM: Clean up "emergency" virt code Sean Christopherson
2023-07-21 20:18 ` [PATCH v4 01/19] x86/reboot: VMCLEAR active VMCSes before emergency reboot Sean Christopherson
@ 2023-07-21 20:18 ` Sean Christopherson
2023-07-21 20:18 ` [PATCH v4 03/19] x86/reboot: KVM: Handle VMXOFF in KVM's reboot callback Sean Christopherson
` (17 subsequent siblings)
19 siblings, 0 replies; 36+ messages in thread
From: Sean Christopherson @ 2023-07-21 20:18 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
Sean Christopherson, Paolo Bonzini
Cc: linux-kernel, kvm, Andrew Cooper, Kai Huang, Chao Gao
Provide dedicated helpers to (un)register virt hooks used during an
emergency crash/reboot, and WARN if there is an attempt to overwrite
the registered callback, or an attempt to do an unpaired unregister.
Opportunistically use rcu_assign_pointer() instead of RCU_INIT_POINTER(),
mainly so that the set/unset paths are more symmetrical, but also because
any performance gains from using RCU_INIT_POINTER() are meaningless for
this code.
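The register/unregister semantics can be modeled in userspace as follows; a
hypothetical sketch in which the RCU primitives and WARN_ON_ONCE() are
replaced by a plain pointer and an error return, since the real helpers are
kernel-only:

```c
#include <assert.h>
#include <stddef.h>

typedef void (emergency_virt_cb)(void);

static emergency_virt_cb *emergency_virt_callback;

static int register_virt_callback(emergency_virt_cb *cb)
{
	/* Models the WARN: refuse to overwrite a registered callback. */
	if (emergency_virt_callback)
		return -1;
	emergency_virt_callback = cb;
	return 0;
}

static int unregister_virt_callback(emergency_virt_cb *cb)
{
	/* Models the WARN on an unpaired unregister. */
	if (emergency_virt_callback != cb)
		return -1;
	emergency_virt_callback = NULL;
	return 0;
}

/* Hypothetical callbacks standing in for kvm_intel/kvm_amd. */
static void vmx_cb(void) { }
static void svm_cb(void) { }
```

Only one vendor module can own the callback at a time, which matches the
kernel's model of a single active hypervisor.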
Reviewed-by: Kai Huang <kai.huang@intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/include/asm/reboot.h | 5 +++--
arch/x86/kernel/reboot.c | 30 ++++++++++++++++++++++++------
arch/x86/kvm/vmx/vmx.c | 6 ++----
3 files changed, 29 insertions(+), 12 deletions(-)
diff --git a/arch/x86/include/asm/reboot.h b/arch/x86/include/asm/reboot.h
index dc201724a643..74c6a624d166 100644
--- a/arch/x86/include/asm/reboot.h
+++ b/arch/x86/include/asm/reboot.h
@@ -25,8 +25,9 @@ void __noreturn machine_real_restart(unsigned int type);
#define MRR_BIOS 0
#define MRR_APM 1
-typedef void crash_vmclear_fn(void);
-extern crash_vmclear_fn __rcu *crash_vmclear_loaded_vmcss;
+typedef void (cpu_emergency_virt_cb)(void);
+void cpu_emergency_register_virt_callback(cpu_emergency_virt_cb *callback);
+void cpu_emergency_unregister_virt_callback(cpu_emergency_virt_cb *callback);
void cpu_emergency_disable_virtualization(void);
typedef void (*nmi_shootdown_cb)(int, struct pt_regs*);
diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
index 3fa4c6717a1d..62ccedeb5e2b 100644
--- a/arch/x86/kernel/reboot.c
+++ b/arch/x86/kernel/reboot.c
@@ -794,17 +794,35 @@ void machine_crash_shutdown(struct pt_regs *regs)
*
* protected by rcu.
*/
-crash_vmclear_fn __rcu *crash_vmclear_loaded_vmcss;
-EXPORT_SYMBOL_GPL(crash_vmclear_loaded_vmcss);
+static cpu_emergency_virt_cb __rcu *cpu_emergency_virt_callback;
+
+void cpu_emergency_register_virt_callback(cpu_emergency_virt_cb *callback)
+{
+ if (WARN_ON_ONCE(rcu_access_pointer(cpu_emergency_virt_callback)))
+ return;
+
+ rcu_assign_pointer(cpu_emergency_virt_callback, callback);
+}
+EXPORT_SYMBOL_GPL(cpu_emergency_register_virt_callback);
+
+void cpu_emergency_unregister_virt_callback(cpu_emergency_virt_cb *callback)
+{
+ if (WARN_ON_ONCE(rcu_access_pointer(cpu_emergency_virt_callback) != callback))
+ return;
+
+ rcu_assign_pointer(cpu_emergency_virt_callback, NULL);
+ synchronize_rcu();
+}
+EXPORT_SYMBOL_GPL(cpu_emergency_unregister_virt_callback);
static inline void cpu_crash_vmclear_loaded_vmcss(void)
{
- crash_vmclear_fn *do_vmclear_operation = NULL;
+ cpu_emergency_virt_cb *callback;
rcu_read_lock();
- do_vmclear_operation = rcu_dereference(crash_vmclear_loaded_vmcss);
- if (do_vmclear_operation)
- do_vmclear_operation();
+ callback = rcu_dereference(cpu_emergency_virt_callback);
+ if (callback)
+ callback();
rcu_read_unlock();
}
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 7f692d97a821..019cefc65142 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -8590,8 +8590,7 @@ static void __vmx_exit(void)
{
allow_smaller_maxphyaddr = false;
- RCU_INIT_POINTER(crash_vmclear_loaded_vmcss, NULL);
- synchronize_rcu();
+ cpu_emergency_unregister_virt_callback(crash_vmclear_local_loaded_vmcss);
vmx_cleanup_l1d_flush();
}
@@ -8641,8 +8640,7 @@ static int __init vmx_init(void)
pi_init_cpu(cpu);
}
- rcu_assign_pointer(crash_vmclear_loaded_vmcss,
- crash_vmclear_local_loaded_vmcss);
+ cpu_emergency_register_virt_callback(crash_vmclear_local_loaded_vmcss);
vmx_check_vmcs12_offsets();
--
2.41.0.487.g6d72f3e995-goog
* [PATCH v4 03/19] x86/reboot: KVM: Handle VMXOFF in KVM's reboot callback
2023-07-21 20:18 [PATCH v4 00/19] x86/reboot: KVM: Clean up "emergency" virt code Sean Christopherson
2023-07-21 20:18 ` [PATCH v4 01/19] x86/reboot: VMCLEAR active VMCSes before emergency reboot Sean Christopherson
2023-07-21 20:18 ` [PATCH v4 02/19] x86/reboot: Harden virtualization hooks for " Sean Christopherson
@ 2023-07-21 20:18 ` Sean Christopherson
2023-07-24 23:57 ` Huang, Kai
2023-07-21 20:18 ` [PATCH v4 04/19] x86/reboot: KVM: Disable SVM during reboot via virt/KVM " Sean Christopherson
` (16 subsequent siblings)
19 siblings, 1 reply; 36+ messages in thread
From: Sean Christopherson @ 2023-07-21 20:18 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
Sean Christopherson, Paolo Bonzini
Cc: linux-kernel, kvm, Andrew Cooper, Kai Huang, Chao Gao
Use KVM VMX's reboot/crash callback to do VMXOFF in an emergency instead
of manually and blindly doing VMXOFF. There's no need to attempt VMXOFF
if a hypervisor, i.e. KVM, isn't loaded/active, i.e. if the CPU can't
possibly be post-VMXON.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/include/asm/virtext.h | 10 ----------
arch/x86/kernel/reboot.c | 29 +++++++++--------------------
arch/x86/kvm/vmx/vmx.c | 8 +++++---
3 files changed, 14 insertions(+), 33 deletions(-)
diff --git a/arch/x86/include/asm/virtext.h b/arch/x86/include/asm/virtext.h
index 3b12e6b99412..5bc29fab15da 100644
--- a/arch/x86/include/asm/virtext.h
+++ b/arch/x86/include/asm/virtext.h
@@ -70,16 +70,6 @@ static inline void __cpu_emergency_vmxoff(void)
cpu_vmxoff();
}
-/** Disable VMX if it is supported and enabled on the current CPU
- */
-static inline void cpu_emergency_vmxoff(void)
-{
- if (cpu_has_vmx())
- __cpu_emergency_vmxoff();
-}
-
-
-
/*
* SVM functions:
diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
index 62ccedeb5e2b..d2d0f2672a64 100644
--- a/arch/x86/kernel/reboot.c
+++ b/arch/x86/kernel/reboot.c
@@ -787,13 +787,7 @@ void machine_crash_shutdown(struct pt_regs *regs)
}
#endif
-/*
- * This is used to VMCLEAR all VMCSs loaded on the
- * processor. And when loading kvm_intel module, the
- * callback function pointer will be assigned.
- *
- * protected by rcu.
- */
+/* RCU-protected callback to disable virtualization prior to reboot. */
static cpu_emergency_virt_cb __rcu *cpu_emergency_virt_callback;
void cpu_emergency_register_virt_callback(cpu_emergency_virt_cb *callback)
@@ -815,17 +809,6 @@ void cpu_emergency_unregister_virt_callback(cpu_emergency_virt_cb *callback)
}
EXPORT_SYMBOL_GPL(cpu_emergency_unregister_virt_callback);
-static inline void cpu_crash_vmclear_loaded_vmcss(void)
-{
- cpu_emergency_virt_cb *callback;
-
- rcu_read_lock();
- callback = rcu_dereference(cpu_emergency_virt_callback);
- if (callback)
- callback();
- rcu_read_unlock();
-}
-
/* This is the CPU performing the emergency shutdown work. */
int crashing_cpu = -1;
@@ -836,9 +819,15 @@ int crashing_cpu = -1;
*/
void cpu_emergency_disable_virtualization(void)
{
- cpu_crash_vmclear_loaded_vmcss();
+ cpu_emergency_virt_cb *callback;
- cpu_emergency_vmxoff();
+ rcu_read_lock();
+ callback = rcu_dereference(cpu_emergency_virt_callback);
+ if (callback)
+ callback();
+ rcu_read_unlock();
+
+ /* KVM_AMD doesn't yet utilize the common callback. */
cpu_emergency_svm_disable();
}
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 019cefc65142..682c20b33a96 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -744,7 +744,7 @@ static int vmx_set_guest_uret_msr(struct vcpu_vmx *vmx,
return ret;
}
-static void crash_vmclear_local_loaded_vmcss(void)
+static void vmx_emergency_disable(void)
{
int cpu = raw_smp_processor_id();
struct loaded_vmcs *v;
@@ -752,6 +752,8 @@ static void crash_vmclear_local_loaded_vmcss(void)
list_for_each_entry(v, &per_cpu(loaded_vmcss_on_cpu, cpu),
loaded_vmcss_on_cpu_link)
vmcs_clear(v->vmcs);
+
+ __cpu_emergency_vmxoff();
}
static void __loaded_vmcs_clear(void *arg)
@@ -8590,7 +8592,7 @@ static void __vmx_exit(void)
{
allow_smaller_maxphyaddr = false;
- cpu_emergency_unregister_virt_callback(crash_vmclear_local_loaded_vmcss);
+ cpu_emergency_unregister_virt_callback(vmx_emergency_disable);
vmx_cleanup_l1d_flush();
}
@@ -8640,7 +8642,7 @@ static int __init vmx_init(void)
pi_init_cpu(cpu);
}
- cpu_emergency_register_virt_callback(crash_vmclear_local_loaded_vmcss);
+ cpu_emergency_register_virt_callback(vmx_emergency_disable);
vmx_check_vmcs12_offsets();
--
2.41.0.487.g6d72f3e995-goog
* [PATCH v4 04/19] x86/reboot: KVM: Disable SVM during reboot via virt/KVM reboot callback
2023-07-21 20:18 [PATCH v4 00/19] x86/reboot: KVM: Clean up "emergency" virt code Sean Christopherson
` (2 preceding siblings ...)
2023-07-21 20:18 ` [PATCH v4 03/19] x86/reboot: KVM: Handle VMXOFF in KVM's reboot callback Sean Christopherson
@ 2023-07-21 20:18 ` Sean Christopherson
2023-07-21 20:18 ` [PATCH v4 05/19] x86/reboot: Assert that IRQs are disabled when turning off virtualization Sean Christopherson
` (15 subsequent siblings)
19 siblings, 0 replies; 36+ messages in thread
From: Sean Christopherson @ 2023-07-21 20:18 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
Sean Christopherson, Paolo Bonzini
Cc: linux-kernel, kvm, Andrew Cooper, Kai Huang, Chao Gao
Use the virt callback to disable SVM (and set GIF=1) during an emergency
instead of blindly attempting to disable SVM. Like the VMX case, if a
hypervisor, i.e. KVM, isn't loaded/active, SVM can't be in use.
Acked-by: Kai Huang <kai.huang@intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/include/asm/virtext.h | 8 --------
arch/x86/kernel/reboot.c | 3 ---
arch/x86/kvm/svm/svm.c | 19 +++++++++++++++++--
3 files changed, 17 insertions(+), 13 deletions(-)
diff --git a/arch/x86/include/asm/virtext.h b/arch/x86/include/asm/virtext.h
index 5bc29fab15da..aaed66249ccf 100644
--- a/arch/x86/include/asm/virtext.h
+++ b/arch/x86/include/asm/virtext.h
@@ -133,12 +133,4 @@ static inline void cpu_svm_disable(void)
}
}
-/** Makes sure SVM is disabled, if it is supported on the CPU
- */
-static inline void cpu_emergency_svm_disable(void)
-{
- if (cpu_has_svm(NULL))
- cpu_svm_disable();
-}
-
#endif /* _ASM_X86_VIRTEX_H */
diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
index d2d0f2672a64..48ad2d1ff83d 100644
--- a/arch/x86/kernel/reboot.c
+++ b/arch/x86/kernel/reboot.c
@@ -826,9 +826,6 @@ void cpu_emergency_disable_virtualization(void)
if (callback)
callback();
rcu_read_unlock();
-
- /* KVM_AMD doesn't yet utilize the common callback. */
- cpu_emergency_svm_disable();
}
#if defined(CONFIG_SMP)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index d381ad424554..1ae9c2c7eacb 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -39,6 +39,7 @@
#include <asm/spec-ctrl.h>
#include <asm/cpu_device_id.h>
#include <asm/traps.h>
+#include <asm/reboot.h>
#include <asm/fpu/api.h>
#include <asm/virtext.h>
@@ -563,6 +564,11 @@ void __svm_write_tsc_multiplier(u64 multiplier)
preempt_enable();
}
+static void svm_emergency_disable(void)
+{
+ cpu_svm_disable();
+}
+
static void svm_hardware_disable(void)
{
/* Make sure we clean up behind us */
@@ -5209,6 +5215,13 @@ static struct kvm_x86_init_ops svm_init_ops __initdata = {
.pmu_ops = &amd_pmu_ops,
};
+static void __svm_exit(void)
+{
+ kvm_x86_vendor_exit();
+
+ cpu_emergency_unregister_virt_callback(svm_emergency_disable);
+}
+
static int __init svm_init(void)
{
int r;
@@ -5222,6 +5235,8 @@ static int __init svm_init(void)
if (r)
return r;
+ cpu_emergency_register_virt_callback(svm_emergency_disable);
+
/*
* Common KVM initialization _must_ come last, after this, /dev/kvm is
* exposed to userspace!
@@ -5234,14 +5249,14 @@ static int __init svm_init(void)
return 0;
err_kvm_init:
- kvm_x86_vendor_exit();
+ __svm_exit();
return r;
}
static void __exit svm_exit(void)
{
kvm_exit();
- kvm_x86_vendor_exit();
+ __svm_exit();
}
module_init(svm_init)
--
2.41.0.487.g6d72f3e995-goog
* [PATCH v4 05/19] x86/reboot: Assert that IRQs are disabled when turning off virtualization
2023-07-21 20:18 [PATCH v4 00/19] x86/reboot: KVM: Clean up "emergency" virt code Sean Christopherson
` (3 preceding siblings ...)
2023-07-21 20:18 ` [PATCH v4 04/19] x86/reboot: KVM: Disable SVM during reboot via virt/KVM " Sean Christopherson
@ 2023-07-21 20:18 ` Sean Christopherson
2023-07-24 21:19 ` Peter Zijlstra
2023-07-21 20:18 ` [PATCH v4 06/19] x86/reboot: Hoist "disable virt" helpers above "emergency reboot" path Sean Christopherson
` (14 subsequent siblings)
19 siblings, 1 reply; 36+ messages in thread
From: Sean Christopherson @ 2023-07-21 20:18 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
Sean Christopherson, Paolo Bonzini
Cc: linux-kernel, kvm, Andrew Cooper, Kai Huang, Chao Gao
Assert that IRQs are disabled when turning off virtualization in an
emergency. KVM enables hardware via on_each_cpu(), i.e. could re-enable
hardware if a pending IPI were delivered after disabling virtualization.
Remove a misleading comment from emergency_reboot_disable_virtualization()
about "just" needing to guarantee the CPU is stable (see above).
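The ordering requirement can be illustrated with a small userspace model
(hypothetical names; `lockdep_assert_irqs_disabled()` becomes a plain
assert, and KVM's on_each_cpu() IPI becomes a function that only has an
effect while "IRQs" are enabled):

```c
#include <assert.h>

static int irqs_disabled;
static int virt_enabled = 1;

static void local_irq_disable(void)
{
	irqs_disabled = 1;
}

/* Models KVM's hardware-enable IPI; deliverable only with IRQs on. */
static void kvm_enable_ipi(void)
{
	if (!irqs_disabled)
		virt_enabled = 1;
}

static void cpu_emergency_disable_virtualization(void)
{
	assert(irqs_disabled);  /* models lockdep_assert_irqs_disabled() */
	virt_enabled = 0;
}
```

With IRQs disabled first, the pending IPI can no longer race in and
re-enable virtualization after the emergency path has turned it off.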
Reviewed-by: Kai Huang <kai.huang@intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kernel/reboot.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
index 48ad2d1ff83d..4cad7183b89e 100644
--- a/arch/x86/kernel/reboot.c
+++ b/arch/x86/kernel/reboot.c
@@ -532,7 +532,6 @@ static inline void nmi_shootdown_cpus_on_restart(void);
static void emergency_reboot_disable_virtualization(void)
{
- /* Just make sure we won't change CPUs while doing this */
local_irq_disable();
/*
@@ -821,6 +820,13 @@ void cpu_emergency_disable_virtualization(void)
{
cpu_emergency_virt_cb *callback;
+ /*
+ * IRQs must be disabled as KVM enables virtualization in hardware via
+ * function call IPIs, i.e. IRQs need to be disabled to guarantee
+ * virtualization stays disabled.
+ */
+ lockdep_assert_irqs_disabled();
+
rcu_read_lock();
callback = rcu_dereference(cpu_emergency_virt_callback);
if (callback)
--
2.41.0.487.g6d72f3e995-goog
* [PATCH v4 06/19] x86/reboot: Hoist "disable virt" helpers above "emergency reboot" path
2023-07-21 20:18 [PATCH v4 00/19] x86/reboot: KVM: Clean up "emergency" virt code Sean Christopherson
` (4 preceding siblings ...)
2023-07-21 20:18 ` [PATCH v4 05/19] x86/reboot: Assert that IRQs are disabled when turning off virtualization Sean Christopherson
@ 2023-07-21 20:18 ` Sean Christopherson
2023-07-21 20:18 ` [PATCH v4 07/19] x86/reboot: Disable virtualization during reboot iff callback is registered Sean Christopherson
` (13 subsequent siblings)
19 siblings, 0 replies; 36+ messages in thread
From: Sean Christopherson @ 2023-07-21 20:18 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
Sean Christopherson, Paolo Bonzini
Cc: linux-kernel, kvm, Andrew Cooper, Kai Huang, Chao Gao
Move the various "disable virtualization" helpers above the emergency
reboot path so that emergency_reboot_disable_virtualization() can be
stubbed out in a future patch if neither KVM_INTEL nor KVM_AMD is enabled,
i.e. if there is no in-tree user of CPU virtualization.
No functional change intended.
Reviewed-by: Kai Huang <kai.huang@intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kernel/reboot.c | 90 ++++++++++++++++++++--------------------
1 file changed, 45 insertions(+), 45 deletions(-)
diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
index 4cad7183b89e..85cb2dfcb67b 100644
--- a/arch/x86/kernel/reboot.c
+++ b/arch/x86/kernel/reboot.c
@@ -530,6 +530,51 @@ static inline void kb_wait(void)
static inline void nmi_shootdown_cpus_on_restart(void);
+/* RCU-protected callback to disable virtualization prior to reboot. */
+static cpu_emergency_virt_cb __rcu *cpu_emergency_virt_callback;
+
+void cpu_emergency_register_virt_callback(cpu_emergency_virt_cb *callback)
+{
+ if (WARN_ON_ONCE(rcu_access_pointer(cpu_emergency_virt_callback)))
+ return;
+
+ rcu_assign_pointer(cpu_emergency_virt_callback, callback);
+}
+EXPORT_SYMBOL_GPL(cpu_emergency_register_virt_callback);
+
+void cpu_emergency_unregister_virt_callback(cpu_emergency_virt_cb *callback)
+{
+ if (WARN_ON_ONCE(rcu_access_pointer(cpu_emergency_virt_callback) != callback))
+ return;
+
+ rcu_assign_pointer(cpu_emergency_virt_callback, NULL);
+ synchronize_rcu();
+}
+EXPORT_SYMBOL_GPL(cpu_emergency_unregister_virt_callback);
+
+/*
+ * Disable virtualization, i.e. VMX or SVM, to ensure INIT is recognized during
+ * reboot. VMX blocks INIT if the CPU is post-VMXON, and SVM blocks INIT if
+ * GIF=0, i.e. if the crash occurred between CLGI and STGI.
+ */
+void cpu_emergency_disable_virtualization(void)
+{
+ cpu_emergency_virt_cb *callback;
+
+ /*
+ * IRQs must be disabled as KVM enables virtualization in hardware via
+ * function call IPIs, i.e. IRQs need to be disabled to guarantee
+ * virtualization stays disabled.
+ */
+ lockdep_assert_irqs_disabled();
+
+ rcu_read_lock();
+ callback = rcu_dereference(cpu_emergency_virt_callback);
+ if (callback)
+ callback();
+ rcu_read_unlock();
+}
+
static void emergency_reboot_disable_virtualization(void)
{
local_irq_disable();
@@ -786,54 +831,9 @@ void machine_crash_shutdown(struct pt_regs *regs)
}
#endif
-/* RCU-protected callback to disable virtualization prior to reboot. */
-static cpu_emergency_virt_cb __rcu *cpu_emergency_virt_callback;
-
-void cpu_emergency_register_virt_callback(cpu_emergency_virt_cb *callback)
-{
- if (WARN_ON_ONCE(rcu_access_pointer(cpu_emergency_virt_callback)))
- return;
-
- rcu_assign_pointer(cpu_emergency_virt_callback, callback);
-}
-EXPORT_SYMBOL_GPL(cpu_emergency_register_virt_callback);
-
-void cpu_emergency_unregister_virt_callback(cpu_emergency_virt_cb *callback)
-{
- if (WARN_ON_ONCE(rcu_access_pointer(cpu_emergency_virt_callback) != callback))
- return;
-
- rcu_assign_pointer(cpu_emergency_virt_callback, NULL);
- synchronize_rcu();
-}
-EXPORT_SYMBOL_GPL(cpu_emergency_unregister_virt_callback);
-
/* This is the CPU performing the emergency shutdown work. */
int crashing_cpu = -1;
-/*
- * Disable virtualization, i.e. VMX or SVM, to ensure INIT is recognized during
- * reboot. VMX blocks INIT if the CPU is post-VMXON, and SVM blocks INIT if
- * GIF=0, i.e. if the crash occurred between CLGI and STGI.
- */
-void cpu_emergency_disable_virtualization(void)
-{
- cpu_emergency_virt_cb *callback;
-
- /*
- * IRQs must be disabled as KVM enables virtualization in hardware via
- * function call IPIs, i.e. IRQs need to be disabled to guarantee
- * virtualization stays disabled.
- */
- lockdep_assert_irqs_disabled();
-
- rcu_read_lock();
- callback = rcu_dereference(cpu_emergency_virt_callback);
- if (callback)
- callback();
- rcu_read_unlock();
-}
-
#if defined(CONFIG_SMP)
static nmi_shootdown_cb shootdown_callback;
--
2.41.0.487.g6d72f3e995-goog
* [PATCH v4 07/19] x86/reboot: Disable virtualization during reboot iff callback is registered
2023-07-21 20:18 [PATCH v4 00/19] x86/reboot: KVM: Clean up "emergency" virt code Sean Christopherson
` (5 preceding siblings ...)
2023-07-21 20:18 ` [PATCH v4 06/19] x86/reboot: Hoist "disable virt" helpers above "emergency reboot" path Sean Christopherson
@ 2023-07-21 20:18 ` Sean Christopherson
2023-07-24 23:57 ` Huang, Kai
2023-07-21 20:18 ` [PATCH v4 08/19] x86/reboot: Expose VMCS crash hooks if and only if KVM_{INTEL,AMD} is enabled Sean Christopherson
` (12 subsequent siblings)
19 siblings, 1 reply; 36+ messages in thread
From: Sean Christopherson @ 2023-07-21 20:18 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
Sean Christopherson, Paolo Bonzini
Cc: linux-kernel, kvm, Andrew Cooper, Kai Huang, Chao Gao
Attempt to disable virtualization during an emergency reboot if and only
if there is a registered virt callback, i.e. iff a hypervisor (KVM) is
active. If there's no active hypervisor, then the CPU can't be operating
with VMX or SVM enabled (barring an egregious bug).
Checking for a valid callback instead of simply for SVM or VMX support
also eliminates spurious NMIs by avoiding the unnecessary call to
nmi_shootdown_cpus_on_restart().
Note, IRQs are disabled, which prevents KVM from coming along and
enabling virtualization after the fact.
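The callback-gating pattern described above can be sketched in plain
user-space C. This is a toy model, not the kernel code: the real pointer is
RCU-protected and the real path runs with IRQs disabled in NMI-ish context;
the counters and helper names here are illustrative only.

```c
#include <assert.h>

/* Toy stand-in for the kernel's RCU-protected callback pointer. */
typedef void (*emergency_virt_cb)(void);
static emergency_virt_cb virt_callback;

static int disable_calls;    /* times the hypervisor callback ran */
static int shootdowns;       /* times the NMI shootdown was triggered */

static void fake_kvm_emergency_disable(void) { disable_calls++; }

static void cpu_emergency_register_virt_callback(emergency_virt_cb cb)
{
    virt_callback = cb;
}

/* Only force other CPUs out of VMX/SVM (via NMI shootdown) when a
 * hypervisor has actually registered itself; if nothing is registered,
 * no CPU can be in VMX/SVM operation and the shootdown NMIs would be
 * spurious. */
static void emergency_reboot_disable_virtualization(void)
{
    if (virt_callback) {
        virt_callback();  /* disable virt on this CPU */
        shootdowns++;     /* stand-in for nmi_shootdown_cpus_on_restart() */
    }
}
```

With no callback registered, the emergency path is a no-op; once a
hypervisor registers, both the local disable and the shootdown run.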
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kernel/reboot.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
index 85cb2dfcb67b..98e5db3fd7f4 100644
--- a/arch/x86/kernel/reboot.c
+++ b/arch/x86/kernel/reboot.c
@@ -22,7 +22,6 @@
#include <asm/reboot_fixups.h>
#include <asm/reboot.h>
#include <asm/pci_x86.h>
-#include <asm/virtext.h>
#include <asm/cpu.h>
#include <asm/nmi.h>
#include <asm/smp.h>
@@ -589,7 +588,7 @@ static void emergency_reboot_disable_virtualization(void)
* Do the NMI shootdown even if virtualization is off on _this_ CPU, as
* other CPUs may have virtualization enabled.
*/
- if (cpu_has_vmx() || cpu_has_svm(NULL)) {
+ if (rcu_access_pointer(cpu_emergency_virt_callback)) {
/* Safely force _this_ CPU out of VMX/SVM operation. */
cpu_emergency_disable_virtualization();
--
2.41.0.487.g6d72f3e995-goog
* [PATCH v4 08/19] x86/reboot: Expose VMCS crash hooks if and only if KVM_{INTEL,AMD} is enabled
2023-07-21 20:18 [PATCH v4 00/19] x86/reboot: KVM: Clean up "emergency" virt code Sean Christopherson
` (6 preceding siblings ...)
2023-07-21 20:18 ` [PATCH v4 07/19] x86/reboot: Disable virtualization during reboot iff callback is registered Sean Christopherson
@ 2023-07-21 20:18 ` Sean Christopherson
2023-07-21 20:18 ` [PATCH v4 09/19] x86/virt: KVM: Open code cpu_has_vmx() in KVM VMX Sean Christopherson
` (11 subsequent siblings)
19 siblings, 0 replies; 36+ messages in thread
From: Sean Christopherson @ 2023-07-21 20:18 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
Sean Christopherson, Paolo Bonzini
Cc: linux-kernel, kvm, Andrew Cooper, Kai Huang, Chao Gao
Expose the crash/reboot hooks used by KVM to disable virtualization in
hardware and unblock INIT only if there's a potential in-tree user,
i.e. either KVM_INTEL or KVM_AMD is enabled.
Reviewed-by: Kai Huang <kai.huang@intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/include/asm/reboot.h | 4 ++++
arch/x86/kernel/reboot.c | 5 ++++-
2 files changed, 8 insertions(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/reboot.h b/arch/x86/include/asm/reboot.h
index 74c6a624d166..6536873f8fc0 100644
--- a/arch/x86/include/asm/reboot.h
+++ b/arch/x86/include/asm/reboot.h
@@ -25,10 +25,14 @@ void __noreturn machine_real_restart(unsigned int type);
#define MRR_BIOS 0
#define MRR_APM 1
+#if IS_ENABLED(CONFIG_KVM_INTEL) || IS_ENABLED(CONFIG_KVM_AMD)
typedef void (cpu_emergency_virt_cb)(void);
void cpu_emergency_register_virt_callback(cpu_emergency_virt_cb *callback);
void cpu_emergency_unregister_virt_callback(cpu_emergency_virt_cb *callback);
void cpu_emergency_disable_virtualization(void);
+#else
+static inline void cpu_emergency_disable_virtualization(void) {}
+#endif /* CONFIG_KVM_INTEL || CONFIG_KVM_AMD */
typedef void (*nmi_shootdown_cb)(int, struct pt_regs*);
void nmi_shootdown_cpus(nmi_shootdown_cb callback);
diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
index 98e5db3fd7f4..830425e6d38e 100644
--- a/arch/x86/kernel/reboot.c
+++ b/arch/x86/kernel/reboot.c
@@ -529,6 +529,7 @@ static inline void kb_wait(void)
static inline void nmi_shootdown_cpus_on_restart(void);
+#if IS_ENABLED(CONFIG_KVM_INTEL) || IS_ENABLED(CONFIG_KVM_AMD)
/* RCU-protected callback to disable virtualization prior to reboot. */
static cpu_emergency_virt_cb __rcu *cpu_emergency_virt_callback;
@@ -596,7 +597,9 @@ static void emergency_reboot_disable_virtualization(void)
nmi_shootdown_cpus_on_restart();
}
}
-
+#else
+static void emergency_reboot_disable_virtualization(void) { }
+#endif /* CONFIG_KVM_INTEL || CONFIG_KVM_AMD */
void __attribute__((weak)) mach_reboot_fixups(void)
{
--
2.41.0.487.g6d72f3e995-goog
* [PATCH v4 09/19] x86/virt: KVM: Open code cpu_has_vmx() in KVM VMX
2023-07-21 20:18 [PATCH v4 00/19] x86/reboot: KVM: Clean up "emergency" virt code Sean Christopherson
` (7 preceding siblings ...)
2023-07-21 20:18 ` [PATCH v4 08/19] x86/reboot: Expose VMCS crash hooks if and only if KVM_{INTEL,AMD} is enabled Sean Christopherson
@ 2023-07-21 20:18 ` Sean Christopherson
2023-07-21 20:18 ` [PATCH v4 10/19] x86/virt: KVM: Move VMXOFF helpers into " Sean Christopherson
` (10 subsequent siblings)
19 siblings, 0 replies; 36+ messages in thread
From: Sean Christopherson @ 2023-07-21 20:18 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
Sean Christopherson, Paolo Bonzini
Cc: linux-kernel, kvm, Andrew Cooper, Kai Huang, Chao Gao
Fold the raw CPUID check for VMX into kvm_is_vmx_supported(), its sole
user. Keep the check even though KVM also checks X86_FEATURE_VMX, as the
intent is to provide a unique error message if VMX is unsupported by
hardware, whereas X86_FEATURE_VMX may be clear due to firmware and/or
kernel actions.
No functional change intended.
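For reference, CPUID.1:ECX bit 5 is the VMX flag, and the kernel's
feature_bit(VMX) expands to the same single-bit mask, which is why the open
coded check is equivalent. A toy helper over a raw ECX value (the name
ecx_has_vmx() is hypothetical; real code feeds it the ECX output of the
CPUID instruction):

```c
#include <assert.h>

/* CPUID.1:ECX.VMX is bit 5; feature_bit(VMX) in the kernel expands to
 * the same (1u << 5) mask. */
#define VMX_BIT 5u

static int ecx_has_vmx(unsigned int ecx)
{
    return !!(ecx & (1u << VMX_BIT));
}
```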
Reviewed-by: Kai Huang <kai.huang@intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/include/asm/virtext.h | 10 ----------
arch/x86/kvm/vmx/vmx.c | 2 +-
2 files changed, 1 insertion(+), 11 deletions(-)
diff --git a/arch/x86/include/asm/virtext.h b/arch/x86/include/asm/virtext.h
index aaed66249ccf..b1171a5ad452 100644
--- a/arch/x86/include/asm/virtext.h
+++ b/arch/x86/include/asm/virtext.h
@@ -22,14 +22,6 @@
/*
* VMX functions:
*/
-
-static inline int cpu_has_vmx(void)
-{
- unsigned long ecx = cpuid_ecx(1);
- return test_bit(5, &ecx); /* CPUID.1:ECX.VMX[bit 5] -> VT */
-}
-
-
/**
* cpu_vmxoff() - Disable VMX on the current CPU
*
@@ -61,8 +53,6 @@ static inline int cpu_vmx_enabled(void)
}
/** Disable VMX if it is enabled on the current CPU
- *
- * You shouldn't call this if cpu_has_vmx() returns 0.
*/
static inline void __cpu_emergency_vmxoff(void)
{
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 682c20b33a96..71571cd9adbb 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2718,7 +2718,7 @@ static bool kvm_is_vmx_supported(void)
{
int cpu = raw_smp_processor_id();
- if (!cpu_has_vmx()) {
+ if (!(cpuid_ecx(1) & feature_bit(VMX))) {
pr_err("VMX not supported by CPU %d\n", cpu);
return false;
}
--
2.41.0.487.g6d72f3e995-goog
* [PATCH v4 10/19] x86/virt: KVM: Move VMXOFF helpers into KVM VMX
2023-07-21 20:18 [PATCH v4 00/19] x86/reboot: KVM: Clean up "emergency" virt code Sean Christopherson
` (8 preceding siblings ...)
2023-07-21 20:18 ` [PATCH v4 09/19] x86/virt: KVM: Open code cpu_has_vmx() in KVM VMX Sean Christopherson
@ 2023-07-21 20:18 ` Sean Christopherson
2023-07-28 9:08 ` Xu Yilun
2023-07-21 20:18 ` [PATCH v4 11/19] KVM: SVM: Make KVM_AMD depend on CPU_SUP_AMD or CPU_SUP_HYGON Sean Christopherson
` (9 subsequent siblings)
19 siblings, 1 reply; 36+ messages in thread
From: Sean Christopherson @ 2023-07-21 20:18 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
Sean Christopherson, Paolo Bonzini
Cc: linux-kernel, kvm, Andrew Cooper, Kai Huang, Chao Gao
Now that VMX is disabled in emergencies via the virt callbacks, move the
VMXOFF helpers into KVM, the only remaining user.
No functional change intended.
Reviewed-by: Kai Huang <kai.huang@intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/include/asm/virtext.h | 42 ----------------------------------
arch/x86/kvm/vmx/vmx.c | 29 ++++++++++++++++++++---
2 files changed, 26 insertions(+), 45 deletions(-)
diff --git a/arch/x86/include/asm/virtext.h b/arch/x86/include/asm/virtext.h
index b1171a5ad452..a27801f2bc71 100644
--- a/arch/x86/include/asm/virtext.h
+++ b/arch/x86/include/asm/virtext.h
@@ -19,48 +19,6 @@
#include <asm/svm.h>
#include <asm/tlbflush.h>
-/*
- * VMX functions:
- */
-/**
- * cpu_vmxoff() - Disable VMX on the current CPU
- *
- * Disable VMX and clear CR4.VMXE (even if VMXOFF faults)
- *
- * Note, VMXOFF causes a #UD if the CPU is !post-VMXON, but it's impossible to
- * atomically track post-VMXON state, e.g. this may be called in NMI context.
- * Eat all faults as all other faults on VMXOFF faults are mode related, i.e.
- * faults are guaranteed to be due to the !post-VMXON check unless the CPU is
- * magically in RM, VM86, compat mode, or at CPL>0.
- */
-static inline int cpu_vmxoff(void)
-{
- asm_volatile_goto("1: vmxoff\n\t"
- _ASM_EXTABLE(1b, %l[fault])
- ::: "cc", "memory" : fault);
-
- cr4_clear_bits(X86_CR4_VMXE);
- return 0;
-
-fault:
- cr4_clear_bits(X86_CR4_VMXE);
- return -EIO;
-}
-
-static inline int cpu_vmx_enabled(void)
-{
- return __read_cr4() & X86_CR4_VMXE;
-}
-
-/** Disable VMX if it is enabled on the current CPU
- */
-static inline void __cpu_emergency_vmxoff(void)
-{
- if (cpu_vmx_enabled())
- cpu_vmxoff();
-}
-
-
/*
* SVM functions:
*/
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 71571cd9adbb..6f4fcd82fa6e 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -47,7 +47,6 @@
#include <asm/mshyperv.h>
#include <asm/mwait.h>
#include <asm/spec-ctrl.h>
-#include <asm/virtext.h>
#include <asm/vmx.h>
#include "capabilities.h"
@@ -744,6 +743,29 @@ static int vmx_set_guest_uret_msr(struct vcpu_vmx *vmx,
return ret;
}
+/*
+ * Disable VMX and clear CR4.VMXE (even if VMXOFF faults)
+ *
+ * Note, VMXOFF causes a #UD if the CPU is !post-VMXON, but it's impossible to
+ * atomically track post-VMXON state, e.g. this may be called in NMI context.
+ * Eat all faults as all other faults on VMXOFF faults are mode related, i.e.
+ * faults are guaranteed to be due to the !post-VMXON check unless the CPU is
+ * magically in RM, VM86, compat mode, or at CPL>0.
+ */
+static int kvm_cpu_vmxoff(void)
+{
+ asm_volatile_goto("1: vmxoff\n\t"
+ _ASM_EXTABLE(1b, %l[fault])
+ ::: "cc", "memory" : fault);
+
+ cr4_clear_bits(X86_CR4_VMXE);
+ return 0;
+
+fault:
+ cr4_clear_bits(X86_CR4_VMXE);
+ return -EIO;
+}
+
static void vmx_emergency_disable(void)
{
int cpu = raw_smp_processor_id();
@@ -753,7 +775,8 @@ static void vmx_emergency_disable(void)
loaded_vmcss_on_cpu_link)
vmcs_clear(v->vmcs);
- __cpu_emergency_vmxoff();
+ if (__read_cr4() & X86_CR4_VMXE)
+ kvm_cpu_vmxoff();
}
static void __loaded_vmcs_clear(void *arg)
@@ -2818,7 +2841,7 @@ static void vmx_hardware_disable(void)
{
vmclear_local_loaded_vmcss();
- if (cpu_vmxoff())
+ if (kvm_cpu_vmxoff())
kvm_spurious_fault();
hv_reset_evmcs();
--
2.41.0.487.g6d72f3e995-goog
* [PATCH v4 11/19] KVM: SVM: Make KVM_AMD depend on CPU_SUP_AMD or CPU_SUP_HYGON
2023-07-21 20:18 [PATCH v4 00/19] x86/reboot: KVM: Clean up "emergency" virt code Sean Christopherson
` (9 preceding siblings ...)
2023-07-21 20:18 ` [PATCH v4 10/19] x86/virt: KVM: Move VMXOFF helpers into " Sean Christopherson
@ 2023-07-21 20:18 ` Sean Christopherson
2023-07-21 20:18 ` [PATCH v4 12/19] x86/virt: Drop unnecessary check on extended CPUID level in cpu_has_svm() Sean Christopherson
` (8 subsequent siblings)
19 siblings, 0 replies; 36+ messages in thread
From: Sean Christopherson @ 2023-07-21 20:18 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
Sean Christopherson, Paolo Bonzini
Cc: linux-kernel, kvm, Andrew Cooper, Kai Huang, Chao Gao
Make building KVM SVM support depend on support for AMD or Hygon. KVM
already effectively restricts SVM support to AMD and Hygon by virtue of
the vendor string checks in cpu_has_svm(), and KVM VMX support similarly
depends on one of its three known vendors (Intel, Centaur, or Zhaoxin).
Add the CPU_SUP_HYGON clause even though CPU_SUP_HYGON selects CPU_SUP_AMD
to document that KVM SVM support isn't just for AMD CPUs, and to prevent
breakage should Hygon support ever become a standalone thing.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/Kconfig | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
index 89ca7f4c1464..66dbd1f4d57d 100644
--- a/arch/x86/kvm/Kconfig
+++ b/arch/x86/kvm/Kconfig
@@ -101,7 +101,7 @@ config X86_SGX_KVM
config KVM_AMD
tristate "KVM for AMD processors support"
- depends on KVM
+ depends on KVM && (CPU_SUP_AMD || CPU_SUP_HYGON)
help
Provides support for KVM on AMD processors equipped with the AMD-V
(SVM) extensions.
--
2.41.0.487.g6d72f3e995-goog
* [PATCH v4 12/19] x86/virt: Drop unnecessary check on extended CPUID level in cpu_has_svm()
2023-07-21 20:18 [PATCH v4 00/19] x86/reboot: KVM: Clean up "emergency" virt code Sean Christopherson
` (10 preceding siblings ...)
2023-07-21 20:18 ` [PATCH v4 11/19] KVM: SVM: Make KVM_AMD depend on CPU_SUP_AMD or CPU_SUP_HYGON Sean Christopherson
@ 2023-07-21 20:18 ` Sean Christopherson
2023-07-21 20:18 ` [PATCH v4 13/19] x86/virt: KVM: Open code cpu_has_svm() into kvm_is_svm_supported() Sean Christopherson
` (7 subsequent siblings)
19 siblings, 0 replies; 36+ messages in thread
From: Sean Christopherson @ 2023-07-21 20:18 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
Sean Christopherson, Paolo Bonzini
Cc: linux-kernel, kvm, Andrew Cooper, Kai Huang, Chao Gao
Drop the explicit check on the extended CPUID level in cpu_has_svm(), as the
kernel's cached CPUID info will leave the entire SVM leaf unset if said
leaf is not supported by hardware. Prior to using cached information,
the check was needed to avoid false positives due to Intel's rather crazy
CPUID behavior of returning the values of the maximum supported leaf if
the specified leaf is unsupported.
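The hazard and why the cached info makes the check redundant can be modeled
in a few lines of toy C. This is purely illustrative: raw_cpuid_eax() mimics
the quirky hardware behavior with dummy data (returning the leaf id as a
nonzero payload), and cached_cpuid_eax() mimics the kernel leaving
out-of-range leaves zeroed; neither executes a real CPUID instruction.

```c
#include <assert.h>

#define MAX_EXT_LEAF 0x80000008u   /* pretend highest supported leaf */
#define SVM_LEAF     0x8000000au   /* CPUID 0x8000000A, the SVM leaf */

/* Toy model of the quirk: raw CPUID on affected CPUs answers an
 * out-of-range leaf with the highest supported leaf's (nonzero) data,
 * which a naive caller could mistake for valid SVM feature bits. */
static unsigned int raw_cpuid_eax(unsigned int leaf)
{
    if (leaf > MAX_EXT_LEAF)
        leaf = MAX_EXT_LEAF;       /* echoes the max supported leaf */
    return leaf;                   /* dummy nonzero payload */
}

/* Toy model of the kernel's cached info: leaves above the reported
 * extended level stay zeroed, so no false positive is possible and an
 * explicit level check adds nothing. */
static unsigned int cached_cpuid_eax(unsigned int leaf, unsigned int ext_level)
{
    return leaf > ext_level ? 0 : raw_cpuid_eax(leaf);
}
```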
Fixes: 682a8108872f ("x86/kvm/svm: Simplify cpu_has_svm()")
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/include/asm/virtext.h | 6 ------
1 file changed, 6 deletions(-)
diff --git a/arch/x86/include/asm/virtext.h b/arch/x86/include/asm/virtext.h
index a27801f2bc71..be50c414efe4 100644
--- a/arch/x86/include/asm/virtext.h
+++ b/arch/x86/include/asm/virtext.h
@@ -39,12 +39,6 @@ static inline int cpu_has_svm(const char **msg)
return 0;
}
- if (boot_cpu_data.extended_cpuid_level < SVM_CPUID_FUNC) {
- if (msg)
- *msg = "can't execute cpuid_8000000a";
- return 0;
- }
-
if (!boot_cpu_has(X86_FEATURE_SVM)) {
if (msg)
*msg = "svm not available";
--
2.41.0.487.g6d72f3e995-goog
* [PATCH v4 13/19] x86/virt: KVM: Open code cpu_has_svm() into kvm_is_svm_supported()
2023-07-21 20:18 [PATCH v4 00/19] x86/reboot: KVM: Clean up "emergency" virt code Sean Christopherson
` (11 preceding siblings ...)
2023-07-21 20:18 ` [PATCH v4 12/19] x86/virt: Drop unnecessary check on extended CPUID level in cpu_has_svm() Sean Christopherson
@ 2023-07-21 20:18 ` Sean Christopherson
2023-07-21 20:18 ` [PATCH v4 14/19] KVM: SVM: Check that the current CPU supports SVM in kvm_is_svm_supported() Sean Christopherson
` (6 subsequent siblings)
19 siblings, 0 replies; 36+ messages in thread
From: Sean Christopherson @ 2023-07-21 20:18 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
Sean Christopherson, Paolo Bonzini
Cc: linux-kernel, kvm, Andrew Cooper, Kai Huang, Chao Gao
Fold the guts of cpu_has_svm() into kvm_is_svm_supported(), its sole
remaining user.
No functional change intended.
Reviewed-by: Kai Huang <kai.huang@intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/include/asm/virtext.h | 28 ----------------------------
arch/x86/kvm/svm/svm.c | 11 ++++++++---
2 files changed, 8 insertions(+), 31 deletions(-)
diff --git a/arch/x86/include/asm/virtext.h b/arch/x86/include/asm/virtext.h
index be50c414efe4..632575e257d8 100644
--- a/arch/x86/include/asm/virtext.h
+++ b/arch/x86/include/asm/virtext.h
@@ -22,35 +22,7 @@
/*
* SVM functions:
*/
-
-/** Check if the CPU has SVM support
- *
- * You can use the 'msg' arg to get a message describing the problem,
- * if the function returns zero. Simply pass NULL if you are not interested
- * on the messages; gcc should take care of not generating code for
- * the messages on this case.
- */
-static inline int cpu_has_svm(const char **msg)
-{
- if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD &&
- boot_cpu_data.x86_vendor != X86_VENDOR_HYGON) {
- if (msg)
- *msg = "not amd or hygon";
- return 0;
- }
-
- if (!boot_cpu_has(X86_FEATURE_SVM)) {
- if (msg)
- *msg = "svm not available";
- return 0;
- }
- return 1;
-}
-
-
/** Disable SVM on the current CPU
- *
- * You should call this only if cpu_has_svm() returned true.
*/
static inline void cpu_svm_disable(void)
{
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 1ae9c2c7eacb..ff6c769aafb2 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -521,11 +521,16 @@ static void svm_init_osvw(struct kvm_vcpu *vcpu)
static bool kvm_is_svm_supported(void)
{
int cpu = raw_smp_processor_id();
- const char *msg;
u64 vm_cr;
- if (!cpu_has_svm(&msg)) {
- pr_err("SVM not supported by CPU %d, %s\n", cpu, msg);
+ if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD &&
+ boot_cpu_data.x86_vendor != X86_VENDOR_HYGON) {
+ pr_err("CPU %d isn't AMD or Hygon\n", cpu);
+ return false;
+ }
+
+ if (!boot_cpu_has(X86_FEATURE_SVM)) {
+ pr_err("SVM not supported by CPU %d\n", cpu);
return false;
}
--
2.41.0.487.g6d72f3e995-goog
* [PATCH v4 14/19] KVM: SVM: Check that the current CPU supports SVM in kvm_is_svm_supported()
2023-07-21 20:18 [PATCH v4 00/19] x86/reboot: KVM: Clean up "emergency" virt code Sean Christopherson
` (12 preceding siblings ...)
2023-07-21 20:18 ` [PATCH v4 13/19] x86/virt: KVM: Open code cpu_has_svm() into kvm_is_svm_supported() Sean Christopherson
@ 2023-07-21 20:18 ` Sean Christopherson
2023-07-24 21:21 ` Peter Zijlstra
2023-07-24 22:29 ` Dmitry Torokhov
2023-07-21 20:18 ` [PATCH v4 15/19] KVM: VMX: Ensure CPU is stable when probing basic VMX support Sean Christopherson
` (5 subsequent siblings)
19 siblings, 2 replies; 36+ messages in thread
From: Sean Christopherson @ 2023-07-21 20:18 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
Sean Christopherson, Paolo Bonzini
Cc: linux-kernel, kvm, Andrew Cooper, Kai Huang, Chao Gao
Check "this" CPU instead of the boot CPU when querying SVM support so that
the per-CPU checks done during hardware enabling actually function as
intended, i.e. will detect issues where SVM isn't supported on all CPUs.
Disable migration for the use from svm_init() mostly so that the standard
accessors for the per-CPU data can be used without getting yelled at by
CONFIG_DEBUG_PREEMPT=y sanity checks. Preventing the "disabled by BIOS"
error message from reporting the wrong CPU is largely a bonus, as ensuring
a stable CPU during module load is a non-goal for KVM.
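The wrapper shape the patch uses can be sketched with a toy model. The flag
and helper names below are illustrative stand-ins: migration_disabled mimics
the state migrate_disable() establishes, and check_this_cpu() mimics a probe
that (like the kernel's per-CPU accessors under CONFIG_DEBUG_PREEMPT=y)
insists on a stable CPU.

```c
#include <assert.h>

static int migration_disabled;   /* toy migration/pinning state */
static int checks_done;

/* Toy __kvm_is_svm_supported(): must run on a stable CPU, so it
 * asserts that the caller pinned the task first. */
static int check_this_cpu(void)
{
    assert(migration_disabled);
    checks_done++;
    return 1;
}

/* Wrapper pattern from the patch: pin the task for the duration of
 * the probe so per-CPU data can be read without tripping debug
 * checks, then unpin. */
static int check_with_migration_disabled(void)
{
    int ret;

    migration_disabled = 1;      /* stand-in for migrate_disable() */
    ret = check_this_cpu();
    migration_disabled = 0;      /* stand-in for migrate_enable() */
    return ret;
}
```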
Link: https://lore.kernel.org/all/ZAdxNgv0M6P63odE@google.com
Cc: Kai Huang <kai.huang@intel.com>
Cc: Chao Gao <chao.gao@intel.com>
Reviewed-by: Kai Huang <kai.huang@intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/svm/svm.c | 25 +++++++++++++++++++------
1 file changed, 19 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index ff6c769aafb2..9e449167e71b 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -518,18 +518,20 @@ static void svm_init_osvw(struct kvm_vcpu *vcpu)
vcpu->arch.osvw.status |= 1;
}
-static bool kvm_is_svm_supported(void)
+static bool __kvm_is_svm_supported(void)
{
- int cpu = raw_smp_processor_id();
+ int cpu = smp_processor_id();
+ struct cpuinfo_x86 *c = &cpu_data(cpu);
+
u64 vm_cr;
- if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD &&
- boot_cpu_data.x86_vendor != X86_VENDOR_HYGON) {
+ if (c->x86_vendor != X86_VENDOR_AMD &&
+ c->x86_vendor != X86_VENDOR_HYGON) {
pr_err("CPU %d isn't AMD or Hygon\n", cpu);
return false;
}
- if (!boot_cpu_has(X86_FEATURE_SVM)) {
+ if (!cpu_has(c, X86_FEATURE_SVM)) {
pr_err("SVM not supported by CPU %d\n", cpu);
return false;
}
@@ -548,9 +550,20 @@ static bool kvm_is_svm_supported(void)
return true;
}
+static bool kvm_is_svm_supported(void)
+{
+ bool supported;
+
+ migrate_disable();
+ supported = __kvm_is_svm_supported();
+ migrate_enable();
+
+ return supported;
+}
+
static int svm_check_processor_compat(void)
{
- if (!kvm_is_svm_supported())
+ if (!__kvm_is_svm_supported())
return -EIO;
return 0;
--
2.41.0.487.g6d72f3e995-goog
* [PATCH v4 15/19] KVM: VMX: Ensure CPU is stable when probing basic VMX support
2023-07-21 20:18 [PATCH v4 00/19] x86/reboot: KVM: Clean up "emergency" virt code Sean Christopherson
` (13 preceding siblings ...)
2023-07-21 20:18 ` [PATCH v4 14/19] KVM: SVM: Check that the current CPU supports SVM in kvm_is_svm_supported() Sean Christopherson
@ 2023-07-21 20:18 ` Sean Christopherson
2023-07-21 20:18 ` [PATCH v4 16/19] x86/virt: KVM: Move "disable SVM" helper into KVM SVM Sean Christopherson
` (4 subsequent siblings)
19 siblings, 0 replies; 36+ messages in thread
From: Sean Christopherson @ 2023-07-21 20:18 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
Sean Christopherson, Paolo Bonzini
Cc: linux-kernel, kvm, Andrew Cooper, Kai Huang, Chao Gao
Disable migration when probing VMX support during module load to ensure
the CPU is stable, mostly to match similar SVM logic, where allowing
migration effectively requires deliberately writing buggy code. As a bonus,
KVM won't report the wrong CPU to userspace if VMX is unsupported, but in
practice that is a very, very minor bonus as the only way that reporting
the wrong CPU would actually matter is if hardware is broken or if the
system is misconfigured, i.e. if KVM gets migrated from a CPU that _does_
support VMX to a CPU that does _not_ support VMX.
Reviewed-by: Kai Huang <kai.huang@intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/vmx/vmx.c | 17 ++++++++++++++---
1 file changed, 14 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 6f4fcd82fa6e..0e1f3856a9be 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2737,9 +2737,9 @@ static int setup_vmcs_config(struct vmcs_config *vmcs_conf,
return 0;
}
-static bool kvm_is_vmx_supported(void)
+static bool __kvm_is_vmx_supported(void)
{
- int cpu = raw_smp_processor_id();
+ int cpu = smp_processor_id();
if (!(cpuid_ecx(1) & feature_bit(VMX))) {
pr_err("VMX not supported by CPU %d\n", cpu);
@@ -2755,13 +2755,24 @@ static bool kvm_is_vmx_supported(void)
return true;
}
+static bool kvm_is_vmx_supported(void)
+{
+ bool supported;
+
+ migrate_disable();
+ supported = __kvm_is_vmx_supported();
+ migrate_enable();
+
+ return supported;
+}
+
static int vmx_check_processor_compat(void)
{
int cpu = raw_smp_processor_id();
struct vmcs_config vmcs_conf;
struct vmx_capability vmx_cap;
- if (!kvm_is_vmx_supported())
+ if (!__kvm_is_vmx_supported())
return -EIO;
if (setup_vmcs_config(&vmcs_conf, &vmx_cap) < 0) {
--
2.41.0.487.g6d72f3e995-goog
* [PATCH v4 16/19] x86/virt: KVM: Move "disable SVM" helper into KVM SVM
2023-07-21 20:18 [PATCH v4 00/19] x86/reboot: KVM: Clean up "emergency" virt code Sean Christopherson
` (14 preceding siblings ...)
2023-07-21 20:18 ` [PATCH v4 15/19] KVM: VMX: Ensure CPU is stable when probing basic VMX support Sean Christopherson
@ 2023-07-21 20:18 ` Sean Christopherson
2023-07-21 20:18 ` [PATCH v4 17/19] KVM: x86: Force kvm_rebooting=true during emergency reboot/crash Sean Christopherson
` (3 subsequent siblings)
19 siblings, 0 replies; 36+ messages in thread
From: Sean Christopherson @ 2023-07-21 20:18 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
Sean Christopherson, Paolo Bonzini
Cc: linux-kernel, kvm, Andrew Cooper, Kai Huang, Chao Gao
Move cpu_svm_disable() into KVM proper now that all hardware
virtualization management is routed through KVM. Remove the now-empty
virtext.h.
No functional change intended.
Reviewed-by: Kai Huang <kai.huang@intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/include/asm/virtext.h | 50 ----------------------------------
arch/x86/kvm/svm/svm.c | 29 +++++++++++++++++---
2 files changed, 25 insertions(+), 54 deletions(-)
delete mode 100644 arch/x86/include/asm/virtext.h
diff --git a/arch/x86/include/asm/virtext.h b/arch/x86/include/asm/virtext.h
deleted file mode 100644
index 632575e257d8..000000000000
--- a/arch/x86/include/asm/virtext.h
+++ /dev/null
@@ -1,50 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
-/* CPU virtualization extensions handling
- *
- * This should carry the code for handling CPU virtualization extensions
- * that needs to live in the kernel core.
- *
- * Author: Eduardo Habkost <ehabkost@redhat.com>
- *
- * Copyright (C) 2008, Red Hat Inc.
- *
- * Contains code from KVM, Copyright (C) 2006 Qumranet, Inc.
- */
-#ifndef _ASM_X86_VIRTEX_H
-#define _ASM_X86_VIRTEX_H
-
-#include <asm/processor.h>
-
-#include <asm/vmx.h>
-#include <asm/svm.h>
-#include <asm/tlbflush.h>
-
-/*
- * SVM functions:
- */
-/** Disable SVM on the current CPU
- */
-static inline void cpu_svm_disable(void)
-{
- uint64_t efer;
-
- wrmsrl(MSR_VM_HSAVE_PA, 0);
- rdmsrl(MSR_EFER, efer);
- if (efer & EFER_SVME) {
- /*
- * Force GIF=1 prior to disabling SVM to ensure INIT and NMI
- * aren't blocked, e.g. if a fatal error occurred between CLGI
- * and STGI. Note, STGI may #UD if SVM is disabled from NMI
- * context between reading EFER and executing STGI. In that
- * case, GIF must already be set, otherwise the NMI would have
- * been blocked, so just eat the fault.
- */
- asm_volatile_goto("1: stgi\n\t"
- _ASM_EXTABLE(1b, %l[fault])
- ::: "memory" : fault);
-fault:
- wrmsrl(MSR_EFER, efer & ~EFER_SVME);
- }
-}
-
-#endif /* _ASM_X86_VIRTEX_H */
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 9e449167e71b..47f9c7156609 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -42,8 +42,6 @@
#include <asm/reboot.h>
#include <asm/fpu/api.h>
-#include <asm/virtext.h>
-
#include <trace/events/ipi.h>
#include "trace.h"
@@ -582,9 +580,32 @@ void __svm_write_tsc_multiplier(u64 multiplier)
preempt_enable();
}
+static inline void kvm_cpu_svm_disable(void)
+{
+ uint64_t efer;
+
+ wrmsrl(MSR_VM_HSAVE_PA, 0);
+ rdmsrl(MSR_EFER, efer);
+ if (efer & EFER_SVME) {
+ /*
+ * Force GIF=1 prior to disabling SVM to ensure INIT and NMI
+ * aren't blocked, e.g. if a fatal error occurred between CLGI
+ * and STGI. Note, STGI may #UD if SVM is disabled from NMI
+ * context between reading EFER and executing STGI. In that
+ * case, GIF must already be set, otherwise the NMI would have
+ * been blocked, so just eat the fault.
+ */
+ asm_volatile_goto("1: stgi\n\t"
+ _ASM_EXTABLE(1b, %l[fault])
+ ::: "memory" : fault);
+fault:
+ wrmsrl(MSR_EFER, efer & ~EFER_SVME);
+ }
+}
+
static void svm_emergency_disable(void)
{
- cpu_svm_disable();
+ kvm_cpu_svm_disable();
}
static void svm_hardware_disable(void)
@@ -593,7 +614,7 @@ static void svm_hardware_disable(void)
if (tsc_scaling)
__svm_write_tsc_multiplier(SVM_TSC_RATIO_DEFAULT);
- cpu_svm_disable();
+ kvm_cpu_svm_disable();
amd_pmu_disable_virt();
}
--
2.41.0.487.g6d72f3e995-goog
* [PATCH v4 17/19] KVM: x86: Force kvm_rebooting=true during emergency reboot/crash
2023-07-21 20:18 [PATCH v4 00/19] x86/reboot: KVM: Clean up "emergency" virt code Sean Christopherson
` (15 preceding siblings ...)
2023-07-21 20:18 ` [PATCH v4 16/19] x86/virt: KVM: Move "disable SVM" helper into KVM SVM Sean Christopherson
@ 2023-07-21 20:18 ` Sean Christopherson
2023-07-21 20:18 ` [PATCH v4 18/19] KVM: SVM: Use "standard" stgi() helper when disabling SVM Sean Christopherson
` (2 subsequent siblings)
19 siblings, 0 replies; 36+ messages in thread
From: Sean Christopherson @ 2023-07-21 20:18 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
Sean Christopherson, Paolo Bonzini
Cc: linux-kernel, kvm, Andrew Cooper, Kai Huang, Chao Gao
Set kvm_rebooting when virtualization is disabled in an emergency so that
KVM eats faults on virtualization instructions even if kvm_reboot() isn't
reached.
Reviewed-by: Kai Huang <kai.huang@intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/svm/svm.c | 2 ++
arch/x86/kvm/vmx/vmx.c | 2 ++
2 files changed, 4 insertions(+)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 47f9c7156609..8d1b3c801629 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -605,6 +605,8 @@ static inline void kvm_cpu_svm_disable(void)
static void svm_emergency_disable(void)
{
+ kvm_rebooting = true;
+
kvm_cpu_svm_disable();
}
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 0e1f3856a9be..5d21931842a5 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -771,6 +771,8 @@ static void vmx_emergency_disable(void)
int cpu = raw_smp_processor_id();
struct loaded_vmcs *v;
+ kvm_rebooting = true;
+
list_for_each_entry(v, &per_cpu(loaded_vmcss_on_cpu, cpu),
loaded_vmcss_on_cpu_link)
vmcs_clear(v->vmcs);
--
2.41.0.487.g6d72f3e995-goog
* [PATCH v4 18/19] KVM: SVM: Use "standard" stgi() helper when disabling SVM
2023-07-21 20:18 [PATCH v4 00/19] x86/reboot: KVM: Clean up "emergency" virt code Sean Christopherson
` (16 preceding siblings ...)
2023-07-21 20:18 ` [PATCH v4 17/19] KVM: x86: Force kvm_rebooting=true during emergency reboot/crash Sean Christopherson
@ 2023-07-21 20:18 ` Sean Christopherson
2023-07-21 20:18 ` [PATCH v4 19/19] KVM: VMX: Skip VMCLEAR logic during emergency reboots if CR4.VMXE=0 Sean Christopherson
2023-08-04 0:40 ` [PATCH v4 00/19] x86/reboot: KVM: Clean up "emergency" virt code Sean Christopherson
19 siblings, 0 replies; 36+ messages in thread
From: Sean Christopherson @ 2023-07-21 20:18 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
Sean Christopherson, Paolo Bonzini
Cc: linux-kernel, kvm, Andrew Cooper, Kai Huang, Chao Gao
Now that kvm_rebooting is guaranteed to be true prior to disabling SVM
in an emergency, use the existing stgi() helper instead of open coding
STGI. In effect, eat faults on STGI if and only if kvm_rebooting==true.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/svm/svm.c | 13 +++----------
1 file changed, 3 insertions(+), 10 deletions(-)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 8d1b3c801629..4785d780cce3 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -588,17 +588,10 @@ static inline void kvm_cpu_svm_disable(void)
rdmsrl(MSR_EFER, efer);
if (efer & EFER_SVME) {
/*
- * Force GIF=1 prior to disabling SVM to ensure INIT and NMI
- * aren't blocked, e.g. if a fatal error occurred between CLGI
- * and STGI. Note, STGI may #UD if SVM is disabled from NMI
- * context between reading EFER and executing STGI. In that
- * case, GIF must already be set, otherwise the NMI would have
- * been blocked, so just eat the fault.
+ * Force GIF=1 prior to disabling SVM, e.g. to ensure INIT and
+ * NMI aren't blocked.
*/
- asm_volatile_goto("1: stgi\n\t"
- _ASM_EXTABLE(1b, %l[fault])
- ::: "memory" : fault);
-fault:
+ stgi();
wrmsrl(MSR_EFER, efer & ~EFER_SVME);
}
}
--
2.41.0.487.g6d72f3e995-goog
* [PATCH v4 19/19] KVM: VMX: Skip VMCLEAR logic during emergency reboots if CR4.VMXE=0
2023-07-21 20:18 [PATCH v4 00/19] x86/reboot: KVM: Clean up "emergency" virt code Sean Christopherson
` (17 preceding siblings ...)
2023-07-21 20:18 ` [PATCH v4 18/19] KVM: SVM: Use "standard" stgi() helper when disabling SVM Sean Christopherson
@ 2023-07-21 20:18 ` Sean Christopherson
2023-07-25 3:51 ` Huang, Kai
2023-08-04 0:40 ` [PATCH v4 00/19] x86/reboot: KVM: Clean up "emergency" virt code Sean Christopherson
19 siblings, 1 reply; 36+ messages in thread
From: Sean Christopherson @ 2023-07-21 20:18 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
Sean Christopherson, Paolo Bonzini
Cc: linux-kernel, kvm, Andrew Cooper, Kai Huang, Chao Gao
Bail from vmx_emergency_disable() without processing the list of loaded
VMCSes if CR4.VMXE=0, i.e. if the CPU can't be post-VMXON. It should be
impossible for the list to have entries if VMX is already disabled, and
even if that invariant doesn't hold, VMCLEAR will #UD anyways, i.e.
processing the list is pointless even if it somehow isn't empty.
Assuming no existing KVM bugs, this should be a glorified nop. The
primary motivation for the change is to avoid having code that looks like
it does VMCLEAR, but then skips VMXOFF, which is nonsensical.
Suggested-by: Kai Huang <kai.huang@intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/vmx/vmx.c | 12 ++++++++++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 5d21931842a5..0ef5ede9cb7c 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -773,12 +773,20 @@ static void vmx_emergency_disable(void)
kvm_rebooting = true;
+ /*
+ * Note, CR4.VMXE can be _cleared_ in NMI context, but it can only be
+ * set in task context. If this races with VMX being disabled by an NMI,
+ * VMCLEAR and VMXOFF may #UD, but KVM will eat those faults because
+ * kvm_rebooting is already set.
+ */
+ if (!(__read_cr4() & X86_CR4_VMXE))
+ return;
+
list_for_each_entry(v, &per_cpu(loaded_vmcss_on_cpu, cpu),
loaded_vmcss_on_cpu_link)
vmcs_clear(v->vmcs);
- if (__read_cr4() & X86_CR4_VMXE)
- kvm_cpu_vmxoff();
+ kvm_cpu_vmxoff();
}
static void __loaded_vmcs_clear(void *arg)
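[Not part of the patch: if it helps to see the reordering in isolation, here is a stand-alone C model of the resulting control flow. The names and stubbed-out bodies are illustrative only, not kernel code.]

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-alone model of the reordered vmx_emergency_disable() flow: the
 * CR4.VMXE check now gates both the VMCLEAR loop and VMXOFF, instead of
 * gating only VMXOFF. */
enum action { DID_NOTHING, DID_VMCLEAR_AND_VMXOFF };

static enum action vmx_emergency_disable_model(bool cr4_vmxe, int nr_loaded_vmcs)
{
    if (!cr4_vmxe)
        return DID_NOTHING;            /* CPU can't be post-VMXON: bail */

    for (int i = 0; i < nr_loaded_vmcs; i++)
        ;                              /* vmcs_clear(v->vmcs) */

    return DID_VMCLEAR_AND_VMXOFF;     /* kvm_cpu_vmxoff() */
}
```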
--
2.41.0.487.g6d72f3e995-goog
^ permalink raw reply related [flat|nested] 36+ messages in thread
* Re: [PATCH v4 05/19] x86/reboot: Assert that IRQs are disabled when turning off virtualization
2023-07-21 20:18 ` [PATCH v4 05/19] x86/reboot: Assert that IRQs are disabled when turning off virtualization Sean Christopherson
@ 2023-07-24 21:19 ` Peter Zijlstra
2023-07-24 21:41 ` Sean Christopherson
0 siblings, 1 reply; 36+ messages in thread
From: Peter Zijlstra @ 2023-07-24 21:19 UTC (permalink / raw)
To: Sean Christopherson
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
Paolo Bonzini, linux-kernel, kvm, Andrew Cooper, Kai Huang,
Chao Gao
On Fri, Jul 21, 2023 at 01:18:45PM -0700, Sean Christopherson wrote:
> Assert that IRQs are disabled when turning off virtualization in an
> emergency. KVM enables hardware via on_each_cpu(), i.e. could re-enable
> hardware if a pending IPI were delivered after disabling virtualization.
>
> Remove a misleading comment from emergency_reboot_disable_virtualization()
> about "just" needing to guarantee the CPU is stable (see above).
>
> Reviewed-by: Kai Huang <kai.huang@intel.com>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
> arch/x86/kernel/reboot.c | 8 +++++++-
> 1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
> index 48ad2d1ff83d..4cad7183b89e 100644
> --- a/arch/x86/kernel/reboot.c
> +++ b/arch/x86/kernel/reboot.c
> @@ -532,7 +532,6 @@ static inline void nmi_shootdown_cpus_on_restart(void);
>
> static void emergency_reboot_disable_virtualization(void)
> {
> - /* Just make sure we won't change CPUs while doing this */
> local_irq_disable();
>
> /*
> @@ -821,6 +820,13 @@ void cpu_emergency_disable_virtualization(void)
> {
> cpu_emergency_virt_cb *callback;
>
> + /*
> + * IRQs must be disabled as KVM enables virtualization in hardware via
> + * function call IPIs, i.e. IRQs need to be disabled to guarantee
> + * virtualization stays disabled.
> + */
> + lockdep_assert_irqs_disabled();
> +
> rcu_read_lock();
> callback = rcu_dereference(cpu_emergency_virt_callback);
> if (callback)
Strictly speaking you don't need rcu_read_lock() when IRQs are already
disabled, but since this is non-performance critical code, it might be
best to keep it super obvious. IOW, carry on.
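[For readers following along: the registration/invocation pattern being discussed can be modeled in user space with a single atomically-published function pointer. This is a toy sketch with made-up names, standing in for the kernel's RCU-protected cpu_emergency_virt_callback.]

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

/* Toy model: one writer publishes the callback, the emergency path takes
 * a single snapshot of the pointer and invokes it if present. In the
 * kernel the read side is an RCU read-side critical section, which is
 * already implied by having IRQs disabled on non-preemptible RCU. */
typedef void (*virt_cb)(void);
static _Atomic(virt_cb) emergency_virt_callback = NULL;

static int disable_calls;
static void fake_disable_virt(void) { disable_calls++; }

static void register_virt_cb(virt_cb cb)
{
    /* RCU publish (rcu_assign_pointer) equivalent */
    atomic_store_explicit(&emergency_virt_callback, cb, memory_order_release);
}

static void emergency_disable_model(void)
{
    /* rcu_dereference() equivalent: one sampled snapshot of the pointer */
    virt_cb cb = atomic_load_explicit(&emergency_virt_callback,
                                      memory_order_acquire);
    if (cb)
        cb();
}
```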
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH v4 14/19] KVM: SVM: Check that the current CPU supports SVM in kvm_is_svm_supported()
2023-07-21 20:18 ` [PATCH v4 14/19] KVM: SVM: Check that the current CPU supports SVM in kvm_is_svm_supported() Sean Christopherson
@ 2023-07-24 21:21 ` Peter Zijlstra
2023-07-24 21:40 ` Sean Christopherson
2023-07-24 22:29 ` Dmitry Torokhov
1 sibling, 1 reply; 36+ messages in thread
From: Peter Zijlstra @ 2023-07-24 21:21 UTC (permalink / raw)
To: Sean Christopherson
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
Paolo Bonzini, linux-kernel, kvm, Andrew Cooper, Kai Huang,
Chao Gao
On Fri, Jul 21, 2023 at 01:18:54PM -0700, Sean Christopherson wrote:
> Check "this" CPU instead of the boot CPU when querying SVM support so that
> the per-CPU checks done during hardware enabling actually function as
intended, i.e. will detect issues where SVM isn't supported on all CPUs.
Is that a realistic concern?
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH v4 14/19] KVM: SVM: Check that the current CPU supports SVM in kvm_is_svm_supported()
2023-07-24 21:21 ` Peter Zijlstra
@ 2023-07-24 21:40 ` Sean Christopherson
2023-07-25 9:16 ` Peter Zijlstra
0 siblings, 1 reply; 36+ messages in thread
From: Sean Christopherson @ 2023-07-24 21:40 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
Paolo Bonzini, linux-kernel, kvm, Andrew Cooper, Kai Huang,
Chao Gao
On Mon, Jul 24, 2023, Peter Zijlstra wrote:
> On Fri, Jul 21, 2023 at 01:18:54PM -0700, Sean Christopherson wrote:
> > Check "this" CPU instead of the boot CPU when querying SVM support so that
> > the per-CPU checks done during hardware enabling actually function as
> > intended, i.e. will detect issues where SVM isn't supported on all CPUs.
>
> Is that a realistic concern?
It's not a concern in the sense that it should never happen, but I know of at
least one example where VMX on Intel completely disappeared[1]. The "compatibility"
checks are really more about the entire VMX/SVM feature set, the base VMX/SVM
support check is just an easy and obvious precursor to the full compatibility
checks.
Of course, SVM doesn't currently have compatibility checks on the full SVM feature
set, but that's more due to lack of a forcing function than a desire to _not_ have
them. Intel CPUs have a pesky habit of bugs, ucode updates, and/or in-field errors
resulting in VMX features randomly appearing or disappearing. E.g. there's an
ongoing bugzilla (sorry) issue[2] where a user is only able to load KVM *after* a
suspend+resume cycle, because TSC scaling only shows up on one socket immediately
after boot, which is then somehow resolved by suspend+resume.
[1] 009bce1df0bb ("x86/split_lock: Don't write MSR_TEST_CTRL on CPUs that aren't whitelisted")
[2] https://bugzilla.kernel.org/show_bug.cgi?id=217574
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH v4 05/19] x86/reboot: Assert that IRQs are disabled when turning off virtualization
2023-07-24 21:19 ` Peter Zijlstra
@ 2023-07-24 21:41 ` Sean Christopherson
0 siblings, 0 replies; 36+ messages in thread
From: Sean Christopherson @ 2023-07-24 21:41 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
Paolo Bonzini, linux-kernel, kvm, Andrew Cooper, Kai Huang,
Chao Gao
On Mon, Jul 24, 2023, Peter Zijlstra wrote:
> On Fri, Jul 21, 2023 at 01:18:45PM -0700, Sean Christopherson wrote:
> > Assert that IRQs are disabled when turning off virtualization in an
> > emergency. KVM enables hardware via on_each_cpu(), i.e. could re-enable
> > hardware if a pending IPI were delivered after disabling virtualization.
> >
> > Remove a misleading comment from emergency_reboot_disable_virtualization()
> > about "just" needing to guarantee the CPU is stable (see above).
> >
> > Reviewed-by: Kai Huang <kai.huang@intel.com>
> > Signed-off-by: Sean Christopherson <seanjc@google.com>
> > ---
> > arch/x86/kernel/reboot.c | 8 +++++++-
> > 1 file changed, 7 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
> > index 48ad2d1ff83d..4cad7183b89e 100644
> > --- a/arch/x86/kernel/reboot.c
> > +++ b/arch/x86/kernel/reboot.c
> > @@ -532,7 +532,6 @@ static inline void nmi_shootdown_cpus_on_restart(void);
> >
> > static void emergency_reboot_disable_virtualization(void)
> > {
> > - /* Just make sure we won't change CPUs while doing this */
> > local_irq_disable();
> >
> > /*
> > @@ -821,6 +820,13 @@ void cpu_emergency_disable_virtualization(void)
> > {
> > cpu_emergency_virt_cb *callback;
> >
> > + /*
> > + * IRQs must be disabled as KVM enables virtualization in hardware via
> > + * function call IPIs, i.e. IRQs need to be disabled to guarantee
> > + * virtualization stays disabled.
> > + */
> > + lockdep_assert_irqs_disabled();
> > +
> > rcu_read_lock();
> > callback = rcu_dereference(cpu_emergency_virt_callback);
> > if (callback)
>
> Strictly speaking you don't need rcu_read_lock() when IRQs are already
> disabled, but since this is non-performance critical code, it might be
> best to keep it super obvious. IOW, carry on.
Ha! IIRC, I even had a patch to drop the explicit rcu_read_lock(), but decided
I didn't want to tie the use of cpu_emergency_virt_callback to KVM's behavior of
enabling virtualization via IPIs.
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH v4 14/19] KVM: SVM: Check that the current CPU supports SVM in kvm_is_svm_supported()
2023-07-21 20:18 ` [PATCH v4 14/19] KVM: SVM: Check that the current CPU supports SVM in kvm_is_svm_supported() Sean Christopherson
2023-07-24 21:21 ` Peter Zijlstra
@ 2023-07-24 22:29 ` Dmitry Torokhov
2023-07-24 23:53 ` Sean Christopherson
1 sibling, 1 reply; 36+ messages in thread
From: Dmitry Torokhov @ 2023-07-24 22:29 UTC (permalink / raw)
To: Sean Christopherson
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
Paolo Bonzini, linux-kernel, kvm, Andrew Cooper, Kai Huang,
Chao Gao
Hi Sean,
On Fri, Jul 21, 2023 at 01:18:54PM -0700, Sean Christopherson wrote:
> +static bool kvm_is_svm_supported(void)
> +{
> + bool supported;
> +
> + migrate_disable();
> + supported = __kvm_is_svm_supported();
> + migrate_enable();
I am typically very wary of the constructs like this, as the value
returned is obsolete the moment migrate_enable() happens. Is value of
"svm was supported at some time in the past but may or may not be
supported right now" useful and if it is then could you add comment why?
Thanks.
--
Dmitry
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH v4 14/19] KVM: SVM: Check that the current CPU supports SVM in kvm_is_svm_supported()
2023-07-24 22:29 ` Dmitry Torokhov
@ 2023-07-24 23:53 ` Sean Christopherson
0 siblings, 0 replies; 36+ messages in thread
From: Sean Christopherson @ 2023-07-24 23:53 UTC (permalink / raw)
To: Dmitry Torokhov
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
Paolo Bonzini, linux-kernel, kvm, Andrew Cooper, Kai Huang,
Chao Gao
On Mon, Jul 24, 2023, Dmitry Torokhov wrote:
> Hi Sean,
>
> On Fri, Jul 21, 2023 at 01:18:54PM -0700, Sean Christopherson wrote:
> > +static bool kvm_is_svm_supported(void)
> > +{
> > + bool supported;
> > +
> > + migrate_disable();
> > + supported = __kvm_is_svm_supported();
> > + migrate_enable();
>
> I am typically very wary of the constructs like this, as the value
> returned is obsolete the moment migrate_enable() happens.
Yeah, I don't like this code, but there's no great solution in this case. Or at
least, none that I've found. At some point KVM has to enable migration/preemption
before "is KVM supported?" is ultimately consumed, as the real consumer is
userspace.
> Is value of "svm was supported at some time in the past but may or may not be
> supported right now" useful and if it is then could you add comment why?
No, because barring fatal silicon/ucode/kernel bugs, SVM support isn't expected
to disappear or (re)appear.
KVM defends against the "disappear" case as much as can be reasonably expected.
It's ugly, but functionally ok (not perfect, but ok). KVM doesn't actually care
which CPU does the initial support check, because KVM will do fully protected support
checks on all CPUs before actually letting userspace create VMs. This is why the
changelog states that ensuring a stable CPU is a non-goal, and also why the inner
helpers don't use the raw accessors.
The "(re)appear" case doesn't need to be handled, because userspace could simply
retry if it really wanted to (but that would be quite insane/nonsensical, and
just asking for problems).
I didn't add a comment because VMX uses the exact same pattern, and I didn't
want to copy+paste a non-trivial comment. And this is a single use local helper,
so I'm not terribly concerned about it being misused.
That said, I'll see if I can find a common, intuitive location to document this.
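[For readers following the thread: the pattern under discussion can be modeled in user space. The two-CPU "asymmetric" system and all names here are hypothetical, purely to show why the check runs pinned to one CPU even though the answer is stale once migration is re-enabled.]

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the migrate_disable() check: pinning guarantees the
 * CPUID/MSR reads all target one CPU; it does NOT make the returned
 * value durable, which is exactly the property discussed above. */
static bool cpu_has_svm[2] = { true, false };  /* hypothetical asymmetry */
static int current_cpu;

static void migrate_disable_model(void) { /* task pinned to current_cpu */ }
static void migrate_enable_model(void)  { current_cpu ^= 1; /* may migrate */ }

static bool kvm_is_svm_supported_model(void)
{
    bool supported;

    migrate_disable_model();
    supported = cpu_has_svm[current_cpu];  /* __kvm_is_svm_supported() */
    migrate_enable_model();
    return supported;
}
```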
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH v4 03/19] x86/reboot: KVM: Handle VMXOFF in KVM's reboot callback
2023-07-21 20:18 ` [PATCH v4 03/19] x86/reboot: KVM: Handle VMXOFF in KVM's reboot callback Sean Christopherson
@ 2023-07-24 23:57 ` Huang, Kai
0 siblings, 0 replies; 36+ messages in thread
From: Huang, Kai @ 2023-07-24 23:57 UTC (permalink / raw)
To: tglx@linutronix.de, x86@kernel.org, mingo@redhat.com,
pbonzini@redhat.com, Christopherson,, Sean, bp@alien8.de,
dave.hansen@linux.intel.com
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Gao, Chao,
andrew.cooper3@citrix.com
On Fri, 2023-07-21 at 13:18 -0700, Sean Christopherson wrote:
> Use KVM VMX's reboot/crash callback to do VMXOFF in an emergency instead
> of manually and blindly doing VMXOFF. There's no need to attempt VMXOFF
> if a hypervisor, i.e. KVM, isn't loaded/active, i.e. if the CPU can't
> possibly be post-VMXON.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
>
Reviewed-by: Kai Huang <kai.huang@intel.com>
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH v4 07/19] x86/reboot: Disable virtualization during reboot iff callback is registered
2023-07-21 20:18 ` [PATCH v4 07/19] x86/reboot: Disable virtualization during reboot iff callback is registered Sean Christopherson
@ 2023-07-24 23:57 ` Huang, Kai
0 siblings, 0 replies; 36+ messages in thread
From: Huang, Kai @ 2023-07-24 23:57 UTC (permalink / raw)
To: tglx@linutronix.de, x86@kernel.org, mingo@redhat.com,
pbonzini@redhat.com, Christopherson,, Sean, bp@alien8.de,
dave.hansen@linux.intel.com
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Gao, Chao,
andrew.cooper3@citrix.com
On Fri, 2023-07-21 at 13:18 -0700, Sean Christopherson wrote:
> Attempt to disable virtualization during an emergency reboot if and only
> if there is a registered virt callback, i.e. iff a hypervisor (KVM) is
> active. If there's no active hypervisor, then the CPU can't be operating
> with VMX or SVM enabled (barring an egregious bug).
>
> Checking for a valid callback instead of simply for SVM or VMX support
> can also eliminate spurious NMIs by avoiding the unnecessary call to
> nmi_shootdown_cpus_on_restart().
>
> Note, IRQs are disabled, which prevents KVM from coming along and
> enabling virtualization after the fact.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
>
Reviewed-by: Kai Huang <kai.huang@intel.com>
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH v4 19/19] KVM: VMX: Skip VMCLEAR logic during emergency reboots if CR4.VMXE=0
2023-07-21 20:18 ` [PATCH v4 19/19] KVM: VMX: Skip VMCLEAR logic during emergency reboots if CR4.VMXE=0 Sean Christopherson
@ 2023-07-25 3:51 ` Huang, Kai
2023-07-25 18:15 ` Sean Christopherson
0 siblings, 1 reply; 36+ messages in thread
From: Huang, Kai @ 2023-07-25 3:51 UTC (permalink / raw)
To: tglx@linutronix.de, x86@kernel.org, mingo@redhat.com,
pbonzini@redhat.com, Christopherson,, Sean, bp@alien8.de,
dave.hansen@linux.intel.com
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Gao, Chao,
andrew.cooper3@citrix.com
On Fri, 2023-07-21 at 13:18 -0700, Sean Christopherson wrote:
> Bail from vmx_emergency_disable() without processing the list of loaded
> VMCSes if CR4.VMXE=0, i.e. if the CPU can't be post-VMXON. It should be
> impossible for the list to have entries if VMX is already disabled, and
> even if that invariant doesn't hold, VMCLEAR will #UD anyways, i.e.
> processing the list is pointless even if it somehow isn't empty.
>
> Assuming no existing KVM bugs, this should be a glorified nop. The
> primary motivation for the change is to avoid having code that looks like
> it does VMCLEAR, but then skips VMXOFF, which is nonsensical.
>
> Suggested-by: Kai Huang <kai.huang@intel.com>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
> arch/x86/kvm/vmx/vmx.c | 12 ++++++++++--
> 1 file changed, 10 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 5d21931842a5..0ef5ede9cb7c 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -773,12 +773,20 @@ static void vmx_emergency_disable(void)
>
> kvm_rebooting = true;
>
> + /*
> + * Note, CR4.VMXE can be _cleared_ in NMI context, but it can only be
> > + * set in task context. If this races with VMX being disabled by an NMI,
> > + * VMCLEAR and VMXOFF may #UD, but KVM will eat those faults because
> > + * kvm_rebooting is already set.
> + */
I am not quite following this comment. IIUC this code path is only called from
NMI context in case of emergency VMX disable. How can it race with "VMX being
disabled by an NMI"? Shouldn't it be the normal vmx_hardware_disable() that may
race with an NMI, not this one?
> + if (!(__read_cr4() & X86_CR4_VMXE))
> + return;
> +
> list_for_each_entry(v, &per_cpu(loaded_vmcss_on_cpu, cpu),
> loaded_vmcss_on_cpu_link)
> vmcs_clear(v->vmcs);
>
> - if (__read_cr4() & X86_CR4_VMXE)
> - kvm_cpu_vmxoff();
> + kvm_cpu_vmxoff();
> }
>
> static void __loaded_vmcs_clear(void *arg)
Anyway, the actual code change LGTM:
Reviewed-by: Kai Huang <kai.huang@intel.com>
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH v4 14/19] KVM: SVM: Check that the current CPU supports SVM in kvm_is_svm_supported()
2023-07-24 21:40 ` Sean Christopherson
@ 2023-07-25 9:16 ` Peter Zijlstra
2023-07-27 16:39 ` Sean Christopherson
0 siblings, 1 reply; 36+ messages in thread
From: Peter Zijlstra @ 2023-07-25 9:16 UTC (permalink / raw)
To: Sean Christopherson
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
Paolo Bonzini, linux-kernel, kvm, Andrew Cooper, Kai Huang,
Chao Gao
On Mon, Jul 24, 2023 at 02:40:03PM -0700, Sean Christopherson wrote:
> On Mon, Jul 24, 2023, Peter Zijlstra wrote:
> > On Fri, Jul 21, 2023 at 01:18:54PM -0700, Sean Christopherson wrote:
> > > Check "this" CPU instead of the boot CPU when querying SVM support so that
> > > the per-CPU checks done during hardware enabling actually function as
> > > intended, i.e. will detect issues where SVM isn't supported on all CPUs.
> >
> > Is that a realistic concern?
>
> It's not a concern in the sense that it should never happen, but I know of at
> least one example where VMX on Intel completely disappeared[1]. The "compatibility"
> checks are really more about the entire VMX/SVM feature set, the base VMX/SVM
> support check is just an easy and obvious precursor to the full compatibility
> checks.
>
> Of course, SVM doesn't currently have compatibility checks on the full SVM feature
> set, but that's more due to lack of a forcing function than a desire to _not_ have
> them. Intel CPUs have a pesky habit of bugs, ucode updates, and/or in-field errors
> resulting in VMX features randomly appearing or disappearing. E.g. there's an
> ongoing bugzilla (sorry) issue[2] where a user is only able to load KVM *after* a
> suspend+resume cycle, because TSC scaling only shows up on one socket immediately
> after boot, which is then somehow resolved by suspend+resume.
>
> [1] 009bce1df0bb ("x86/split_lock: Don't write MSR_TEST_CTRL on CPUs that aren't whitelisted")
> [2] https://bugzilla.kernel.org/show_bug.cgi?id=217574
Is that using late loading of ucode? Anything that changes *any* feature
flag must be early ucode load, there is no other possible way since
Linux does feature enumeration early, and features are fixed thereafter.
This is one of the many reasons late loading is a trainwreck.
Doing suspend/resume probably re-loads the firmware and re-does the
feature enumeration -- I didn't check.
Also, OMG don't you just love computers :/
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH v4 19/19] KVM: VMX: Skip VMCLEAR logic during emergency reboots if CR4.VMXE=0
2023-07-25 3:51 ` Huang, Kai
@ 2023-07-25 18:15 ` Sean Christopherson
2023-07-25 22:20 ` Huang, Kai
0 siblings, 1 reply; 36+ messages in thread
From: Sean Christopherson @ 2023-07-25 18:15 UTC (permalink / raw)
To: Kai Huang
Cc: tglx@linutronix.de, x86@kernel.org, mingo@redhat.com,
pbonzini@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com,
kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Chao Gao,
andrew.cooper3@citrix.com
On Tue, Jul 25, 2023, Kai Huang wrote:
> On Fri, 2023-07-21 at 13:18 -0700, Sean Christopherson wrote:
> > Bail from vmx_emergency_disable() without processing the list of loaded
> > VMCSes if CR4.VMXE=0, i.e. if the CPU can't be post-VMXON. It should be
> > impossible for the list to have entries if VMX is already disabled, and
> > even if that invariant doesn't hold, VMCLEAR will #UD anyways, i.e.
> > processing the list is pointless even if it somehow isn't empty.
> >
> > Assuming no existing KVM bugs, this should be a glorified nop. The
> > primary motivation for the change is to avoid having code that looks like
> > it does VMCLEAR, but then skips VMXOFF, which is nonsensical.
> >
> > Suggested-by: Kai Huang <kai.huang@intel.com>
> > Signed-off-by: Sean Christopherson <seanjc@google.com>
> > ---
> > arch/x86/kvm/vmx/vmx.c | 12 ++++++++++--
> > 1 file changed, 10 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> > index 5d21931842a5..0ef5ede9cb7c 100644
> > --- a/arch/x86/kvm/vmx/vmx.c
> > +++ b/arch/x86/kvm/vmx/vmx.c
> > @@ -773,12 +773,20 @@ static void vmx_emergency_disable(void)
> >
> > kvm_rebooting = true;
> >
> > + /*
> > + * Note, CR4.VMXE can be _cleared_ in NMI context, but it can only be
> > + * set in task context. If this races with VMX being disabled by an NMI,
> > + * VMCLEAR and VMXOFF may #UD, but KVM will eat those faults because
> > + * kvm_rebooting is already set.
> > + */
>
> I am not quite following this comment. IIUC this code path is only called from
> NMI context in case of emergency VMX disable.
The CPU that initiates the emergency reboot can invoke the callback from process
context, only responding CPUs are guaranteed to be handled via NMI shootdown.
E.g. `reboot -f` will reach this point synchronously.
> How can it race with "VMX being disabled by an NMI"?
Somewhat theoretically, a different CPU could panic() and do a shootdown of the
CPU that is handling `reboot -f`.
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH v4 19/19] KVM: VMX: Skip VMCLEAR logic during emergency reboots if CR4.VMXE=0
2023-07-25 18:15 ` Sean Christopherson
@ 2023-07-25 22:20 ` Huang, Kai
0 siblings, 0 replies; 36+ messages in thread
From: Huang, Kai @ 2023-07-25 22:20 UTC (permalink / raw)
To: Christopherson,, Sean
Cc: Gao, Chao, dave.hansen@linux.intel.com, bp@alien8.de,
x86@kernel.org, mingo@redhat.com, tglx@linutronix.de,
andrew.cooper3@citrix.com, pbonzini@redhat.com,
kvm@vger.kernel.org, linux-kernel@vger.kernel.org
On Tue, 2023-07-25 at 11:15 -0700, Sean Christopherson wrote:
> On Tue, Jul 25, 2023, Kai Huang wrote:
> > On Fri, 2023-07-21 at 13:18 -0700, Sean Christopherson wrote:
> > > Bail from vmx_emergency_disable() without processing the list of loaded
> > > VMCSes if CR4.VMXE=0, i.e. if the CPU can't be post-VMXON. It should be
> > > impossible for the list to have entries if VMX is already disabled, and
> > > even if that invariant doesn't hold, VMCLEAR will #UD anyways, i.e.
> > > processing the list is pointless even if it somehow isn't empty.
> > >
> > > Assuming no existing KVM bugs, this should be a glorified nop. The
> > > primary motivation for the change is to avoid having code that looks like
> > > it does VMCLEAR, but then skips VMXOFF, which is nonsensical.
> > >
> > > Suggested-by: Kai Huang <kai.huang@intel.com>
> > > Signed-off-by: Sean Christopherson <seanjc@google.com>
> > > ---
> > > arch/x86/kvm/vmx/vmx.c | 12 ++++++++++--
> > > 1 file changed, 10 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> > > index 5d21931842a5..0ef5ede9cb7c 100644
> > > --- a/arch/x86/kvm/vmx/vmx.c
> > > +++ b/arch/x86/kvm/vmx/vmx.c
> > > @@ -773,12 +773,20 @@ static void vmx_emergency_disable(void)
> > >
> > > kvm_rebooting = true;
> > >
> > > + /*
> > > + * Note, CR4.VMXE can be _cleared_ in NMI context, but it can only be
> > > + * set in task context. If this races with VMX being disabled by an NMI,
> > > + * VMCLEAR and VMXOFF may #UD, but KVM will eat those faults because
> > > + * kvm_rebooting is already set.
> > > + */
> >
> > I am not quite following this comment. IIUC this code path is only called from
> > NMI context in case of emergency VMX disable.
>
> The CPU that initiates the emergency reboot can invoke the callback from process
> context, only responding CPUs are guaranteed to be handled via NMI shootdown.
> E.g. `reboot -f` will reach this point synchronously.
>
> > How can it race with "VMX being disabled by an NMI"?
>
> Somewhat theoretically, a different CPU could panic() and do a shootdown of the
> CPU that is handling `reboot -f`.
Yeah this is the only case I can think of too.
Anyway, LGTM. Thanks for explaining.
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH v4 14/19] KVM: SVM: Check that the current CPU supports SVM in kvm_is_svm_supported()
2023-07-25 9:16 ` Peter Zijlstra
@ 2023-07-27 16:39 ` Sean Christopherson
0 siblings, 0 replies; 36+ messages in thread
From: Sean Christopherson @ 2023-07-27 16:39 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
Paolo Bonzini, linux-kernel, kvm, Andrew Cooper, Kai Huang,
Chao Gao
On Tue, Jul 25, 2023, Peter Zijlstra wrote:
> On Mon, Jul 24, 2023 at 02:40:03PM -0700, Sean Christopherson wrote:
> > On Mon, Jul 24, 2023, Peter Zijlstra wrote:
> > > On Fri, Jul 21, 2023 at 01:18:54PM -0700, Sean Christopherson wrote:
> > > > Check "this" CPU instead of the boot CPU when querying SVM support so that
> > > > the per-CPU checks done during hardware enabling actually function as
> > > > intended, i.e. will detect issues where SVM isn't supported on all CPUs.
> > >
> > > Is that a realistic concern?
> >
> > It's not a concern in the sense that it should never happen, but I know of at
> > least one example where VMX on Intel completely disappeared[1]. The "compatibility"
> > checks are really more about the entire VMX/SVM feature set, the base VMX/SVM
> > support check is just an easy and obvious precursor to the full compatibility
> > checks.
> >
> > Of course, SVM doesn't currently have compatibility checks on the full SVM feature
> > set, but that's more due to lack of a forcing function than a desire to _not_ have
> > them. Intel CPUs have a pesky habit of bugs, ucode updates, and/or in-field errors
> > resulting in VMX features randomly appearing or disappearing. E.g. there's an
> > ongoing bugzilla (sorry) issue[2] where a user is only able to load KVM *after* a
> > suspend+resume cycle, because TSC scaling only shows up on one socket immediately
> > after boot, which is then somehow resolved by suspend+resume.
> >
> > [1] 009bce1df0bb ("x86/split_lock: Don't write MSR_TEST_CTRL on CPUs that aren't whitelisted")
> > [2] https://bugzilla.kernel.org/show_bug.cgi?id=217574
>
> Is that using late loading of ucode?
Not sure, though I don't think that is relevant for this particular bug.
> Anything that changes *any* feature flag must be early ucode load, there is
> no other possible way since einux does feature enumeration early, and
> features are fixed thereafter.
>
> This is one of the many reasons late loading is a trainwreck.
>
> Doing suspend/resume probably re-loads the firmware
Ya, it does.
> and re-does the feature enumeration -- I didn't check.
The reported ucode revision is the same before and after resume, and is consistent
across all CPUs. KVM does the per-CPU feature enumeration (for sanity checks)
every time userspace attempts to load KVM (the module), so the timing of the ucode
patch load _shouldn't_ matter.
The user is running quite old ucode for their system, so the current theory is that
old buggy ucode is to blame.
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH v4 10/19] x86/virt: KVM: Move VMXOFF helpers into KVM VMX
2023-07-21 20:18 ` [PATCH v4 10/19] x86/virt: KVM: Move VMXOFF helpers into " Sean Christopherson
@ 2023-07-28 9:08 ` Xu Yilun
2023-07-28 9:43 ` Huang, Kai
0 siblings, 1 reply; 36+ messages in thread
From: Xu Yilun @ 2023-07-28 9:08 UTC (permalink / raw)
To: Sean Christopherson
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
Paolo Bonzini, linux-kernel, kvm, Andrew Cooper, Kai Huang,
Chao Gao
On 2023-07-21 at 13:18:50 -0700, Sean Christopherson wrote:
> Now that VMX is disabled in emergencies via the virt callbacks, move the
> VMXOFF helpers into KVM, the only remaining user.
Not sure if it's too early to mention.
Intel TDX Connect could be a future user, it is the TDX extension for
device security.
TDX uses SEAMCALL to interact with TDX Module, and SEAMCALL execution
requires VMXON. This is also true for TDX Connect. But TDX Connect
covers more controls outside of KVM's scope, like PCI IDE, SPDM, IOMMU.
IOW, other driver modules may use SEAMCALLs and in turn use VMXON/OFF
for TDX Connect.
I'm wondering if then we should again move VMXON/OFF helpers back to
virtext.h
Or, could we just keep vmxoff unchanged now?
Thanks,
Yilun
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH v4 10/19] x86/virt: KVM: Move VMXOFF helpers into KVM VMX
2023-07-28 9:08 ` Xu Yilun
@ 2023-07-28 9:43 ` Huang, Kai
0 siblings, 0 replies; 36+ messages in thread
From: Huang, Kai @ 2023-07-28 9:43 UTC (permalink / raw)
To: Xu, Yilun, Christopherson,, Sean
Cc: andrew.cooper3@citrix.com, Gao, Chao, x86@kernel.org,
dave.hansen@linux.intel.com, bp@alien8.de, mingo@redhat.com,
tglx@linutronix.de, pbonzini@redhat.com, kvm@vger.kernel.org,
linux-kernel@vger.kernel.org
On Fri, 2023-07-28 at 17:08 +0800, Xu, Yilun wrote:
> On 2023-07-21 at 13:18:50 -0700, Sean Christopherson wrote:
> > Now that VMX is disabled in emergencies via the virt callbacks, move the
> > VMXOFF helpers into KVM, the only remaining user.
>
> Not sure if it's too early to mention.
>
> Intel TDX Connect could be a future user, it is the TDX extension for
> device security.
>
> TDX uses SEAMCALL to interact with TDX Module, and SEAMCALL execution
> requires VMXON. This is also true for TDX Connect. But TDX Connect
> covers more controls outside of KVM's scope, like PCI IDE, SPDM, IOMMU.
> IOW, other driver modules may use SEAMCALLs and in turn use VMXON/OFF
> for TDX Connect.
>
> I'm wondering if then we should again move VMXON/OFF helpers back to
> virtext.h
>
> Or, could we just keep vmxoff unchanged now?
>
I'd say we should just proceed with Sean's patch. Moving VMXON/VMXOFF out
from KVM needs additional things besides keeping the basic vmxon()/vmxoff()
functions at core-x86 in order to handle multiple callers from multiple kernel
components. And vmxon()/vmxoff() aren't necessary to be in virtext.h either,
depending on the implementation. Let's handle that when we need that in the
future.
* Re: [PATCH v4 00/19] x86/reboot: KVM: Clean up "emergency" virt code
2023-07-21 20:18 [PATCH v4 00/19] x86/reboot: KVM: Clean up "emergency" virt code Sean Christopherson
` (18 preceding siblings ...)
2023-07-21 20:18 ` [PATCH v4 19/19] KVM: VMX: Skip VMCLEAR logic during emergency reboots if CR4.VMXE=0 Sean Christopherson
@ 2023-08-04 0:40 ` Sean Christopherson
19 siblings, 0 replies; 36+ messages in thread
From: Sean Christopherson @ 2023-08-04 0:40 UTC (permalink / raw)
To: Sean Christopherson, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, Paolo Bonzini
Cc: linux-kernel, kvm, Andrew Cooper, Kai Huang, Chao Gao
On Fri, 21 Jul 2023 13:18:40 -0700, Sean Christopherson wrote:
> If there are no objections, my plan is to take this through the KVM tree
> for 6.6.
>
> Instead of having the reboot code blindly try to disable virtualization
> during an emergency, use the existing callback into KVM to disable virt
> as "needed". In quotes because KVM still somewhat blindly attempts to
> disable virt, e.g. if KVM is loaded but doesn't have active VMs and thus
> hasn't enabled hardware. That could theoretically be "fixed", but due to
> the callback being invoked from NMI context, I'm not convinced it would
> be worth the complexity. E.g. false positives would still be possible,
> and KVM would have to play games with the per-CPU hardware_enabled flag
> to ensure there are no false negatives.
>
> [...]
Applied to kvm-x86 misc, thanks!
[01/19] x86/reboot: VMCLEAR active VMCSes before emergency reboot
https://github.com/kvm-x86/linux/commit/b23c83ad2c63
[02/19] x86/reboot: Harden virtualization hooks for emergency reboot
https://github.com/kvm-x86/linux/commit/5e408396c60c
[03/19] x86/reboot: KVM: Handle VMXOFF in KVM's reboot callback
https://github.com/kvm-x86/linux/commit/119b5cb4ffd0
[04/19] x86/reboot: KVM: Disable SVM during reboot via virt/KVM reboot callback
https://github.com/kvm-x86/linux/commit/baeb4de7ad12
[05/19] x86/reboot: Assert that IRQs are disabled when turning off virtualization
https://github.com/kvm-x86/linux/commit/ad93c1a7c010
[06/19] x86/reboot: Hoist "disable virt" helpers above "emergency reboot" path
https://github.com/kvm-x86/linux/commit/edc8deb087d8
[07/19] x86/reboot: Disable virtualization during reboot iff callback is registered
https://github.com/kvm-x86/linux/commit/59765db5fc82
[08/19] x86/reboot: Expose VMCS crash hooks if and only if KVM_{INTEL,AMD} is enabled
https://github.com/kvm-x86/linux/commit/261cd5ed934e
[09/19] x86/virt: KVM: Open code cpu_has_vmx() in KVM VMX
https://github.com/kvm-x86/linux/commit/b6a6af0d19ce
[10/19] x86/virt: KVM: Move VMXOFF helpers into KVM VMX
https://github.com/kvm-x86/linux/commit/22e420e12739
[11/19] KVM: SVM: Make KVM_AMD depend on CPU_SUP_AMD or CPU_SUP_HYGON
https://github.com/kvm-x86/linux/commit/554856b69e3d
[12/19] x86/virt: Drop unnecessary check on extended CPUID level in cpu_has_svm()
https://github.com/kvm-x86/linux/commit/5df8ecfe3632
[13/19] x86/virt: KVM: Open code cpu_has_svm() into kvm_is_svm_supported()
https://github.com/kvm-x86/linux/commit/85fd29dd5fe4
[14/19] KVM: SVM: Check that the current CPU supports SVM in kvm_is_svm_supported()
https://github.com/kvm-x86/linux/commit/c4db4f20f3bf
[15/19] KVM: VMX: Ensure CPU is stable when probing basic VMX support
https://github.com/kvm-x86/linux/commit/f9a8866040fc
[16/19] x86/virt: KVM: Move "disable SVM" helper into KVM SVM
https://github.com/kvm-x86/linux/commit/76ab8161083b
[17/19] KVM: x86: Force kvm_rebooting=true during emergency reboot/crash
https://github.com/kvm-x86/linux/commit/6ae44e012f4c
[18/19] KVM: SVM: Use "standard" stgi() helper when disabling SVM
https://github.com/kvm-x86/linux/commit/2e6b9bd49b70
[19/19] KVM: VMX: Skip VMCLEAR logic during emergency reboots if CR4.VMXE=0
https://github.com/kvm-x86/linux/commit/a788fbb763b5
--
https://github.com/kvm-x86/linux/tree/next
https://github.com/kvm-x86/linux/tree/fixes
end of thread, other threads:[~2023-08-04 0:41 UTC | newest]
Thread overview: 36+ messages
2023-07-21 20:18 [PATCH v4 00/19] x86/reboot: KVM: Clean up "emergency" virt code Sean Christopherson
2023-07-21 20:18 ` [PATCH v4 01/19] x86/reboot: VMCLEAR active VMCSes before emergency reboot Sean Christopherson
2023-07-21 20:18 ` [PATCH v4 02/19] x86/reboot: Harden virtualization hooks for " Sean Christopherson
2023-07-21 20:18 ` [PATCH v4 03/19] x86/reboot: KVM: Handle VMXOFF in KVM's reboot callback Sean Christopherson
2023-07-24 23:57 ` Huang, Kai
2023-07-21 20:18 ` [PATCH v4 04/19] x86/reboot: KVM: Disable SVM during reboot via virt/KVM " Sean Christopherson
2023-07-21 20:18 ` [PATCH v4 05/19] x86/reboot: Assert that IRQs are disabled when turning off virtualization Sean Christopherson
2023-07-24 21:19 ` Peter Zijlstra
2023-07-24 21:41 ` Sean Christopherson
2023-07-21 20:18 ` [PATCH v4 06/19] x86/reboot: Hoist "disable virt" helpers above "emergency reboot" path Sean Christopherson
2023-07-21 20:18 ` [PATCH v4 07/19] x86/reboot: Disable virtualization during reboot iff callback is registered Sean Christopherson
2023-07-24 23:57 ` Huang, Kai
2023-07-21 20:18 ` [PATCH v4 08/19] x86/reboot: Expose VMCS crash hooks if and only if KVM_{INTEL,AMD} is enabled Sean Christopherson
2023-07-21 20:18 ` [PATCH v4 09/19] x86/virt: KVM: Open code cpu_has_vmx() in KVM VMX Sean Christopherson
2023-07-21 20:18 ` [PATCH v4 10/19] x86/virt: KVM: Move VMXOFF helpers into " Sean Christopherson
2023-07-28 9:08 ` Xu Yilun
2023-07-28 9:43 ` Huang, Kai
2023-07-21 20:18 ` [PATCH v4 11/19] KVM: SVM: Make KVM_AMD depend on CPU_SUP_AMD or CPU_SUP_HYGON Sean Christopherson
2023-07-21 20:18 ` [PATCH v4 12/19] x86/virt: Drop unnecessary check on extended CPUID level in cpu_has_svm() Sean Christopherson
2023-07-21 20:18 ` [PATCH v4 13/19] x86/virt: KVM: Open code cpu_has_svm() into kvm_is_svm_supported() Sean Christopherson
2023-07-21 20:18 ` [PATCH v4 14/19] KVM: SVM: Check that the current CPU supports SVM in kvm_is_svm_supported() Sean Christopherson
2023-07-24 21:21 ` Peter Zijlstra
2023-07-24 21:40 ` Sean Christopherson
2023-07-25 9:16 ` Peter Zijlstra
2023-07-27 16:39 ` Sean Christopherson
2023-07-24 22:29 ` Dmitry Torokhov
2023-07-24 23:53 ` Sean Christopherson
2023-07-21 20:18 ` [PATCH v4 15/19] KVM: VMX: Ensure CPU is stable when probing basic VMX support Sean Christopherson
2023-07-21 20:18 ` [PATCH v4 16/19] x86/virt: KVM: Move "disable SVM" helper into KVM SVM Sean Christopherson
2023-07-21 20:18 ` [PATCH v4 17/19] KVM: x86: Force kvm_rebooting=true during emergency reboot/crash Sean Christopherson
2023-07-21 20:18 ` [PATCH v4 18/19] KVM: SVM: Use "standard" stgi() helper when disabling SVM Sean Christopherson
2023-07-21 20:18 ` [PATCH v4 19/19] KVM: VMX: Skip VMCLEAR logic during emergency reboots if CR4.VMXE=0 Sean Christopherson
2023-07-25 3:51 ` Huang, Kai
2023-07-25 18:15 ` Sean Christopherson
2023-07-25 22:20 ` Huang, Kai
2023-08-04 0:40 ` [PATCH v4 00/19] x86/reboot: KVM: Clean up "emergency" virt code Sean Christopherson