* [PATCH v2 0/4] x86/cpu: Add Bus Lock Detect support for AMD
@ 2024-07-12 9:39 Ravi Bangoria
2024-07-12 9:39 ` [PATCH v2 1/4] x86/split_lock: Move Split and Bus lock code to a dedicated file Ravi Bangoria
` (4 more replies)
0 siblings, 5 replies; 10+ messages in thread
From: Ravi Bangoria @ 2024-07-12 9:39 UTC (permalink / raw)
To: tglx, mingo, bp, dave.hansen, seanjc, pbonzini, thomas.lendacky
Cc: ravi.bangoria, hpa, rmk+kernel, peterz, james.morse,
lukas.bulwahn, arjan, j.granados, sibs, nik.borisov, michael.roth,
nikunj.dadhania, babu.moger, x86, kvm, linux-kernel,
santosh.shukla, ananth.narayan, sandipan.das, manali.shukla,
jmattson
Upcoming AMD uarch will support Bus Lock Detect (called Bus Lock Trap
in AMD docs). Add support for it in Linux. Bus Lock Detect is
enumerated with CPUID Fn0000_0007_ECX_x0 bit [24 / BUSLOCKTRAP].
It can be enabled through MSR_IA32_DEBUGCTLMSR. When enabled, hardware
clears DR6[11] and raises a #DB exception on occurrence of a Bus Lock if
CPL > 0. More detail about the feature can be found in the AMD APM[1].
Patches are prepared on tip/master (a6fffa92da54).
[1]: AMD64 Architecture Programmer's Manual Pub. 40332, Rev. 4.07 - June
2023, Vol 2, 13.1.3.6 Bus Lock Trap
https://bugzilla.kernel.org/attachment.cgi?id=304653
v1: https://lore.kernel.org/r/20240429060643.211-1-ravi.bangoria@amd.com
v1->v2:
- Call bus_lock_init() from common.c. Although common.c is shared across
all X86_VENDOR_*, bus_lock_init() internally checks for
X86_FEATURE_BUS_LOCK_DETECT, hence it's safe to call it from common.c.
- s/split-bus-lock.c/bus_lock.c/ for a new filename.
- Add a KVM patch to disable Bus Lock Trap unconditionally when SVM
support is missing.
Note:
A Qemu fix is also required to handle a corner case where a hardware
instruction or data breakpoint is created by a Qemu remote debugger (gdb)
on the same instruction that also causes a Bus Lock. I'll post a Qemu
patch separately.
Ravi Bangoria (4):
x86/split_lock: Move Split and Bus lock code to a dedicated file
x86/bus_lock: Add support for AMD
KVM: SVM: Don't advertise Bus Lock Detect to guest if SVM support is
missing
KVM: SVM: Add Bus Lock Detect support
arch/x86/include/asm/cpu.h | 4 +
arch/x86/kernel/cpu/Makefile | 1 +
arch/x86/kernel/cpu/bus_lock.c | 406 ++++++++++++++++++++++++++++++++
arch/x86/kernel/cpu/common.c | 2 +
arch/x86/kernel/cpu/intel.c | 407 ---------------------------------
arch/x86/kvm/svm/nested.c | 3 +-
arch/x86/kvm/svm/svm.c | 16 +-
7 files changed, 430 insertions(+), 409 deletions(-)
create mode 100644 arch/x86/kernel/cpu/bus_lock.c
--
2.34.1
^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCH v2 1/4] x86/split_lock: Move Split and Bus lock code to a dedicated file
2024-07-12 9:39 [PATCH v2 0/4] x86/cpu: Add Bus Lock Detect support for AMD Ravi Bangoria
@ 2024-07-12 9:39 ` Ravi Bangoria
2024-07-13 10:33 ` kernel test robot
2024-07-13 12:41 ` kernel test robot
2024-07-12 9:39 ` [PATCH v2 2/4] x86/bus_lock: Add support for AMD Ravi Bangoria
` (3 subsequent siblings)
4 siblings, 2 replies; 10+ messages in thread
From: Ravi Bangoria @ 2024-07-12 9:39 UTC (permalink / raw)
To: tglx, mingo, bp, dave.hansen, seanjc, pbonzini, thomas.lendacky
Cc: ravi.bangoria, hpa, rmk+kernel, peterz, james.morse,
lukas.bulwahn, arjan, j.granados, sibs, nik.borisov, michael.roth,
nikunj.dadhania, babu.moger, x86, kvm, linux-kernel,
santosh.shukla, ananth.narayan, sandipan.das, manali.shukla,
jmattson
Upcoming AMD uarch will support Bus Lock Detect, which works
functionally identically to the Intel implementation. Move the
split_lock and bus_lock specific code from intel.c to a dedicated file
so that it can be compiled and supported on non-Intel platforms.
Signed-off-by: Ravi Bangoria <ravi.bangoria@amd.com>
---
arch/x86/include/asm/cpu.h | 4 +
arch/x86/kernel/cpu/Makefile | 1 +
arch/x86/kernel/cpu/bus_lock.c | 406 +++++++++++++++++++++++++++++++++
arch/x86/kernel/cpu/intel.c | 406 ---------------------------------
4 files changed, 411 insertions(+), 406 deletions(-)
create mode 100644 arch/x86/kernel/cpu/bus_lock.c
diff --git a/arch/x86/include/asm/cpu.h b/arch/x86/include/asm/cpu.h
index aa30fd8cad7f..4b5c31dc8112 100644
--- a/arch/x86/include/asm/cpu.h
+++ b/arch/x86/include/asm/cpu.h
@@ -51,6 +51,10 @@ static inline u8 get_this_hybrid_cpu_type(void)
return 0;
}
#endif
+
+void split_lock_init(void);
+void bus_lock_init(void);
+
#ifdef CONFIG_IA32_FEAT_CTL
void init_ia32_feat_ctl(struct cpuinfo_x86 *c);
#else
diff --git a/arch/x86/kernel/cpu/Makefile b/arch/x86/kernel/cpu/Makefile
index 5857a0f5d514..9f74e0011f01 100644
--- a/arch/x86/kernel/cpu/Makefile
+++ b/arch/x86/kernel/cpu/Makefile
@@ -27,6 +27,7 @@ obj-y += aperfmperf.o
obj-y += cpuid-deps.o
obj-y += umwait.o
obj-y += capflags.o powerflags.o
+obj-y += bus_lock.o
obj-$(CONFIG_X86_LOCAL_APIC) += topology.o
diff --git a/arch/x86/kernel/cpu/bus_lock.c b/arch/x86/kernel/cpu/bus_lock.c
new file mode 100644
index 000000000000..704e9241b964
--- /dev/null
+++ b/arch/x86/kernel/cpu/bus_lock.c
@@ -0,0 +1,406 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#define pr_fmt(fmt) "x86/split lock detection: " fmt
+
+#include <linux/semaphore.h>
+#include <linux/workqueue.h>
+#include <linux/delay.h>
+#include <linux/cpuhotplug.h>
+#include <asm/cpu_device_id.h>
+#include <asm/cmdline.h>
+#include <asm/traps.h>
+#include <asm/cpu.h>
+
+enum split_lock_detect_state {
+ sld_off = 0,
+ sld_warn,
+ sld_fatal,
+ sld_ratelimit,
+};
+
+/*
+ * Default to sld_off because most systems do not support split lock detection.
+ * sld_state_setup() will switch this to sld_warn on systems that support
+ * split lock/bus lock detect, unless there is a command line override.
+ */
+static enum split_lock_detect_state sld_state __ro_after_init = sld_off;
+static u64 msr_test_ctrl_cache __ro_after_init;
+
+/*
+ * With a name like MSR_TEST_CTL it should go without saying, but don't touch
+ * MSR_TEST_CTL unless the CPU is one of the whitelisted models. Writing it
+ * on CPUs that do not support SLD can cause fireworks, even when writing '0'.
+ */
+static bool cpu_model_supports_sld __ro_after_init;
+
+static const struct {
+ const char *option;
+ enum split_lock_detect_state state;
+} sld_options[] __initconst = {
+ { "off", sld_off },
+ { "warn", sld_warn },
+ { "fatal", sld_fatal },
+ { "ratelimit:", sld_ratelimit },
+};
+
+static struct ratelimit_state bld_ratelimit;
+
+static unsigned int sysctl_sld_mitigate = 1;
+static DEFINE_SEMAPHORE(buslock_sem, 1);
+
+#ifdef CONFIG_PROC_SYSCTL
+static struct ctl_table sld_sysctls[] = {
+ {
+ .procname = "split_lock_mitigate",
+ .data = &sysctl_sld_mitigate,
+ .maxlen = sizeof(unsigned int),
+ .mode = 0644,
+ .proc_handler = proc_douintvec_minmax,
+ .extra1 = SYSCTL_ZERO,
+ .extra2 = SYSCTL_ONE,
+ },
+};
+
+static int __init sld_mitigate_sysctl_init(void)
+{
+ register_sysctl_init("kernel", sld_sysctls);
+ return 0;
+}
+
+late_initcall(sld_mitigate_sysctl_init);
+#endif
+
+static inline bool match_option(const char *arg, int arglen, const char *opt)
+{
+ int len = strlen(opt), ratelimit;
+
+ if (strncmp(arg, opt, len))
+ return false;
+
+ /*
+ * Min ratelimit is 1 bus lock/sec.
+ * Max ratelimit is 1000 bus locks/sec.
+ */
+ if (sscanf(arg, "ratelimit:%d", &ratelimit) == 1 &&
+ ratelimit > 0 && ratelimit <= 1000) {
+ ratelimit_state_init(&bld_ratelimit, HZ, ratelimit);
+ ratelimit_set_flags(&bld_ratelimit, RATELIMIT_MSG_ON_RELEASE);
+ return true;
+ }
+
+ return len == arglen;
+}
+
+static bool split_lock_verify_msr(bool on)
+{
+ u64 ctrl, tmp;
+
+ if (rdmsrl_safe(MSR_TEST_CTRL, &ctrl))
+ return false;
+ if (on)
+ ctrl |= MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
+ else
+ ctrl &= ~MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
+ if (wrmsrl_safe(MSR_TEST_CTRL, ctrl))
+ return false;
+ rdmsrl(MSR_TEST_CTRL, tmp);
+ return ctrl == tmp;
+}
+
+static void __init sld_state_setup(void)
+{
+ enum split_lock_detect_state state = sld_warn;
+ char arg[20];
+ int i, ret;
+
+ if (!boot_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT) &&
+ !boot_cpu_has(X86_FEATURE_BUS_LOCK_DETECT))
+ return;
+
+ ret = cmdline_find_option(boot_command_line, "split_lock_detect",
+ arg, sizeof(arg));
+ if (ret >= 0) {
+ for (i = 0; i < ARRAY_SIZE(sld_options); i++) {
+ if (match_option(arg, ret, sld_options[i].option)) {
+ state = sld_options[i].state;
+ break;
+ }
+ }
+ }
+ sld_state = state;
+}
+
+static void __init __split_lock_setup(void)
+{
+ if (!split_lock_verify_msr(false)) {
+ pr_info("MSR access failed: Disabled\n");
+ return;
+ }
+
+ rdmsrl(MSR_TEST_CTRL, msr_test_ctrl_cache);
+
+ if (!split_lock_verify_msr(true)) {
+ pr_info("MSR access failed: Disabled\n");
+ return;
+ }
+
+ /* Restore the MSR to its cached value. */
+ wrmsrl(MSR_TEST_CTRL, msr_test_ctrl_cache);
+
+ setup_force_cpu_cap(X86_FEATURE_SPLIT_LOCK_DETECT);
+}
+
+/*
+ * MSR_TEST_CTRL is per core, but we treat it like a per CPU MSR. Locking
+ * is not implemented as one thread could undo the setting of the other
+ * thread immediately after dropping the lock anyway.
+ */
+static void sld_update_msr(bool on)
+{
+ u64 test_ctrl_val = msr_test_ctrl_cache;
+
+ if (on)
+ test_ctrl_val |= MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
+
+ wrmsrl(MSR_TEST_CTRL, test_ctrl_val);
+}
+
+void split_lock_init(void)
+{
+ /*
+ * #DB for bus lock handles ratelimit and #AC for split lock is
+ * disabled.
+ */
+ if (sld_state == sld_ratelimit) {
+ split_lock_verify_msr(false);
+ return;
+ }
+
+ if (cpu_model_supports_sld)
+ split_lock_verify_msr(sld_state != sld_off);
+}
+
+static void __split_lock_reenable_unlock(struct work_struct *work)
+{
+ sld_update_msr(true);
+ up(&buslock_sem);
+}
+
+static DECLARE_DELAYED_WORK(sl_reenable_unlock, __split_lock_reenable_unlock);
+
+static void __split_lock_reenable(struct work_struct *work)
+{
+ sld_update_msr(true);
+}
+static DECLARE_DELAYED_WORK(sl_reenable, __split_lock_reenable);
+
+/*
+ * If a CPU goes offline with pending delayed work to re-enable split lock
+ * detection then the delayed work will be executed on some other CPU. That
+ * handles releasing the buslock_sem, but because it executes on a
+ * different CPU probably won't re-enable split lock detection. This is a
+ * problem on HT systems since the sibling CPU on the same core may then be
+ * left running with split lock detection disabled.
+ *
+ * Unconditionally re-enable detection here.
+ */
+static int splitlock_cpu_offline(unsigned int cpu)
+{
+ sld_update_msr(true);
+
+ return 0;
+}
+
+static void split_lock_warn(unsigned long ip)
+{
+ struct delayed_work *work;
+ int cpu;
+
+ if (!current->reported_split_lock)
+ pr_warn_ratelimited("#AC: %s/%d took a split_lock trap at address: 0x%lx\n",
+ current->comm, current->pid, ip);
+ current->reported_split_lock = 1;
+
+ if (sysctl_sld_mitigate) {
+ /*
+ * misery factor #1:
+ * sleep 10ms before trying to execute split lock.
+ */
+ if (msleep_interruptible(10) > 0)
+ return;
+ /*
+ * Misery factor #2:
+ * only allow one buslocked disabled core at a time.
+ */
+ if (down_interruptible(&buslock_sem) == -EINTR)
+ return;
+ work = &sl_reenable_unlock;
+ } else {
+ work = &sl_reenable;
+ }
+
+ cpu = get_cpu();
+ schedule_delayed_work_on(cpu, work, 2);
+
+ /* Disable split lock detection on this CPU to make progress */
+ sld_update_msr(false);
+ put_cpu();
+}
+
+bool handle_guest_split_lock(unsigned long ip)
+{
+ if (sld_state == sld_warn) {
+ split_lock_warn(ip);
+ return true;
+ }
+
+ pr_warn_once("#AC: %s/%d %s split_lock trap at address: 0x%lx\n",
+ current->comm, current->pid,
+ sld_state == sld_fatal ? "fatal" : "bogus", ip);
+
+ current->thread.error_code = 0;
+ current->thread.trap_nr = X86_TRAP_AC;
+ force_sig_fault(SIGBUS, BUS_ADRALN, NULL);
+ return false;
+}
+EXPORT_SYMBOL_GPL(handle_guest_split_lock);
+
+void bus_lock_init(void)
+{
+ u64 val;
+
+ if (!boot_cpu_has(X86_FEATURE_BUS_LOCK_DETECT))
+ return;
+
+ rdmsrl(MSR_IA32_DEBUGCTLMSR, val);
+
+ if ((boot_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT) &&
+ (sld_state == sld_warn || sld_state == sld_fatal)) ||
+ sld_state == sld_off) {
+ /*
+ * Warn and fatal are handled by #AC for split lock if #AC for
+ * split lock is supported.
+ */
+ val &= ~DEBUGCTLMSR_BUS_LOCK_DETECT;
+ } else {
+ val |= DEBUGCTLMSR_BUS_LOCK_DETECT;
+ }
+
+ wrmsrl(MSR_IA32_DEBUGCTLMSR, val);
+}
+
+bool handle_user_split_lock(struct pt_regs *regs, long error_code)
+{
+ if ((regs->flags & X86_EFLAGS_AC) || sld_state == sld_fatal)
+ return false;
+ split_lock_warn(regs->ip);
+ return true;
+}
+
+void handle_bus_lock(struct pt_regs *regs)
+{
+ switch (sld_state) {
+ case sld_off:
+ break;
+ case sld_ratelimit:
+ /* Enforce no more than bld_ratelimit bus locks/sec. */
+ while (!__ratelimit(&bld_ratelimit))
+ msleep(20);
+ /* Warn on the bus lock. */
+ fallthrough;
+ case sld_warn:
+ pr_warn_ratelimited("#DB: %s/%d took a bus_lock trap at address: 0x%lx\n",
+ current->comm, current->pid, regs->ip);
+ break;
+ case sld_fatal:
+ force_sig_fault(SIGBUS, BUS_ADRALN, NULL);
+ break;
+ }
+}
+
+/*
+ * CPU models that are known to have the per-core split-lock detection
+ * feature even though they do not enumerate IA32_CORE_CAPABILITIES.
+ */
+static const struct x86_cpu_id split_lock_cpu_ids[] __initconst = {
+ X86_MATCH_VFM(INTEL_ICELAKE_X, 0),
+ X86_MATCH_VFM(INTEL_ICELAKE_L, 0),
+ X86_MATCH_VFM(INTEL_ICELAKE_D, 0),
+ {}
+};
+
+static void __init split_lock_setup(struct cpuinfo_x86 *c)
+{
+ const struct x86_cpu_id *m;
+ u64 ia32_core_caps;
+
+ if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
+ return;
+
+ /* Check for CPUs that have support but do not enumerate it: */
+ m = x86_match_cpu(split_lock_cpu_ids);
+ if (m)
+ goto supported;
+
+ if (!cpu_has(c, X86_FEATURE_CORE_CAPABILITIES))
+ return;
+
+ /*
+ * Not all bits in MSR_IA32_CORE_CAPS are architectural, but
+ * MSR_IA32_CORE_CAPS_SPLIT_LOCK_DETECT is. All CPUs that set
+ * it have split lock detection.
+ */
+ rdmsrl(MSR_IA32_CORE_CAPS, ia32_core_caps);
+ if (ia32_core_caps & MSR_IA32_CORE_CAPS_SPLIT_LOCK_DETECT)
+ goto supported;
+
+ /* CPU is not in the model list and does not have the MSR bit: */
+ return;
+
+supported:
+ cpu_model_supports_sld = true;
+ __split_lock_setup();
+}
+
+static void sld_state_show(void)
+{
+ if (!boot_cpu_has(X86_FEATURE_BUS_LOCK_DETECT) &&
+ !boot_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT))
+ return;
+
+ switch (sld_state) {
+ case sld_off:
+ pr_info("disabled\n");
+ break;
+ case sld_warn:
+ if (boot_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT)) {
+ pr_info("#AC: crashing the kernel on kernel split_locks and warning on user-space split_locks\n");
+ if (cpuhp_setup_state(CPUHP_AP_ONLINE_DYN,
+ "x86/splitlock", NULL, splitlock_cpu_offline) < 0)
+ pr_warn("No splitlock CPU offline handler\n");
+ } else if (boot_cpu_has(X86_FEATURE_BUS_LOCK_DETECT)) {
+ pr_info("#DB: warning on user-space bus_locks\n");
+ }
+ break;
+ case sld_fatal:
+ if (boot_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT)) {
+ pr_info("#AC: crashing the kernel on kernel split_locks and sending SIGBUS on user-space split_locks\n");
+ } else if (boot_cpu_has(X86_FEATURE_BUS_LOCK_DETECT)) {
+ pr_info("#DB: sending SIGBUS on user-space bus_locks%s\n",
+ boot_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT) ?
+ " from non-WB" : "");
+ }
+ break;
+ case sld_ratelimit:
+ if (boot_cpu_has(X86_FEATURE_BUS_LOCK_DETECT))
+ pr_info("#DB: setting system wide bus lock rate limit to %u/sec\n", bld_ratelimit.burst);
+ break;
+ }
+}
+
+void __init sld_setup(struct cpuinfo_x86 *c)
+{
+ split_lock_setup(c);
+ sld_state_setup();
+ sld_state_show();
+}
diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index 08b95a35b5cb..8a483f4ad026 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -7,13 +7,9 @@
#include <linux/smp.h>
#include <linux/sched.h>
#include <linux/sched/clock.h>
-#include <linux/semaphore.h>
#include <linux/thread_info.h>
#include <linux/init.h>
#include <linux/uaccess.h>
-#include <linux/workqueue.h>
-#include <linux/delay.h>
-#include <linux/cpuhotplug.h>
#include <asm/cpufeature.h>
#include <asm/msr.h>
@@ -24,8 +20,6 @@
#include <asm/hwcap2.h>
#include <asm/elf.h>
#include <asm/cpu_device_id.h>
-#include <asm/cmdline.h>
-#include <asm/traps.h>
#include <asm/resctrl.h>
#include <asm/numa.h>
#include <asm/thermal.h>
@@ -41,28 +35,6 @@
#include <asm/apic.h>
#endif
-enum split_lock_detect_state {
- sld_off = 0,
- sld_warn,
- sld_fatal,
- sld_ratelimit,
-};
-
-/*
- * Default to sld_off because most systems do not support split lock detection.
- * sld_state_setup() will switch this to sld_warn on systems that support
- * split lock/bus lock detect, unless there is a command line override.
- */
-static enum split_lock_detect_state sld_state __ro_after_init = sld_off;
-static u64 msr_test_ctrl_cache __ro_after_init;
-
-/*
- * With a name like MSR_TEST_CTL it should go without saying, but don't touch
- * MSR_TEST_CTL unless the CPU is one of the whitelisted models. Writing it
- * on CPUs that do not support SLD can cause fireworks, even when writing '0'.
- */
-static bool cpu_model_supports_sld __ro_after_init;
-
/*
* Processors which have self-snooping capability can handle conflicting
* memory type across CPUs by snooping its own cache. However, there exists
@@ -547,9 +519,6 @@ static void init_intel_misc_features(struct cpuinfo_x86 *c)
wrmsrl(MSR_MISC_FEATURES_ENABLES, msr);
}
-static void split_lock_init(void);
-static void bus_lock_init(void);
-
static void init_intel(struct cpuinfo_x86 *c)
{
early_init_intel(c);
@@ -907,381 +876,6 @@ static const struct cpu_dev intel_cpu_dev = {
cpu_dev_register(intel_cpu_dev);
-#undef pr_fmt
-#define pr_fmt(fmt) "x86/split lock detection: " fmt
-
-static const struct {
- const char *option;
- enum split_lock_detect_state state;
-} sld_options[] __initconst = {
- { "off", sld_off },
- { "warn", sld_warn },
- { "fatal", sld_fatal },
- { "ratelimit:", sld_ratelimit },
-};
-
-static struct ratelimit_state bld_ratelimit;
-
-static unsigned int sysctl_sld_mitigate = 1;
-static DEFINE_SEMAPHORE(buslock_sem, 1);
-
-#ifdef CONFIG_PROC_SYSCTL
-static struct ctl_table sld_sysctls[] = {
- {
- .procname = "split_lock_mitigate",
- .data = &sysctl_sld_mitigate,
- .maxlen = sizeof(unsigned int),
- .mode = 0644,
- .proc_handler = proc_douintvec_minmax,
- .extra1 = SYSCTL_ZERO,
- .extra2 = SYSCTL_ONE,
- },
-};
-
-static int __init sld_mitigate_sysctl_init(void)
-{
- register_sysctl_init("kernel", sld_sysctls);
- return 0;
-}
-
-late_initcall(sld_mitigate_sysctl_init);
-#endif
-
-static inline bool match_option(const char *arg, int arglen, const char *opt)
-{
- int len = strlen(opt), ratelimit;
-
- if (strncmp(arg, opt, len))
- return false;
-
- /*
- * Min ratelimit is 1 bus lock/sec.
- * Max ratelimit is 1000 bus locks/sec.
- */
- if (sscanf(arg, "ratelimit:%d", &ratelimit) == 1 &&
- ratelimit > 0 && ratelimit <= 1000) {
- ratelimit_state_init(&bld_ratelimit, HZ, ratelimit);
- ratelimit_set_flags(&bld_ratelimit, RATELIMIT_MSG_ON_RELEASE);
- return true;
- }
-
- return len == arglen;
-}
-
-static bool split_lock_verify_msr(bool on)
-{
- u64 ctrl, tmp;
-
- if (rdmsrl_safe(MSR_TEST_CTRL, &ctrl))
- return false;
- if (on)
- ctrl |= MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
- else
- ctrl &= ~MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
- if (wrmsrl_safe(MSR_TEST_CTRL, ctrl))
- return false;
- rdmsrl(MSR_TEST_CTRL, tmp);
- return ctrl == tmp;
-}
-
-static void __init sld_state_setup(void)
-{
- enum split_lock_detect_state state = sld_warn;
- char arg[20];
- int i, ret;
-
- if (!boot_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT) &&
- !boot_cpu_has(X86_FEATURE_BUS_LOCK_DETECT))
- return;
-
- ret = cmdline_find_option(boot_command_line, "split_lock_detect",
- arg, sizeof(arg));
- if (ret >= 0) {
- for (i = 0; i < ARRAY_SIZE(sld_options); i++) {
- if (match_option(arg, ret, sld_options[i].option)) {
- state = sld_options[i].state;
- break;
- }
- }
- }
- sld_state = state;
-}
-
-static void __init __split_lock_setup(void)
-{
- if (!split_lock_verify_msr(false)) {
- pr_info("MSR access failed: Disabled\n");
- return;
- }
-
- rdmsrl(MSR_TEST_CTRL, msr_test_ctrl_cache);
-
- if (!split_lock_verify_msr(true)) {
- pr_info("MSR access failed: Disabled\n");
- return;
- }
-
- /* Restore the MSR to its cached value. */
- wrmsrl(MSR_TEST_CTRL, msr_test_ctrl_cache);
-
- setup_force_cpu_cap(X86_FEATURE_SPLIT_LOCK_DETECT);
-}
-
-/*
- * MSR_TEST_CTRL is per core, but we treat it like a per CPU MSR. Locking
- * is not implemented as one thread could undo the setting of the other
- * thread immediately after dropping the lock anyway.
- */
-static void sld_update_msr(bool on)
-{
- u64 test_ctrl_val = msr_test_ctrl_cache;
-
- if (on)
- test_ctrl_val |= MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
-
- wrmsrl(MSR_TEST_CTRL, test_ctrl_val);
-}
-
-static void split_lock_init(void)
-{
- /*
- * #DB for bus lock handles ratelimit and #AC for split lock is
- * disabled.
- */
- if (sld_state == sld_ratelimit) {
- split_lock_verify_msr(false);
- return;
- }
-
- if (cpu_model_supports_sld)
- split_lock_verify_msr(sld_state != sld_off);
-}
-
-static void __split_lock_reenable_unlock(struct work_struct *work)
-{
- sld_update_msr(true);
- up(&buslock_sem);
-}
-
-static DECLARE_DELAYED_WORK(sl_reenable_unlock, __split_lock_reenable_unlock);
-
-static void __split_lock_reenable(struct work_struct *work)
-{
- sld_update_msr(true);
-}
-static DECLARE_DELAYED_WORK(sl_reenable, __split_lock_reenable);
-
-/*
- * If a CPU goes offline with pending delayed work to re-enable split lock
- * detection then the delayed work will be executed on some other CPU. That
- * handles releasing the buslock_sem, but because it executes on a
- * different CPU probably won't re-enable split lock detection. This is a
- * problem on HT systems since the sibling CPU on the same core may then be
- * left running with split lock detection disabled.
- *
- * Unconditionally re-enable detection here.
- */
-static int splitlock_cpu_offline(unsigned int cpu)
-{
- sld_update_msr(true);
-
- return 0;
-}
-
-static void split_lock_warn(unsigned long ip)
-{
- struct delayed_work *work;
- int cpu;
-
- if (!current->reported_split_lock)
- pr_warn_ratelimited("#AC: %s/%d took a split_lock trap at address: 0x%lx\n",
- current->comm, current->pid, ip);
- current->reported_split_lock = 1;
-
- if (sysctl_sld_mitigate) {
- /*
- * misery factor #1:
- * sleep 10ms before trying to execute split lock.
- */
- if (msleep_interruptible(10) > 0)
- return;
- /*
- * Misery factor #2:
- * only allow one buslocked disabled core at a time.
- */
- if (down_interruptible(&buslock_sem) == -EINTR)
- return;
- work = &sl_reenable_unlock;
- } else {
- work = &sl_reenable;
- }
-
- cpu = get_cpu();
- schedule_delayed_work_on(cpu, work, 2);
-
- /* Disable split lock detection on this CPU to make progress */
- sld_update_msr(false);
- put_cpu();
-}
-
-bool handle_guest_split_lock(unsigned long ip)
-{
- if (sld_state == sld_warn) {
- split_lock_warn(ip);
- return true;
- }
-
- pr_warn_once("#AC: %s/%d %s split_lock trap at address: 0x%lx\n",
- current->comm, current->pid,
- sld_state == sld_fatal ? "fatal" : "bogus", ip);
-
- current->thread.error_code = 0;
- current->thread.trap_nr = X86_TRAP_AC;
- force_sig_fault(SIGBUS, BUS_ADRALN, NULL);
- return false;
-}
-EXPORT_SYMBOL_GPL(handle_guest_split_lock);
-
-static void bus_lock_init(void)
-{
- u64 val;
-
- if (!boot_cpu_has(X86_FEATURE_BUS_LOCK_DETECT))
- return;
-
- rdmsrl(MSR_IA32_DEBUGCTLMSR, val);
-
- if ((boot_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT) &&
- (sld_state == sld_warn || sld_state == sld_fatal)) ||
- sld_state == sld_off) {
- /*
- * Warn and fatal are handled by #AC for split lock if #AC for
- * split lock is supported.
- */
- val &= ~DEBUGCTLMSR_BUS_LOCK_DETECT;
- } else {
- val |= DEBUGCTLMSR_BUS_LOCK_DETECT;
- }
-
- wrmsrl(MSR_IA32_DEBUGCTLMSR, val);
-}
-
-bool handle_user_split_lock(struct pt_regs *regs, long error_code)
-{
- if ((regs->flags & X86_EFLAGS_AC) || sld_state == sld_fatal)
- return false;
- split_lock_warn(regs->ip);
- return true;
-}
-
-void handle_bus_lock(struct pt_regs *regs)
-{
- switch (sld_state) {
- case sld_off:
- break;
- case sld_ratelimit:
- /* Enforce no more than bld_ratelimit bus locks/sec. */
- while (!__ratelimit(&bld_ratelimit))
- msleep(20);
- /* Warn on the bus lock. */
- fallthrough;
- case sld_warn:
- pr_warn_ratelimited("#DB: %s/%d took a bus_lock trap at address: 0x%lx\n",
- current->comm, current->pid, regs->ip);
- break;
- case sld_fatal:
- force_sig_fault(SIGBUS, BUS_ADRALN, NULL);
- break;
- }
-}
-
-/*
- * CPU models that are known to have the per-core split-lock detection
- * feature even though they do not enumerate IA32_CORE_CAPABILITIES.
- */
-static const struct x86_cpu_id split_lock_cpu_ids[] __initconst = {
- X86_MATCH_VFM(INTEL_ICELAKE_X, 0),
- X86_MATCH_VFM(INTEL_ICELAKE_L, 0),
- X86_MATCH_VFM(INTEL_ICELAKE_D, 0),
- {}
-};
-
-static void __init split_lock_setup(struct cpuinfo_x86 *c)
-{
- const struct x86_cpu_id *m;
- u64 ia32_core_caps;
-
- if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
- return;
-
- /* Check for CPUs that have support but do not enumerate it: */
- m = x86_match_cpu(split_lock_cpu_ids);
- if (m)
- goto supported;
-
- if (!cpu_has(c, X86_FEATURE_CORE_CAPABILITIES))
- return;
-
- /*
- * Not all bits in MSR_IA32_CORE_CAPS are architectural, but
- * MSR_IA32_CORE_CAPS_SPLIT_LOCK_DETECT is. All CPUs that set
- * it have split lock detection.
- */
- rdmsrl(MSR_IA32_CORE_CAPS, ia32_core_caps);
- if (ia32_core_caps & MSR_IA32_CORE_CAPS_SPLIT_LOCK_DETECT)
- goto supported;
-
- /* CPU is not in the model list and does not have the MSR bit: */
- return;
-
-supported:
- cpu_model_supports_sld = true;
- __split_lock_setup();
-}
-
-static void sld_state_show(void)
-{
- if (!boot_cpu_has(X86_FEATURE_BUS_LOCK_DETECT) &&
- !boot_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT))
- return;
-
- switch (sld_state) {
- case sld_off:
- pr_info("disabled\n");
- break;
- case sld_warn:
- if (boot_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT)) {
- pr_info("#AC: crashing the kernel on kernel split_locks and warning on user-space split_locks\n");
- if (cpuhp_setup_state(CPUHP_AP_ONLINE_DYN,
- "x86/splitlock", NULL, splitlock_cpu_offline) < 0)
- pr_warn("No splitlock CPU offline handler\n");
- } else if (boot_cpu_has(X86_FEATURE_BUS_LOCK_DETECT)) {
- pr_info("#DB: warning on user-space bus_locks\n");
- }
- break;
- case sld_fatal:
- if (boot_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT)) {
- pr_info("#AC: crashing the kernel on kernel split_locks and sending SIGBUS on user-space split_locks\n");
- } else if (boot_cpu_has(X86_FEATURE_BUS_LOCK_DETECT)) {
- pr_info("#DB: sending SIGBUS on user-space bus_locks%s\n",
- boot_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT) ?
- " from non-WB" : "");
- }
- break;
- case sld_ratelimit:
- if (boot_cpu_has(X86_FEATURE_BUS_LOCK_DETECT))
- pr_info("#DB: setting system wide bus lock rate limit to %u/sec\n", bld_ratelimit.burst);
- break;
- }
-}
-
-void __init sld_setup(struct cpuinfo_x86 *c)
-{
- split_lock_setup(c);
- sld_state_setup();
- sld_state_show();
-}
-
#define X86_HYBRID_CPU_TYPE_ID_SHIFT 24
/**
--
2.34.1
^ permalink raw reply related [flat|nested] 10+ messages in thread
* [PATCH v2 2/4] x86/bus_lock: Add support for AMD
2024-07-12 9:39 [PATCH v2 0/4] x86/cpu: Add Bus Lock Detect support for AMD Ravi Bangoria
2024-07-12 9:39 ` [PATCH v2 1/4] x86/split_lock: Move Split and Bus lock code to a dedicated file Ravi Bangoria
@ 2024-07-12 9:39 ` Ravi Bangoria
2024-07-12 9:39 ` [PATCH v2 3/4] KVM: SVM: Don't advertise Bus Lock Detect to guest if SVM support is missing Ravi Bangoria
` (2 subsequent siblings)
4 siblings, 0 replies; 10+ messages in thread
From: Ravi Bangoria @ 2024-07-12 9:39 UTC (permalink / raw)
To: tglx, mingo, bp, dave.hansen, seanjc, pbonzini, thomas.lendacky
Cc: ravi.bangoria, hpa, rmk+kernel, peterz, james.morse,
lukas.bulwahn, arjan, j.granados, sibs, nik.borisov, michael.roth,
nikunj.dadhania, babu.moger, x86, kvm, linux-kernel,
santosh.shukla, ananth.narayan, sandipan.das, manali.shukla,
jmattson
Upcoming AMD uarch will support Bus Lock Detect (called Bus Lock Trap
in AMD docs). Add support for it in Linux. Bus Lock Detect is
enumerated with CPUID Fn0000_0007_ECX_x0 bit [24 / BUSLOCKTRAP].
It can be enabled through MSR_IA32_DEBUGCTLMSR. When enabled, hardware
clears DR6[11] and raises a #DB exception on occurrence of a Bus Lock if
CPL > 0. More detail about the feature can be found in the AMD APM[1].
[1]: AMD64 Architecture Programmer's Manual Pub. 40332, Rev. 4.07 - June
2023, Vol 2, 13.1.3.6 Bus Lock Trap
https://bugzilla.kernel.org/attachment.cgi?id=304653
Signed-off-by: Ravi Bangoria <ravi.bangoria@amd.com>
---
arch/x86/kernel/cpu/common.c | 2 ++
arch/x86/kernel/cpu/intel.c | 1 -
2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index d4e539d4e158..a37670e1ab4d 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1832,6 +1832,8 @@ static void identify_cpu(struct cpuinfo_x86 *c)
if (this_cpu->c_init)
this_cpu->c_init(c);
+ bus_lock_init();
+
/* Disable the PN if appropriate */
squash_the_stupid_serial_number(c);
diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index 8a483f4ad026..799f18545c6e 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -610,7 +610,6 @@ static void init_intel(struct cpuinfo_x86 *c)
init_intel_misc_features(c);
split_lock_init();
- bus_lock_init();
intel_init_thermal(c);
}
--
2.34.1
^ permalink raw reply related [flat|nested] 10+ messages in thread
* [PATCH v2 3/4] KVM: SVM: Don't advertise Bus Lock Detect to guest if SVM support is missing
2024-07-12 9:39 [PATCH v2 0/4] x86/cpu: Add Bus Lock Detect support for AMD Ravi Bangoria
2024-07-12 9:39 ` [PATCH v2 1/4] x86/split_lock: Move Split and Bus lock code to a dedicated file Ravi Bangoria
2024-07-12 9:39 ` [PATCH v2 2/4] x86/bus_lock: Add support for AMD Ravi Bangoria
@ 2024-07-12 9:39 ` Ravi Bangoria
2024-07-12 23:33 ` Jim Mattson
2024-07-12 9:39 ` [PATCH v2 4/4] KVM: SVM: Add Bus Lock Detect support Ravi Bangoria
2024-07-15 15:11 ` [PATCH v2 0/4] x86/cpu: Add Bus Lock Detect support for AMD Tom Lendacky
4 siblings, 1 reply; 10+ messages in thread
From: Ravi Bangoria @ 2024-07-12 9:39 UTC (permalink / raw)
To: tglx, mingo, bp, dave.hansen, seanjc, pbonzini, thomas.lendacky
Cc: ravi.bangoria, hpa, rmk+kernel, peterz, james.morse,
lukas.bulwahn, arjan, j.granados, sibs, nik.borisov, michael.roth,
nikunj.dadhania, babu.moger, x86, kvm, linux-kernel,
santosh.shukla, ananth.narayan, sandipan.das, manali.shukla,
jmattson
If the host supports Bus Lock Detect, KVM advertises it to guests even
when SVM support for it is absent, so the guest wouldn't be able to use
the feature despite its CPUID bit being set. Fix it by unconditionally
clearing the feature bit in the KVM CPU capabilities.
Reported-by: Jim Mattson <jmattson@google.com>
Closes: https://lore.kernel.org/r/CALMp9eRet6+v8Y1Q-i6mqPm4hUow_kJNhmVHfOV8tMfuSS=tVg@mail.gmail.com
Fixes: 76ea438b4afc ("KVM: X86: Expose bus lock debug exception to guest")
Cc: stable@vger.kernel.org
Signed-off-by: Ravi Bangoria <ravi.bangoria@amd.com>
---
arch/x86/kvm/svm/svm.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index c95d3900fe56..4a1d0a8478a5 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -5211,6 +5211,9 @@ static __init void svm_set_cpu_caps(void)
/* CPUID 0x8000001F (SME/SEV features) */
sev_set_cpu_caps();
+
+ /* Don't advertise Bus Lock Detect to guest if SVM support is absent */
+ kvm_cpu_cap_clear(X86_FEATURE_BUS_LOCK_DETECT);
}
static __init int svm_hardware_setup(void)
--
2.34.1
^ permalink raw reply related [flat|nested] 10+ messages in thread
* [PATCH v2 4/4] KVM: SVM: Add Bus Lock Detect support
2024-07-12 9:39 [PATCH v2 0/4] x86/cpu: Add Bus Lock Detect support for AMD Ravi Bangoria
` (2 preceding siblings ...)
2024-07-12 9:39 ` [PATCH v2 3/4] KVM: SVM: Don't advertise Bus Lock Detect to guest if SVM support is missing Ravi Bangoria
@ 2024-07-12 9:39 ` Ravi Bangoria
2024-07-15 15:11 ` [PATCH v2 0/4] x86/cpu: Add Bus Lock Detect support for AMD Tom Lendacky
4 siblings, 0 replies; 10+ messages in thread
From: Ravi Bangoria @ 2024-07-12 9:39 UTC (permalink / raw)
To: tglx, mingo, bp, dave.hansen, seanjc, pbonzini, thomas.lendacky
Cc: ravi.bangoria, hpa, rmk+kernel, peterz, james.morse,
lukas.bulwahn, arjan, j.granados, sibs, nik.borisov, michael.roth,
nikunj.dadhania, babu.moger, x86, kvm, linux-kernel,
santosh.shukla, ananth.narayan, sandipan.das, manali.shukla,
jmattson
An upcoming AMD uarch will support Bus Lock Detect. Add support for it
in KVM. Bus Lock Detect is enabled through MSR_IA32_DEBUGCTLMSR, and
MSR_IA32_DEBUGCTLMSR is virtualized only if LBR Virtualization is
enabled. Add this dependency in KVM.
Signed-off-by: Ravi Bangoria <ravi.bangoria@amd.com>
---
arch/x86/kvm/svm/nested.c | 3 ++-
arch/x86/kvm/svm/svm.c | 17 ++++++++++++++---
2 files changed, 16 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 55b9a6d96bcf..6e93c2d9e7df 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -586,7 +586,8 @@ static void nested_vmcb02_prepare_save(struct vcpu_svm *svm, struct vmcb *vmcb12
/* These bits will be set properly on the first execution when new_vmc12 is true */
if (unlikely(new_vmcb12 || vmcb_is_dirty(vmcb12, VMCB_DR))) {
vmcb02->save.dr7 = svm->nested.save.dr7 | DR7_FIXED_1;
- svm->vcpu.arch.dr6 = svm->nested.save.dr6 | DR6_ACTIVE_LOW;
+ /* DR6_RTM is not supported on AMD as of now. */
+ svm->vcpu.arch.dr6 = svm->nested.save.dr6 | DR6_FIXED_1 | DR6_RTM;
vmcb_mark_dirty(vmcb02, VMCB_DR);
}
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 4a1d0a8478a5..e00e1e2a0b78 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1044,7 +1044,8 @@ void svm_update_lbrv(struct kvm_vcpu *vcpu)
{
struct vcpu_svm *svm = to_svm(vcpu);
bool current_enable_lbrv = svm->vmcb->control.virt_ext & LBR_CTL_ENABLE_MASK;
- bool enable_lbrv = (svm_get_lbr_vmcb(svm)->save.dbgctl & DEBUGCTLMSR_LBR) ||
+ u64 dbgctl_buslock_lbr = DEBUGCTLMSR_BUS_LOCK_DETECT | DEBUGCTLMSR_LBR;
+ bool enable_lbrv = (svm_get_lbr_vmcb(svm)->save.dbgctl & dbgctl_buslock_lbr) ||
(is_guest_mode(vcpu) && guest_can_use(vcpu, X86_FEATURE_LBRV) &&
(svm->nested.ctl.virt_ext & LBR_CTL_ENABLE_MASK));
@@ -3145,6 +3146,10 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
if (data & DEBUGCTL_RESERVED_BITS)
return 1;
+ if ((data & DEBUGCTLMSR_BUS_LOCK_DETECT) &&
+ !guest_cpuid_has(vcpu, X86_FEATURE_BUS_LOCK_DETECT))
+ return 1;
+
svm_get_lbr_vmcb(svm)->save.dbgctl = data;
svm_update_lbrv(vcpu);
break;
@@ -5212,8 +5217,14 @@ static __init void svm_set_cpu_caps(void)
/* CPUID 0x8000001F (SME/SEV features) */
sev_set_cpu_caps();
- /* Don't advertise Bus Lock Detect to guest if SVM support is absent */
- kvm_cpu_cap_clear(X86_FEATURE_BUS_LOCK_DETECT);
+ /*
+ * LBR Virtualization must be enabled to support BusLockTrap inside the
+ * guest, since BusLockTrap is enabled through MSR_IA32_DEBUGCTLMSR and
+ * MSR_IA32_DEBUGCTLMSR is virtualized only if LBR Virtualization is
+ * enabled.
+ */
+ if (!lbrv)
+ kvm_cpu_cap_clear(X86_FEATURE_BUS_LOCK_DETECT);
}
static __init int svm_hardware_setup(void)
--
2.34.1
^ permalink raw reply related [flat|nested] 10+ messages in thread
* Re: [PATCH v2 3/4] KVM: SVM: Don't advertise Bus Lock Detect to guest if SVM support is missing
2024-07-12 9:39 ` [PATCH v2 3/4] KVM: SVM: Don't advertise Bus Lock Detect to guest if SVM support is missing Ravi Bangoria
@ 2024-07-12 23:33 ` Jim Mattson
0 siblings, 0 replies; 10+ messages in thread
From: Jim Mattson @ 2024-07-12 23:33 UTC (permalink / raw)
To: Ravi Bangoria
Cc: tglx, mingo, bp, dave.hansen, seanjc, pbonzini, thomas.lendacky,
hpa, rmk+kernel, peterz, james.morse, lukas.bulwahn, arjan,
j.granados, sibs, nik.borisov, michael.roth, nikunj.dadhania,
babu.moger, x86, kvm, linux-kernel, santosh.shukla,
ananth.narayan, sandipan.das, manali.shukla
On Fri, Jul 12, 2024 at 2:41 AM Ravi Bangoria <ravi.bangoria@amd.com> wrote:
>
> If the host supports Bus Lock Detect, KVM advertises it to guests even
> when SVM support is absent. However, the guest wouldn't be able to use
> the feature despite its CPUID bit being set. Fix this by unconditionally
> clearing the feature bit in the KVM cpu capability.
>
> Reported-by: Jim Mattson <jmattson@google.com>
> Closes: https://lore.kernel.org/r/CALMp9eRet6+v8Y1Q-i6mqPm4hUow_kJNhmVHfOV8tMfuSS=tVg@mail.gmail.com
> Fixes: 76ea438b4afc ("KVM: X86: Expose bus lock debug exception to guest")
> Cc: stable@vger.kernel.org
> Signed-off-by: Ravi Bangoria <ravi.bangoria@amd.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [PATCH v2 1/4] x86/split_lock: Move Split and Bus lock code to a dedicated file
2024-07-12 9:39 ` [PATCH v2 1/4] x86/split_lock: Move Split and Bus lock code to a dedicated file Ravi Bangoria
@ 2024-07-13 10:33 ` kernel test robot
2024-07-13 12:41 ` kernel test robot
1 sibling, 0 replies; 10+ messages in thread
From: kernel test robot @ 2024-07-13 10:33 UTC (permalink / raw)
To: Ravi Bangoria, tglx, mingo, bp, dave.hansen, seanjc, pbonzini,
thomas.lendacky
Cc: llvm, oe-kbuild-all, ravi.bangoria, hpa, rmk+kernel, peterz,
james.morse, lukas.bulwahn, arjan, j.granados, sibs, nik.borisov,
michael.roth, nikunj.dadhania, babu.moger, x86, kvm, linux-kernel,
santosh.shukla, ananth.narayan, sandipan.das, manali.shukla,
jmattson
Hi Ravi,
kernel test robot noticed the following build errors:
[auto build test ERROR on tip/master]
[also build test ERROR on next-20240712]
[cannot apply to tip/x86/core kvm/queue linus/master tip/auto-latest kvm/linux-next v6.10-rc7]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Ravi-Bangoria/x86-split_lock-Move-Split-and-Bus-lock-code-to-a-dedicated-file/20240712-175306
base: tip/master
patch link: https://lore.kernel.org/r/20240712093943.1288-2-ravi.bangoria%40amd.com
patch subject: [PATCH v2 1/4] x86/split_lock: Move Split and Bus lock code to a dedicated file
config: i386-buildonly-randconfig-002-20240713 (https://download.01.org/0day-ci/archive/20240713/202407131818.mNFDcgjd-lkp@intel.com/config)
compiler: clang version 18.1.5 (https://github.com/llvm/llvm-project 617a15a9eac96088ae5e9134248d8236e34b91b1)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240713/202407131818.mNFDcgjd-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202407131818.mNFDcgjd-lkp@intel.com/
All errors (new ones prefixed by >>):
>> arch/x86/kernel/cpu/bus_lock.c:219:16: error: no member named 'reported_split_lock' in 'struct task_struct'
219 | if (!current->reported_split_lock)
| ~~~~~~~ ^
arch/x86/kernel/cpu/bus_lock.c:222:11: error: no member named 'reported_split_lock' in 'struct task_struct'
222 | current->reported_split_lock = 1;
| ~~~~~~~ ^
>> arch/x86/kernel/cpu/bus_lock.c:250:6: error: redefinition of 'handle_guest_split_lock'
250 | bool handle_guest_split_lock(unsigned long ip)
| ^
arch/x86/include/asm/cpu.h:42:20: note: previous definition is here
42 | static inline bool handle_guest_split_lock(unsigned long ip)
| ^
>> arch/x86/kernel/cpu/bus_lock.c:292:6: error: redefinition of 'handle_user_split_lock'
292 | bool handle_user_split_lock(struct pt_regs *regs, long error_code)
| ^
arch/x86/include/asm/cpu.h:37:20: note: previous definition is here
37 | static inline bool handle_user_split_lock(struct pt_regs *regs, long error_code)
| ^
>> arch/x86/kernel/cpu/bus_lock.c:300:6: error: redefinition of 'handle_bus_lock'
300 | void handle_bus_lock(struct pt_regs *regs)
| ^
arch/x86/include/asm/cpu.h:47:20: note: previous definition is here
47 | static inline void handle_bus_lock(struct pt_regs *regs) {}
| ^
>> arch/x86/kernel/cpu/bus_lock.c:401:13: error: redefinition of 'sld_setup'
401 | void __init sld_setup(struct cpuinfo_x86 *c)
| ^
arch/x86/include/asm/cpu.h:36:27: note: previous definition is here
36 | static inline void __init sld_setup(struct cpuinfo_x86 *c) {}
| ^
6 errors generated.
vim +219 arch/x86/kernel/cpu/bus_lock.c
213
214 static void split_lock_warn(unsigned long ip)
215 {
216 struct delayed_work *work;
217 int cpu;
218
> 219 if (!current->reported_split_lock)
220 pr_warn_ratelimited("#AC: %s/%d took a split_lock trap at address: 0x%lx\n",
221 current->comm, current->pid, ip);
222 current->reported_split_lock = 1;
223
224 if (sysctl_sld_mitigate) {
225 /*
226 * misery factor #1:
227 * sleep 10ms before trying to execute split lock.
228 */
229 if (msleep_interruptible(10) > 0)
230 return;
231 /*
232 * Misery factor #2:
233 * only allow one buslocked disabled core at a time.
234 */
235 if (down_interruptible(&buslock_sem) == -EINTR)
236 return;
237 work = &sl_reenable_unlock;
238 } else {
239 work = &sl_reenable;
240 }
241
242 cpu = get_cpu();
243 schedule_delayed_work_on(cpu, work, 2);
244
245 /* Disable split lock detection on this CPU to make progress */
246 sld_update_msr(false);
247 put_cpu();
248 }
249
> 250 bool handle_guest_split_lock(unsigned long ip)
251 {
252 if (sld_state == sld_warn) {
253 split_lock_warn(ip);
254 return true;
255 }
256
257 pr_warn_once("#AC: %s/%d %s split_lock trap at address: 0x%lx\n",
258 current->comm, current->pid,
259 sld_state == sld_fatal ? "fatal" : "bogus", ip);
260
261 current->thread.error_code = 0;
262 current->thread.trap_nr = X86_TRAP_AC;
263 force_sig_fault(SIGBUS, BUS_ADRALN, NULL);
264 return false;
265 }
266 EXPORT_SYMBOL_GPL(handle_guest_split_lock);
267
268 void bus_lock_init(void)
269 {
270 u64 val;
271
272 if (!boot_cpu_has(X86_FEATURE_BUS_LOCK_DETECT))
273 return;
274
275 rdmsrl(MSR_IA32_DEBUGCTLMSR, val);
276
277 if ((boot_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT) &&
278 (sld_state == sld_warn || sld_state == sld_fatal)) ||
279 sld_state == sld_off) {
280 /*
281 * Warn and fatal are handled by #AC for split lock if #AC for
282 * split lock is supported.
283 */
284 val &= ~DEBUGCTLMSR_BUS_LOCK_DETECT;
285 } else {
286 val |= DEBUGCTLMSR_BUS_LOCK_DETECT;
287 }
288
289 wrmsrl(MSR_IA32_DEBUGCTLMSR, val);
290 }
291
> 292 bool handle_user_split_lock(struct pt_regs *regs, long error_code)
293 {
294 if ((regs->flags & X86_EFLAGS_AC) || sld_state == sld_fatal)
295 return false;
296 split_lock_warn(regs->ip);
297 return true;
298 }
299
> 300 void handle_bus_lock(struct pt_regs *regs)
301 {
302 switch (sld_state) {
303 case sld_off:
304 break;
305 case sld_ratelimit:
306 /* Enforce no more than bld_ratelimit bus locks/sec. */
307 while (!__ratelimit(&bld_ratelimit))
308 msleep(20);
309 /* Warn on the bus lock. */
310 fallthrough;
311 case sld_warn:
312 pr_warn_ratelimited("#DB: %s/%d took a bus_lock trap at address: 0x%lx\n",
313 current->comm, current->pid, regs->ip);
314 break;
315 case sld_fatal:
316 force_sig_fault(SIGBUS, BUS_ADRALN, NULL);
317 break;
318 }
319 }
320
321 /*
322 * CPU models that are known to have the per-core split-lock detection
323 * feature even though they do not enumerate IA32_CORE_CAPABILITIES.
324 */
325 static const struct x86_cpu_id split_lock_cpu_ids[] __initconst = {
326 X86_MATCH_VFM(INTEL_ICELAKE_X, 0),
327 X86_MATCH_VFM(INTEL_ICELAKE_L, 0),
328 X86_MATCH_VFM(INTEL_ICELAKE_D, 0),
329 {}
330 };
331
332 static void __init split_lock_setup(struct cpuinfo_x86 *c)
333 {
334 const struct x86_cpu_id *m;
335 u64 ia32_core_caps;
336
337 if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
338 return;
339
340 /* Check for CPUs that have support but do not enumerate it: */
341 m = x86_match_cpu(split_lock_cpu_ids);
342 if (m)
343 goto supported;
344
345 if (!cpu_has(c, X86_FEATURE_CORE_CAPABILITIES))
346 return;
347
348 /*
349 * Not all bits in MSR_IA32_CORE_CAPS are architectural, but
350 * MSR_IA32_CORE_CAPS_SPLIT_LOCK_DETECT is. All CPUs that set
351 * it have split lock detection.
352 */
353 rdmsrl(MSR_IA32_CORE_CAPS, ia32_core_caps);
354 if (ia32_core_caps & MSR_IA32_CORE_CAPS_SPLIT_LOCK_DETECT)
355 goto supported;
356
357 /* CPU is not in the model list and does not have the MSR bit: */
358 return;
359
360 supported:
361 cpu_model_supports_sld = true;
362 __split_lock_setup();
363 }
364
365 static void sld_state_show(void)
366 {
367 if (!boot_cpu_has(X86_FEATURE_BUS_LOCK_DETECT) &&
368 !boot_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT))
369 return;
370
371 switch (sld_state) {
372 case sld_off:
373 pr_info("disabled\n");
374 break;
375 case sld_warn:
376 if (boot_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT)) {
377 pr_info("#AC: crashing the kernel on kernel split_locks and warning on user-space split_locks\n");
378 if (cpuhp_setup_state(CPUHP_AP_ONLINE_DYN,
379 "x86/splitlock", NULL, splitlock_cpu_offline) < 0)
380 pr_warn("No splitlock CPU offline handler\n");
381 } else if (boot_cpu_has(X86_FEATURE_BUS_LOCK_DETECT)) {
382 pr_info("#DB: warning on user-space bus_locks\n");
383 }
384 break;
385 case sld_fatal:
386 if (boot_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT)) {
387 pr_info("#AC: crashing the kernel on kernel split_locks and sending SIGBUS on user-space split_locks\n");
388 } else if (boot_cpu_has(X86_FEATURE_BUS_LOCK_DETECT)) {
389 pr_info("#DB: sending SIGBUS on user-space bus_locks%s\n",
390 boot_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT) ?
391 " from non-WB" : "");
392 }
393 break;
394 case sld_ratelimit:
395 if (boot_cpu_has(X86_FEATURE_BUS_LOCK_DETECT))
396 pr_info("#DB: setting system wide bus lock rate limit to %u/sec\n", bld_ratelimit.burst);
397 break;
398 }
399 }
400
> 401 void __init sld_setup(struct cpuinfo_x86 *c)
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [PATCH v2 1/4] x86/split_lock: Move Split and Bus lock code to a dedicated file
2024-07-12 9:39 ` [PATCH v2 1/4] x86/split_lock: Move Split and Bus lock code to a dedicated file Ravi Bangoria
2024-07-13 10:33 ` kernel test robot
@ 2024-07-13 12:41 ` kernel test robot
1 sibling, 0 replies; 10+ messages in thread
From: kernel test robot @ 2024-07-13 12:41 UTC (permalink / raw)
To: Ravi Bangoria, tglx, mingo, bp, dave.hansen, seanjc, pbonzini,
thomas.lendacky
Cc: oe-kbuild-all, ravi.bangoria, hpa, rmk+kernel, peterz,
james.morse, lukas.bulwahn, arjan, j.granados, sibs, nik.borisov,
michael.roth, nikunj.dadhania, babu.moger, x86, kvm, linux-kernel,
santosh.shukla, ananth.narayan, sandipan.das, manali.shukla,
jmattson
Hi Ravi,
kernel test robot noticed the following build errors:
[auto build test ERROR on tip/master]
[also build test ERROR on next-20240712]
[cannot apply to tip/x86/core kvm/queue linus/master tip/auto-latest kvm/linux-next v6.10-rc7]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Ravi-Bangoria/x86-split_lock-Move-Split-and-Bus-lock-code-to-a-dedicated-file/20240712-175306
base: tip/master
patch link: https://lore.kernel.org/r/20240712093943.1288-2-ravi.bangoria%40amd.com
patch subject: [PATCH v2 1/4] x86/split_lock: Move Split and Bus lock code to a dedicated file
config: i386-randconfig-015-20240713 (https://download.01.org/0day-ci/archive/20240713/202407132059.uppmW6rR-lkp@intel.com/config)
compiler: gcc-11 (Ubuntu 11.4.0-4ubuntu1) 11.4.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240713/202407132059.uppmW6rR-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202407132059.uppmW6rR-lkp@intel.com/
All errors (new ones prefixed by >>):
arch/x86/kernel/cpu/bus_lock.c: In function 'split_lock_warn':
>> arch/x86/kernel/cpu/bus_lock.c:219:21: error: 'struct task_struct' has no member named 'reported_split_lock'
219 | if (!current->reported_split_lock)
| ^~
arch/x86/kernel/cpu/bus_lock.c:222:16: error: 'struct task_struct' has no member named 'reported_split_lock'
222 | current->reported_split_lock = 1;
| ^~
arch/x86/kernel/cpu/bus_lock.c: At top level:
arch/x86/kernel/cpu/bus_lock.c:250:6: error: redefinition of 'handle_guest_split_lock'
250 | bool handle_guest_split_lock(unsigned long ip)
| ^~~~~~~~~~~~~~~~~~~~~~~
In file included from arch/x86/kernel/cpu/bus_lock.c:12:
arch/x86/include/asm/cpu.h:42:20: note: previous definition of 'handle_guest_split_lock' with type 'bool(long unsigned int)' {aka '_Bool(long unsigned int)'}
42 | static inline bool handle_guest_split_lock(unsigned long ip)
| ^~~~~~~~~~~~~~~~~~~~~~~
arch/x86/kernel/cpu/bus_lock.c:292:6: error: redefinition of 'handle_user_split_lock'
292 | bool handle_user_split_lock(struct pt_regs *regs, long error_code)
| ^~~~~~~~~~~~~~~~~~~~~~
In file included from arch/x86/kernel/cpu/bus_lock.c:12:
arch/x86/include/asm/cpu.h:37:20: note: previous definition of 'handle_user_split_lock' with type 'bool(struct pt_regs *, long int)' {aka '_Bool(struct pt_regs *, long int)'}
37 | static inline bool handle_user_split_lock(struct pt_regs *regs, long error_code)
| ^~~~~~~~~~~~~~~~~~~~~~
arch/x86/kernel/cpu/bus_lock.c:300:6: error: redefinition of 'handle_bus_lock'
300 | void handle_bus_lock(struct pt_regs *regs)
| ^~~~~~~~~~~~~~~
In file included from arch/x86/kernel/cpu/bus_lock.c:12:
arch/x86/include/asm/cpu.h:47:20: note: previous definition of 'handle_bus_lock' with type 'void(struct pt_regs *)'
47 | static inline void handle_bus_lock(struct pt_regs *regs) {}
| ^~~~~~~~~~~~~~~
arch/x86/kernel/cpu/bus_lock.c:401:13: error: redefinition of 'sld_setup'
401 | void __init sld_setup(struct cpuinfo_x86 *c)
| ^~~~~~~~~
In file included from arch/x86/kernel/cpu/bus_lock.c:12:
arch/x86/include/asm/cpu.h:36:27: note: previous definition of 'sld_setup' with type 'void(struct cpuinfo_x86 *)'
36 | static inline void __init sld_setup(struct cpuinfo_x86 *c) {}
| ^~~~~~~~~
vim +219 arch/x86/kernel/cpu/bus_lock.c
213
214 static void split_lock_warn(unsigned long ip)
215 {
216 struct delayed_work *work;
217 int cpu;
218
> 219 if (!current->reported_split_lock)
220 pr_warn_ratelimited("#AC: %s/%d took a split_lock trap at address: 0x%lx\n",
221 current->comm, current->pid, ip);
222 current->reported_split_lock = 1;
223
224 if (sysctl_sld_mitigate) {
225 /*
226 * misery factor #1:
227 * sleep 10ms before trying to execute split lock.
228 */
229 if (msleep_interruptible(10) > 0)
230 return;
231 /*
232 * Misery factor #2:
233 * only allow one buslocked disabled core at a time.
234 */
235 if (down_interruptible(&buslock_sem) == -EINTR)
236 return;
237 work = &sl_reenable_unlock;
238 } else {
239 work = &sl_reenable;
240 }
241
242 cpu = get_cpu();
243 schedule_delayed_work_on(cpu, work, 2);
244
245 /* Disable split lock detection on this CPU to make progress */
246 sld_update_msr(false);
247 put_cpu();
248 }
249
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [PATCH v2 0/4] x86/cpu: Add Bus Lock Detect support for AMD
2024-07-12 9:39 [PATCH v2 0/4] x86/cpu: Add Bus Lock Detect support for AMD Ravi Bangoria
` (3 preceding siblings ...)
2024-07-12 9:39 ` [PATCH v2 4/4] KVM: SVM: Add Bus Lock Detect support Ravi Bangoria
@ 2024-07-15 15:11 ` Tom Lendacky
2024-07-15 15:18 ` Ravi Bangoria
4 siblings, 1 reply; 10+ messages in thread
From: Tom Lendacky @ 2024-07-15 15:11 UTC (permalink / raw)
To: Ravi Bangoria, tglx, mingo, bp, dave.hansen, seanjc, pbonzini
Cc: hpa, rmk+kernel, peterz, james.morse, lukas.bulwahn, arjan,
j.granados, sibs, nik.borisov, michael.roth, nikunj.dadhania,
babu.moger, x86, kvm, linux-kernel, santosh.shukla,
ananth.narayan, sandipan.das, manali.shukla, jmattson
On 7/12/24 04:39, Ravi Bangoria wrote:
> Upcoming AMD uarch will support Bus Lock Detect (called Bus Lock Trap
Saying "Upcoming AMD uarch" is ok in the cover letter, but in a commit
message it doesn't mean a lot when viewed a few years from now. I would
just word it as, depending on the context, something like "AMD
processors that support Bus Lock Detect ..." or "AMD processors support
Bus Lock Detect ...".
Since it looks like a new version will be needed to address the kernel
test robot issues, maybe re-work your commit messages.
Thanks,
Tom
> in AMD docs). Add support for the same in Linux. Bus Lock Detect is
> enumerated with cpuid CPUID Fn0000_0007_ECX_x0 bit [24 / BUSLOCKTRAP].
> It can be enabled through MSR_IA32_DEBUGCTLMSR. When enabled, hardware
> clears DR6[11] and raises a #DB exception on occurrence of Bus Lock if
> CPL > 0. More detail about the feature can be found in AMD APM[1].
>
> Patches are prepared on tip/master (a6fffa92da54).
>
> [1]: AMD64 Architecture Programmer's Manual Pub. 40332, Rev. 4.07 - June
> 2023, Vol 2, 13.1.3.6 Bus Lock Trap
> https://bugzilla.kernel.org/attachment.cgi?id=304653
>
> v1: https://lore.kernel.org/r/20240429060643.211-1-ravi.bangoria@amd.com
> v1->v2:
> - Call bus_lock_init() from common.c. Although common.c is shared across
> all X86_VENDOR_*, bus_lock_init() internally checks for
> X86_FEATURE_BUS_LOCK_DETECT, hence it's safe to call it from common.c.
> - s/split-bus-lock.c/bus_lock.c/ for a new filename.
> - Add a KVM patch to disable Bus Lock Trap unconditionally when SVM
> support is missing.
>
> Note:
> A Qemu fix is also require to handle corner case where a hardware
> instruction or data breakpoint is created by Qemu remote debugger (gdb)
> on the same instruction which also causes a Bus Lock. I'll post a Qemu
> patch separately.
>
> Ravi Bangoria (4):
> x86/split_lock: Move Split and Bus lock code to a dedicated file
> x86/bus_lock: Add support for AMD
> KVM: SVM: Don't advertise Bus Lock Detect to guest if SVM support is
> missing
> KVM: SVM: Add Bus Lock Detect support
>
> arch/x86/include/asm/cpu.h | 4 +
> arch/x86/kernel/cpu/Makefile | 1 +
> arch/x86/kernel/cpu/bus_lock.c | 406 ++++++++++++++++++++++++++++++++
> arch/x86/kernel/cpu/common.c | 2 +
> arch/x86/kernel/cpu/intel.c | 407 ---------------------------------
> arch/x86/kvm/svm/nested.c | 3 +-
> arch/x86/kvm/svm/svm.c | 16 +-
> 7 files changed, 430 insertions(+), 409 deletions(-)
> create mode 100644 arch/x86/kernel/cpu/bus_lock.c
>
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [PATCH v2 0/4] x86/cpu: Add Bus Lock Detect support for AMD
2024-07-15 15:11 ` [PATCH v2 0/4] x86/cpu: Add Bus Lock Detect support for AMD Tom Lendacky
@ 2024-07-15 15:18 ` Ravi Bangoria
0 siblings, 0 replies; 10+ messages in thread
From: Ravi Bangoria @ 2024-07-15 15:18 UTC (permalink / raw)
To: Tom Lendacky, tglx, mingo, bp, dave.hansen, seanjc, pbonzini
Cc: hpa, rmk+kernel, peterz, james.morse, lukas.bulwahn, arjan,
j.granados, sibs, nik.borisov, michael.roth, nikunj.dadhania,
babu.moger, x86, kvm, linux-kernel, santosh.shukla,
ananth.narayan, sandipan.das, manali.shukla, jmattson,
Ravi Bangoria
On 15-Jul-24 8:41 PM, Tom Lendacky wrote:
> On 7/12/24 04:39, Ravi Bangoria wrote:
>> Upcoming AMD uarch will support Bus Lock Detect (called Bus Lock Trap
>
> Saying "Upcoming AMD uarch" is ok in the cover letter, but in a commit
> message it doesn't mean a lot when viewed a few years from now. I would
> just word it as, depending on the context, something like "AMD
> processors that support Bus Lock Detect ..." or "AMD processors support
> Bus Lock Detect ...".
>
> Since it looks like a v2 will be needed to address the kernel test robot
> issues, maybe re-work your commit messages.
Makes sense. Will reword it in v3.
Thanks,
Ravi
^ permalink raw reply [flat|nested] 10+ messages in thread
end of thread, other threads:[~2024-07-15 15:19 UTC | newest]
Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-07-12 9:39 [PATCH v2 0/4] x86/cpu: Add Bus Lock Detect support for AMD Ravi Bangoria
2024-07-12 9:39 ` [PATCH v2 1/4] x86/split_lock: Move Split and Bus lock code to a dedicated file Ravi Bangoria
2024-07-13 10:33 ` kernel test robot
2024-07-13 12:41 ` kernel test robot
2024-07-12 9:39 ` [PATCH v2 2/4] x86/bus_lock: Add support for AMD Ravi Bangoria
2024-07-12 9:39 ` [PATCH v2 3/4] KVM: SVM: Don't advertise Bus Lock Detect to guest if SVM support is missing Ravi Bangoria
2024-07-12 23:33 ` Jim Mattson
2024-07-12 9:39 ` [PATCH v2 4/4] KVM: SVM: Add Bus Lock Detect support Ravi Bangoria
2024-07-15 15:11 ` [PATCH v2 0/4] x86/cpu: Add Bus Lock Detect support for AMD Tom Lendacky
2024-07-15 15:18 ` Ravi Bangoria
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox