* [PATCH v6 01/15] x86/mce: Set CR4.MCE last during init
2025-09-08 15:40 [PATCH v6 00/15] AMD MCA interrupts rework Yazen Ghannam
@ 2025-09-08 15:40 ` Yazen Ghannam
2025-09-10 11:43 ` Nikolay Borisov
2025-09-08 15:40 ` [PATCH v6 02/15] x86/mce: Define BSP-only init Yazen Ghannam
` (14 subsequent siblings)
15 siblings, 1 reply; 37+ messages in thread
From: Yazen Ghannam @ 2025-09-08 15:40 UTC (permalink / raw)
To: x86, Tony Luck, Rafael J. Wysocki
Cc: linux-kernel, linux-edac, Smita.KoralahalliChannabasappa,
Qiuxu Zhuo, Nikolay Borisov, linux-acpi, Yazen Ghannam
Set the CR4.MCE bit as the last step during init. This brings the MCA
init flow closer to what is described in the x86 docs.
x86 docs:
        AMD             Intel
                        MCG_CTL
        MCA_CONFIG      MCG_EXT_CTL
        MCi_CTL         MCi_CTL
        MCG_CTL
        CR4.MCE         CR4.MCE

Current Linux:
        AMD             Intel
        CR4.MCE         CR4.MCE
        MCG_CTL         MCG_CTL
        MCA_CONFIG      MCG_EXT_CTL
        MCi_CTL         MCi_CTL

Updated Linux:
        AMD             Intel
        MCG_CTL         MCG_CTL
        MCA_CONFIG      MCG_EXT_CTL
        MCi_CTL         MCi_CTL
        CR4.MCE         CR4.MCE
The new init flow will match Intel's docs, but there will still be a
mismatch for AMD regarding MCG_CTL. However, there is no known issue
with this ordering, so leave it for now.
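As a simplified sketch (assembled from the hunks below, not the verbatim
kernel code), the per-CPU flow after this patch ends with CR4.MCE:

	void mcheck_cpu_init(struct cpuinfo_x86 *c)
	{
		/* early checks and gen pool setup omitted */
		__mcheck_cpu_init_generic();		/* MCG_CTL et al. */
		__mcheck_cpu_init_vendor(c);		/* vendor setup, e.g. MCA_CONFIG / MCG_EXT_CTL */
		__mcheck_cpu_init_prepare_banks();	/* MCi_CTL */
		__mcheck_cpu_setup_timer();
		cr4_set_bits(X86_CR4_MCE);		/* enable #MC delivery last */
	}

The resume and restart paths below get the same "CR4.MCE last" treatment.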
Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
---
Notes:
Link:
https://lore.kernel.org/r/20250825-wip-mca-updates-v5-7-865768a2eef8@amd.com
v5->v6:
* Only move CR4.MCE programming.
* Update commit message to match the new change.
v4->v5:
* New in v5.
arch/x86/kernel/cpu/mce/core.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index 0326fbb83adc..9e31834b3542 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -1850,8 +1850,6 @@ static void __mcheck_cpu_init_generic(void)
{
u64 cap;
- cr4_set_bits(X86_CR4_MCE);
-
rdmsrq(MSR_IA32_MCG_CAP, cap);
if (cap & MCG_CTL_P)
wrmsr(MSR_IA32_MCG_CTL, 0xffffffff, 0xffffffff);
@@ -2276,6 +2274,7 @@ void mcheck_cpu_init(struct cpuinfo_x86 *c)
__mcheck_cpu_init_vendor(c);
__mcheck_cpu_init_prepare_banks();
__mcheck_cpu_setup_timer();
+ cr4_set_bits(X86_CR4_MCE);
}
/*
@@ -2443,6 +2442,7 @@ static void mce_syscore_resume(void)
__mcheck_cpu_init_generic();
__mcheck_cpu_init_vendor(raw_cpu_ptr(&cpu_info));
__mcheck_cpu_init_prepare_banks();
+ cr4_set_bits(X86_CR4_MCE);
}
static struct syscore_ops mce_syscore_ops = {
@@ -2462,6 +2462,7 @@ static void mce_cpu_restart(void *data)
__mcheck_cpu_init_generic();
__mcheck_cpu_init_prepare_banks();
__mcheck_cpu_init_timer();
+ cr4_set_bits(X86_CR4_MCE);
}
/* Reinit MCEs after user configuration changes */
--
2.51.0
* Re: [PATCH v6 01/15] x86/mce: Set CR4.MCE last during init
2025-09-08 15:40 ` [PATCH v6 01/15] x86/mce: Set CR4.MCE last during init Yazen Ghannam
@ 2025-09-10 11:43 ` Nikolay Borisov
0 siblings, 0 replies; 37+ messages in thread
From: Nikolay Borisov @ 2025-09-10 11:43 UTC (permalink / raw)
To: Yazen Ghannam, x86, Tony Luck, Rafael J. Wysocki
Cc: linux-kernel, linux-edac, Smita.KoralahalliChannabasappa,
Qiuxu Zhuo, linux-acpi
On 9/8/25 18:40, Yazen Ghannam wrote:
> Set the CR4.MCE bit as the last step during init. This brings the MCA
> init flow closer to what is described in the x86 docs.
>
> x86 docs:
>         AMD             Intel
>                         MCG_CTL
>         MCA_CONFIG      MCG_EXT_CTL
>         MCi_CTL         MCi_CTL
>         MCG_CTL
>         CR4.MCE         CR4.MCE
>
> Current Linux:
>         AMD             Intel
>         CR4.MCE         CR4.MCE
>         MCG_CTL         MCG_CTL
>         MCA_CONFIG      MCG_EXT_CTL
>         MCi_CTL         MCi_CTL
>
> Updated Linux:
>         AMD             Intel
>         MCG_CTL         MCG_CTL
>         MCA_CONFIG      MCG_EXT_CTL
>         MCi_CTL         MCi_CTL
>         CR4.MCE         CR4.MCE
>
> The new init flow will match Intel's docs, but there will still be a
> mismatch for AMD regarding MCG_CTL. However, there is no known issue
> with this ordering, so leave it for now.
>
> Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
Reviewed-by: Nikolay Borisov <nik.borisov@suse.com>
* [PATCH v6 02/15] x86/mce: Define BSP-only init
2025-09-08 15:40 [PATCH v6 00/15] AMD MCA interrupts rework Yazen Ghannam
2025-09-08 15:40 ` [PATCH v6 01/15] x86/mce: Set CR4.MCE last during init Yazen Ghannam
@ 2025-09-08 15:40 ` Yazen Ghannam
2025-09-10 11:47 ` Nikolay Borisov
2025-09-08 15:40 ` [PATCH v6 03/15] x86/mce: Define BSP-only SMCA init Yazen Ghannam
` (13 subsequent siblings)
15 siblings, 1 reply; 37+ messages in thread
From: Yazen Ghannam @ 2025-09-08 15:40 UTC (permalink / raw)
To: x86, Tony Luck, Rafael J. Wysocki
Cc: linux-kernel, linux-edac, Smita.KoralahalliChannabasappa,
Qiuxu Zhuo, Nikolay Borisov, linux-acpi, Yazen Ghannam
Currently, MCA initialization is executed identically on each CPU as
they are brought online. However, a number of MCA initialization tasks
only need to be done once.
Define a function to collect all 'global' init tasks and call this from
the BSP only. Start with CPU features.
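As a rough sketch of the split (the full hunks are below; only the shape
matters here):

	early_identify_cpu(c)		/* boot CPU, once */
	    mca_bsp_init(c)		/* global state: mce_flags.*, MCG_CAP-derived mca_cfg bits */

	mcheck_cpu_init(c)		/* unchanged: runs on every CPU as it comes online */
	    per-CPU bank, vendor and timer setup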
Reviewed-by: Qiuxu Zhuo <qiuxu.zhuo@intel.com>
Tested-by: Tony Luck <tony.luck@intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
---
Notes:
Link:
https://lore.kernel.org/r/20250825-wip-mca-updates-v5-8-865768a2eef8@amd.com
v5->v6:
* No change.
v4->v5:
* No change.
v3->v4:
* Change cpu_mca_init() to mca_bsp_init().
* Drop code comment.
v2->v3:
* Add tags from Qiuxu and Tony.
v1->v2:
* New in v2.
arch/x86/include/asm/mce.h | 2 ++
arch/x86/kernel/cpu/common.c | 1 +
arch/x86/kernel/cpu/mce/amd.c | 3 ---
arch/x86/kernel/cpu/mce/core.c | 28 +++++++++++++++++++++-------
4 files changed, 24 insertions(+), 10 deletions(-)
diff --git a/arch/x86/include/asm/mce.h b/arch/x86/include/asm/mce.h
index 3224f3862dc8..31e3cb550fb3 100644
--- a/arch/x86/include/asm/mce.h
+++ b/arch/x86/include/asm/mce.h
@@ -241,12 +241,14 @@ struct cper_ia_proc_ctx;
#ifdef CONFIG_X86_MCE
int mcheck_init(void);
+void mca_bsp_init(struct cpuinfo_x86 *c);
void mcheck_cpu_init(struct cpuinfo_x86 *c);
void mcheck_cpu_clear(struct cpuinfo_x86 *c);
int apei_smca_report_x86_error(struct cper_ia_proc_ctx *ctx_info,
u64 lapic_id);
#else
static inline int mcheck_init(void) { return 0; }
+static inline void mca_bsp_init(struct cpuinfo_x86 *c) {}
static inline void mcheck_cpu_init(struct cpuinfo_x86 *c) {}
static inline void mcheck_cpu_clear(struct cpuinfo_x86 *c) {}
static inline int apei_smca_report_x86_error(struct cper_ia_proc_ctx *ctx_info,
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 34a054181c4d..8bbfde05f04f 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1784,6 +1784,7 @@ static void __init early_identify_cpu(struct cpuinfo_x86 *c)
setup_clear_cpu_cap(X86_FEATURE_LA57);
detect_nopl();
+ mca_bsp_init(c);
}
void __init init_cpu_devs(void)
diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c
index aa13c9304ad8..3c6c19eb0a18 100644
--- a/arch/x86/kernel/cpu/mce/amd.c
+++ b/arch/x86/kernel/cpu/mce/amd.c
@@ -653,9 +653,6 @@ void mce_amd_feature_init(struct cpuinfo_x86 *c)
u32 low = 0, high = 0, address = 0;
int offset = -1;
- mce_flags.overflow_recov = cpu_feature_enabled(X86_FEATURE_OVERFLOW_RECOV);
- mce_flags.succor = cpu_feature_enabled(X86_FEATURE_SUCCOR);
- mce_flags.smca = cpu_feature_enabled(X86_FEATURE_SMCA);
mce_flags.amd_threshold = 1;
for (bank = 0; bank < this_cpu_read(mce_num_banks); ++bank) {
diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index 9e31834b3542..79f3dd7f7851 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -1837,13 +1837,6 @@ static void __mcheck_cpu_cap_init(void)
this_cpu_write(mce_num_banks, b);
__mcheck_cpu_mce_banks_init();
-
- /* Use accurate RIP reporting if available. */
- if ((cap & MCG_EXT_P) && MCG_EXT_CNT(cap) >= 9)
- mca_cfg.rip_msr = MSR_IA32_MCG_EIP;
-
- if (cap & MCG_SER_P)
- mca_cfg.ser = 1;
}
static void __mcheck_cpu_init_generic(void)
@@ -2240,6 +2233,27 @@ DEFINE_IDTENTRY_RAW(exc_machine_check)
}
#endif
+void mca_bsp_init(struct cpuinfo_x86 *c)
+{
+ u64 cap;
+
+ if (!mce_available(c))
+ return;
+
+ mce_flags.overflow_recov = cpu_feature_enabled(X86_FEATURE_OVERFLOW_RECOV);
+ mce_flags.succor = cpu_feature_enabled(X86_FEATURE_SUCCOR);
+ mce_flags.smca = cpu_feature_enabled(X86_FEATURE_SMCA);
+
+ rdmsrq(MSR_IA32_MCG_CAP, cap);
+
+ /* Use accurate RIP reporting if available. */
+ if ((cap & MCG_EXT_P) && MCG_EXT_CNT(cap) >= 9)
+ mca_cfg.rip_msr = MSR_IA32_MCG_EIP;
+
+ if (cap & MCG_SER_P)
+ mca_cfg.ser = 1;
+}
+
/*
* Called for each booted CPU to set up machine checks.
* Must be called with preempt off:
--
2.51.0
* Re: [PATCH v6 02/15] x86/mce: Define BSP-only init
2025-09-08 15:40 ` [PATCH v6 02/15] x86/mce: Define BSP-only init Yazen Ghannam
@ 2025-09-10 11:47 ` Nikolay Borisov
2025-09-10 13:53 ` Yazen Ghannam
2025-09-10 17:23 ` Luck, Tony
0 siblings, 2 replies; 37+ messages in thread
From: Nikolay Borisov @ 2025-09-10 11:47 UTC (permalink / raw)
To: Yazen Ghannam, x86, Tony Luck, Rafael J. Wysocki
Cc: linux-kernel, linux-edac, Smita.KoralahalliChannabasappa,
Qiuxu Zhuo, linux-acpi
On 9/8/25 18:40, Yazen Ghannam wrote:
> Currently, MCA initialization is executed identically on each CPU as
> they are brought online. However, a number of MCA initialization tasks
> only need to be done once.
>
> Define a function to collect all 'global' init tasks and call this from
> the BSP only. Start with CPU features.
>
> Reviewed-by: Qiuxu Zhuo <qiuxu.zhuo@intel.com>
> Tested-by: Tony Luck <tony.luck@intel.com>
> Reviewed-by: Tony Luck <tony.luck@intel.com>
> Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
<snip>
> @@ -2240,6 +2233,27 @@ DEFINE_IDTENTRY_RAW(exc_machine_check)
> }
> #endif
>
> +void mca_bsp_init(struct cpuinfo_x86 *c)
> +{
> + u64 cap;
> +
> + if (!mce_available(c))
> + return;
> +
> + mce_flags.overflow_recov = cpu_feature_enabled(X86_FEATURE_OVERFLOW_RECOV);
> + mce_flags.succor = cpu_feature_enabled(X86_FEATURE_SUCCOR);
> + mce_flags.smca = cpu_feature_enabled(X86_FEATURE_SMCA);
> +
> + rdmsrq(MSR_IA32_MCG_CAP, cap);
> +
> + /* Use accurate RIP reporting if available. */
> + if ((cap & MCG_EXT_P) && MCG_EXT_CNT(cap) >= 9)
> + mca_cfg.rip_msr = MSR_IA32_MCG_EIP;
> +
> + if (cap & MCG_SER_P)
> + mca_cfg.ser = 1;
> +}
> +
LGTM
Reviewed-by: Nikolay Borisov <nik.borisov@suse.com>
nit: One question though for those CPUs which consist of P+E cores, is
it mandated that both types of cores will have identical MCE
architecture, I assume the x86 world will be a lot more unified than
Arm's big.LITTLE ?
> /*
> * Called for each booted CPU to set up machine checks.
> * Must be called with preempt off:
>
* Re: [PATCH v6 02/15] x86/mce: Define BSP-only init
2025-09-10 11:47 ` Nikolay Borisov
@ 2025-09-10 13:53 ` Yazen Ghannam
2025-09-10 14:28 ` Nikolay Borisov
2025-09-10 17:23 ` Luck, Tony
1 sibling, 1 reply; 37+ messages in thread
From: Yazen Ghannam @ 2025-09-10 13:53 UTC (permalink / raw)
To: Nikolay Borisov
Cc: x86, Tony Luck, Rafael J. Wysocki, linux-kernel, linux-edac,
Smita.KoralahalliChannabasappa, Qiuxu Zhuo, linux-acpi
On Wed, Sep 10, 2025 at 02:47:16PM +0300, Nikolay Borisov wrote:
>
>
> On 9/8/25 18:40, Yazen Ghannam wrote:
> > Currently, MCA initialization is executed identically on each CPU as
> > they are brought online. However, a number of MCA initialization tasks
> > only need to be done once.
> >
> > Define a function to collect all 'global' init tasks and call this from
> > the BSP only. Start with CPU features.
> >
> > Reviewed-by: Qiuxu Zhuo <qiuxu.zhuo@intel.com>
> > Tested-by: Tony Luck <tony.luck@intel.com>
> > Reviewed-by: Tony Luck <tony.luck@intel.com>
> > Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
>
> <snip>
>
> > @@ -2240,6 +2233,27 @@ DEFINE_IDTENTRY_RAW(exc_machine_check)
> > }
> > #endif
> > +void mca_bsp_init(struct cpuinfo_x86 *c)
> > +{
> > + u64 cap;
> > +
> > + if (!mce_available(c))
> > + return;
> > +
> > + mce_flags.overflow_recov = cpu_feature_enabled(X86_FEATURE_OVERFLOW_RECOV);
> > + mce_flags.succor = cpu_feature_enabled(X86_FEATURE_SUCCOR);
> > + mce_flags.smca = cpu_feature_enabled(X86_FEATURE_SMCA);
> > +
> > + rdmsrq(MSR_IA32_MCG_CAP, cap);
> > +
> > + /* Use accurate RIP reporting if available. */
> > + if ((cap & MCG_EXT_P) && MCG_EXT_CNT(cap) >= 9)
> > + mca_cfg.rip_msr = MSR_IA32_MCG_EIP;
> > +
> > + if (cap & MCG_SER_P)
> > + mca_cfg.ser = 1;
> > +}
> > +
>
>
> LGTM
>
> Reviewed-by: Nikolay Borisov <nik.borisov@suse.com>
Thanks Nikolay for the reviews.
>
> nit: One question though for those CPUs which consist of P+E cores, is it
> mandated that both types of cores will have identical MCE architecture, I
> assume the x86 world will be a lot more unified than Arm's big.LITTLE ?
>
I think technically no, they won't be mandated to be identical. We already
have per-CPU feature bits, registers, etc. And in Scalable MCA there are
also per-bank config bits.
However, this doesn't mean we will have different MCE features between
CPUs in a system just yet. The architects do try to make things flexible
and scalable just in case there is a need in the future.
We can code to the architectures and be *mostly* future-proof. But it's
not guaranteed to be without bugs. And it's not guaranteed that every
possible case will be used.
Do you have any concerns or see any issues with the code today in this
area? Maybe you're thinking we should have per-CPU config for feature
bits? If so, I agree but we don't have a real need today AFAIK.
Maybe others can comment too.
Thanks,
Yazen
* Re: [PATCH v6 02/15] x86/mce: Define BSP-only init
2025-09-10 13:53 ` Yazen Ghannam
@ 2025-09-10 14:28 ` Nikolay Borisov
0 siblings, 0 replies; 37+ messages in thread
From: Nikolay Borisov @ 2025-09-10 14:28 UTC (permalink / raw)
To: Yazen Ghannam
Cc: x86, Tony Luck, Rafael J. Wysocki, linux-kernel, linux-edac,
Smita.KoralahalliChannabasappa, Qiuxu Zhuo, linux-acpi
On 10.09.25 at 16:53, Yazen Ghannam wrote:
> On Wed, Sep 10, 2025 at 02:47:16PM +0300, Nikolay Borisov wrote:
>>
>>
>> On 9/8/25 18:40, Yazen Ghannam wrote:
>>> Currently, MCA initialization is executed identically on each CPU as
>>> they are brought online. However, a number of MCA initialization tasks
>>> only need to be done once.
>>>
>>> Define a function to collect all 'global' init tasks and call this from
>>> the BSP only. Start with CPU features.
>>>
>>> Reviewed-by: Qiuxu Zhuo <qiuxu.zhuo@intel.com>
>>> Tested-by: Tony Luck <tony.luck@intel.com>
>>> Reviewed-by: Tony Luck <tony.luck@intel.com>
>>> Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
>>
>> <snip>
>>
>>> @@ -2240,6 +2233,27 @@ DEFINE_IDTENTRY_RAW(exc_machine_check)
>>> }
>>> #endif
>>> +void mca_bsp_init(struct cpuinfo_x86 *c)
>>> +{
>>> + u64 cap;
>>> +
>>> + if (!mce_available(c))
>>> + return;
>>> +
>>> + mce_flags.overflow_recov = cpu_feature_enabled(X86_FEATURE_OVERFLOW_RECOV);
>>> + mce_flags.succor = cpu_feature_enabled(X86_FEATURE_SUCCOR);
>>> + mce_flags.smca = cpu_feature_enabled(X86_FEATURE_SMCA);
>>> +
>>> + rdmsrq(MSR_IA32_MCG_CAP, cap);
>>> +
>>> + /* Use accurate RIP reporting if available. */
>>> + if ((cap & MCG_EXT_P) && MCG_EXT_CNT(cap) >= 9)
>>> + mca_cfg.rip_msr = MSR_IA32_MCG_EIP;
>>> +
>>> + if (cap & MCG_SER_P)
>>> + mca_cfg.ser = 1;
>>> +}
>>> +
>>
>>
>> LGTM
>>
>> Reviewed-by: Nikolay Borisov <nik.borisov@suse.com>
>
> Thanks Nikokay for the reviews.
>
>>
>> nit: One question though for those CPUs which consist of P+E cores, is it
>> mandated that both types of cores will have identical MCE architecture, I
>> assume the x86 world will be a lot more unified than Arm's big.LITTLE ?
>>
>
> I think technically no, they won't be mandated to be identical. We already
> have per-CPU feature bits, registers, etc. And in Scalable MCA there are
> also per-bank config bits.
>
> However, this doesn't mean we will have different MCE features between
> CPUs in a system just yet. The architects do try to make things flexible
> and scalable just in case there is a need in the future.
>
> We can code to the architectures and be *mostly* future-proof. But it's
> not guaranteed to be without bugs. And it's not guaranteed that every
> possible case will be used.
>
> Do you have any concerns or see any issues with the code today in this
> area? Maybe you're thinking we should have per-CPU config for feature
> bits? If so, I agree but we don't have a real need today AFAIK.
Yes. For example, the current code assumes that the value of
MSR_IA32_MCG_CAP, w.r.t. MCG_EXT_P or MCG_SER_P, will be identical for
all CPUs. That might be fine, but what if, in the future, we end up with
machines where MCG_CAP has different values for different CPUs/cores?
Or a system where CPUs have different X86_FEATURE_SMCA values?
OTOH, you are right that we can worry about this when the time comes,
since there is a practically infinite number of possible scenarios.
>
> Maybe others can comment too.
>
> Thanks,
> Yazen
* RE: [PATCH v6 02/15] x86/mce: Define BSP-only init
2025-09-10 11:47 ` Nikolay Borisov
2025-09-10 13:53 ` Yazen Ghannam
@ 2025-09-10 17:23 ` Luck, Tony
1 sibling, 0 replies; 37+ messages in thread
From: Luck, Tony @ 2025-09-10 17:23 UTC (permalink / raw)
To: Nikolay Borisov, Yazen Ghannam, x86@kernel.org, Rafael J. Wysocki
Cc: linux-kernel@vger.kernel.org, linux-edac@vger.kernel.org,
Smita.KoralahalliChannabasappa@amd.com, Zhuo, Qiuxu,
linux-acpi@vger.kernel.org
> nit: One question though for those CPUs which consist of P+E cores, is
> it mandated that both types of cores will have identical MCE
> architecture, I assume the x86 world will be a lot more unified than
> Arm's big.LITTLE ?
Intel P and E cores have the same architectural MCE behavior (though the
model specific bits like IA32_MCi_MISC and IA32_STATUS.MSCOD could
be different on cores in the same hybrid part).
-Tony
* [PATCH v6 03/15] x86/mce: Define BSP-only SMCA init
2025-09-08 15:40 [PATCH v6 00/15] AMD MCA interrupts rework Yazen Ghannam
2025-09-08 15:40 ` [PATCH v6 01/15] x86/mce: Set CR4.MCE last during init Yazen Ghannam
2025-09-08 15:40 ` [PATCH v6 02/15] x86/mce: Define BSP-only init Yazen Ghannam
@ 2025-09-08 15:40 ` Yazen Ghannam
2025-09-10 11:48 ` Nikolay Borisov
2025-09-08 15:40 ` [PATCH v6 04/15] x86/mce: Do 'UNKNOWN' vendor check early Yazen Ghannam
` (12 subsequent siblings)
15 siblings, 1 reply; 37+ messages in thread
From: Yazen Ghannam @ 2025-09-08 15:40 UTC (permalink / raw)
To: x86, Tony Luck, Rafael J. Wysocki
Cc: linux-kernel, linux-edac, Smita.KoralahalliChannabasappa,
Qiuxu Zhuo, Nikolay Borisov, linux-acpi, Yazen Ghannam
Currently on AMD systems, MCA interrupt handler functions are set during
CPU init. However, the functions only need to be set once for the whole
system.
Assign the handlers only during BSP init. Do so only for SMCA systems to
maintain the old behavior for legacy systems.
Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
---
Notes:
Link:
https://lore.kernel.org/r/20250825-wip-mca-updates-v5-9-865768a2eef8@amd.com
v5->v6:
* No change.
v4->v5:
* No change.
v3->v4:
* Change mce_smca_cpu_init() to smca_bsp_init().
v2->v3:
* No change.
v1->v2:
* New in v2.
arch/x86/kernel/cpu/mce/amd.c | 6 ++++++
arch/x86/kernel/cpu/mce/core.c | 3 +++
arch/x86/kernel/cpu/mce/internal.h | 2 ++
3 files changed, 11 insertions(+)
diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c
index 3c6c19eb0a18..7345e24bf658 100644
--- a/arch/x86/kernel/cpu/mce/amd.c
+++ b/arch/x86/kernel/cpu/mce/amd.c
@@ -684,6 +684,12 @@ void mce_amd_feature_init(struct cpuinfo_x86 *c)
deferred_error_interrupt_enable(c);
}
+void smca_bsp_init(void)
+{
+ mce_threshold_vector = amd_threshold_interrupt;
+ deferred_error_int_vector = amd_deferred_error_interrupt;
+}
+
/*
* DRAM ECC errors are reported in the Northbridge (bank 4) with
* Extended Error Code 8.
diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index 79f3dd7f7851..a8cb7ff53e32 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -2244,6 +2244,9 @@ void mca_bsp_init(struct cpuinfo_x86 *c)
mce_flags.succor = cpu_feature_enabled(X86_FEATURE_SUCCOR);
mce_flags.smca = cpu_feature_enabled(X86_FEATURE_SMCA);
+ if (mce_flags.smca)
+ smca_bsp_init();
+
rdmsrq(MSR_IA32_MCG_CAP, cap);
/* Use accurate RIP reporting if available. */
diff --git a/arch/x86/kernel/cpu/mce/internal.h b/arch/x86/kernel/cpu/mce/internal.h
index 64ac25b95360..6cb2995f0ec1 100644
--- a/arch/x86/kernel/cpu/mce/internal.h
+++ b/arch/x86/kernel/cpu/mce/internal.h
@@ -294,12 +294,14 @@ static __always_inline void smca_extract_err_addr(struct mce *m)
m->addr &= GENMASK_ULL(55, lsb);
}
+void smca_bsp_init(void);
#else
static inline void mce_threshold_create_device(unsigned int cpu) { }
static inline void mce_threshold_remove_device(unsigned int cpu) { }
static inline bool amd_filter_mce(struct mce *m) { return false; }
static inline bool amd_mce_usable_address(struct mce *m) { return false; }
static inline void smca_extract_err_addr(struct mce *m) { }
+static inline void smca_bsp_init(void) { }
#endif
#ifdef CONFIG_X86_ANCIENT_MCE
--
2.51.0
* Re: [PATCH v6 03/15] x86/mce: Define BSP-only SMCA init
2025-09-08 15:40 ` [PATCH v6 03/15] x86/mce: Define BSP-only SMCA init Yazen Ghannam
@ 2025-09-10 11:48 ` Nikolay Borisov
0 siblings, 0 replies; 37+ messages in thread
From: Nikolay Borisov @ 2025-09-10 11:48 UTC (permalink / raw)
To: Yazen Ghannam, x86, Tony Luck, Rafael J. Wysocki
Cc: linux-kernel, linux-edac, Smita.KoralahalliChannabasappa,
Qiuxu Zhuo, linux-acpi
On 9/8/25 18:40, Yazen Ghannam wrote:
> Currently on AMD systems, MCA interrupt handler functions are set during
> CPU init. However, the functions only need to be set once for the whole
> system.
>
> Assign the handlers only during BSP init. Do so only for SMCA systems to
> maintain the old behavior for legacy systems.
>
> Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
Reviewed-by: Nikolay Borisov <nik.borisov@suse.com>
* [PATCH v6 04/15] x86/mce: Do 'UNKNOWN' vendor check early
2025-09-08 15:40 [PATCH v6 00/15] AMD MCA interrupts rework Yazen Ghannam
` (2 preceding siblings ...)
2025-09-08 15:40 ` [PATCH v6 03/15] x86/mce: Define BSP-only SMCA init Yazen Ghannam
@ 2025-09-08 15:40 ` Yazen Ghannam
2025-09-10 13:27 ` Nikolay Borisov
2025-09-08 15:40 ` [PATCH v6 05/15] x86/mce: Separate global and per-CPU quirks Yazen Ghannam
` (11 subsequent siblings)
15 siblings, 1 reply; 37+ messages in thread
From: Yazen Ghannam @ 2025-09-08 15:40 UTC (permalink / raw)
To: x86, Tony Luck, Rafael J. Wysocki
Cc: linux-kernel, linux-edac, Smita.KoralahalliChannabasappa,
Qiuxu Zhuo, Nikolay Borisov, linux-acpi, Yazen Ghannam
The 'UNKNOWN' vendor check is handled as a quirk that is run on each
online CPU. However, all CPUs are expected to have the same vendor.
Move the 'UNKNOWN' vendor check to the BSP-only init so it is done early
and once. Remove the unnecessary return value from the quirks check.
Reviewed-by: Qiuxu Zhuo <qiuxu.zhuo@intel.com>
Tested-by: Tony Luck <tony.luck@intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
---
Notes:
Link:
https://lore.kernel.org/r/20250825-wip-mca-updates-v5-10-865768a2eef8@amd.com
v5->v6:
* No change.
v4->v5:
* No change.
v3->v4:
* No change.
v2->v3:
* Add tags from Qiuxu and Tony.
v1->v2:
* New in v2.
arch/x86/kernel/cpu/mce/core.c | 18 ++++++++----------
1 file changed, 8 insertions(+), 10 deletions(-)
diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index a8cb7ff53e32..515942cbfeb5 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -1977,14 +1977,11 @@ static void apply_quirks_zhaoxin(struct cpuinfo_x86 *c)
}
/* Add per CPU specific workarounds here */
-static bool __mcheck_cpu_apply_quirks(struct cpuinfo_x86 *c)
+static void __mcheck_cpu_apply_quirks(struct cpuinfo_x86 *c)
{
struct mca_config *cfg = &mca_cfg;
switch (c->x86_vendor) {
- case X86_VENDOR_UNKNOWN:
- pr_info("unknown CPU type - not enabling MCE support\n");
- return false;
case X86_VENDOR_AMD:
apply_quirks_amd(c);
break;
@@ -2000,8 +1997,6 @@ static bool __mcheck_cpu_apply_quirks(struct cpuinfo_x86 *c)
cfg->monarch_timeout = 0;
if (cfg->bootlog != 0)
cfg->panic_timeout = 30;
-
- return true;
}
static bool __mcheck_cpu_ancient_init(struct cpuinfo_x86 *c)
@@ -2240,6 +2235,12 @@ void mca_bsp_init(struct cpuinfo_x86 *c)
if (!mce_available(c))
return;
+ if (c->x86_vendor == X86_VENDOR_UNKNOWN) {
+ mca_cfg.disabled = 1;
+ pr_info("unknown CPU type - not enabling MCE support\n");
+ return;
+ }
+
mce_flags.overflow_recov = cpu_feature_enabled(X86_FEATURE_OVERFLOW_RECOV);
mce_flags.succor = cpu_feature_enabled(X86_FEATURE_SUCCOR);
mce_flags.smca = cpu_feature_enabled(X86_FEATURE_SMCA);
@@ -2274,10 +2275,7 @@ void mcheck_cpu_init(struct cpuinfo_x86 *c)
__mcheck_cpu_cap_init();
- if (!__mcheck_cpu_apply_quirks(c)) {
- mca_cfg.disabled = 1;
- return;
- }
+ __mcheck_cpu_apply_quirks(c);
if (!mce_gen_pool_init()) {
mca_cfg.disabled = 1;
--
2.51.0
* Re: [PATCH v6 04/15] x86/mce: Do 'UNKNOWN' vendor check early
2025-09-08 15:40 ` [PATCH v6 04/15] x86/mce: Do 'UNKNOWN' vendor check early Yazen Ghannam
@ 2025-09-10 13:27 ` Nikolay Borisov
0 siblings, 0 replies; 37+ messages in thread
From: Nikolay Borisov @ 2025-09-10 13:27 UTC (permalink / raw)
To: Yazen Ghannam, x86, Tony Luck, Rafael J. Wysocki
Cc: linux-kernel, linux-edac, Smita.KoralahalliChannabasappa,
Qiuxu Zhuo, linux-acpi
On 8.09.25 at 18:40, Yazen Ghannam wrote:
> The 'UNKNOWN' vendor check is handled as a quirk that is run on each
> online CPU. However, all CPUs are expected to have the same vendor.
>
> Move the 'UNKNOWN' vendor check to the BSP-only init so it is done early
> and once. Remove the unnecessary return value from the quirks check.
>
> Reviewed-by: Qiuxu Zhuo <qiuxu.zhuo@intel.com>
> Tested-by: Tony Luck <tony.luck@intel.com>
> Reviewed-by: Tony Luck <tony.luck@intel.com>
> Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
Reviewed-by: Nikolay Borisov <nik.borisov@suse.com>
* [PATCH v6 05/15] x86/mce: Separate global and per-CPU quirks
2025-09-08 15:40 [PATCH v6 00/15] AMD MCA interrupts rework Yazen Ghannam
` (3 preceding siblings ...)
2025-09-08 15:40 ` [PATCH v6 04/15] x86/mce: Do 'UNKNOWN' vendor check early Yazen Ghannam
@ 2025-09-08 15:40 ` Yazen Ghannam
2025-09-10 13:29 ` Nikolay Borisov
2025-09-08 15:40 ` [PATCH v6 06/15] x86/mce: Move machine_check_poll() status checks to helper functions Yazen Ghannam
` (10 subsequent siblings)
15 siblings, 1 reply; 37+ messages in thread
From: Yazen Ghannam @ 2025-09-08 15:40 UTC (permalink / raw)
To: x86, Tony Luck, Rafael J. Wysocki
Cc: linux-kernel, linux-edac, Smita.KoralahalliChannabasappa,
Qiuxu Zhuo, Nikolay Borisov, linux-acpi, Yazen Ghannam
Many quirks are global configuration settings and a handful apply to
each CPU.
Move the per-CPU quirks to vendor init to execute them on each online
CPU. Set the global quirks during BSP-only init so they're only executed
once and early.
Reviewed-by: Qiuxu Zhuo <qiuxu.zhuo@intel.com>
Tested-by: Tony Luck <tony.luck@intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
---
Notes:
Link:
https://lore.kernel.org/r/20250825-wip-mca-updates-v5-11-865768a2eef8@amd.com
v5->v6:
* No change.
v4->v5:
* Apply consistent naming to quirk functions.
v3->v4:
* Add newline in mce_amd_feature_init().
* Remove __mcheck_cpu_apply_quirks().
* Update code comment ref. __mcheck_cpu_apply_quirks().
v2->v3:
* Update code comment.
* Add tags from Qiuxu and Tony.
v1->v2:
* New in v2.
arch/x86/kernel/cpu/mce/amd.c | 24 ++++++++++++
arch/x86/kernel/cpu/mce/core.c | 85 +++++++++++------------------------------
arch/x86/kernel/cpu/mce/intel.c | 18 +++++++++
3 files changed, 65 insertions(+), 62 deletions(-)
diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c
index 7345e24bf658..b8aed0ac765c 100644
--- a/arch/x86/kernel/cpu/mce/amd.c
+++ b/arch/x86/kernel/cpu/mce/amd.c
@@ -646,6 +646,28 @@ static void disable_err_thresholding(struct cpuinfo_x86 *c, unsigned int bank)
wrmsrq(MSR_K7_HWCR, hwcr);
}
+static void amd_apply_cpu_quirks(struct cpuinfo_x86 *c)
+{
+ struct mce_bank *mce_banks = this_cpu_ptr(mce_banks_array);
+
+ /* This should be disabled by the BIOS, but isn't always */
+ if (c->x86 == 15 && this_cpu_read(mce_num_banks) > 4) {
+ /*
+ * disable GART TBL walk error reporting, which
+ * trips off incorrectly with the IOMMU & 3ware
+ * & Cerberus:
+ */
+ clear_bit(10, (unsigned long *)&mce_banks[4].ctl);
+ }
+
+ /*
+ * Various K7s with broken bank 0 around. Always disable
+ * by default.
+ */
+ if (c->x86 == 6 && this_cpu_read(mce_num_banks))
+ mce_banks[0].ctl = 0;
+}
+
/* cpu init entry point, called from mce.c with preempt off */
void mce_amd_feature_init(struct cpuinfo_x86 *c)
{
@@ -653,6 +675,8 @@ void mce_amd_feature_init(struct cpuinfo_x86 *c)
u32 low = 0, high = 0, address = 0;
int offset = -1;
+ amd_apply_cpu_quirks(c);
+
mce_flags.amd_threshold = 1;
for (bank = 0; bank < this_cpu_read(mce_num_banks); ++bank) {
diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index 515942cbfeb5..7fd86c8006ba 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -1807,8 +1807,9 @@ static void __mcheck_cpu_mce_banks_init(void)
struct mce_bank *b = &mce_banks[i];
/*
- * Init them all, __mcheck_cpu_apply_quirks() is going to apply
- * the required vendor quirks before
+ * Init them all by default.
+ *
+ * The required vendor quirks will be applied before
* __mcheck_cpu_init_prepare_banks() does the final bank setup.
*/
b->ctl = -1ULL;
@@ -1880,20 +1881,8 @@ static void __mcheck_cpu_init_prepare_banks(void)
}
}
-static void apply_quirks_amd(struct cpuinfo_x86 *c)
+static void amd_apply_global_quirks(struct cpuinfo_x86 *c)
{
- struct mce_bank *mce_banks = this_cpu_ptr(mce_banks_array);
-
- /* This should be disabled by the BIOS, but isn't always */
- if (c->x86 == 15 && this_cpu_read(mce_num_banks) > 4) {
- /*
- * disable GART TBL walk error reporting, which
- * trips off incorrectly with the IOMMU & 3ware
- * & Cerberus:
- */
- clear_bit(10, (unsigned long *)&mce_banks[4].ctl);
- }
-
if (c->x86 < 0x11 && mca_cfg.bootlog < 0) {
/*
* Lots of broken BIOS around that don't clear them
@@ -1902,13 +1891,6 @@ static void apply_quirks_amd(struct cpuinfo_x86 *c)
mca_cfg.bootlog = 0;
}
- /*
- * Various K7s with broken bank 0 around. Always disable
- * by default.
- */
- if (c->x86 == 6 && this_cpu_read(mce_num_banks))
- mce_banks[0].ctl = 0;
-
/*
* overflow_recov is supported for F15h Models 00h-0fh
* even though we don't have a CPUID bit for it.
@@ -1920,25 +1902,12 @@ static void apply_quirks_amd(struct cpuinfo_x86 *c)
mce_flags.zen_ifu_quirk = 1;
}
-static void apply_quirks_intel(struct cpuinfo_x86 *c)
+static void intel_apply_global_quirks(struct cpuinfo_x86 *c)
{
- struct mce_bank *mce_banks = this_cpu_ptr(mce_banks_array);
-
/* Older CPUs (prior to family 6) don't need quirks. */
if (c->x86_vfm < INTEL_PENTIUM_PRO)
return;
- /*
- * SDM documents that on family 6 bank 0 should not be written
- * because it aliases to another special BIOS controlled
- * register.
- * But it's not aliased anymore on model 0x1a+
- * Don't ignore bank 0 completely because there could be a
- * valid event later, merely don't write CTL0.
- */
- if (c->x86_vfm < INTEL_NEHALEM_EP && this_cpu_read(mce_num_banks))
- mce_banks[0].init = false;
-
/*
* All newer Intel systems support MCE broadcasting. Enable
* synchronization with a one second timeout.
@@ -1964,7 +1933,7 @@ static void apply_quirks_intel(struct cpuinfo_x86 *c)
mce_flags.skx_repmov_quirk = 1;
}
-static void apply_quirks_zhaoxin(struct cpuinfo_x86 *c)
+static void zhaoxin_apply_global_quirks(struct cpuinfo_x86 *c)
{
/*
* All newer Zhaoxin CPUs support MCE broadcasting. Enable
@@ -1976,29 +1945,6 @@ static void apply_quirks_zhaoxin(struct cpuinfo_x86 *c)
}
}
-/* Add per CPU specific workarounds here */
-static void __mcheck_cpu_apply_quirks(struct cpuinfo_x86 *c)
-{
- struct mca_config *cfg = &mca_cfg;
-
- switch (c->x86_vendor) {
- case X86_VENDOR_AMD:
- apply_quirks_amd(c);
- break;
- case X86_VENDOR_INTEL:
- apply_quirks_intel(c);
- break;
- case X86_VENDOR_ZHAOXIN:
- apply_quirks_zhaoxin(c);
- break;
- }
-
- if (cfg->monarch_timeout < 0)
- cfg->monarch_timeout = 0;
- if (cfg->bootlog != 0)
- cfg->panic_timeout = 30;
-}
-
static bool __mcheck_cpu_ancient_init(struct cpuinfo_x86 *c)
{
if (c->x86 != 5)
@@ -2256,6 +2202,23 @@ void mca_bsp_init(struct cpuinfo_x86 *c)
if (cap & MCG_SER_P)
mca_cfg.ser = 1;
+
+ switch (c->x86_vendor) {
+ case X86_VENDOR_AMD:
+ amd_apply_global_quirks(c);
+ break;
+ case X86_VENDOR_INTEL:
+ intel_apply_global_quirks(c);
+ break;
+ case X86_VENDOR_ZHAOXIN:
+ zhaoxin_apply_global_quirks(c);
+ break;
+ }
+
+ if (mca_cfg.monarch_timeout < 0)
+ mca_cfg.monarch_timeout = 0;
+ if (mca_cfg.bootlog != 0)
+ mca_cfg.panic_timeout = 30;
}
/*
@@ -2275,8 +2238,6 @@ void mcheck_cpu_init(struct cpuinfo_x86 *c)
__mcheck_cpu_cap_init();
- __mcheck_cpu_apply_quirks(c);
-
if (!mce_gen_pool_init()) {
mca_cfg.disabled = 1;
pr_emerg("Couldn't allocate MCE records pool!\n");
diff --git a/arch/x86/kernel/cpu/mce/intel.c b/arch/x86/kernel/cpu/mce/intel.c
index 9b149b9c4109..4655223ba560 100644
--- a/arch/x86/kernel/cpu/mce/intel.c
+++ b/arch/x86/kernel/cpu/mce/intel.c
@@ -468,8 +468,26 @@ static void intel_imc_init(struct cpuinfo_x86 *c)
}
}
+static void intel_apply_cpu_quirks(struct cpuinfo_x86 *c)
+{
+ /*
+ * SDM documents that on family 6 bank 0 should not be written
+ * because it aliases to another special BIOS controlled
+ * register.
+ * But it's not aliased anymore on model 0x1a+
+ * Don't ignore bank 0 completely because there could be a
+ * valid event later, merely don't write CTL0.
+ *
+ * Older CPUs (prior to family 6) can't reach this point and already
+ * return early due to the check of __mcheck_cpu_ancient_init().
+ */
+ if (c->x86_vfm < INTEL_NEHALEM_EP && this_cpu_read(mce_num_banks))
+ this_cpu_ptr(mce_banks_array)[0].init = false;
+}
+
void mce_intel_feature_init(struct cpuinfo_x86 *c)
{
+ intel_apply_cpu_quirks(c);
intel_init_cmci();
intel_init_lmce();
intel_imc_init(c);
--
2.51.0
* Re: [PATCH v6 05/15] x86/mce: Separate global and per-CPU quirks
2025-09-08 15:40 ` [PATCH v6 05/15] x86/mce: Separate global and per-CPU quirks Yazen Ghannam
@ 2025-09-10 13:29 ` Nikolay Borisov
0 siblings, 0 replies; 37+ messages in thread
From: Nikolay Borisov @ 2025-09-10 13:29 UTC (permalink / raw)
To: Yazen Ghannam, x86, Tony Luck, Rafael J. Wysocki
Cc: linux-kernel, linux-edac, Smita.KoralahalliChannabasappa,
Qiuxu Zhuo, linux-acpi
On 8.09.25 at 18:40, Yazen Ghannam wrote:
> Many quirks are global configuration settings and a handful apply to
> each CPU.
>
> Move the per-CPU quirks to vendor init to execute them on each online
> CPU. Set the global quirks during BSP-only init so they're only executed
> once and early.
>
> Reviewed-by: Qiuxu Zhuo <qiuxu.zhuo@intel.com>
> Tested-by: Tony Luck <tony.luck@intel.com>
> Reviewed-by: Tony Luck <tony.luck@intel.com>
> Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
Reviewed-by: Nikolay Borisov <nik.borisov@suse.com>
* [PATCH v6 06/15] x86/mce: Move machine_check_poll() status checks to helper functions
2025-09-08 15:40 [PATCH v6 00/15] AMD MCA interrupts rework Yazen Ghannam
` (4 preceding siblings ...)
2025-09-08 15:40 ` [PATCH v6 05/15] x86/mce: Separate global and per-CPU quirks Yazen Ghannam
@ 2025-09-08 15:40 ` Yazen Ghannam
2025-09-10 15:09 ` Nikolay Borisov
2025-09-08 15:40 ` [PATCH v6 07/15] x86/mce: Add clear_bank() helper Yazen Ghannam
` (9 subsequent siblings)
15 siblings, 1 reply; 37+ messages in thread
From: Yazen Ghannam @ 2025-09-08 15:40 UTC (permalink / raw)
To: x86, Tony Luck, Rafael J. Wysocki
Cc: linux-kernel, linux-edac, Smita.KoralahalliChannabasappa,
Qiuxu Zhuo, Nikolay Borisov, linux-acpi, Yazen Ghannam
There are a number of generic and vendor-specific status checks in
machine_check_poll(). These are used to determine if an error should be
skipped.
Move these into helper functions. Future vendor-specific checks will be
added to the helpers.
Reviewed-by: Qiuxu Zhuo <qiuxu.zhuo@intel.com>
Tested-by: Tony Luck <tony.luck@intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
---
Notes:
Link:
https://lore.kernel.org/r/20250825-wip-mca-updates-v5-12-865768a2eef8@amd.com
v5->v6:
* No change.
v4->v5:
* No change.
v3->v4:
* No change.
v2->v3:
* Add tags from Qiuxu and Tony.
v1->v2:
* Change log_poll_error() to should_log_poll_error().
* Keep code comment.
arch/x86/kernel/cpu/mce/core.c | 88 +++++++++++++++++++++++-------------------
1 file changed, 48 insertions(+), 40 deletions(-)
diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index 7fd86c8006ba..5dec0da6169e 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -714,6 +714,52 @@ static noinstr void mce_read_aux(struct mce_hw_err *err, int i)
DEFINE_PER_CPU(unsigned, mce_poll_count);
+/*
+ * Newer Intel systems that support software error
+ * recovery need to make additional checks. Other
+ * CPUs should skip over uncorrected errors, but log
+ * everything else.
+ */
+static bool ser_should_log_poll_error(struct mce *m)
+{
+ /* Log "not enabled" (speculative) errors */
+ if (!(m->status & MCI_STATUS_EN))
+ return true;
+
+ /*
+ * Log UCNA (SDM: 15.6.3 "UCR Error Classification")
+ * UC == 1 && PCC == 0 && S == 0
+ */
+ if (!(m->status & MCI_STATUS_PCC) && !(m->status & MCI_STATUS_S))
+ return true;
+
+ return false;
+}
+
+static bool should_log_poll_error(enum mcp_flags flags, struct mce_hw_err *err)
+{
+ struct mce *m = &err->m;
+
+ /* If this entry is not valid, ignore it. */
+ if (!(m->status & MCI_STATUS_VAL))
+ return false;
+
+ /*
+ * If we are logging everything (at CPU online) or this
+ * is a corrected error, then we must log it.
+ */
+ if ((flags & MCP_UC) || !(m->status & MCI_STATUS_UC))
+ return true;
+
+ if (mca_cfg.ser)
+ return ser_should_log_poll_error(m);
+
+ if (m->status & MCI_STATUS_UC)
+ return false;
+
+ return true;
+}
+
/*
* Poll for corrected events or events that happened before reset.
* Those are just logged through /dev/mcelog.
@@ -765,48 +811,10 @@ void machine_check_poll(enum mcp_flags flags, mce_banks_t *b)
if (!mca_cfg.cmci_disabled)
mce_track_storm(m);
- /* If this entry is not valid, ignore it */
- if (!(m->status & MCI_STATUS_VAL))
+ /* Verify that the error should be logged based on hardware conditions. */
+ if (!should_log_poll_error(flags, &err))
continue;
- /*
- * If we are logging everything (at CPU online) or this
- * is a corrected error, then we must log it.
- */
- if ((flags & MCP_UC) || !(m->status & MCI_STATUS_UC))
- goto log_it;
-
- /*
- * Newer Intel systems that support software error
- * recovery need to make additional checks. Other
- * CPUs should skip over uncorrected errors, but log
- * everything else.
- */
- if (!mca_cfg.ser) {
- if (m->status & MCI_STATUS_UC)
- continue;
- goto log_it;
- }
-
- /* Log "not enabled" (speculative) errors */
- if (!(m->status & MCI_STATUS_EN))
- goto log_it;
-
- /*
- * Log UCNA (SDM: 15.6.3 "UCR Error Classification")
- * UC == 1 && PCC == 0 && S == 0
- */
- if (!(m->status & MCI_STATUS_PCC) && !(m->status & MCI_STATUS_S))
- goto log_it;
-
- /*
- * Skip anything else. Presumption is that our read of this
- * bank is racing with a machine check. Leave the log alone
- * for do_machine_check() to deal with it.
- */
- continue;
-
-log_it:
mce_read_aux(&err, i);
m->severity = mce_severity(m, NULL, NULL, false);
/*
--
2.51.0
* [PATCH v6 07/15] x86/mce: Add clear_bank() helper
2025-09-08 15:40 [PATCH v6 00/15] AMD MCA interrupts rework Yazen Ghannam
` (5 preceding siblings ...)
2025-09-08 15:40 ` [PATCH v6 06/15] x86/mce: Move machine_check_poll() status checks to helper functions Yazen Ghannam
@ 2025-09-08 15:40 ` Yazen Ghannam
2025-09-10 15:22 ` Nikolay Borisov
2025-09-08 15:40 ` [PATCH v6 08/15] x86/mce: Unify AMD THR handler with MCA Polling Yazen Ghannam
` (8 subsequent siblings)
15 siblings, 1 reply; 37+ messages in thread
From: Yazen Ghannam @ 2025-09-08 15:40 UTC (permalink / raw)
To: x86, Tony Luck, Rafael J. Wysocki
Cc: linux-kernel, linux-edac, Smita.KoralahalliChannabasappa,
Qiuxu Zhuo, Nikolay Borisov, linux-acpi, Yazen Ghannam
Add a helper at the end of the MCA polling function to collect vendor
and/or feature actions.
Start with a basic skeleton for now. Actions for AMD thresholding and
deferred errors will be added later.
Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
---
Notes:
Link:
https://lore.kernel.org/r/20250903094859.GGaLgPC_eWQgAqpHHb@fat_crate.local
v5->v6:
* New in v6.
arch/x86/kernel/cpu/mce/amd.c | 5 +++++
arch/x86/kernel/cpu/mce/core.c | 12 ++++++++++--
arch/x86/kernel/cpu/mce/internal.h | 3 +++
3 files changed, 18 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c
index b8aed0ac765c..d6906442f49b 100644
--- a/arch/x86/kernel/cpu/mce/amd.c
+++ b/arch/x86/kernel/cpu/mce/amd.c
@@ -955,6 +955,11 @@ static void amd_threshold_interrupt(void)
}
}
+void amd_clear_bank(struct mce *m)
+{
+ mce_wrmsrq(mca_msr_reg(m->bank, MCA_STATUS), 0);
+}
+
/*
* Sysfs Interface
*/
diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index 5dec0da6169e..06645f56b564 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -423,7 +423,7 @@ noinstr u64 mce_rdmsrq(u32 msr)
return EAX_EDX_VAL(val, low, high);
}
-static noinstr void mce_wrmsrq(u32 msr, u64 v)
+noinstr void mce_wrmsrq(u32 msr, u64 v)
{
u32 low, high;
@@ -760,6 +760,14 @@ static bool should_log_poll_error(enum mcp_flags flags, struct mce_hw_err *err)
return true;
}
+static void clear_bank(struct mce *m)
+{
+ if (m->cpuvendor == X86_VENDOR_AMD)
+ return amd_clear_bank(m);
+
+ mce_wrmsrq(mca_msr_reg(m->bank, MCA_STATUS), 0);
+}
+
/*
* Poll for corrected events or events that happened before reset.
* Those are just logged through /dev/mcelog.
@@ -834,7 +842,7 @@ void machine_check_poll(enum mcp_flags flags, mce_banks_t *b)
/*
* Clear state for this bank.
*/
- mce_wrmsrq(mca_msr_reg(i, MCA_STATUS), 0);
+ clear_bank(m);
}
/*
diff --git a/arch/x86/kernel/cpu/mce/internal.h b/arch/x86/kernel/cpu/mce/internal.h
index 6cb2995f0ec1..b0e00ec5cc8c 100644
--- a/arch/x86/kernel/cpu/mce/internal.h
+++ b/arch/x86/kernel/cpu/mce/internal.h
@@ -269,6 +269,7 @@ void mce_threshold_create_device(unsigned int cpu);
void mce_threshold_remove_device(unsigned int cpu);
extern bool amd_filter_mce(struct mce *m);
bool amd_mce_usable_address(struct mce *m);
+void amd_clear_bank(struct mce *m);
/*
* If MCA_CONFIG[McaLsbInStatusSupported] is set, extract ErrAddr in bits
@@ -300,6 +301,7 @@ static inline void mce_threshold_create_device(unsigned int cpu) { }
static inline void mce_threshold_remove_device(unsigned int cpu) { }
static inline bool amd_filter_mce(struct mce *m) { return false; }
static inline bool amd_mce_usable_address(struct mce *m) { return false; }
+static inline void amd_clear_bank(struct mce *m) { }
static inline void smca_extract_err_addr(struct mce *m) { }
static inline void smca_bsp_init(void) { }
#endif
@@ -319,6 +321,7 @@ static __always_inline void winchip_machine_check(struct pt_regs *regs) {}
#endif
noinstr u64 mce_rdmsrq(u32 msr);
+noinstr void mce_wrmsrq(u32 msr, u64 v);
static __always_inline u32 mca_msr_reg(int bank, enum mca_msr reg)
{
--
2.51.0
* [PATCH v6 08/15] x86/mce: Unify AMD THR handler with MCA Polling
2025-09-08 15:40 [PATCH v6 00/15] AMD MCA interrupts rework Yazen Ghannam
` (6 preceding siblings ...)
2025-09-08 15:40 ` [PATCH v6 07/15] x86/mce: Add clear_bank() helper Yazen Ghannam
@ 2025-09-08 15:40 ` Yazen Ghannam
2025-09-08 15:40 ` [PATCH v6 09/15] x86/mce: Unify AMD DFR " Yazen Ghannam
` (7 subsequent siblings)
15 siblings, 0 replies; 37+ messages in thread
From: Yazen Ghannam @ 2025-09-08 15:40 UTC (permalink / raw)
To: x86, Tony Luck, Rafael J. Wysocki
Cc: linux-kernel, linux-edac, Smita.KoralahalliChannabasappa,
Qiuxu Zhuo, Nikolay Borisov, linux-acpi, Yazen Ghannam
AMD systems optionally support an MCA thresholding interrupt. The
interrupt should be used as another signal to trigger MCA polling. This
is similar to how the Intel Corrected Machine Check interrupt (CMCI) is
handled.
AMD MCA thresholding is managed using the MCA_MISC registers within an
MCA bank. The OS will need to modify the hardware error count field in
order to reset the threshold limit and rearm the interrupt. Management
of the MCA_MISC register should be done as a follow up to the basic MCA
polling flow. It should not be the main focus of the interrupt handler.
Furthermore, future systems will have the ability to send an MCA
thresholding interrupt to the OS even when the OS does not manage the
feature, i.e. MCA_MISC registers are Read-as-Zero/Locked.
Call the common MCA polling function when handling the MCA thresholding
interrupt. This will allow the OS to find any valid errors whether or
not the MCA thresholding feature is OS-managed. Also, this allows the
common MCA polling options and kernel parameters to apply to AMD
systems.
Add a callback to the MCA polling function to check and reset any
threshold blocks that have reached their threshold limit.
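For reference, a rough sketch of the resulting interrupt path (combining
this patch with the clear_bank() helper added earlier in the series):

	amd_threshold_interrupt()
	    machine_check_poll(MCP_TIMESTAMP, thr_intr_banks)
	        /* for each bank with a logged error */
	        clear_bank(m)
	            amd_clear_bank(m)
	                amd_reset_thr_limit(m->bank)	/* rearm blocks that hit threshold_limit */
	                /* then clear MCA_STATUS */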
Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
---
Notes:
Link:
https://lore.kernel.org/r/20250825-wip-mca-updates-v5-13-865768a2eef8@amd.com
v5->v6:
* Move bank/block reset code to new helper.
v4->v5:
* No change.
v3->v4:
* No change.
v2->v3:
* Add tags from Qiuxu and Tony.
v1->v2:
* Start collecting per-CPU items in a struct.
* Keep and use mce_flags.amd_threshold.
arch/x86/kernel/cpu/mce/amd.c | 51 +++++++++++++++++++------------------------
1 file changed, 23 insertions(+), 28 deletions(-)
diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c
index d6906442f49b..ac6a98aa7bc2 100644
--- a/arch/x86/kernel/cpu/mce/amd.c
+++ b/arch/x86/kernel/cpu/mce/amd.c
@@ -54,6 +54,12 @@
static bool thresholding_irq_en;
+struct mce_amd_cpu_data {
+ mce_banks_t thr_intr_banks;
+};
+
+static DEFINE_PER_CPU_READ_MOSTLY(struct mce_amd_cpu_data, mce_amd_data);
+
static const char * const th_names[] = {
"load_store",
"insn_fetch",
@@ -556,6 +562,7 @@ prepare_threshold_block(unsigned int bank, unsigned int block, u32 addr,
if (!b.interrupt_capable)
goto done;
+ __set_bit(bank, this_cpu_ptr(&mce_amd_data)->thr_intr_banks);
b.interrupt_enable = 1;
if (!mce_flags.smca) {
@@ -896,12 +903,7 @@ static void amd_deferred_error_interrupt(void)
log_error_deferred(bank);
}
-static void log_error_thresholding(unsigned int bank, u64 misc)
-{
- _log_error_deferred(bank, misc);
-}
-
-static void log_and_reset_block(struct threshold_block *block)
+static void reset_block(struct threshold_block *block)
{
struct thresh_restart tr;
u32 low = 0, high = 0;
@@ -915,23 +917,14 @@ static void log_and_reset_block(struct threshold_block *block)
if (!(high & MASK_OVERFLOW_HI))
return;
- /* Log the MCE which caused the threshold event. */
- log_error_thresholding(block->bank, ((u64)high << 32) | low);
-
- /* Reset threshold block after logging error. */
memset(&tr, 0, sizeof(tr));
tr.b = block;
threshold_restart_block(&tr);
}
-/*
- * Threshold interrupt handler will service THRESHOLD_APIC_VECTOR. The interrupt
- * goes off when error_count reaches threshold_limit.
- */
-static void amd_threshold_interrupt(void)
+static void amd_reset_thr_limit(unsigned int bank)
{
- struct threshold_bank **bp = this_cpu_read(threshold_banks), *thr_bank;
- unsigned int bank, cpu = smp_processor_id();
+ struct threshold_bank **bp = this_cpu_read(threshold_banks);
struct threshold_block *block, *tmp;
/*
@@ -939,24 +932,26 @@ static void amd_threshold_interrupt(void)
* handler is installed at boot time, but on a hotplug event the
* interrupt might fire before the data has been initialized.
*/
- if (!bp)
+ if (!bp || !bp[bank])
return;
- for (bank = 0; bank < this_cpu_read(mce_num_banks); ++bank) {
- if (!(per_cpu(bank_map, cpu) & BIT_ULL(bank)))
- continue;
-
- thr_bank = bp[bank];
- if (!thr_bank)
- continue;
+ list_for_each_entry_safe(block, tmp, &bp[bank]->miscj, miscj)
+ reset_block(block);
+}
- list_for_each_entry_safe(block, tmp, &thr_bank->miscj, miscj)
- log_and_reset_block(block);
- }
+/*
+ * Threshold interrupt handler will service THRESHOLD_APIC_VECTOR. The interrupt
+ * goes off when error_count reaches threshold_limit.
+ */
+static void amd_threshold_interrupt(void)
+{
+ machine_check_poll(MCP_TIMESTAMP, &this_cpu_ptr(&mce_amd_data)->thr_intr_banks);
}
void amd_clear_bank(struct mce *m)
{
+ amd_reset_thr_limit(m->bank);
+
mce_wrmsrq(mca_msr_reg(m->bank, MCA_STATUS), 0);
}
--
2.51.0
* [PATCH v6 09/15] x86/mce: Unify AMD DFR handler with MCA Polling
2025-09-08 15:40 [PATCH v6 00/15] AMD MCA interrupts rework Yazen Ghannam
` (7 preceding siblings ...)
2025-09-08 15:40 ` [PATCH v6 08/15] x86/mce: Unify AMD THR handler with MCA Polling Yazen Ghannam
@ 2025-09-08 15:40 ` Yazen Ghannam
2025-09-08 15:40 ` [PATCH v6 10/15] x86/mce/amd: Enable interrupt vectors once per-CPU on SMCA systems Yazen Ghannam
` (6 subsequent siblings)
15 siblings, 0 replies; 37+ messages in thread
From: Yazen Ghannam @ 2025-09-08 15:40 UTC (permalink / raw)
To: x86, Tony Luck, Rafael J. Wysocki
Cc: linux-kernel, linux-edac, Smita.KoralahalliChannabasappa,
Qiuxu Zhuo, Nikolay Borisov, linux-acpi, Yazen Ghannam
AMD systems optionally support a deferred error interrupt. The interrupt
should be used as another signal to trigger MCA polling. This is similar
to how other MCA interrupts are handled.
Deferred errors do not require any special handling related to the
interrupt, e.g. resetting or rearming the interrupt, etc.
However, Scalable MCA systems include a pair of registers, MCA_DESTAT
and MCA_DEADDR, that should be checked for valid errors. This check
should be done whenever MCA registers are polled. Currently, the
deferred error interrupt does this check, but the MCA polling function
does not.
Call the MCA polling function when handling the deferred error
interrupt. This keeps all "polling" cases in a common function.
Call the polling function only for banks that have the deferred error
interrupt enabled.
Add an SMCA status check helper. This will do the same status check and
register clearing that the interrupt handler has done. And it extends
the common polling flow to find AMD deferred errors.
Remove old code whose functionality is already covered in the common MCA
code.
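A condensed sketch of how these scenarios map onto the polling flow after
this patch (see smca_should_log_poll_error() below; not verbatim code):

	/* per polled bank on an SMCA system */
	if (MCA_STATUS is valid) {
		if (MCA_STATUS.Deferred)
			clear MCA_DESTAT;		/* scenario 2 */
		/* log from MCA_STATUS as usual */
	} else if (MCA_DESTAT is valid) {
		m->kflags |= MCE_CHECK_DFR_REGS;	/* scenario 3 */
		/* log from MCA_DESTAT/MCA_DEADDR, clear MCA_DESTAT afterwards */
	}

Non-SMCA systems (scenario 1) keep using the regular MCA_STATUS path.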
Reviewed-by: Qiuxu Zhuo <qiuxu.zhuo@intel.com>
Tested-by: Tony Luck <tony.luck@intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
---
Notes:
Link:
https://lore.kernel.org/r/20250825-wip-mca-updates-v5-14-865768a2eef8@amd.com
v5->v6:
* Move status clearing code to new helper.
v4->v5:
* No change.
v3->v4:
* Add kflag for checking DFR registers.
v2->v3:
* Add tags from Qiuxu and Tony.
v1->v2:
* Keep code comment.
* Log directly from helper function rather than pass values.
Link:
https://lore.kernel.org/r/20250213-wip-mca-updates-v2-13-3636547fe05f@amd.com
v2->v3:
* Add tags from Qiuxu and Tony.
v1->v2:
* Keep code comment.
* Log directly from helper function rather than pass values.
arch/x86/include/asm/mce.h | 6 +++
arch/x86/kernel/cpu/mce/amd.c | 108 ++++-------------------------------------
arch/x86/kernel/cpu/mce/core.c | 45 ++++++++++++++++-
3 files changed, 59 insertions(+), 100 deletions(-)
diff --git a/arch/x86/include/asm/mce.h b/arch/x86/include/asm/mce.h
index 31e3cb550fb3..7d6588195d56 100644
--- a/arch/x86/include/asm/mce.h
+++ b/arch/x86/include/asm/mce.h
@@ -165,6 +165,12 @@
*/
#define MCE_IN_KERNEL_COPYIN BIT_ULL(7)
+/*
+ * Indicates that handler should check and clear Deferred error registers
+ * rather than common ones.
+ */
+#define MCE_CHECK_DFR_REGS BIT_ULL(8)
+
/*
* This structure contains all data related to the MCE log. Also
* carries a signature to make it easier to find from external
diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c
index ac6a98aa7bc2..1b1b83b3aef9 100644
--- a/arch/x86/kernel/cpu/mce/amd.c
+++ b/arch/x86/kernel/cpu/mce/amd.c
@@ -56,6 +56,7 @@ static bool thresholding_irq_en;
struct mce_amd_cpu_data {
mce_banks_t thr_intr_banks;
+ mce_banks_t dfr_intr_banks;
};
static DEFINE_PER_CPU_READ_MOSTLY(struct mce_amd_cpu_data, mce_amd_data);
@@ -300,8 +301,10 @@ static void smca_configure(unsigned int bank, unsigned int cpu)
* APIC based interrupt. First, check that no interrupt has been
* set.
*/
- if ((low & BIT(5)) && !((high >> 5) & 0x3))
+ if ((low & BIT(5)) && !((high >> 5) & 0x3)) {
+ __set_bit(bank, this_cpu_ptr(&mce_amd_data)->dfr_intr_banks);
high |= BIT(5);
+ }
this_cpu_ptr(mce_banks_array)[bank].lsb_in_status = !!(low & BIT(8));
@@ -792,37 +795,6 @@ bool amd_mce_usable_address(struct mce *m)
return false;
}
-static void __log_error(unsigned int bank, u64 status, u64 addr, u64 misc)
-{
- struct mce_hw_err err;
- struct mce *m = &err.m;
-
- mce_prep_record(&err);
-
- m->status = status;
- m->misc = misc;
- m->bank = bank;
- m->tsc = rdtsc();
-
- if (m->status & MCI_STATUS_ADDRV) {
- m->addr = addr;
-
- smca_extract_err_addr(m);
- }
-
- if (mce_flags.smca) {
- rdmsrq(MSR_AMD64_SMCA_MCx_IPID(bank), m->ipid);
-
- if (m->status & MCI_STATUS_SYNDV) {
- rdmsrq(MSR_AMD64_SMCA_MCx_SYND(bank), m->synd);
- rdmsrq(MSR_AMD64_SMCA_MCx_SYND1(bank), err.vendor.amd.synd1);
- rdmsrq(MSR_AMD64_SMCA_MCx_SYND2(bank), err.vendor.amd.synd2);
- }
- }
-
- mce_log(&err);
-}
-
DEFINE_IDTENTRY_SYSVEC(sysvec_deferred_error)
{
trace_deferred_error_apic_entry(DEFERRED_ERROR_VECTOR);
@@ -832,75 +804,10 @@ DEFINE_IDTENTRY_SYSVEC(sysvec_deferred_error)
apic_eoi();
}
-/*
- * Returns true if the logged error is deferred. False, otherwise.
- */
-static inline bool
-_log_error_bank(unsigned int bank, u32 msr_stat, u32 msr_addr, u64 misc)
-{
- u64 status, addr = 0;
-
- rdmsrq(msr_stat, status);
- if (!(status & MCI_STATUS_VAL))
- return false;
-
- if (status & MCI_STATUS_ADDRV)
- rdmsrq(msr_addr, addr);
-
- __log_error(bank, status, addr, misc);
-
- wrmsrq(msr_stat, 0);
-
- return status & MCI_STATUS_DEFERRED;
-}
-
-static bool _log_error_deferred(unsigned int bank, u32 misc)
-{
- if (!_log_error_bank(bank, mca_msr_reg(bank, MCA_STATUS),
- mca_msr_reg(bank, MCA_ADDR), misc))
- return false;
-
- /*
- * Non-SMCA systems don't have MCA_DESTAT/MCA_DEADDR registers.
- * Return true here to avoid accessing these registers.
- */
- if (!mce_flags.smca)
- return true;
-
- /* Clear MCA_DESTAT if the deferred error was logged from MCA_STATUS. */
- wrmsrq(MSR_AMD64_SMCA_MCx_DESTAT(bank), 0);
- return true;
-}
-
-/*
- * We have three scenarios for checking for Deferred errors:
- *
- * 1) Non-SMCA systems check MCA_STATUS and log error if found.
- * 2) SMCA systems check MCA_STATUS. If error is found then log it and also
- * clear MCA_DESTAT.
- * 3) SMCA systems check MCA_DESTAT, if error was not found in MCA_STATUS, and
- * log it.
- */
-static void log_error_deferred(unsigned int bank)
-{
- if (_log_error_deferred(bank, 0))
- return;
-
- /*
- * Only deferred errors are logged in MCA_DE{STAT,ADDR} so just check
- * for a valid error.
- */
- _log_error_bank(bank, MSR_AMD64_SMCA_MCx_DESTAT(bank),
- MSR_AMD64_SMCA_MCx_DEADDR(bank), 0);
-}
-
/* APIC interrupt handler for deferred errors */
static void amd_deferred_error_interrupt(void)
{
- unsigned int bank;
-
- for (bank = 0; bank < this_cpu_read(mce_num_banks); ++bank)
- log_error_deferred(bank);
+ machine_check_poll(MCP_TIMESTAMP, &this_cpu_ptr(&mce_amd_data)->dfr_intr_banks);
}
static void reset_block(struct threshold_block *block)
@@ -952,7 +859,10 @@ void amd_clear_bank(struct mce *m)
{
amd_reset_thr_limit(m->bank);
- mce_wrmsrq(mca_msr_reg(m->bank, MCA_STATUS), 0);
+ if (m->kflags & MCE_CHECK_DFR_REGS)
+ mce_wrmsrq(MSR_AMD64_SMCA_MCx_DESTAT(m->bank), 0);
+ else
+ mce_wrmsrq(mca_msr_reg(m->bank, MCA_STATUS), 0);
}
/*
diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index 06645f56b564..e2d51609d2cb 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -687,7 +687,10 @@ static noinstr void mce_read_aux(struct mce_hw_err *err, int i)
m->misc = mce_rdmsrq(mca_msr_reg(i, MCA_MISC));
if (m->status & MCI_STATUS_ADDRV) {
- m->addr = mce_rdmsrq(mca_msr_reg(i, MCA_ADDR));
+ if (m->kflags & MCE_CHECK_DFR_REGS)
+ m->addr = mce_rdmsrq(MSR_AMD64_SMCA_MCx_DEADDR(i));
+ else
+ m->addr = mce_rdmsrq(mca_msr_reg(i, MCA_ADDR));
/*
* Mask the reported address by the reported granularity.
@@ -714,6 +717,43 @@ static noinstr void mce_read_aux(struct mce_hw_err *err, int i)
DEFINE_PER_CPU(unsigned, mce_poll_count);
+/*
+ * We have three scenarios for checking for Deferred errors:
+ *
+ * 1) Non-SMCA systems check MCA_STATUS and log error if found.
+ * 2) SMCA systems check MCA_STATUS. If error is found then log it and also
+ * clear MCA_DESTAT.
+ * 3) SMCA systems check MCA_DESTAT, if error was not found in MCA_STATUS, and
+ * log it.
+ */
+static bool smca_should_log_poll_error(enum mcp_flags flags, struct mce_hw_err *err)
+{
+ struct mce *m = &err->m;
+
+ /*
+ * If this is a deferred error found in MCA_STATUS, then clear
+ * the redundant data from the MCA_DESTAT register.
+ */
+ if (m->status & MCI_STATUS_VAL) {
+ if (m->status & MCI_STATUS_DEFERRED)
+ mce_wrmsrq(MSR_AMD64_SMCA_MCx_DESTAT(m->bank), 0);
+
+ return true;
+ }
+
+ /*
+ * If the MCA_DESTAT register has valid data, then use
+ * it as the status register.
+ */
+ m->status = mce_rdmsrq(MSR_AMD64_SMCA_MCx_DESTAT(m->bank));
+
+ if (!(m->status & MCI_STATUS_VAL))
+ return false;
+
+ m->kflags |= MCE_CHECK_DFR_REGS;
+ return true;
+}
+
/*
* Newer Intel systems that support software error
* recovery need to make additional checks. Other
@@ -740,6 +780,9 @@ static bool should_log_poll_error(enum mcp_flags flags, struct mce_hw_err *err)
{
struct mce *m = &err->m;
+ if (mce_flags.smca)
+ return smca_should_log_poll_error(flags, err);
+
/* If this entry is not valid, ignore it. */
if (!(m->status & MCI_STATUS_VAL))
return false;
--
2.51.0
^ permalink raw reply related [flat|nested] 37+ messages in thread
* [PATCH v6 10/15] x86/mce/amd: Enable interrupt vectors once per-CPU on SMCA systems
2025-09-08 15:40 [PATCH v6 00/15] AMD MCA interrupts rework Yazen Ghannam
` (8 preceding siblings ...)
2025-09-08 15:40 ` [PATCH v6 09/15] x86/mce: Unify AMD DFR " Yazen Ghannam
@ 2025-09-08 15:40 ` Yazen Ghannam
2025-09-11 10:22 ` Nikolay Borisov
2025-09-08 15:40 ` [PATCH v6 11/15] x86/mce/amd: Support SMCA Corrected Error Interrupt Yazen Ghannam
` (5 subsequent siblings)
15 siblings, 1 reply; 37+ messages in thread
From: Yazen Ghannam @ 2025-09-08 15:40 UTC (permalink / raw)
To: x86, Tony Luck, Rafael J. Wysocki
Cc: linux-kernel, linux-edac, Smita.KoralahalliChannabasappa,
Qiuxu Zhuo, Nikolay Borisov, linux-acpi, Yazen Ghannam
Scalable MCA systems have a per-CPU register that gives the APIC LVT
offset for the thresholding and deferred error interrupts.
Currently, this register is read once to set up the deferred error
interrupt and then read again for each thresholding block. Furthermore,
the APIC LVT registers are configured each time, but they only need to
be configured once per-CPU.
Move the APIC LVT setup to the early part of CPU init, so that the
registers are set up once. Also, this ensures that the kernel is ready
to service the interrupts before the individual error sources (each MCA
bank) are enabled.
Apply this change only to SMCA systems to avoid breaking any legacy
behavior. The deferred error interrupt is technically advertised by the
SUCCOR feature. However, this was first made available on SMCA systems.
Therefore, only set up the deferred error interrupt on SMCA systems and
simplify the code.
Guidance from hardware designers is that the LVT offsets provided from
the platform should be used. The kernel should not try to enforce
specific values. However, the kernel should check that an LVT offset is
not reused for multiple sources.
Therefore, remove the extra checking and value enforcement from the MCE
code. The "reuse/conflict" case is already handled in
setup_APIC_eilvt().
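For reference, a minimal sketch of how the two offsets are carved out of
that per-CPU register, using the existing macros (the function name is made
up, and the bit positions are inferred from the masks and shifts in the diff
below, so treat the comments as illustrative rather than authoritative):

static void smca_lvt_offsets_sketch(u8 *thr_offset, u8 *dfr_offset)
{
	u64 cfg;

	/* MSR 0xC0000410: MSR_CU_DEF_ERR, a.k.a. MCA_INTR_CFG in the APM. */
	if (rdmsrq_safe(MSR_CU_DEF_ERR, &cfg))
		return;

	*thr_offset = (cfg & SMCA_THR_LVT_OFF) >> 12;	/* bits 15:12 */
	*dfr_offset = (cfg & MASK_DEF_LVTOFF) >> 4;	/* bits 7:4 */
}

Each offset is then handed exactly once to setup_APIC_eilvt(), which is
where the reuse/conflict case mentioned above is caught.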
Tested-by: Tony Luck <tony.luck@intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
---
Notes:
Link:
https://lore.kernel.org/r/20250825-wip-mca-updates-v5-15-865768a2eef8@amd.com
v5->v6:
* Applied "bools to flags" and other fixups from Boris.
v4->v5:
* Added back to set.
* Updated commit message with more details.
v3->v4:
* Dropped from set.
v2->v3:
* Add tags from Tony.
v1->v2:
* Use new per-CPU struct.
* Don't set up interrupt vectors.
arch/x86/kernel/cpu/mce/amd.c | 121 ++++++++++++++++++------------------------
1 file changed, 53 insertions(+), 68 deletions(-)
diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c
index 1b1b83b3aef9..a6f5c9339d7c 100644
--- a/arch/x86/kernel/cpu/mce/amd.c
+++ b/arch/x86/kernel/cpu/mce/amd.c
@@ -43,9 +43,6 @@
/* Deferred error settings */
#define MSR_CU_DEF_ERR 0xC0000410
#define MASK_DEF_LVTOFF 0x000000F0
-#define MASK_DEF_INT_TYPE 0x00000006
-#define DEF_LVT_OFF 0x2
-#define DEF_INT_TYPE_APIC 0x2
/* Scalable MCA: */
@@ -57,6 +54,10 @@ static bool thresholding_irq_en;
struct mce_amd_cpu_data {
mce_banks_t thr_intr_banks;
mce_banks_t dfr_intr_banks;
+
+ u32 thr_intr_en: 1,
+ dfr_intr_en: 1,
+ __resv: 30;
};
static DEFINE_PER_CPU_READ_MOSTLY(struct mce_amd_cpu_data, mce_amd_data);
@@ -271,6 +272,7 @@ void (*deferred_error_int_vector)(void) = default_deferred_error_interrupt;
static void smca_configure(unsigned int bank, unsigned int cpu)
{
+ struct mce_amd_cpu_data *data = this_cpu_ptr(&mce_amd_data);
u8 *bank_counts = this_cpu_ptr(smca_bank_counts);
const struct smca_hwid *s_hwid;
unsigned int i, hwid_mcatype;
@@ -301,8 +303,8 @@ static void smca_configure(unsigned int bank, unsigned int cpu)
* APIC based interrupt. First, check that no interrupt has been
* set.
*/
- if ((low & BIT(5)) && !((high >> 5) & 0x3)) {
- __set_bit(bank, this_cpu_ptr(&mce_amd_data)->dfr_intr_banks);
+ if ((low & BIT(5)) && !((high >> 5) & 0x3) && data->dfr_intr_en) {
+ __set_bit(bank, data->dfr_intr_banks);
high |= BIT(5);
}
@@ -377,6 +379,14 @@ static bool lvt_off_valid(struct threshold_block *b, int apic, u32 lo, u32 hi)
{
int msr = (hi & MASK_LVTOFF_HI) >> 20;
+ /*
+ * On SMCA CPUs, LVT offset is programmed at a different MSR, and
+ * the BIOS provides the value. The original field where LVT offset
+ * was set is reserved. Return early here:
+ */
+ if (mce_flags.smca)
+ return false;
+
if (apic < 0) {
pr_err(FW_BUG "cpu %d, failed to setup threshold interrupt "
"for bank %d, block %d (MSR%08X=0x%x%08x)\n", b->cpu,
@@ -385,14 +395,6 @@ static bool lvt_off_valid(struct threshold_block *b, int apic, u32 lo, u32 hi)
}
if (apic != msr) {
- /*
- * On SMCA CPUs, LVT offset is programmed at a different MSR, and
- * the BIOS provides the value. The original field where LVT offset
- * was set is reserved. Return early here:
- */
- if (mce_flags.smca)
- return false;
-
pr_err(FW_BUG "cpu %d, invalid threshold interrupt offset %d "
"for bank %d, block %d (MSR%08X=0x%x%08x)\n",
b->cpu, apic, b->bank, b->block, b->address, hi, lo);
@@ -473,41 +475,6 @@ static int setup_APIC_mce_threshold(int reserved, int new)
return reserved;
}
-static int setup_APIC_deferred_error(int reserved, int new)
-{
- if (reserved < 0 && !setup_APIC_eilvt(new, DEFERRED_ERROR_VECTOR,
- APIC_EILVT_MSG_FIX, 0))
- return new;
-
- return reserved;
-}
-
-static void deferred_error_interrupt_enable(struct cpuinfo_x86 *c)
-{
- u32 low = 0, high = 0;
- int def_offset = -1, def_new;
-
- if (rdmsr_safe(MSR_CU_DEF_ERR, &low, &high))
- return;
-
- def_new = (low & MASK_DEF_LVTOFF) >> 4;
- if (!(low & MASK_DEF_LVTOFF)) {
- pr_err(FW_BUG "Your BIOS is not setting up LVT offset 0x2 for deferred error IRQs correctly.\n");
- def_new = DEF_LVT_OFF;
- low = (low & ~MASK_DEF_LVTOFF) | (DEF_LVT_OFF << 4);
- }
-
- def_offset = setup_APIC_deferred_error(def_offset, def_new);
- if ((def_offset == def_new) &&
- (deferred_error_int_vector != amd_deferred_error_interrupt))
- deferred_error_int_vector = amd_deferred_error_interrupt;
-
- if (!mce_flags.smca)
- low = (low & ~MASK_DEF_INT_TYPE) | DEF_INT_TYPE_APIC;
-
- wrmsr(MSR_CU_DEF_ERR, low, high);
-}
-
static u32 get_block_address(u32 current_addr, u32 low, u32 high,
unsigned int bank, unsigned int block,
unsigned int cpu)
@@ -543,12 +510,10 @@ static u32 get_block_address(u32 current_addr, u32 low, u32 high,
return addr;
}
-static int
-prepare_threshold_block(unsigned int bank, unsigned int block, u32 addr,
- int offset, u32 misc_high)
+static int prepare_threshold_block(unsigned int bank, unsigned int block, u32 addr,
+ int offset, u32 misc_high)
{
unsigned int cpu = smp_processor_id();
- u32 smca_low, smca_high;
struct threshold_block b;
int new;
@@ -568,18 +533,10 @@ prepare_threshold_block(unsigned int bank, unsigned int block, u32 addr,
__set_bit(bank, this_cpu_ptr(&mce_amd_data)->thr_intr_banks);
b.interrupt_enable = 1;
- if (!mce_flags.smca) {
- new = (misc_high & MASK_LVTOFF_HI) >> 20;
- goto set_offset;
- }
-
- /* Gather LVT offset for thresholding: */
- if (rdmsr_safe(MSR_CU_DEF_ERR, &smca_low, &smca_high))
- goto out;
-
- new = (smca_low & SMCA_THR_LVT_OFF) >> 12;
+ if (mce_flags.smca)
+ goto done;
-set_offset:
+ new = (misc_high & MASK_LVTOFF_HI) >> 20;
offset = setup_APIC_mce_threshold(offset, new);
if (offset == new)
thresholding_irq_en = true;
@@ -587,7 +544,6 @@ prepare_threshold_block(unsigned int bank, unsigned int block, u32 addr,
done:
mce_threshold_block_init(&b, offset);
-out:
return offset;
}
@@ -678,6 +634,32 @@ static void amd_apply_cpu_quirks(struct cpuinfo_x86 *c)
mce_banks[0].ctl = 0;
}
+/*
+ * Enable the APIC LVT interrupt vectors once per-CPU. This should be done before hardware is
+ * ready to send interrupts.
+ *
+ * Individual error sources are enabled later during per-bank init.
+ */
+static void smca_enable_interrupt_vectors(void)
+{
+ struct mce_amd_cpu_data *data = this_cpu_ptr(&mce_amd_data);
+ u64 mca_intr_cfg, offset;
+
+ if (!mce_flags.smca || !mce_flags.succor)
+ return;
+
+ if (rdmsrq_safe(MSR_CU_DEF_ERR, &mca_intr_cfg))
+ return;
+
+ offset = (mca_intr_cfg & SMCA_THR_LVT_OFF) >> 12;
+ if (!setup_APIC_eilvt(offset, THRESHOLD_APIC_VECTOR, APIC_EILVT_MSG_FIX, 0))
+ data->thr_intr_en = 1;
+
+ offset = (mca_intr_cfg & MASK_DEF_LVTOFF) >> 4;
+ if (!setup_APIC_eilvt(offset, DEFERRED_ERROR_VECTOR, APIC_EILVT_MSG_FIX, 0))
+ data->dfr_intr_en = 1;
+}
+
/* cpu init entry point, called from mce.c with preempt off */
void mce_amd_feature_init(struct cpuinfo_x86 *c)
{
@@ -689,10 +671,16 @@ void mce_amd_feature_init(struct cpuinfo_x86 *c)
mce_flags.amd_threshold = 1;
+ smca_enable_interrupt_vectors();
+
for (bank = 0; bank < this_cpu_read(mce_num_banks); ++bank) {
- if (mce_flags.smca)
+ if (mce_flags.smca) {
smca_configure(bank, cpu);
+ if (!this_cpu_ptr(&mce_amd_data)->thr_intr_en)
+ continue;
+ }
+
disable_err_thresholding(c, bank);
for (block = 0; block < NR_BLOCKS; ++block) {
@@ -713,9 +701,6 @@ void mce_amd_feature_init(struct cpuinfo_x86 *c)
offset = prepare_threshold_block(bank, block, address, offset, high);
}
}
-
- if (mce_flags.succor)
- deferred_error_interrupt_enable(c);
}
void smca_bsp_init(void)
--
2.51.0
^ permalink raw reply related [flat|nested] 37+ messages in thread
* Re: [PATCH v6 10/15] x86/mce/amd: Enable interrupt vectors once per-CPU on SMCA systems
2025-09-08 15:40 ` [PATCH v6 10/15] x86/mce/amd: Enable interrupt vectors once per-CPU on SMCA systems Yazen Ghannam
@ 2025-09-11 10:22 ` Nikolay Borisov
2025-09-11 15:53 ` Yazen Ghannam
0 siblings, 1 reply; 37+ messages in thread
From: Nikolay Borisov @ 2025-09-11 10:22 UTC (permalink / raw)
To: Yazen Ghannam, x86, Tony Luck, Rafael J. Wysocki
Cc: linux-kernel, linux-edac, Smita.KoralahalliChannabasappa,
Qiuxu Zhuo, linux-acpi
On 9/8/25 18:40, Yazen Ghannam wrote:
> Scalable MCA systems have a per-CPU register that gives the APIC LVT
> offset for the thresholding and deferred error interrupts.
>
> Currently, this register is read once to set up the deferred error
> interrupt and then read again for each thresholding block. Furthermore,
> the APIC LVT registers are configured each time, but they only need to
> be configured once per-CPU.
>
> Move the APIC LVT setup to the early part of CPU init, so that the
> registers are set up once. Also, this ensures that the kernel is ready
> to service the interrupts before the individual error sources (each MCA
> bank) are enabled.
>
> Apply this change only to SMCA systems to avoid breaking any legacy
> behavior. The deferred error interrupt is technically advertised by the
> SUCCOR feature. However, this was first made available on SMCA systems.
> Therefore, only set up the deferred error interrupt on SMCA systems and
> simplify the code.
>
> Guidance from hardware designers is that the LVT offsets provided from
> the platform should be used. The kernel should not try to enforce
> specific values. However, the kernel should check that an LVT offset is
> not reused for multiple sources.
>
> Therefore, remove the extra checking and value enforcement from the MCE
> code. The "reuse/conflict" case is already handled in
> setup_APIC_eilvt().
>
> Tested-by: Tony Luck <tony.luck@intel.com>
> Reviewed-by: Tony Luck <tony.luck@intel.com>
> Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
> ---
>
> Notes:
> Link:
> https://lore.kernel.org/r/20250825-wip-mca-updates-v5-15-865768a2eef8@amd.com
>
> v5->v6:
> * Applied "bools to flags" and other fixups from Boris.
>
> v4->v5:
> * Added back to set.
> * Updated commit message with more details.
>
> v3->v4:
> * Dropped from set.
>
> v2->v3:
> * Add tags from Tony.
>
> v1->v2:
> * Use new per-CPU struct.
> * Don't set up interrupt vectors.
>
> arch/x86/kernel/cpu/mce/amd.c | 121 ++++++++++++++++++------------------------
> 1 file changed, 53 insertions(+), 68 deletions(-)
>
> diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c
> index 1b1b83b3aef9..a6f5c9339d7c 100644
> --- a/arch/x86/kernel/cpu/mce/amd.c
> +++ b/arch/x86/kernel/cpu/mce/amd.c
> @@ -43,9 +43,6 @@
> /* Deferred error settings */
> #define MSR_CU_DEF_ERR 0xC0000410
nit: While touching this code, why not finally rename this in line with
the APM, section 9.3.1.4: MCA_INTR_CFG?
Perhaps as a separate patch. I see that you did send a patch containing
this rename:
https://lore.kernel.org/all/20231118193248.1296798-13-yazen.ghannam@amd.com/
But I guess it didn't land.
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [PATCH v6 10/15] x86/mce/amd: Enable interrupt vectors once per-CPU on SMCA systems
2025-09-11 10:22 ` Nikolay Borisov
@ 2025-09-11 15:53 ` Yazen Ghannam
0 siblings, 0 replies; 37+ messages in thread
From: Yazen Ghannam @ 2025-09-11 15:53 UTC (permalink / raw)
To: Nikolay Borisov
Cc: x86, Tony Luck, Rafael J. Wysocki, linux-kernel, linux-edac,
Smita.KoralahalliChannabasappa, Qiuxu Zhuo, linux-acpi
On Thu, Sep 11, 2025 at 01:22:10PM +0300, Nikolay Borisov wrote:
>
>
> On 9/8/25 18:40, Yazen Ghannam wrote:
> > Scalable MCA systems have a per-CPU register that gives the APIC LVT
> > offset for the thresholding and deferred error interrupts.
> >
> > Currently, this register is read once to set up the deferred error
> > interrupt and then read again for each thresholding block. Furthermore,
> > the APIC LVT registers are configured each time, but they only need to
> > be configured once per-CPU.
> >
> > Move the APIC LVT setup to the early part of CPU init, so that the
> > registers are set up once. Also, this ensures that the kernel is ready
> > to service the interrupts before the individual error sources (each MCA
> > bank) are enabled.
> >
> > Apply this change only to SMCA systems to avoid breaking any legacy
> > behavior. The deferred error interrupt is technically advertised by the
> > SUCCOR feature. However, this was first made available on SMCA systems.
> > Therefore, only set up the deferred error interrupt on SMCA systems and
> > simplify the code.
> >
> > Guidance from hardware designers is that the LVT offsets provided from
> > the platform should be used. The kernel should not try to enforce
> > specific values. However, the kernel should check that an LVT offset is
> > not reused for multiple sources.
> >
> > Therefore, remove the extra checking and value enforcement from the MCE
> > code. The "reuse/conflict" case is already handled in
> > setup_APIC_eilvt().
> >
> > Tested-by: Tony Luck <tony.luck@intel.com>
> > Reviewed-by: Tony Luck <tony.luck@intel.com>
> > Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
> > ---
> >
> > Notes:
> > Link:
> > https://lore.kernel.org/r/20250825-wip-mca-updates-v5-15-865768a2eef8@amd.com
> > v5->v6:
> > * Applied "bools to flags" and other fixups from Boris.
> > v4->v5:
> > * Added back to set.
> > * Updated commit message with more details.
> > v3->v4:
> > * Dropped from set.
> > v2->v3:
> > * Add tags from Tony.
> > v1->v2:
> > * Use new per-CPU struct.
> > * Don't set up interrupt vectors.
> >
> > arch/x86/kernel/cpu/mce/amd.c | 121 ++++++++++++++++++------------------------
> > 1 file changed, 53 insertions(+), 68 deletions(-)
> >
> > diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c
> > index 1b1b83b3aef9..a6f5c9339d7c 100644
> > --- a/arch/x86/kernel/cpu/mce/amd.c
> > +++ b/arch/x86/kernel/cpu/mce/amd.c
> > @@ -43,9 +43,6 @@
> > /* Deferred error settings */
> > #define MSR_CU_DEF_ERR 0xC0000410
>
> nit: While touching this code why not finally rename this in line with the
> APM, section 9.3.1.4: MCA_INTR_CFG
>
> Perhaps as a separate patch. I see that you did send a patch containing this
> rename:
> https://lore.kernel.org/all/20231118193248.1296798-13-yazen.ghannam@amd.com/
> But I guess it didn't land.
Yep, thanks for noticing. :)
IIRC, I tried to reduce this set down to (mostly) functional changes.
I think that there is still more worthwhile refactoring to do.
Thanks,
Yazen
^ permalink raw reply [flat|nested] 37+ messages in thread
* [PATCH v6 11/15] x86/mce/amd: Support SMCA Corrected Error Interrupt
2025-09-08 15:40 [PATCH v6 00/15] AMD MCA interrupts rework Yazen Ghannam
` (9 preceding siblings ...)
2025-09-08 15:40 ` [PATCH v6 10/15] x86/mce/amd: Enable interrupt vectors once per-CPU on SMCA systems Yazen Ghannam
@ 2025-09-08 15:40 ` Yazen Ghannam
2025-09-08 15:40 ` [PATCH v6 12/15] x86/mce/amd: Remove redundant reset_block() Yazen Ghannam
` (4 subsequent siblings)
15 siblings, 0 replies; 37+ messages in thread
From: Yazen Ghannam @ 2025-09-08 15:40 UTC (permalink / raw)
To: x86, Tony Luck, Rafael J. Wysocki
Cc: linux-kernel, linux-edac, Smita.KoralahalliChannabasappa,
Qiuxu Zhuo, Nikolay Borisov, linux-acpi, Yazen Ghannam
AMD systems optionally support MCA thresholding which provides the
ability for hardware to send an interrupt when a set error threshold is
reached. This feature counts errors of all severities, but it is
commonly used to report correctable errors with an interrupt rather than
polling.
Scalable MCA systems allow the Platform to take control of this feature.
In this case, the OS will not see the feature configuration and control
bits in the MCA_MISC* registers. The OS will not receive the MCA
thresholding interrupt, and it will need to poll for correctable errors.
A "corrected error interrupt" will be available on Scalable MCA systems.
This will be used in the same configuration where the Platform controls
MCA thresholding. However, the Platform will now be able to send the
MCA thresholding interrupt to the OS.
Check for, and enable, this feature during per-CPU SMCA init.
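One detail worth spelling out for the hunk below: MCA_CONFIG is a 64-bit
register accessed here as a (low, high) pair of 32-bit halves, so register
bit 40 is bit 8 of 'high'. A tiny sketch (the macro names are invented for
illustration; the bit numbers come from the comment in the hunk):

#define MCA_CONFIG_INTPRESENT	BIT(10)	/* 'low' half:  MCA_CONFIG[10] */
#define MCA_CONFIG_INTEN	BIT(8)	/* 'high' half: MCA_CONFIG[40], i.e. 40 - 32 */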
Tested-by: Tony Luck <tony.luck@intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
---
Notes:
Link:
https://lore.kernel.org/r/20250825-wip-mca-updates-v5-16-865768a2eef8@amd.com
v5->v6:
* No change.
v4->v5:
* No change.
v3->v4:
* Add code comment describing bits.
v2->v3:
* Add tags from Tony.
v1->v2:
* Use new per-CPU struct.
arch/x86/kernel/cpu/mce/amd.c | 17 +++++++++++++++++
1 file changed, 17 insertions(+)
diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c
index a6f5c9339d7c..34268940c88a 100644
--- a/arch/x86/kernel/cpu/mce/amd.c
+++ b/arch/x86/kernel/cpu/mce/amd.c
@@ -308,6 +308,23 @@ static void smca_configure(unsigned int bank, unsigned int cpu)
high |= BIT(5);
}
+ /*
+ * SMCA Corrected Error Interrupt
+ *
+ * MCA_CONFIG[IntPresent] is bit 10, and tells us if the bank can
+ * send an MCA Thresholding interrupt without the OS initializing
+ * this feature. This can be used if the threshold limit is managed
+ * by the platform.
+ *
+ * MCA_CONFIG[IntEn] is bit 40 (8 in the high portion of the MSR).
+ * The OS should set this to inform the platform that the OS is ready
+ * to handle the MCA Thresholding interrupt.
+ */
+ if ((low & BIT(10)) && data->thr_intr_en) {
+ __set_bit(bank, data->thr_intr_banks);
+ high |= BIT(8);
+ }
+
this_cpu_ptr(mce_banks_array)[bank].lsb_in_status = !!(low & BIT(8));
wrmsr(smca_config, low, high);
--
2.51.0
^ permalink raw reply related [flat|nested] 37+ messages in thread
* [PATCH v6 12/15] x86/mce/amd: Remove redundant reset_block()
2025-09-08 15:40 [PATCH v6 00/15] AMD MCA interrupts rework Yazen Ghannam
` (10 preceding siblings ...)
2025-09-08 15:40 ` [PATCH v6 11/15] x86/mce/amd: Support SMCA Corrected Error Interrupt Yazen Ghannam
@ 2025-09-08 15:40 ` Yazen Ghannam
2025-09-11 14:42 ` Nikolay Borisov
2025-09-08 15:40 ` [PATCH v6 13/15] x86/mce/amd: Define threshold restart function for banks Yazen Ghannam
` (3 subsequent siblings)
15 siblings, 1 reply; 37+ messages in thread
From: Yazen Ghannam @ 2025-09-08 15:40 UTC (permalink / raw)
To: x86, Tony Luck, Rafael J. Wysocki
Cc: linux-kernel, linux-edac, Smita.KoralahalliChannabasappa,
Qiuxu Zhuo, Nikolay Borisov, linux-acpi, Yazen Ghannam
Many of the checks in reset_block() are done again in
threshold_restart_block(), the underlying block reset function. So drop
the redundant checks.
Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
---
Notes:
Link:
https://lore.kernel.org/r/20250825-wip-mca-updates-v5-17-865768a2eef8@amd.com
v5->v6:
* No change.
v4->v5:
* No change.
v3->v4:
* New in v4.
arch/x86/kernel/cpu/mce/amd.c | 28 +++++++---------------------
1 file changed, 7 insertions(+), 21 deletions(-)
diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c
index 34268940c88a..9ca4079ff342 100644
--- a/arch/x86/kernel/cpu/mce/amd.c
+++ b/arch/x86/kernel/cpu/mce/amd.c
@@ -812,29 +812,11 @@ static void amd_deferred_error_interrupt(void)
machine_check_poll(MCP_TIMESTAMP, &this_cpu_ptr(&mce_amd_data)->dfr_intr_banks);
}
-static void reset_block(struct threshold_block *block)
-{
- struct thresh_restart tr;
- u32 low = 0, high = 0;
-
- if (!block)
- return;
-
- if (rdmsr_safe(block->address, &low, &high))
- return;
-
- if (!(high & MASK_OVERFLOW_HI))
- return;
-
- memset(&tr, 0, sizeof(tr));
- tr.b = block;
- threshold_restart_block(&tr);
-}
-
static void amd_reset_thr_limit(unsigned int bank)
{
struct threshold_bank **bp = this_cpu_read(threshold_banks);
struct threshold_block *block, *tmp;
+ struct thresh_restart tr;
/*
* Validate that the threshold bank has been initialized already. The
@@ -844,8 +826,12 @@ static void amd_reset_thr_limit(unsigned int bank)
if (!bp || !bp[bank])
return;
- list_for_each_entry_safe(block, tmp, &bp[bank]->miscj, miscj)
- reset_block(block);
+ memset(&tr, 0, sizeof(tr));
+
+ list_for_each_entry_safe(block, tmp, &bp[bank]->miscj, miscj) {
+ tr.b = block;
+ threshold_restart_block(&tr);
+ }
}
/*
--
2.51.0
^ permalink raw reply related [flat|nested] 37+ messages in thread
* Re: [PATCH v6 12/15] x86/mce/amd: Remove redundant reset_block()
2025-09-08 15:40 ` [PATCH v6 12/15] x86/mce/amd: Remove redundant reset_block() Yazen Ghannam
@ 2025-09-11 14:42 ` Nikolay Borisov
2025-09-11 16:11 ` Yazen Ghannam
0 siblings, 1 reply; 37+ messages in thread
From: Nikolay Borisov @ 2025-09-11 14:42 UTC (permalink / raw)
To: Yazen Ghannam, x86, Tony Luck, Rafael J. Wysocki
Cc: linux-kernel, linux-edac, Smita.KoralahalliChannabasappa,
Qiuxu Zhuo, linux-acpi
On 8.09.25 г. 18:40 ч., Yazen Ghannam wrote:
> Many of the checks in reset_block() are done again in the block reset
> function. So drop the redundant checks.
>
> Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
> ---
>
> Notes:
> Link:
> https://lore.kernel.org/r/20250825-wip-mca-updates-v5-17-865768a2eef8@amd.com
>
> v5->v6:
> * No change.
>
> v4->v5:
> * No change.
>
> v3->v4:
> * New in v4.
>
> arch/x86/kernel/cpu/mce/amd.c | 28 +++++++---------------------
> 1 file changed, 7 insertions(+), 21 deletions(-)
>
> diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c
> index 34268940c88a..9ca4079ff342 100644
> --- a/arch/x86/kernel/cpu/mce/amd.c
> +++ b/arch/x86/kernel/cpu/mce/amd.c
> @@ -812,29 +812,11 @@ static void amd_deferred_error_interrupt(void)
> machine_check_poll(MCP_TIMESTAMP, &this_cpu_ptr(&mce_amd_data)->dfr_intr_banks);
> }
>
> -static void reset_block(struct threshold_block *block)
> -{
> - struct thresh_restart tr;
> - u32 low = 0, high = 0;
> -
> - if (!block)
> - return;
> -
> - if (rdmsr_safe(block->address, &low, &high))
> - return;
This is being replaced by rdmsr. I guess it's safe because the fact
that we are processing a block which is already on the bank list means
it's unlikely the rdmsr will fault.
> -
> - if (!(high & MASK_OVERFLOW_HI))
> - return;
nit: However, now, if the overflow bit (MASK_OVERFLOW_HI) is not set, a
write to the MSR will still be performed, with the effect that IntType
is cleared (hi &= ~MASK_INT_TYPE_HI in threshold_restart_block()) and
MASK_COUNT_EN_HI is set. That's different from the existing code,
although it might be OK.
> -
> - memset(&tr, 0, sizeof(tr));
> - tr.b = block;
> - threshold_restart_block(&tr);
> -}
> -
> static void amd_reset_thr_limit(unsigned int bank)
> {
> struct threshold_bank **bp = this_cpu_read(threshold_banks);
> struct threshold_block *block, *tmp;
> + struct thresh_restart tr;
>
> /*
> * Validate that the threshold bank has been initialized already. The
> @@ -844,8 +826,12 @@ static void amd_reset_thr_limit(unsigned int bank)
> if (!bp || !bp[bank])
> return;
>
> - list_for_each_entry_safe(block, tmp, &bp[bank]->miscj, miscj)
> - reset_block(block);
> + memset(&tr, 0, sizeof(tr));
> +
> + list_for_each_entry_safe(block, tmp, &bp[bank]->miscj, miscj) {
> + tr.b = block;
> + threshold_restart_block(&tr);
> + }
> }
>
> /*
>
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [PATCH v6 12/15] x86/mce/amd: Remove redundant reset_block()
2025-09-11 14:42 ` Nikolay Borisov
@ 2025-09-11 16:11 ` Yazen Ghannam
0 siblings, 0 replies; 37+ messages in thread
From: Yazen Ghannam @ 2025-09-11 16:11 UTC (permalink / raw)
To: Nikolay Borisov
Cc: x86, Tony Luck, Rafael J. Wysocki, linux-kernel, linux-edac,
Smita.KoralahalliChannabasappa, Qiuxu Zhuo, linux-acpi
On Thu, Sep 11, 2025 at 05:42:39PM +0300, Nikolay Borisov wrote:
>
>
> On 8.09.25 г. 18:40 ч., Yazen Ghannam wrote:
> > Many of the checks in reset_block() are done again in the block reset
> > function. So drop the redundant checks.
> >
> > Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
>
>
> > ---
> >
> > Notes:
> > Link:
> > https://lore.kernel.org/r/20250825-wip-mca-updates-v5-17-865768a2eef8@amd.com
> > v5->v6:
> > * No change.
> > v4->v5:
> > * No change.
> > v3->v4:
> > * New in v4.
> >
> > arch/x86/kernel/cpu/mce/amd.c | 28 +++++++---------------------
> > 1 file changed, 7 insertions(+), 21 deletions(-)
> >
> > diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c
> > index 34268940c88a..9ca4079ff342 100644
> > --- a/arch/x86/kernel/cpu/mce/amd.c
> > +++ b/arch/x86/kernel/cpu/mce/amd.c
> > @@ -812,29 +812,11 @@ static void amd_deferred_error_interrupt(void)
> > machine_check_poll(MCP_TIMESTAMP, &this_cpu_ptr(&mce_amd_data)->dfr_intr_banks);
> > }
> > -static void reset_block(struct threshold_block *block)
> > -{
> > - struct thresh_restart tr;
> > - u32 low = 0, high = 0;
> > -
> > - if (!block)
> > - return;
> > -
> > - if (rdmsr_safe(block->address, &low, &high))
> > - return;
>
>
> This is being replaced by rdmsr, I guess it's safe because the fact we are
> processing a block which has been on the bank list means it's unlikely the
> rdmsr will fault.
>
Yes, and the MCA register space is predefined even if not all registers
are occupied/implemented/backed by hardware.
The behavior on AMD is that an unused MSR will be Read-as-Zero (RAZ)
rather than cause a #GP.
>
> > -
> > - if (!(high & MASK_OVERFLOW_HI))
> > - return;
>
> nit: However, now, if mask overflow is not set a write to the msr will be
> performed, with the effect that IntType is going to be cleared (hi &=
> ~MASK_INT_TYPE_HI; in threshold_restart_block), and MASK_COUNT_EN_HI will be
> set, that's different than the existing code, albeit it might be ok.
Yes, correct. We may have extra unnecessary writes. This would only
happen if we find a valid error to begin with. And the error is a
deferred error or a corrected error found during polling.
In effect, the register value won't be changed. The control bits will be
set the same way as during initialization.
We could add another flag and try to affect the code flow. But I don't
know that the overhead is worth it.
A lot of this set is trying to do away with extra overhead, duplicate
code, etc.
Thanks,
Yazen
^ permalink raw reply [flat|nested] 37+ messages in thread
* [PATCH v6 13/15] x86/mce/amd: Define threshold restart function for banks
2025-09-08 15:40 [PATCH v6 00/15] AMD MCA interrupts rework Yazen Ghannam
` (11 preceding siblings ...)
2025-09-08 15:40 ` [PATCH v6 12/15] x86/mce/amd: Remove redundant reset_block() Yazen Ghannam
@ 2025-09-08 15:40 ` Yazen Ghannam
2025-09-11 14:49 ` Nikolay Borisov
2025-09-08 15:40 ` [PATCH v6 14/15] x86/mce: Handle AMD threshold interrupt storms Yazen Ghannam
` (2 subsequent siblings)
15 siblings, 1 reply; 37+ messages in thread
From: Yazen Ghannam @ 2025-09-08 15:40 UTC (permalink / raw)
To: x86, Tony Luck, Rafael J. Wysocki
Cc: linux-kernel, linux-edac, Smita.KoralahalliChannabasappa,
Qiuxu Zhuo, Nikolay Borisov, linux-acpi, Yazen Ghannam
Prepare for CMCI storm support by moving the common bank/block
iterator code to a helper function.
Include a parameter to switch the interrupt enable. This will be used by
the CMCI storm handling function.
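To make the helper's intended use concrete, illustrative calls only (the
real storm-handling caller arrives in a later patch in this series):

	/* Re-apply the current limits; thresholding interrupt stays enabled. */
	threshold_restart_bank(bank, true);

	/* Same walk over the bank's blocks, but with the interrupt masked. */
	threshold_restart_bank(bank, false);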
Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
---
Notes:
Link:
https://lore.kernel.org/r/20250825-wip-mca-updates-v5-18-865768a2eef8@amd.com
v5->v6:
* No change.
v4->v5:
* No change.
v3->v4:
* New in v4.
arch/x86/kernel/cpu/mce/amd.c | 37 +++++++++++++++++++------------------
1 file changed, 19 insertions(+), 18 deletions(-)
diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c
index 9ca4079ff342..fbdb0cec5737 100644
--- a/arch/x86/kernel/cpu/mce/amd.c
+++ b/arch/x86/kernel/cpu/mce/amd.c
@@ -471,6 +471,24 @@ static void threshold_restart_block(void *_tr)
wrmsr(tr->b->address, lo, hi);
}
+static void threshold_restart_bank(unsigned int bank, bool intr_en)
+{
+ struct threshold_bank **thr_banks = this_cpu_read(threshold_banks);
+ struct threshold_block *block, *tmp;
+ struct thresh_restart tr;
+
+ if (!thr_banks || !thr_banks[bank])
+ return;
+
+ memset(&tr, 0, sizeof(tr));
+
+ list_for_each_entry_safe(block, tmp, &thr_banks[bank]->miscj, miscj) {
+ tr.b = block;
+ tr.b->interrupt_enable = intr_en;
+ threshold_restart_block(&tr);
+ }
+}
+
static void mce_threshold_block_init(struct threshold_block *b, int offset)
{
struct thresh_restart tr = {
@@ -814,24 +832,7 @@ static void amd_deferred_error_interrupt(void)
static void amd_reset_thr_limit(unsigned int bank)
{
- struct threshold_bank **bp = this_cpu_read(threshold_banks);
- struct threshold_block *block, *tmp;
- struct thresh_restart tr;
-
- /*
- * Validate that the threshold bank has been initialized already. The
- * handler is installed at boot time, but on a hotplug event the
- * interrupt might fire before the data has been initialized.
- */
- if (!bp || !bp[bank])
- return;
-
- memset(&tr, 0, sizeof(tr));
-
- list_for_each_entry_safe(block, tmp, &bp[bank]->miscj, miscj) {
- tr.b = block;
- threshold_restart_block(&tr);
- }
+ threshold_restart_bank(bank, true);
}
/*
--
2.51.0
^ permalink raw reply related [flat|nested] 37+ messages in thread
* [PATCH v6 14/15] x86/mce: Handle AMD threshold interrupt storms
2025-09-08 15:40 [PATCH v6 00/15] AMD MCA interrupts rework Yazen Ghannam
` (12 preceding siblings ...)
2025-09-08 15:40 ` [PATCH v6 13/15] x86/mce/amd: Define threshold restart function for banks Yazen Ghannam
@ 2025-09-08 15:40 ` Yazen Ghannam
2025-09-08 15:40 ` [PATCH v6 15/15] x86/mce: Save and use APEI corrected threshold limit Yazen Ghannam
2025-09-08 16:10 ` [PATCH v6 00/15] AMD MCA interrupts rework Luck, Tony
15 siblings, 0 replies; 37+ messages in thread
From: Yazen Ghannam @ 2025-09-08 15:40 UTC (permalink / raw)
To: x86, Tony Luck, Rafael J. Wysocki
Cc: linux-kernel, linux-edac, Smita.KoralahalliChannabasappa,
Qiuxu Zhuo, Nikolay Borisov, linux-acpi, Yazen Ghannam
From: Smita Koralahalli <Smita.KoralahalliChannabasappa@amd.com>
Extend the logic of handling CMCI storms to AMD threshold interrupts.
Rely on the similar approach as of Intel's CMCI to mitigate storms per
CPU and per bank. But, unlike CMCI, do not set thresholds and reduce
interrupt rate on a storm. Rather, disable the interrupt on the
corresponding CPU and bank. Re-enable back the interrupts if enough
consecutive polls of the bank show no corrected errors (30, as
programmed by Intel).
Turning off the threshold interrupts would be a better solution on AMD
systems as other error severities will still be handled even if the
threshold interrupts are disabled.
[Tony: Small tweak because mce_handle_storm() isn't a pointer now]
[Yazen: Rebase and simplify]
Reviewed-by: Qiuxu Zhuo <qiuxu.zhuo@intel.com>
Signed-off-by: Smita Koralahalli <Smita.KoralahalliChannabasappa@amd.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
---
Notes:
Link:
https://lore.kernel.org/r/20250825-wip-mca-updates-v5-19-865768a2eef8@amd.com
v5->v6:
* No change.
v4->v5:
* No change.
v3->v4:
* Simplify based on new patches in this set.
v2->v3:
* Add tag from Qiuxu.
v1->v2:
* New in v2, but based on older patch.
* Rebased on current set and simplified.
* Kept old tags.
arch/x86/kernel/cpu/mce/amd.c | 5 +++++
arch/x86/kernel/cpu/mce/internal.h | 2 ++
arch/x86/kernel/cpu/mce/threshold.c | 3 +++
3 files changed, 10 insertions(+)
diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c
index fbdb0cec5737..b895559e80ad 100644
--- a/arch/x86/kernel/cpu/mce/amd.c
+++ b/arch/x86/kernel/cpu/mce/amd.c
@@ -830,6 +830,11 @@ static void amd_deferred_error_interrupt(void)
machine_check_poll(MCP_TIMESTAMP, &this_cpu_ptr(&mce_amd_data)->dfr_intr_banks);
}
+void mce_amd_handle_storm(unsigned int bank, bool on)
+{
+ threshold_restart_bank(bank, on);
+}
+
static void amd_reset_thr_limit(unsigned int bank)
{
threshold_restart_bank(bank, true);
diff --git a/arch/x86/kernel/cpu/mce/internal.h b/arch/x86/kernel/cpu/mce/internal.h
index b0e00ec5cc8c..9920ee5fb34c 100644
--- a/arch/x86/kernel/cpu/mce/internal.h
+++ b/arch/x86/kernel/cpu/mce/internal.h
@@ -267,6 +267,7 @@ void mce_prep_record_per_cpu(unsigned int cpu, struct mce *m);
#ifdef CONFIG_X86_MCE_AMD
void mce_threshold_create_device(unsigned int cpu);
void mce_threshold_remove_device(unsigned int cpu);
+void mce_amd_handle_storm(unsigned int bank, bool on);
extern bool amd_filter_mce(struct mce *m);
bool amd_mce_usable_address(struct mce *m);
void amd_clear_bank(struct mce *m);
@@ -299,6 +300,7 @@ void smca_bsp_init(void);
#else
static inline void mce_threshold_create_device(unsigned int cpu) { }
static inline void mce_threshold_remove_device(unsigned int cpu) { }
+static inline void mce_amd_handle_storm(unsigned int bank, bool on) { }
static inline bool amd_filter_mce(struct mce *m) { return false; }
static inline bool amd_mce_usable_address(struct mce *m) { return false; }
static inline void amd_clear_bank(struct mce *m) { }
diff --git a/arch/x86/kernel/cpu/mce/threshold.c b/arch/x86/kernel/cpu/mce/threshold.c
index f4a007616468..45144598ec74 100644
--- a/arch/x86/kernel/cpu/mce/threshold.c
+++ b/arch/x86/kernel/cpu/mce/threshold.c
@@ -63,6 +63,9 @@ static void mce_handle_storm(unsigned int bank, bool on)
case X86_VENDOR_INTEL:
mce_intel_handle_storm(bank, on);
break;
+ case X86_VENDOR_AMD:
+ mce_amd_handle_storm(bank, on);
+ break;
}
}
--
2.51.0
^ permalink raw reply related [flat|nested] 37+ messages in thread
* [PATCH v6 15/15] x86/mce: Save and use APEI corrected threshold limit
2025-09-08 15:40 [PATCH v6 00/15] AMD MCA interrupts rework Yazen Ghannam
` (13 preceding siblings ...)
2025-09-08 15:40 ` [PATCH v6 14/15] x86/mce: Handle AMD threshold interrupt storms Yazen Ghannam
@ 2025-09-08 15:40 ` Yazen Ghannam
2025-09-11 17:01 ` Nikolay Borisov
2025-09-08 16:10 ` [PATCH v6 00/15] AMD MCA interrupts rework Luck, Tony
15 siblings, 1 reply; 37+ messages in thread
From: Yazen Ghannam @ 2025-09-08 15:40 UTC (permalink / raw)
To: x86, Tony Luck, Rafael J. Wysocki
Cc: linux-kernel, linux-edac, Smita.KoralahalliChannabasappa,
Qiuxu Zhuo, Nikolay Borisov, linux-acpi, Yazen Ghannam
The MCA threshold limit generally is not something that needs to change
during runtime. It is common for a system administrator to decide on a
policy for their managed systems.
If MCA thresholding is OS-managed, then the threshold limit must be set
at every boot. However, many systems allow the user to set a value in
their BIOS. And this is reported through an APEI HEST entry even if
thresholding is not in FW-First mode.
Use this value, if available, to set the OS-managed threshold limit.
Users can still override it through sysfs if desired for testing or
debug.
APEI is parsed after MCE is initialized. So reset the thresholding
blocks later to pick up the threshold limit.
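For context, the value consumed here comes from the standard ACPI HEST
notification structure. A sketch of that structure (see
include/acpi/actbl1.h for the authoritative definition; only
error_threshold_value is consumed in the hunk below):

struct acpi_hest_notify {
	u8	type;
	u8	length;
	u16	config_write_enable;
	u32	poll_interval;
	u32	vector;
	u32	polling_threshold_value;
	u32	polling_threshold_window;
	u32	error_threshold_value;		/* saved as the OS threshold limit */
	u32	error_threshold_window;
};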
Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
---
Notes:
Link:
https://lore.kernel.org/r/20250825-wip-mca-updates-v5-20-865768a2eef8@amd.com
v5->v6:
* No change.
v4->v5:
* No change.
v3->v4:
* New in v4.
arch/x86/include/asm/mce.h | 6 ++++++
arch/x86/kernel/acpi/apei.c | 2 ++
arch/x86/kernel/cpu/mce/amd.c | 18 ++++++++++++++++--
arch/x86/kernel/cpu/mce/internal.h | 2 ++
arch/x86/kernel/cpu/mce/threshold.c | 13 +++++++++++++
5 files changed, 39 insertions(+), 2 deletions(-)
diff --git a/arch/x86/include/asm/mce.h b/arch/x86/include/asm/mce.h
index 7d6588195d56..1cfbfff0be3f 100644
--- a/arch/x86/include/asm/mce.h
+++ b/arch/x86/include/asm/mce.h
@@ -308,6 +308,12 @@ DECLARE_PER_CPU(struct mce, injectm);
/* Disable CMCI/polling for MCA bank claimed by firmware */
extern void mce_disable_bank(int bank);
+#ifdef CONFIG_X86_MCE_THRESHOLD
+void mce_save_apei_thr_limit(u32 thr_limit);
+#else
+static inline void mce_save_apei_thr_limit(u32 thr_limit) { }
+#endif /* CONFIG_X86_MCE_THRESHOLD */
+
/*
* Exception handler
*/
diff --git a/arch/x86/kernel/acpi/apei.c b/arch/x86/kernel/acpi/apei.c
index 0916f00a992e..e21419e686eb 100644
--- a/arch/x86/kernel/acpi/apei.c
+++ b/arch/x86/kernel/acpi/apei.c
@@ -19,6 +19,8 @@ int arch_apei_enable_cmcff(struct acpi_hest_header *hest_hdr, void *data)
if (!cmc->enabled)
return 0;
+ mce_save_apei_thr_limit(cmc->notify.error_threshold_value);
+
/*
* We expect HEST to provide a list of MC banks that report errors
* in firmware first mode. Otherwise, return non-zero value to
diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c
index b895559e80ad..9b746080351f 100644
--- a/arch/x86/kernel/cpu/mce/amd.c
+++ b/arch/x86/kernel/cpu/mce/amd.c
@@ -489,6 +489,18 @@ static void threshold_restart_bank(unsigned int bank, bool intr_en)
}
}
+/* Try to use the threshold limit reported through APEI. */
+static u16 get_thr_limit(void)
+{
+ u32 thr_limit = mce_get_apei_thr_limit();
+
+ /* Fallback to old default if APEI limit is not available. */
+ if (!thr_limit)
+ return THRESHOLD_MAX;
+
+ return min(thr_limit, THRESHOLD_MAX);
+}
+
static void mce_threshold_block_init(struct threshold_block *b, int offset)
{
struct thresh_restart tr = {
@@ -497,7 +509,7 @@ static void mce_threshold_block_init(struct threshold_block *b, int offset)
.lvt_off = offset,
};
- b->threshold_limit = THRESHOLD_MAX;
+ b->threshold_limit = get_thr_limit();
threshold_restart_block(&tr);
};
@@ -1071,7 +1083,7 @@ static int allocate_threshold_blocks(unsigned int cpu, struct threshold_bank *tb
b->address = address;
b->interrupt_enable = 0;
b->interrupt_capable = lvt_interrupt_supported(bank, high);
- b->threshold_limit = THRESHOLD_MAX;
+ b->threshold_limit = get_thr_limit();
if (b->interrupt_capable) {
default_attrs[2] = &interrupt_enable.attr;
@@ -1082,6 +1094,8 @@ static int allocate_threshold_blocks(unsigned int cpu, struct threshold_bank *tb
list_add(&b->miscj, &tb->miscj);
+ mce_threshold_block_init(b, (high & MASK_LVTOFF_HI) >> 20);
+
err = kobject_init_and_add(&b->kobj, &threshold_ktype, tb->kobj, get_name(cpu, bank, b));
if (err)
goto out_free;
diff --git a/arch/x86/kernel/cpu/mce/internal.h b/arch/x86/kernel/cpu/mce/internal.h
index 9920ee5fb34c..a31cf984619c 100644
--- a/arch/x86/kernel/cpu/mce/internal.h
+++ b/arch/x86/kernel/cpu/mce/internal.h
@@ -67,6 +67,7 @@ void mce_track_storm(struct mce *mce);
void mce_inherit_storm(unsigned int bank);
bool mce_get_storm_mode(void);
void mce_set_storm_mode(bool storm);
+u32 mce_get_apei_thr_limit(void);
#else
static inline void cmci_storm_begin(unsigned int bank) {}
static inline void cmci_storm_end(unsigned int bank) {}
@@ -74,6 +75,7 @@ static inline void mce_track_storm(struct mce *mce) {}
static inline void mce_inherit_storm(unsigned int bank) {}
static inline bool mce_get_storm_mode(void) { return false; }
static inline void mce_set_storm_mode(bool storm) {}
+static inline u32 mce_get_apei_thr_limit(void) { return 0; }
#endif
/*
diff --git a/arch/x86/kernel/cpu/mce/threshold.c b/arch/x86/kernel/cpu/mce/threshold.c
index 45144598ec74..d00d5bf9959d 100644
--- a/arch/x86/kernel/cpu/mce/threshold.c
+++ b/arch/x86/kernel/cpu/mce/threshold.c
@@ -13,6 +13,19 @@
#include "internal.h"
+static u32 mce_apei_thr_limit;
+
+void mce_save_apei_thr_limit(u32 thr_limit)
+{
+ mce_apei_thr_limit = thr_limit;
+ pr_info("HEST: Corrected error threshold limit = %u\n", thr_limit);
+}
+
+u32 mce_get_apei_thr_limit(void)
+{
+ return mce_apei_thr_limit;
+}
+
static void default_threshold_interrupt(void)
{
pr_err("Unexpected threshold interrupt at vector %x\n",
--
2.51.0
^ permalink raw reply related [flat|nested] 37+ messages in thread
* Re: [PATCH v6 15/15] x86/mce: Save and use APEI corrected threshold limit
2025-09-08 15:40 ` [PATCH v6 15/15] x86/mce: Save and use APEI corrected threshold limit Yazen Ghannam
@ 2025-09-11 17:01 ` Nikolay Borisov
2025-09-15 17:33 ` Yazen Ghannam
0 siblings, 1 reply; 37+ messages in thread
From: Nikolay Borisov @ 2025-09-11 17:01 UTC (permalink / raw)
To: Yazen Ghannam, x86, Tony Luck, Rafael J. Wysocki
Cc: linux-kernel, linux-edac, Smita.KoralahalliChannabasappa,
Qiuxu Zhuo, linux-acpi
On 9/8/25 18:40, Yazen Ghannam wrote:
> The MCA threshold limit generally is not something that needs to change
> during runtime. It is common for a system administrator to decide on a
> policy for their managed systems.
>
> If MCA thresholding is OS-managed, then the threshold limit must be set
> at every boot. However, many systems allow the user to set a value in
> their BIOS. And this is reported through an APEI HEST entry even if
> thresholding is not in FW-First mode.
>
> Use this value, if available, to set the OS-managed threshold limit.
> Users can still override it through sysfs if desired for testing or
> debug.
>
> APEI is parsed after MCE is initialized. So reset the thresholding
> blocks later to pick up the threshold limit.
>
> Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
> ---
>
> Notes:
> Link:
> https://lore.kernel.org/r/20250825-wip-mca-updates-v5-20-865768a2eef8@amd.com
>
> v5->v6:
> * No change.
>
> v4->v5:
> * No change.
>
> v3->v4:
> * New in v4.
>
> arch/x86/include/asm/mce.h | 6 ++++++
> arch/x86/kernel/acpi/apei.c | 2 ++
> arch/x86/kernel/cpu/mce/amd.c | 18 ++++++++++++++++--
> arch/x86/kernel/cpu/mce/internal.h | 2 ++
> arch/x86/kernel/cpu/mce/threshold.c | 13 +++++++++++++
> 5 files changed, 39 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/include/asm/mce.h b/arch/x86/include/asm/mce.h
> index 7d6588195d56..1cfbfff0be3f 100644
> --- a/arch/x86/include/asm/mce.h
> +++ b/arch/x86/include/asm/mce.h
> @@ -308,6 +308,12 @@ DECLARE_PER_CPU(struct mce, injectm);
> /* Disable CMCI/polling for MCA bank claimed by firmware */
> extern void mce_disable_bank(int bank);
>
> +#ifdef CONFIG_X86_MCE_THRESHOLD
> +void mce_save_apei_thr_limit(u32 thr_limit);
> +#else
> +static inline void mce_save_apei_thr_limit(u32 thr_limit) { }
> +#endif /* CONFIG_X86_MCE_THRESHOLD */
> +
> /*
> * Exception handler
> */
> diff --git a/arch/x86/kernel/acpi/apei.c b/arch/x86/kernel/acpi/apei.c
> index 0916f00a992e..e21419e686eb 100644
> --- a/arch/x86/kernel/acpi/apei.c
> +++ b/arch/x86/kernel/acpi/apei.c
> @@ -19,6 +19,8 @@ int arch_apei_enable_cmcff(struct acpi_hest_header *hest_hdr, void *data)
> if (!cmc->enabled)
> return 0;
>
> + mce_save_apei_thr_limit(cmc->notify.error_threshold_value);
> +
> /*
> * We expect HEST to provide a list of MC banks that report errors
> * in firmware first mode. Otherwise, return non-zero value to
> diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c
> index b895559e80ad..9b746080351f 100644
> --- a/arch/x86/kernel/cpu/mce/amd.c
> +++ b/arch/x86/kernel/cpu/mce/amd.c
> @@ -489,6 +489,18 @@ static void threshold_restart_bank(unsigned int bank, bool intr_en)
> }
> }
>
> +/* Try to use the threshold limit reported through APEI. */
> +static u16 get_thr_limit(void)
> +{
> + u32 thr_limit = mce_get_apei_thr_limit();
> +
> + /* Fallback to old default if APEI limit is not available. */
> + if (!thr_limit)
> + return THRESHOLD_MAX;
> +
> + return min(thr_limit, THRESHOLD_MAX);
> +}
> +
> static void mce_threshold_block_init(struct threshold_block *b, int offset)
> {
> struct thresh_restart tr = {
> @@ -497,7 +509,7 @@ static void mce_threshold_block_init(struct threshold_block *b, int offset)
> .lvt_off = offset,
> };
>
> - b->threshold_limit = THRESHOLD_MAX;
> + b->threshold_limit = get_thr_limit();
> threshold_restart_block(&tr);
> };
>
> @@ -1071,7 +1083,7 @@ static int allocate_threshold_blocks(unsigned int cpu, struct threshold_bank *tb
> b->address = address;
> b->interrupt_enable = 0;
> b->interrupt_capable = lvt_interrupt_supported(bank, high);
> - b->threshold_limit = THRESHOLD_MAX;
> + b->threshold_limit = get_thr_limit();
>
> if (b->interrupt_capable) {
> default_attrs[2] = &interrupt_enable.attr;
> @@ -1082,6 +1094,8 @@ static int allocate_threshold_blocks(unsigned int cpu, struct threshold_bank *tb
>
> list_add(&b->miscj, &tb->miscj);
>
> + mce_threshold_block_init(b, (high & MASK_LVTOFF_HI) >> 20);
Why is this necessary? Shouldn't this patch consist of mainly
s/THRESHOLD_MAX/get_thr_limit();
allocate_threshold_blocks() has already properly set threshold_limit.
So this change really ensures threshold_restart_block() is called for
the given block being initialized. Ignoring the changed threshold limit
logic, why is this extra call necessary now when it wasn't before?
> +
> err = kobject_init_and_add(&b->kobj, &threshold_ktype, tb->kobj, get_name(cpu, bank, b));
> if (err)
> goto out_free;
> diff --git a/arch/x86/kernel/cpu/mce/internal.h b/arch/x86/kernel/cpu/mce/internal.h
> index 9920ee5fb34c..a31cf984619c 100644
> --- a/arch/x86/kernel/cpu/mce/internal.h
> +++ b/arch/x86/kernel/cpu/mce/internal.h
> @@ -67,6 +67,7 @@ void mce_track_storm(struct mce *mce);
> void mce_inherit_storm(unsigned int bank);
> bool mce_get_storm_mode(void);
> void mce_set_storm_mode(bool storm);
> +u32 mce_get_apei_thr_limit(void);
> #else
> static inline void cmci_storm_begin(unsigned int bank) {}
> static inline void cmci_storm_end(unsigned int bank) {}
> @@ -74,6 +75,7 @@ static inline void mce_track_storm(struct mce *mce) {}
> static inline void mce_inherit_storm(unsigned int bank) {}
> static inline bool mce_get_storm_mode(void) { return false; }
> static inline void mce_set_storm_mode(bool storm) {}
> +static inline u32 mce_get_apei_thr_limit(void) { return 0; }
> #endif
>
> /*
> diff --git a/arch/x86/kernel/cpu/mce/threshold.c b/arch/x86/kernel/cpu/mce/threshold.c
> index 45144598ec74..d00d5bf9959d 100644
> --- a/arch/x86/kernel/cpu/mce/threshold.c
> +++ b/arch/x86/kernel/cpu/mce/threshold.c
> @@ -13,6 +13,19 @@
>
> #include "internal.h"
>
> +static u32 mce_apei_thr_limit;
> +
> +void mce_save_apei_thr_limit(u32 thr_limit)
> +{
> + mce_apei_thr_limit = thr_limit;
> + pr_info("HEST: Corrected error threshold limit = %u\n", thr_limit);
> +}
> +
> +u32 mce_get_apei_thr_limit(void)
> +{
> + return mce_apei_thr_limit;
> +}
> +
> static void default_threshold_interrupt(void)
> {
> pr_err("Unexpected threshold interrupt at vector %x\n",
>
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [PATCH v6 15/15] x86/mce: Save and use APEI corrected threshold limit
2025-09-11 17:01 ` Nikolay Borisov
@ 2025-09-15 17:33 ` Yazen Ghannam
2025-09-19 10:42 ` Nikolay Borisov
0 siblings, 1 reply; 37+ messages in thread
From: Yazen Ghannam @ 2025-09-15 17:33 UTC (permalink / raw)
To: Nikolay Borisov
Cc: x86, Tony Luck, Rafael J. Wysocki, linux-kernel, linux-edac,
Smita.KoralahalliChannabasappa, Qiuxu Zhuo, linux-acpi
On Thu, Sep 11, 2025 at 08:01:17PM +0300, Nikolay Borisov wrote:
>
>
> On 9/8/25 18:40, Yazen Ghannam wrote:
> > The MCA threshold limit generally is not something that needs to change
> > during runtime. It is common for a system administrator to decide on a
> > policy for their managed systems.
> >
> > If MCA thresholding is OS-managed, then the threshold limit must be set
> > at every boot. However, many systems allow the user to set a value in
> > their BIOS. And this is reported through an APEI HEST entry even if
> > thresholding is not in FW-First mode.
> >
> > Use this value, if available, to set the OS-managed threshold limit.
> > Users can still override it through sysfs if desired for testing or
> > debug.
> >
> > APEI is parsed after MCE is initialized. So reset the thresholding
> > blocks later to pick up the threshold limit.
> >
> > Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
> > ---
> >
> > Notes:
> > Link:
> > https://lore.kernel.org/r/20250825-wip-mca-updates-v5-20-865768a2eef8@amd.com
> > v5->v6:
> > * No change.
> > v4->v5:
> > * No change.
> > v3->v4:
> > * New in v4.
> >
> > arch/x86/include/asm/mce.h | 6 ++++++
> > arch/x86/kernel/acpi/apei.c | 2 ++
> > arch/x86/kernel/cpu/mce/amd.c | 18 ++++++++++++++++--
> > arch/x86/kernel/cpu/mce/internal.h | 2 ++
> > arch/x86/kernel/cpu/mce/threshold.c | 13 +++++++++++++
> > 5 files changed, 39 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/x86/include/asm/mce.h b/arch/x86/include/asm/mce.h
> > index 7d6588195d56..1cfbfff0be3f 100644
> > --- a/arch/x86/include/asm/mce.h
> > +++ b/arch/x86/include/asm/mce.h
> > @@ -308,6 +308,12 @@ DECLARE_PER_CPU(struct mce, injectm);
> > /* Disable CMCI/polling for MCA bank claimed by firmware */
> > extern void mce_disable_bank(int bank);
> > +#ifdef CONFIG_X86_MCE_THRESHOLD
> > +void mce_save_apei_thr_limit(u32 thr_limit);
> > +#else
> > +static inline void mce_save_apei_thr_limit(u32 thr_limit) { }
> > +#endif /* CONFIG_X86_MCE_THRESHOLD */
> > +
> > /*
> > * Exception handler
> > */
> > diff --git a/arch/x86/kernel/acpi/apei.c b/arch/x86/kernel/acpi/apei.c
> > index 0916f00a992e..e21419e686eb 100644
> > --- a/arch/x86/kernel/acpi/apei.c
> > +++ b/arch/x86/kernel/acpi/apei.c
> > @@ -19,6 +19,8 @@ int arch_apei_enable_cmcff(struct acpi_hest_header *hest_hdr, void *data)
> > if (!cmc->enabled)
> > return 0;
> > + mce_save_apei_thr_limit(cmc->notify.error_threshold_value);
> > +
> > /*
> > * We expect HEST to provide a list of MC banks that report errors
> > * in firmware first mode. Otherwise, return non-zero value to
> > diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c
> > index b895559e80ad..9b746080351f 100644
> > --- a/arch/x86/kernel/cpu/mce/amd.c
> > +++ b/arch/x86/kernel/cpu/mce/amd.c
> > @@ -489,6 +489,18 @@ static void threshold_restart_bank(unsigned int bank, bool intr_en)
> > }
> > }
> > +/* Try to use the threshold limit reported through APEI. */
> > +static u16 get_thr_limit(void)
> > +{
> > + u32 thr_limit = mce_get_apei_thr_limit();
> > +
> > + /* Fallback to old default if APEI limit is not available. */
> > + if (!thr_limit)
> > + return THRESHOLD_MAX;
> > +
> > + return min(thr_limit, THRESHOLD_MAX);
> > +}
> > +
> > static void mce_threshold_block_init(struct threshold_block *b, int offset)
> > {
> > struct thresh_restart tr = {
> > @@ -497,7 +509,7 @@ static void mce_threshold_block_init(struct threshold_block *b, int offset)
> > .lvt_off = offset,
> > };
> > - b->threshold_limit = THRESHOLD_MAX;
> > + b->threshold_limit = get_thr_limit();
> > threshold_restart_block(&tr);
> > };
> > @@ -1071,7 +1083,7 @@ static int allocate_threshold_blocks(unsigned int cpu, struct threshold_bank *tb
> > b->address = address;
> > b->interrupt_enable = 0;
> > b->interrupt_capable = lvt_interrupt_supported(bank, high);
> > - b->threshold_limit = THRESHOLD_MAX;
> > + b->threshold_limit = get_thr_limit();
> > if (b->interrupt_capable) {
> > default_attrs[2] = &interrupt_enable.attr;
> > @@ -1082,6 +1094,8 @@ static int allocate_threshold_blocks(unsigned int cpu, struct threshold_bank *tb
> > list_add(&b->miscj, &tb->miscj);
> > + mce_threshold_block_init(b, (high & MASK_LVTOFF_HI) >> 20);
>
> Why is this necessary? Shouldn't this patch consist of mainly
> s/THRESHOLD_MAX/get_thr_limit();
>
>
> In allocate_threshold_block have already properly set threshold_limit. So
> this change really ensures threshold_restart_block is being called for the
> given block being initialized. Ignoring the changed threshold limit logic,
> why is this extra call necessary now and wasn't before?
>
It is necessary to apply the threshold limit to the hardware register.
The MCA thresholding registers are accessed in two passes: first during
per-CPU init, and second when the MCE subsystem devices are created.
The hardware registers are updated in the first pass, and they are left
as-is in the second pass assuming no configuration has changed. That's
why there isn't a "reset" in the second pass.
The APEI tables are parsed between the first and second passes. So now
we need to update the registers during the second pass to apply the
value found from HEST.
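Roughly, in boot order (the function names are real, but the call chains
are simplified, so treat this as a sketch):

  // Pass 1: per-CPU init. MSRs written with the default limit; HEST not parsed yet.
  mcheck_cpu_init()
    mce_amd_feature_init()
      prepare_threshold_block()
        mce_threshold_block_init()

  // APEI/HEST parsing: the platform's limit becomes known.
  acpi_hest_init()
    arch_apei_enable_cmcff()
      mce_save_apei_thr_limit()

  // Pass 2: MCE device creation. Re-apply the limit to the MSRs.
  mce_threshold_create_device()
    allocate_threshold_blocks()
      mce_threshold_block_init()   <-- the call added by this patch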
Thanks,
Yazen
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [PATCH v6 15/15] x86/mce: Save and use APEI corrected threshold limit
2025-09-15 17:33 ` Yazen Ghannam
@ 2025-09-19 10:42 ` Nikolay Borisov
2025-09-22 13:58 ` Yazen Ghannam
0 siblings, 1 reply; 37+ messages in thread
From: Nikolay Borisov @ 2025-09-19 10:42 UTC (permalink / raw)
To: Yazen Ghannam
Cc: x86, Tony Luck, Rafael J. Wysocki, linux-kernel, linux-edac,
Smita.KoralahalliChannabasappa, Qiuxu Zhuo, linux-acpi
On 9/15/25 20:33, Yazen Ghannam wrote:
> On Thu, Sep 11, 2025 at 08:01:17PM +0300, Nikolay Borisov wrote:
>>
>>
>> On 9/8/25 18:40, Yazen Ghannam wrote:
>>> The MCA threshold limit generally is not something that needs to change
>>> during runtime. It is common for a system administrator to decide on a
>>> policy for their managed systems.
>>>
>>> If MCA thresholding is OS-managed, then the threshold limit must be set
>>> at every boot. However, many systems allow the user to set a value in
>>> their BIOS. And this is reported through an APEI HEST entry even if
>>> thresholding is not in FW-First mode.
>>>
>>> Use this value, if available, to set the OS-managed threshold limit.
>>> Users can still override it through sysfs if desired for testing or
>>> debug.
>>>
>>> APEI is parsed after MCE is initialized. So reset the thresholding
>>> blocks later to pick up the threshold limit.
>>>
>>> Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
>>> ---
>>>
>>> Notes:
>>> Link:
>>> https://lore.kernel.org/r/20250825-wip-mca-updates-v5-20-865768a2eef8@amd.com
>>> v5->v6:
>>> * No change.
>>> v4->v5:
>>> * No change.
>>> v3->v4:
>>> * New in v4.
>>>
>>> arch/x86/include/asm/mce.h | 6 ++++++
>>> arch/x86/kernel/acpi/apei.c | 2 ++
>>> arch/x86/kernel/cpu/mce/amd.c | 18 ++++++++++++++++--
>>> arch/x86/kernel/cpu/mce/internal.h | 2 ++
>>> arch/x86/kernel/cpu/mce/threshold.c | 13 +++++++++++++
>>> 5 files changed, 39 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/arch/x86/include/asm/mce.h b/arch/x86/include/asm/mce.h
>>> index 7d6588195d56..1cfbfff0be3f 100644
>>> --- a/arch/x86/include/asm/mce.h
>>> +++ b/arch/x86/include/asm/mce.h
>>> @@ -308,6 +308,12 @@ DECLARE_PER_CPU(struct mce, injectm);
>>> /* Disable CMCI/polling for MCA bank claimed by firmware */
>>> extern void mce_disable_bank(int bank);
>>> +#ifdef CONFIG_X86_MCE_THRESHOLD
>>> +void mce_save_apei_thr_limit(u32 thr_limit);
>>> +#else
>>> +static inline void mce_save_apei_thr_limit(u32 thr_limit) { }
>>> +#endif /* CONFIG_X86_MCE_THRESHOLD */
>>> +
>>> /*
>>> * Exception handler
>>> */
>>> diff --git a/arch/x86/kernel/acpi/apei.c b/arch/x86/kernel/acpi/apei.c
>>> index 0916f00a992e..e21419e686eb 100644
>>> --- a/arch/x86/kernel/acpi/apei.c
>>> +++ b/arch/x86/kernel/acpi/apei.c
>>> @@ -19,6 +19,8 @@ int arch_apei_enable_cmcff(struct acpi_hest_header *hest_hdr, void *data)
>>> if (!cmc->enabled)
>>> return 0;
>>> + mce_save_apei_thr_limit(cmc->notify.error_threshold_value);
>>> +
>>> /*
>>> * We expect HEST to provide a list of MC banks that report errors
>>> * in firmware first mode. Otherwise, return non-zero value to
>>> diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c
>>> index b895559e80ad..9b746080351f 100644
>>> --- a/arch/x86/kernel/cpu/mce/amd.c
>>> +++ b/arch/x86/kernel/cpu/mce/amd.c
>>> @@ -489,6 +489,18 @@ static void threshold_restart_bank(unsigned int bank, bool intr_en)
>>> }
>>> }
>>> +/* Try to use the threshold limit reported through APEI. */
>>> +static u16 get_thr_limit(void)
>>> +{
>>> + u32 thr_limit = mce_get_apei_thr_limit();
>>> +
>>> + /* Fallback to old default if APEI limit is not available. */
>>> + if (!thr_limit)
>>> + return THRESHOLD_MAX;
>>> +
>>> + return min(thr_limit, THRESHOLD_MAX);
>>> +}
>>> +
>>> static void mce_threshold_block_init(struct threshold_block *b, int offset)
>>> {
>>> struct thresh_restart tr = {
>>> @@ -497,7 +509,7 @@ static void mce_threshold_block_init(struct threshold_block *b, int offset)
>>> .lvt_off = offset,
>>> };
>>> - b->threshold_limit = THRESHOLD_MAX;
>>> + b->threshold_limit = get_thr_limit();
>>> threshold_restart_block(&tr);
>>> };
>>> @@ -1071,7 +1083,7 @@ static int allocate_threshold_blocks(unsigned int cpu, struct threshold_bank *tb
>>> b->address = address;
>>> b->interrupt_enable = 0;
>>> b->interrupt_capable = lvt_interrupt_supported(bank, high);
>>> - b->threshold_limit = THRESHOLD_MAX;
>>> + b->threshold_limit = get_thr_limit();
>>> if (b->interrupt_capable) {
>>> default_attrs[2] = &interrupt_enable.attr;
>>> @@ -1082,6 +1094,8 @@ static int allocate_threshold_blocks(unsigned int cpu, struct threshold_bank *tb
>>> list_add(&b->miscj, &tb->miscj);
>>> + mce_threshold_block_init(b, (high & MASK_LVTOFF_HI) >> 20);
>>
>> Why is this necessary? Shouldn't this patch consist of mainly
>> s/THRESHOLD_MAX/get_thr_limit();
>>
>>
>> In allocate_threshold_block have already properly set threshold_limit. So
>> this change really ensures threshold_restart_block is being called for the
>> given block being initialized. Ignoring the changed threshold limit logic,
>> why is this extra call necessary now and wasn't before?
>>
>
> It is necessary to apply the threshold limit to the hardware register.
> The MCA thresholding registers are accessed in two passes: first during
> per-CPU init, and second when the MCE subsystem devices are created.
>
> The hardware registers are updated in the first pass, and they are left
> as-is in the second pass assuming no configuration has changed. That's
> why there isn't a "reset" in the second pass.
>
> The APEI tables are parsed between the first and second passes. So now
> we need to update the registers during the second pass to apply the
> value found from HEST.
So APEI is initialized as part of a subsys_initcall, which is processed
via:

start_kernel
  rest_init
    kernel_init
      kernel_init_freeable
        do_basic_setup
          do_initcalls
And the first mce_threshold_block_init() happens from:

start_kernel
  arch_cpu_finalize_init          <---- way before rest_init()
    identify_boot_cpu
      identify_cpu
        mcheck_cpu_init
          __mcheck_cpu_init_vendor
            mce_amd_feature_init
              prepare_threshold_block
                mce_threshold_block_init
Finally, the per-cpu hotplug callback is installed via (see the sketch
below):

mcheck_init_device                <- initiated from a device_initcall,
                                     happens after APEI subsys init
  mce_cpu_online                  <- called on every cpu from the HP machinery
    mce_threshold_create_device
      threshold_create_bank
        allocate_threshold_blocks
          mce_threshold_block_init  <- newly added call, used to update
                                       the register with the new limit
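
For context, the registration itself looks roughly like this (a
simplified sketch recalled from core.c, so the exact flags and names
are best-effort rather than quoted from the patch):

/*
 * Because mcheck_init_device() runs as a device_initcall, it executes
 * after the APEI subsys_initcall has parsed HEST. So mce_cpu_online(),
 * and with it the new mce_threshold_block_init() call in
 * allocate_threshold_blocks(), only ever sees an already-saved limit.
 * Error handling and the misc-device setup are omitted here.
 */
static __init int mcheck_init_device(void)
{
        return cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "x86/mce:online",
                                 mce_cpu_online, mce_cpu_pre_down);
}
device_initcall_sync(mcheck_init_device);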
Given that mce_cpu_online() is already called for every online cpu at
the time the callback is installed, and for every cpu that comes online
afterwards, I can't help but wonder why we have to do the mce
initialization from start_kernel() at all. Can't we move the
mcheck_cpu_init() call into mce_cpu_online(), or have the latter
subsume the former?

Sorry if I'm being too nitpicky; I just want to have a proper
understanding of the subsystem and the various (implicit) requirements
it has.

At the very least, I believe this commit message should allude to the
fact that the mce threshold devices have a strict requirement to be
created only after the APEI subsys has been initialized.
>
> Thanks,
> Yazen
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [PATCH v6 15/15] x86/mce: Save and use APEI corrected threshold limit
2025-09-19 10:42 ` Nikolay Borisov
@ 2025-09-22 13:58 ` Yazen Ghannam
0 siblings, 0 replies; 37+ messages in thread
From: Yazen Ghannam @ 2025-09-22 13:58 UTC (permalink / raw)
To: Nikolay Borisov
Cc: x86, Tony Luck, Rafael J. Wysocki, linux-kernel, linux-edac,
Smita.KoralahalliChannabasappa, Qiuxu Zhuo, linux-acpi
On Fri, Sep 19, 2025 at 01:42:57PM +0300, Nikolay Borisov wrote:
>
>
> On 9/15/25 20:33, Yazen Ghannam wrote:
> > On Thu, Sep 11, 2025 at 08:01:17PM +0300, Nikolay Borisov wrote:
> > >
> > >
> > > On 9/8/25 18:40, Yazen Ghannam wrote:
> > > > The MCA threshold limit generally is not something that needs to change
> > > > during runtime. It is common for a system administrator to decide on a
> > > > policy for their managed systems.
> > > >
> > > > If MCA thresholding is OS-managed, then the threshold limit must be set
> > > > at every boot. However, many systems allow the user to set a value in
> > > > their BIOS. And this is reported through an APEI HEST entry even if
> > > > thresholding is not in FW-First mode.
> > > >
> > > > Use this value, if available, to set the OS-managed threshold limit.
> > > > Users can still override it through sysfs if desired for testing or
> > > > debug.
> > > >
> > > > APEI is parsed after MCE is initialized. So reset the thresholding
> > > > blocks later to pick up the threshold limit.
> > > >
> > > > Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
> > > > ---
> > > >
> > > > Notes:
> > > > Link:
> > > > https://lore.kernel.org/r/20250825-wip-mca-updates-v5-20-865768a2eef8@amd.com
> > > > v5->v6:
> > > > * No change.
> > > > v4->v5:
> > > > * No change.
> > > > v3->v4:
> > > > * New in v4.
> > > >
> > > > arch/x86/include/asm/mce.h | 6 ++++++
> > > > arch/x86/kernel/acpi/apei.c | 2 ++
> > > > arch/x86/kernel/cpu/mce/amd.c | 18 ++++++++++++++++--
> > > > arch/x86/kernel/cpu/mce/internal.h | 2 ++
> > > > arch/x86/kernel/cpu/mce/threshold.c | 13 +++++++++++++
> > > > 5 files changed, 39 insertions(+), 2 deletions(-)
> > > >
> > > > diff --git a/arch/x86/include/asm/mce.h b/arch/x86/include/asm/mce.h
> > > > index 7d6588195d56..1cfbfff0be3f 100644
> > > > --- a/arch/x86/include/asm/mce.h
> > > > +++ b/arch/x86/include/asm/mce.h
> > > > @@ -308,6 +308,12 @@ DECLARE_PER_CPU(struct mce, injectm);
> > > > /* Disable CMCI/polling for MCA bank claimed by firmware */
> > > > extern void mce_disable_bank(int bank);
> > > > +#ifdef CONFIG_X86_MCE_THRESHOLD
> > > > +void mce_save_apei_thr_limit(u32 thr_limit);
> > > > +#else
> > > > +static inline void mce_save_apei_thr_limit(u32 thr_limit) { }
> > > > +#endif /* CONFIG_X86_MCE_THRESHOLD */
> > > > +
> > > > /*
> > > > * Exception handler
> > > > */
> > > > diff --git a/arch/x86/kernel/acpi/apei.c b/arch/x86/kernel/acpi/apei.c
> > > > index 0916f00a992e..e21419e686eb 100644
> > > > --- a/arch/x86/kernel/acpi/apei.c
> > > > +++ b/arch/x86/kernel/acpi/apei.c
> > > > @@ -19,6 +19,8 @@ int arch_apei_enable_cmcff(struct acpi_hest_header *hest_hdr, void *data)
> > > > if (!cmc->enabled)
> > > > return 0;
> > > > + mce_save_apei_thr_limit(cmc->notify.error_threshold_value);
> > > > +
> > > > /*
> > > > * We expect HEST to provide a list of MC banks that report errors
> > > > * in firmware first mode. Otherwise, return non-zero value to
> > > > diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c
> > > > index b895559e80ad..9b746080351f 100644
> > > > --- a/arch/x86/kernel/cpu/mce/amd.c
> > > > +++ b/arch/x86/kernel/cpu/mce/amd.c
> > > > @@ -489,6 +489,18 @@ static void threshold_restart_bank(unsigned int bank, bool intr_en)
> > > > }
> > > > }
> > > > +/* Try to use the threshold limit reported through APEI. */
> > > > +static u16 get_thr_limit(void)
> > > > +{
> > > > + u32 thr_limit = mce_get_apei_thr_limit();
> > > > +
> > > > + /* Fallback to old default if APEI limit is not available. */
> > > > + if (!thr_limit)
> > > > + return THRESHOLD_MAX;
> > > > +
> > > > + return min(thr_limit, THRESHOLD_MAX);
> > > > +}
> > > > +
> > > > static void mce_threshold_block_init(struct threshold_block *b, int offset)
> > > > {
> > > > struct thresh_restart tr = {
> > > > @@ -497,7 +509,7 @@ static void mce_threshold_block_init(struct threshold_block *b, int offset)
> > > > .lvt_off = offset,
> > > > };
> > > > - b->threshold_limit = THRESHOLD_MAX;
> > > > + b->threshold_limit = get_thr_limit();
> > > > threshold_restart_block(&tr);
> > > > };
> > > > @@ -1071,7 +1083,7 @@ static int allocate_threshold_blocks(unsigned int cpu, struct threshold_bank *tb
> > > > b->address = address;
> > > > b->interrupt_enable = 0;
> > > > b->interrupt_capable = lvt_interrupt_supported(bank, high);
> > > > - b->threshold_limit = THRESHOLD_MAX;
> > > > + b->threshold_limit = get_thr_limit();
> > > > if (b->interrupt_capable) {
> > > > default_attrs[2] = &interrupt_enable.attr;
> > > > @@ -1082,6 +1094,8 @@ static int allocate_threshold_blocks(unsigned int cpu, struct threshold_bank *tb
> > > > list_add(&b->miscj, &tb->miscj);
> > > > + mce_threshold_block_init(b, (high & MASK_LVTOFF_HI) >> 20);
> > >
> > > Why is this necessary? Shouldn't this patch consist of mainly
> > > s/THRESHOLD_MAX/get_thr_limit();
> > >
> > >
> > > In allocate_threshold_block have already properly set threshold_limit. So
> > > this change really ensures threshold_restart_block is being called for the
> > > given block being initialized. Ignoring the changed threshold limit logic,
> > > why is this extra call necessary now and wasn't before?
> > >
> >
> > It is necessary to apply the threshold limit to the hardware register.
> > The MCA thresholding registers are accessed in two passes: first during
> > per-CPU init, and second when the MCE subsystem devices are created.
> >
> > The hardware registers are updated in the first pass, and they are left
> > as-is in the second pass assuming no configuration has changed. That's
> > why there isn't a "reset" in the second pass.
> >
> > The APEI tables are parsed between the first and second passes. So now
> > we need to update the registers during the second pass to apply the
> > value found from HEST.
>
> So APEI is initialized as part of the subsys_initcall which is processed
> via:
>
> start_kernel
> rest_init
> kernel_init
> kernel_init_freeable
> do_basic_setup
> do_initcalls
>
> And the first mce_threshold_block_init() happens from :
>
> start_kernel
> arch_cpu_finalize_init <---- way before rest_init()
> identify_boot_cpu
> identify_cpu
> mcheck_cpu_init
> mcheck_cpu_init_vendor
> mce_amd_feature_init
> prepare_threshold_block
> mce_threshold_block_init
>
>
> Finally the per-cpu hotplug callback is installed via:
>
> mcheck_init_device <- initiated from a device_initcall, happens after APEI
> subsys init.
> mce_cpu_online - called on every cpu from the HP machinery
> mce_threshold_create_device
> threshold_create_bank
> allocate_threshold_blocks
> mce_threshold_block_init - newly added call in alloc block, used to update
> the register with the new limit
>
>
> Given that mce_cpu_online is already called on every online cpu at the time
> of installation of the callback, and every subsequent cpu that will come
> online I can't help but wonder why do we have to do the mce initialization
> from start_kernel, can't we move the mcheck_cpu_init call into
> mce_cpu_online, or have the latter subsume the former?
>
> Sorry if I'm being too nitpicky, I just want to have proper understanding of
> the subsystem and the various (implicit) requirements it has.
>
No worries. It is a bit convoluted. I'd like to clean this up
eventually. I have an old attempt here:
https://github.com/AMDESE/linux/commit/640db92eca07804e889fac88856904552a4466bd
In general, I'd like to do the hardware and data structure init in a
single pass. And the sysfs interface can be optionally added separately.
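
Roughly the shape I have in mind, as an illustrative sketch only -- the
helper names below are made up and this is not the code from the linked
commit:

/*
 * Hypothetical single-pass per-CPU init: allocate the data structures
 * and program the MSRs together, then optionally attach sysfs.
 * allocate_and_program_banks() and threshold_create_sysfs() do not
 * exist today; they are placeholders for the idea.
 */
static int mce_threshold_online_cpu(unsigned int cpu)
{
        struct threshold_bank **banks;

        /* One pass: set up per-CPU state and write the final threshold
         * limit (the HEST value, if one was reported) to the hardware. */
        banks = allocate_and_program_banks(cpu, get_thr_limit());
        if (!banks)
                return -ENOMEM;

        /* The sysfs interface is optional and can be added afterwards. */
        return threshold_create_sysfs(cpu, banks);
}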
>
> At the very least I believe this commit message should at least allude to
> the fact that mce threshold devices have a strict requirement to be created
> after the APEI subsys has been created.
>
>
The commit message has this:
"APEI is parsed after MCE is initialized. So reset the thresholding
blocks later to pick up the threshold limit."
The "devices" are already created after APEI subsys init. The missing
step is updating the hardware registers.
Thanks,
Yazen
^ permalink raw reply [flat|nested] 37+ messages in thread
* RE: [PATCH v6 00/15] AMD MCA interrupts rework
2025-09-08 15:40 [PATCH v6 00/15] AMD MCA interrupts rework Yazen Ghannam
` (14 preceding siblings ...)
2025-09-08 15:40 ` [PATCH v6 15/15] x86/mce: Save and use APEI corrected threshold limit Yazen Ghannam
@ 2025-09-08 16:10 ` Luck, Tony
2025-09-09 13:36 ` Yazen Ghannam
15 siblings, 1 reply; 37+ messages in thread
From: Luck, Tony @ 2025-09-08 16:10 UTC (permalink / raw)
To: Yazen Ghannam, x86@kernel.org, Rafael J. Wysocki
Cc: linux-kernel@vger.kernel.org, linux-edac@vger.kernel.org,
Smita.KoralahalliChannabasappa@amd.com, Zhuo, Qiuxu,
Nikolay Borisov, linux-acpi@vger.kernel.org
> This set unifies the AMD MCA interrupt handlers with common MCA code.
> The goal is to avoid duplicating functionality like reading and clearing
> MCA banks.
Still works fine on an Intel Icelake system. Tested poison recovery and
CMCI storm handling.
-Tony
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [PATCH v6 00/15] AMD MCA interrupts rework
2025-09-08 16:10 ` [PATCH v6 00/15] AMD MCA interrupts rework Luck, Tony
@ 2025-09-09 13:36 ` Yazen Ghannam
0 siblings, 0 replies; 37+ messages in thread
From: Yazen Ghannam @ 2025-09-09 13:36 UTC (permalink / raw)
To: Luck, Tony
Cc: x86@kernel.org, Rafael J. Wysocki, linux-kernel@vger.kernel.org,
linux-edac@vger.kernel.org,
Smita.KoralahalliChannabasappa@amd.com, Zhuo, Qiuxu,
Nikolay Borisov, linux-acpi@vger.kernel.org
On Mon, Sep 08, 2025 at 04:10:30PM +0000, Luck, Tony wrote:
> > This set unifies the AMD MCA interrupt handlers with common MCA code.
> > The goal is to avoid duplicating functionality like reading and clearing
> > MCA banks.
>
> Still works fine on Intel Icelake system. Tested poison recovery, and CMCI
> storm handling.
>
Thanks for testing!
-Yazen
^ permalink raw reply [flat|nested] 37+ messages in thread