From: Yazen Ghannam <yazen.ghannam@amd.com>
To: <x86@kernel.org>, Tony Luck <tony.luck@intel.com>,
"Rafael J. Wysocki" <rafael@kernel.org>
Cc: <linux-kernel@vger.kernel.org>, <linux-edac@vger.kernel.org>,
<Smita.KoralahalliChannabasappa@amd.com>,
Qiuxu Zhuo <qiuxu.zhuo@intel.com>,
Nikolay Borisov <nik.borisov@suse.com>,
<linux-acpi@vger.kernel.org>,
"Yazen Ghannam" <yazen.ghannam@amd.com>
Subject: [PATCH v5 15/20] x86/mce/amd: Enable interrupt vectors once per-CPU on SMCA systems
Date: Mon, 25 Aug 2025 17:33:12 +0000
Message-ID: <20250825-wip-mca-updates-v5-15-865768a2eef8@amd.com>
In-Reply-To: <20250825-wip-mca-updates-v5-0-865768a2eef8@amd.com>
Scalable MCA systems have a per-CPU register that gives the APIC LVT
offset for the thresholding and deferred error interrupts.
Currently, this register is read once to set up the deferred error
interrupt and then read again for each thresholding block. Furthermore,
the APIC LVT registers are reconfigured on each of those passes, even
though they only need to be configured once per CPU.
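For reference, both LVT offsets live in the same per-CPU register, so a
single read is sufficient. A minimal sketch using the field positions
from the diff below (the local variable names thr_offset/dfr_offset are
illustrative only):

	u64 mca_intr_cfg;

	/* One read of MSR_CU_DEF_ERR (0xC0000410) yields both offsets. */
	if (rdmsrq_safe(MSR_CU_DEF_ERR, &mca_intr_cfg))
		return;

	/* Thresholding LVT offset: bits [15:12] (SMCA_THR_LVT_OFF). */
	thr_offset = (mca_intr_cfg & SMCA_THR_LVT_OFF) >> 12;

	/* Deferred error LVT offset: bits [7:4] (MASK_DEF_LVTOFF). */
	dfr_offset = (mca_intr_cfg & MASK_DEF_LVTOFF) >> 4;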
Move the APIC LVT setup to the early part of CPU init, so that the
registers are set up once. This also ensures that the kernel is ready
to service the interrupts before the individual error sources (each MCA
bank) are enabled.
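A condensed view of the resulting per-CPU flow, abbreviated from the
diff below (most details omitted):

	/* cpu init entry point, called from mce.c with preempt off */
	void mce_amd_feature_init(struct cpuinfo_x86 *c)
	{
		...
		/* Program the threshold and deferred error LVT entries once. */
		smca_enable_interrupt_vectors();

		for (bank = 0; bank < this_cpu_read(mce_num_banks); ++bank) {
			/*
			 * Enable individual error sources only after the
			 * kernel is able to service their interrupts.
			 */
			smca_configure(bank, cpu);
			...
		}
	}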
Apply this change only to SMCA systems to avoid breaking any legacy
behavior. The deferred error interrupt is technically advertised by the
SUCCOR feature. However, this was first made available on SMCA systems.
Therefore, only set up the deferred error interrupt on SMCA systems and
simplify the code.
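In code terms, the once-per-CPU setup simply bails out unless both
features are present (taken from the diff below):

	if (!mce_flags.smca || !mce_flags.succor)
		return;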
Guidance from hardware designers is that the LVT offsets provided by
the platform should be used. The kernel should not try to enforce
specific values. However, the kernel should check that an LVT offset is
not reused for multiple sources.
Therefore, remove the extra checking and value enforcement from the MCE
code. The "reuse/conflict" case is already handled in
setup_APIC_eilvt().
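For illustration, the new callers rely only on setup_APIC_eilvt()'s
return value, which is nonzero when the requested offset is already
reserved for a different vector (variable names as in the diff below):

	offset = (mca_intr_cfg & MASK_DEF_LVTOFF) >> 4;
	if (!setup_APIC_eilvt(offset, DEFERRED_ERROR_VECTOR, APIC_EILVT_MSG_FIX, 0))
		data->dfr_intr_en = true;	/* offset accepted */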
Tested-by: Tony Luck <tony.luck@intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
---
Notes:
Link:
https://lore.kernel.org/r/20250415-wip-mca-updates-v3-14-8ffd9eb4aa56@amd.com
v4->v5:
* Added back to set.
* Updated commit message with more details.
v3->v4:
* Dropped from set.
v2->v3:
* Add tags from Tony.
v1->v2:
* Use new per-CPU struct.
* Don't set up interrupt vectors.
arch/x86/kernel/cpu/mce/amd.c | 113 ++++++++++++++++++------------------------
1 file changed, 48 insertions(+), 65 deletions(-)
diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c
index 4a832c24d43b..44fa61cafb0d 100644
--- a/arch/x86/kernel/cpu/mce/amd.c
+++ b/arch/x86/kernel/cpu/mce/amd.c
@@ -43,9 +43,6 @@
/* Deferred error settings */
#define MSR_CU_DEF_ERR 0xC0000410
#define MASK_DEF_LVTOFF 0x000000F0
-#define MASK_DEF_INT_TYPE 0x00000006
-#define DEF_LVT_OFF 0x2
-#define DEF_INT_TYPE_APIC 0x2
/* Scalable MCA: */
@@ -57,6 +54,8 @@ static bool thresholding_irq_en;
struct mce_amd_cpu_data {
mce_banks_t thr_intr_banks;
mce_banks_t dfr_intr_banks;
+ bool thr_intr_en;
+ bool dfr_intr_en;
};
static DEFINE_PER_CPU_READ_MOSTLY(struct mce_amd_cpu_data, mce_amd_data);
@@ -271,6 +270,7 @@ void (*deferred_error_int_vector)(void) = default_deferred_error_interrupt;
static void smca_configure(unsigned int bank, unsigned int cpu)
{
+ struct mce_amd_cpu_data *data = this_cpu_ptr(&mce_amd_data);
u8 *bank_counts = this_cpu_ptr(smca_bank_counts);
const struct smca_hwid *s_hwid;
unsigned int i, hwid_mcatype;
@@ -301,8 +301,8 @@ static void smca_configure(unsigned int bank, unsigned int cpu)
* APIC based interrupt. First, check that no interrupt has been
* set.
*/
- if ((low & BIT(5)) && !((high >> 5) & 0x3)) {
- __set_bit(bank, this_cpu_ptr(&mce_amd_data)->dfr_intr_banks);
+ if ((low & BIT(5)) && !((high >> 5) & 0x3) && data->dfr_intr_en) {
+ __set_bit(bank, data->dfr_intr_banks);
high |= BIT(5);
}
@@ -377,6 +377,14 @@ static bool lvt_off_valid(struct threshold_block *b, int apic, u32 lo, u32 hi)
{
int msr = (hi & MASK_LVTOFF_HI) >> 20;
+ /*
+ * On SMCA CPUs, LVT offset is programmed at a different MSR, and
+ * the BIOS provides the value. The original field where LVT offset
+ * was set is reserved. Return early here:
+ */
+ if (mce_flags.smca)
+ return false;
+
if (apic < 0) {
pr_err(FW_BUG "cpu %d, failed to setup threshold interrupt "
"for bank %d, block %d (MSR%08X=0x%x%08x)\n", b->cpu,
@@ -385,14 +393,6 @@ static bool lvt_off_valid(struct threshold_block *b, int apic, u32 lo, u32 hi)
}
if (apic != msr) {
- /*
- * On SMCA CPUs, LVT offset is programmed at a different MSR, and
- * the BIOS provides the value. The original field where LVT offset
- * was set is reserved. Return early here:
- */
- if (mce_flags.smca)
- return false;
-
pr_err(FW_BUG "cpu %d, invalid threshold interrupt offset %d "
"for bank %d, block %d (MSR%08X=0x%x%08x)\n",
b->cpu, apic, b->bank, b->block, b->address, hi, lo);
@@ -473,41 +473,6 @@ static int setup_APIC_mce_threshold(int reserved, int new)
return reserved;
}
-static int setup_APIC_deferred_error(int reserved, int new)
-{
- if (reserved < 0 && !setup_APIC_eilvt(new, DEFERRED_ERROR_VECTOR,
- APIC_EILVT_MSG_FIX, 0))
- return new;
-
- return reserved;
-}
-
-static void deferred_error_interrupt_enable(struct cpuinfo_x86 *c)
-{
- u32 low = 0, high = 0;
- int def_offset = -1, def_new;
-
- if (rdmsr_safe(MSR_CU_DEF_ERR, &low, &high))
- return;
-
- def_new = (low & MASK_DEF_LVTOFF) >> 4;
- if (!(low & MASK_DEF_LVTOFF)) {
- pr_err(FW_BUG "Your BIOS is not setting up LVT offset 0x2 for deferred error IRQs correctly.\n");
- def_new = DEF_LVT_OFF;
- low = (low & ~MASK_DEF_LVTOFF) | (DEF_LVT_OFF << 4);
- }
-
- def_offset = setup_APIC_deferred_error(def_offset, def_new);
- if ((def_offset == def_new) &&
- (deferred_error_int_vector != amd_deferred_error_interrupt))
- deferred_error_int_vector = amd_deferred_error_interrupt;
-
- if (!mce_flags.smca)
- low = (low & ~MASK_DEF_INT_TYPE) | DEF_INT_TYPE_APIC;
-
- wrmsr(MSR_CU_DEF_ERR, low, high);
-}
-
static u32 smca_get_block_address(unsigned int bank, unsigned int block, u32 low)
{
if (!block)
@@ -552,7 +517,6 @@ prepare_threshold_block(unsigned int bank, unsigned int block, u32 addr,
int offset, u32 misc_high)
{
unsigned int cpu = smp_processor_id();
- u32 smca_low, smca_high;
struct threshold_block b;
int new;
@@ -572,18 +536,10 @@ prepare_threshold_block(unsigned int bank, unsigned int block, u32 addr,
__set_bit(bank, this_cpu_ptr(&mce_amd_data)->thr_intr_banks);
b.interrupt_enable = 1;
- if (!mce_flags.smca) {
- new = (misc_high & MASK_LVTOFF_HI) >> 20;
- goto set_offset;
- }
-
- /* Gather LVT offset for thresholding: */
- if (rdmsr_safe(MSR_CU_DEF_ERR, &smca_low, &smca_high))
- goto out;
-
- new = (smca_low & SMCA_THR_LVT_OFF) >> 12;
+ if (mce_flags.smca)
+ goto done;
-set_offset:
+ new = (misc_high & MASK_LVTOFF_HI) >> 20;
offset = setup_APIC_mce_threshold(offset, new);
if (offset == new)
thresholding_irq_en = true;
@@ -591,7 +547,6 @@ prepare_threshold_block(unsigned int bank, unsigned int block, u32 addr,
done:
mce_threshold_block_init(&b, offset);
-out:
return offset;
}
@@ -682,6 +637,32 @@ static void amd_apply_cpu_quirks(struct cpuinfo_x86 *c)
mce_banks[0].ctl = 0;
}
+/*
+ * Enable the APIC LVT interrupt vectors once per CPU. This should be done
+ * before the hardware is ready to send interrupts.
+ *
+ * Individual error sources are enabled later during per-bank init.
+ */
+static void smca_enable_interrupt_vectors(void)
+{
+ struct mce_amd_cpu_data *data = this_cpu_ptr(&mce_amd_data);
+ u64 mca_intr_cfg, offset;
+
+ if (!mce_flags.smca || !mce_flags.succor)
+ return;
+
+ if (rdmsrq_safe(MSR_CU_DEF_ERR, &mca_intr_cfg))
+ return;
+
+ offset = (mca_intr_cfg & SMCA_THR_LVT_OFF) >> 12;
+ if (!setup_APIC_eilvt(offset, THRESHOLD_APIC_VECTOR, APIC_EILVT_MSG_FIX, 0))
+ data->thr_intr_en = true;
+
+ offset = (mca_intr_cfg & MASK_DEF_LVTOFF) >> 4;
+ if (!setup_APIC_eilvt(offset, DEFERRED_ERROR_VECTOR, APIC_EILVT_MSG_FIX, 0))
+ data->dfr_intr_en = true;
+}
+
/* cpu init entry point, called from mce.c with preempt off */
void mce_amd_feature_init(struct cpuinfo_x86 *c)
{
@@ -692,11 +673,16 @@ void mce_amd_feature_init(struct cpuinfo_x86 *c)
amd_apply_cpu_quirks(c);
mce_flags.amd_threshold = 1;
+ smca_enable_interrupt_vectors();
for (bank = 0; bank < this_cpu_read(mce_num_banks); ++bank) {
- if (mce_flags.smca)
+ if (mce_flags.smca) {
smca_configure(bank, cpu);
+ if (!this_cpu_ptr(&mce_amd_data)->thr_intr_en)
+ continue;
+ }
+
disable_err_thresholding(c, bank);
for (block = 0; block < NR_BLOCKS; ++block) {
@@ -717,9 +703,6 @@ void mce_amd_feature_init(struct cpuinfo_x86 *c)
offset = prepare_threshold_block(bank, block, address, offset, high);
}
}
-
- if (mce_flags.succor)
- deferred_error_interrupt_enable(c);
}
void smca_bsp_init(void)
--
2.51.0