From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1751576AbcAPKp3 (ORCPT );
	Sat, 16 Jan 2016 05:45:29 -0500
Received: from mail.skyhub.de ([78.46.96.112]:52475 "EHLO mail.skyhub.de"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1751170AbcAPKp1 (ORCPT );
	Sat, 16 Jan 2016 05:45:27 -0500
Date: Sat, 16 Jan 2016 11:45:07 +0100
From: Borislav Petkov
To: Aravind Gopalakrishnan
Cc: tony.luck@intel.com, tglx@linutronix.de, mingo@redhat.com, hpa@zytor.com,
	x86@kernel.org, linux-edac@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH V2 4/5] x86/mcheck/AMD: Fix LVT offset configuration for thresholding
Message-ID: <20160116104507.GB31869@pd.tnic>
References: <1452901836-27632-1-git-send-email-Aravind.Gopalakrishnan@amd.com>
 <1452901836-27632-5-git-send-email-Aravind.Gopalakrishnan@amd.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <1452901836-27632-5-git-send-email-Aravind.Gopalakrishnan@amd.com>
User-Agent: Mutt/1.5.24 (2015-08-30)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Jan 15, 2016 at 05:50:35PM -0600, Aravind Gopalakrishnan wrote:
> For processor families with SMCA feature, the LVT offset
> for threshold interrupts is configured only in MSR 0xC0000410
> and not in each per bank MISC register as was done in earlier
> families.
>
> Fixing the code here to obtain the LVT offset from the correct
> MSR for those families which have SMCA feature enabled.
>
> Signed-off-by: Aravind Gopalakrishnan
> ---
>  arch/x86/kernel/cpu/mcheck/mce_amd.c | 34 +++++++++++++++++++++++++++++++++-
>  1 file changed, 33 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kernel/cpu/mcheck/mce_amd.c b/arch/x86/kernel/cpu/mcheck/mce_amd.c
> index e650fdc..29a7688 100644
> --- a/arch/x86/kernel/cpu/mcheck/mce_amd.c
> +++ b/arch/x86/kernel/cpu/mcheck/mce_amd.c
> @@ -49,6 +49,15 @@
>  #define DEF_LVT_OFF		0x2
>  #define DEF_INT_TYPE_APIC	0x2
>
> +/*
> + * SMCA settings:
> + * The following defines provide masks or bit positions of
> + * MSRs that are applicable only to SMCA enabled processors
> + */
> +
> +/* Threshold LVT offset is at MSR0xC0000410[15:12] */
> +#define SMCA_THR_LVT_OFF	0xF000
> +
>  static const char * const th_names[] = {
>  	"load_store",
>  	"insn_fetch",
> @@ -143,6 +152,15 @@ static int lvt_off_valid(struct threshold_block *b, int apic, u32 lo, u32 hi)
>  	}
>
>  	if (apic != msr) {
> +		/*
> +		 * For SMCA enabled processors, LVT offset is programmed at
> +		 * different MSR and BIOS provides the value.
> +		 * The original field where LVT offset was set is Reserved.
> +		 * So, return early here.
> +		 */
> +		if (mce_flags.smca)
> +			return 0;
> +
>  		pr_err(FW_BUG "cpu %d, invalid threshold interrupt offset %d "
>  		       "for bank %d, block %d (MSR%08X=0x%x%08x)\n",
>  		       b->cpu, apic, b->bank, b->block, b->address, hi, lo);
> @@ -301,7 +319,21 @@ void mce_amd_feature_init(struct cpuinfo_x86 *c)
>  			goto init;
>
>  		b.interrupt_enable = 1;
> -		new	= (high & MASK_LVTOFF_HI) >> 20;
> +
> +		if (mce_flags.smca) {
> +			u32 smca_low = 0, smca_high = 0;

Those variables don't need to be initialized to 0 since you're reading into
them right afterwards. I fixed that up.

> +
> +			/* Gather LVT offset for thresholding */
> +			if (rdmsr_safe(MSR_CU_DEF_ERR,
> +				       &smca_low,
> +				       &smca_high))
> +				break;
> +

-- 
Regards/Gruss,
    Boris.

ECO tip #101: Trim your mails when you reply.