From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on
	aws-us-west-2-korg-lkml-1.web.codeaurora.org
Received: from vger.kernel.org (vger.kernel.org [23.128.96.18])
	by smtp.lore.kernel.org (Postfix) with ESMTP id 63CAEC433F5
	for ; Mon, 3 Oct 2022 06:20:17 +0000 (UTC)
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S229569AbiJCGUQ (ORCPT );
	Mon, 3 Oct 2022 02:20:16 -0400
Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48442 "EHLO
	lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK)
	by vger.kernel.org with ESMTP id S229548AbiJCGUP (ORCPT );
	Mon, 3 Oct 2022 02:20:15 -0400
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
	by lindbergh.monkeyblade.net (Postfix) with ESMTPS id F16601581A
	for ; Sun, 2 Oct 2022 23:20:13 -0700 (PDT)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by ams.source.kernel.org (Postfix) with ESMTPS id A73D8B80D89
	for ; Mon, 3 Oct 2022 06:20:12 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 1ABDAC433C1;
	Mon, 3 Oct 2022 06:20:10 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
	d=linuxfoundation.org; s=korg; t=1664778011;
	bh=TnajTBbvJF3TpEZrBDHONBO8JR8/3vW1LhZp6H1+WRw=;
	h=Subject:To:Cc:From:Date:From;
	b=l68SauuzU7FTJMzZk0i6vxW1HMB1hrfGHD3lcm5Oyx2gNLELYM+dVVOuu6xYw24Fo
	 MFC38uw8+QN21z8RF7wHaa8fPXVve6+ez+JAodF3Kk2dZAhD2GnIHjpq+KMUFgqfwJ
	 3pQKYw2P50EpSX+yyFoi5qCgEdKuF5TNzfxpxNhQ=
Subject: FAILED: patch "[PATCH] x86/cacheinfo: Add a cpu_llc_shared_mask() UP
 variant" failed to apply to 5.10-stable tree
To: bp@suse.de, ssengar@linux.microsoft.com, stable@vger.kernel.org
Cc:
From:
Date: Mon, 03 Oct 2022 08:20:33 +0200
Message-ID: <166477803339104@kroah.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=ANSI_X3.4-1968
Content-Transfer-Encoding: 8bit
Precedence: bulk
List-ID:
X-Mailing-List: stable@vger.kernel.org

The patch below does not apply to the 5.10-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to .

Possible dependencies:

df5b035b5683 ("x86/cacheinfo: Add a cpu_llc_shared_mask() UP variant")
66558b730f25 ("sched: Add cluster scheduler level for x86")
9164d9493a79 ("x86/cpu: Add get_llc_id() helper function")
2c88d45edbb8 ("x86, sched: Treat Intel SNC topology as default, COD as exception")

thanks,

greg k-h

------------------ original commit in Linus's tree ------------------

>From df5b035b5683d6a25f077af889fb88e09827f8bc Mon Sep 17 00:00:00 2001
From: Borislav Petkov
Date: Fri, 19 Aug 2022 19:47:44 +0200
Subject: [PATCH] x86/cacheinfo: Add a cpu_llc_shared_mask() UP variant

On a CONFIG_SMP=n kernel, the LLC shared mask is 0, which prevents
__cache_amd_cpumap_setup() from doing the L3 masks setup, and more
specifically from setting up the shared_cpu_map and shared_cpu_list
files in sysfs, leading to lscpu from util-linux getting confused and
segfaulting.

Add a cpu_llc_shared_mask() UP variant which returns a mask with a
single bit set, i.e., for CPU0.
Fixes: 2b83809a5e6d ("x86/cpu/amd: Derive L3 shared_cpu_map from cpu_llc_shared_mask")
Reported-by: Saurabh Sengar
Signed-off-by: Borislav Petkov
Cc:
Link: https://lore.kernel.org/r/1660148115-302-1-git-send-email-ssengar@linux.microsoft.com

diff --git a/arch/x86/include/asm/smp.h b/arch/x86/include/asm/smp.h
index 81a0211a372d..a73bced40e24 100644
--- a/arch/x86/include/asm/smp.h
+++ b/arch/x86/include/asm/smp.h
@@ -21,16 +21,6 @@ DECLARE_PER_CPU_READ_MOSTLY(u16, cpu_llc_id);
 DECLARE_PER_CPU_READ_MOSTLY(u16, cpu_l2c_id);
 DECLARE_PER_CPU_READ_MOSTLY(int, cpu_number);
 
-static inline struct cpumask *cpu_llc_shared_mask(int cpu)
-{
-	return per_cpu(cpu_llc_shared_map, cpu);
-}
-
-static inline struct cpumask *cpu_l2c_shared_mask(int cpu)
-{
-	return per_cpu(cpu_l2c_shared_map, cpu);
-}
-
 DECLARE_EARLY_PER_CPU_READ_MOSTLY(u16, x86_cpu_to_apicid);
 DECLARE_EARLY_PER_CPU_READ_MOSTLY(u32, x86_cpu_to_acpiid);
 DECLARE_EARLY_PER_CPU_READ_MOSTLY(u16, x86_bios_cpu_apicid);
@@ -172,6 +162,16 @@ extern int safe_smp_processor_id(void);
 # define safe_smp_processor_id()	smp_processor_id()
 #endif
 
+static inline struct cpumask *cpu_llc_shared_mask(int cpu)
+{
+	return per_cpu(cpu_llc_shared_map, cpu);
+}
+
+static inline struct cpumask *cpu_l2c_shared_mask(int cpu)
+{
+	return per_cpu(cpu_l2c_shared_map, cpu);
+}
+
 #else /* !CONFIG_SMP */
 #define wbinvd_on_cpu(cpu)     wbinvd()
 static inline int wbinvd_on_all_cpus(void)
@@ -179,6 +179,11 @@ static inline int wbinvd_on_all_cpus(void)
 	wbinvd();
 	return 0;
 }
+
+static inline struct cpumask *cpu_llc_shared_mask(int cpu)
+{
+	return (struct cpumask *)cpumask_of(0);
+}
 #endif /* CONFIG_SMP */
 
 extern unsigned disabled_cpus;