From: Jiri Olsa <jolsa@redhat.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>,
Andi Kleen <andi@firstfloor.org>,
linux-kernel@vger.kernel.org, Andi Kleen <ak@linux.intel.com>,
x86@kernel.org, Ingo Molnar <mingo@kernel.org>,
Frank Ramsay <framsay@redhat.com>,
Prarit Bhargava <prarit@redhat.com>
Subject: Re: [PATCH] x86/smp: Fix __max_logical_packages value setup
Date: Fri, 12 Aug 2016 14:24:57 +0200
Message-ID: <20160812122457.GC8062@krava>
In-Reply-To: <20160811134651.GW30192@twins.programming.kicks-ass.net>

On Thu, Aug 11, 2016 at 03:46:51PM +0200, Peter Zijlstra wrote:
> On Thu, Aug 11, 2016 at 03:05:21PM +0200, Jiri Olsa wrote:
> > hum, so we either need some acpi solution to get the number of all
> > sockets or
>
> This.. So the problem here is that the BIOS completely screws us over.
>
> It wrecks the ACPI-ID table with that option to limit the number of CPUs
> exposed to the OS (note that it didn't need to do that, it could have
> enumerated them as empty, instead of not there at all) while keeping the
> CPUID of the CPUs reporting that they have many (12, was it?) cores.
>
> This results in inconsistent state, and we're left with nothing useful.
>
> > fix the uncore code to initialize pmu boxes on cpu hotplug as well
>
> Can't.. it uses the boxes at STARTING time, and we can't do allocs
> there. Nor can we alloc earlier, because we don't know that max_packages is
> going to increase.
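
For reference, here is the constraint Peter describes as a minimal sketch; the
callback and structure names below are invented for illustration, not the real
uncore code. A STARTING-state hotplug callback runs on the incoming CPU with
interrupts disabled, so it must not call a sleeping allocator, and allocating
earlier would require knowing the final package count up front:

#include <linux/slab.h>

/* Hypothetical per-package PMU state, a stand-in for the real uncore box. */
struct uncore_box {
	int pkg_id;
};

/* Sketch of a STARTING-time callback: runs with IRQs off on the new CPU. */
static int uncore_box_starting(unsigned int cpu)
{
	/*
	 * BUG: GFP_KERNEL may sleep, which is illegal with IRQs off.
	 * The box would have to exist already, and sizing that earlier
	 * allocation needs the final number of logical packages, which
	 * is not known at that point.
	 */
	struct uncore_box *box = kzalloc(sizeof(*box), GFP_KERNEL);

	if (!box)
		return -ENOMEM;
	box->pkg_id = -1;
	return 0;
}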
I still need to test this, but is this something
like what you proposed on IRC?
thanks,
jirka
---
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 2a6e84a30a54..4296beb8fdd3 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -100,10 +100,11 @@ EXPORT_PER_CPU_SYMBOL(cpu_info);
 /* Logical package management. We might want to allocate that dynamically */
 static int *physical_to_logical_pkg __read_mostly;
 static unsigned long *physical_package_map __read_mostly;;
-static unsigned long *logical_package_map __read_mostly;
 static unsigned int max_physical_pkg_id __read_mostly;
 unsigned int __max_logical_packages __read_mostly;
 EXPORT_SYMBOL(__max_logical_packages);
+static unsigned int logical_packages __read_mostly;
+static bool logical_packages_frozen __read_mostly;
 
 /* Maximum number of SMT threads on any online core */
 int __max_smt_threads __read_mostly;
@@ -277,14 +278,14 @@ int topology_update_package_map(unsigned int apicid, unsigned int cpu)
 	if (test_and_set_bit(pkg, physical_package_map))
 		goto found;
 
-	new = find_first_zero_bit(logical_package_map, __max_logical_packages);
-	if (new >= __max_logical_packages) {
+	if (logical_packages_frozen) {
 		physical_to_logical_pkg[pkg] = -1;
-		pr_warn("APIC(%x) Package %u exceeds logical package map\n",
+		pr_warn("APIC(%x) Package %u exceeds logical package max\n",
 			apicid, pkg);
 		return -ENOSPC;
 	}
-	set_bit(new, logical_package_map);
+
+	new = logical_packages++;
 	pr_info("APIC(%x) Converting physical %u to logical package %u\n",
 		apicid, pkg, new);
 	physical_to_logical_pkg[pkg] = new;
@@ -341,6 +342,7 @@ static void __init smp_init_package_map(void)
 	}
 
 	__max_logical_packages = DIV_ROUND_UP(total_cpus, ncpus);
+	logical_packages = 0;
 
 	/*
 	 * Possibly larger than what we need as the number of apic ids per
@@ -352,10 +354,6 @@ static void __init smp_init_package_map(void)
 	memset(physical_to_logical_pkg, 0xff, size);
 	size = BITS_TO_LONGS(max_physical_pkg_id) * sizeof(unsigned long);
 	physical_package_map = kzalloc(size, GFP_KERNEL);
-	size = BITS_TO_LONGS(__max_logical_packages) * sizeof(unsigned long);
-	logical_package_map = kzalloc(size, GFP_KERNEL);
-
-	pr_info("Max logical packages: %u\n", __max_logical_packages);
 
 	for_each_present_cpu(cpu) {
 		unsigned int apicid = apic->cpu_present_to_apicid(cpu);
@@ -369,6 +367,15 @@ static void __init smp_init_package_map(void)
 		set_cpu_possible(cpu, false);
 		set_cpu_present(cpu, false);
 	}
+
+	if (logical_packages > __max_logical_packages) {
+		pr_warn("Detected more packages (%u) than computed by BIOS data (%u).\n",
+			logical_packages, __max_logical_packages);
+		logical_packages_frozen = true;
+		__max_logical_packages = logical_packages;
+	}
+
+	pr_info("Max logical packages: %u\n", __max_logical_packages);
 }
 
 void __init smp_store_boot_cpu_info(void)
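
To illustrate the semantics of the change, here is a standalone sketch
(userspace C, with made-up numbers; the kernel names are only mirrored for
readability). The idea: __max_logical_packages is first estimated from
possibly-wrong BIOS data, logical IDs are handed out from a monotonic counter
during the initial walk over present CPUs, and only afterwards is the limit
raised and frozen if the walk found more packages than the estimate:

#include <stdbool.h>
#include <stdio.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

static unsigned int max_logical_packages;	/* stands in for __max_logical_packages */
static unsigned int logical_packages;		/* monotonically growing counter */
static bool logical_packages_frozen;

/* Mirrors the allocation path in the patched topology_update_package_map(). */
static int assign_logical_pkg(unsigned int phys_pkg)
{
	if (logical_packages_frozen)
		return -1;	/* -ENOSPC: hotplug after boot cannot grow the map */
	return (int)logical_packages++;
}

int main(void)
{
	/*
	 * Made-up example: a 4-socket machine where a BIOS option hides
	 * 8 of the 12 cores per package. CPUID still reports 12 cores,
	 * so the estimate DIV_ROUND_UP(16, 12) == 2 is too small.
	 */
	unsigned int total_cpus = 16, ncpus = 12;
	unsigned int phys_pkg;

	max_logical_packages = DIV_ROUND_UP(total_cpus, ncpus);

	for (phys_pkg = 0; phys_pkg < 4; phys_pkg++)	/* boot-time walk */
		printf("physical %u -> logical %d\n",
		       phys_pkg, assign_logical_pkg(phys_pkg));

	/* End of the present-CPU walk: grow the limit and freeze it. */
	if (logical_packages > max_logical_packages) {
		logical_packages_frozen = true;
		max_logical_packages = logical_packages;
	}

	printf("Max logical packages: %u\n", max_logical_packages);
	return 0;
}

With these numbers the old bitmap-based code would have refused packages 2
and 3 outright; the sketch instead ends with all four packages mapped and the
maximum raised to 4.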