From: Chao Gao <chao.gao@intel.com>
To: xen-devel@lists.xen.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
Jan Beulich <jbeulich@suse.com>, Chao Gao <chao.gao@intel.com>
Subject: [Patch v3 1/2] x86/smp: count the number of online physical processors in the system
Date: Wed, 9 May 2018 06:01:32 +0800
Message-ID: <1525816893-36669-1-git-send-email-chao.gao@intel.com>

This is mainly for the following patch, which relies on 'nr_phys_cpus' to
estimate the worst-case time needed for a microcode update.
Signed-off-by: Chao Gao <chao.gao@intel.com>
---
v3:
- new
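
As context for the commit message, here is a rough sketch of the kind of
consumer patch 2/2 has in mind; the constant and helper below are made up
for illustration and are not taken from that patch:

    /*
     * Illustration only: scale a per-core worst-case budget by the
     * number of online physical processors, on the theory that in the
     * worst case every core performs the update serially.
     */
    #define UCODE_PER_CORE_BUDGET_US 1000 /* made-up per-core figure */

    static unsigned int ucode_worst_case_budget_us(void)
    {
        return nr_phys_cpus * UCODE_PER_CORE_BUDGET_US;
    }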
---
xen/arch/x86/smpboot.c | 13 +++++++++++++
xen/include/asm-x86/smp.h | 3 +++
2 files changed, 16 insertions(+)
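
A note on the counting rule used in both hunks below (a toy model, not
Xen code): per_cpu(cpu_sibling_mask, cpu) already contains the incoming
cpu on the up path and still contains the outgoing cpu on the down path,
so a weight of 1 identifies the first thread of a core to come up and
the last one to go down:

    /* Toy model: count cores that have at least one online thread. */
    static unsigned int model_nr_phys_cpus;

    static void model_thread_up(unsigned int sibling_weight_after_set)
    {
        if ( sibling_weight_after_set == 1 )    /* first thread of core */
            model_nr_phys_cpus++;
    }

    static void model_thread_down(unsigned int sibling_weight_before_clear)
    {
        if ( sibling_weight_before_clear == 1 ) /* last thread of core */
            model_nr_phys_cpus--;
    }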
diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 86fa410..c3c3558 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -67,6 +67,8 @@ unsigned int __read_mostly nr_sockets;
 cpumask_t **__read_mostly socket_cpumask;
 static cpumask_t *secondary_socket_cpumask;
 
+unsigned int __read_mostly nr_phys_cpus;
+
 struct cpuinfo_x86 cpu_data[NR_CPUS];
 
 u32 x86_cpu_to_apicid[NR_CPUS] __read_mostly =
@@ -262,6 +264,10 @@ static void set_cpu_sibling_map(int cpu)
         cpumask_set_cpu(cpu, per_cpu(cpu_sibling_mask, cpu));
     }
 
+    /* Increase physical processor count when a new cpu comes up */
+    if ( cpumask_weight(per_cpu(cpu_sibling_mask, cpu)) == 1 )
+        nr_phys_cpus++;
+
     if ( c[cpu].x86_max_cores == 1 )
     {
         cpumask_copy(per_cpu(cpu_core_mask, cpu),
@@ -1156,6 +1162,13 @@ remove_siblinginfo(int cpu)
             cpu_data[sibling].booted_cores--;
     }
 
+    /*
+     * Decrease physical processor count when all threads of a physical
+     * processor go down
+     */
+    if ( cpumask_weight(per_cpu(cpu_sibling_mask, cpu)) == 1 )
+        nr_phys_cpus--;
+
     for_each_cpu(sibling, per_cpu(cpu_sibling_mask, cpu))
         cpumask_clear_cpu(cpu, per_cpu(cpu_sibling_mask, sibling));
     cpumask_clear(per_cpu(cpu_sibling_mask, cpu));
diff --git a/xen/include/asm-x86/smp.h b/xen/include/asm-x86/smp.h
index 4e5f673..910888a 100644
--- a/xen/include/asm-x86/smp.h
+++ b/xen/include/asm-x86/smp.h
@@ -65,6 +65,9 @@ uint32_t get_cur_idle_nums(void);
  */
 extern unsigned int nr_sockets;
 
+/* The number of online physical CPUs in this system */
+extern unsigned int nr_phys_cpus;
+
 void set_nr_sockets(void);
 
 /* Representing HT and core siblings in each socket. */
--
1.8.3.1