From: Cyrill Gorcunov <gorcunov@gmail.com>
To: Ingo Molnar <mingo@elte.hu>
Cc: Suresh Siddha <suresh.b.siddha@intel.com>,
Yinghai Lu <yhlu.kernel@gmail.com>,
Thomas Gleixner <tglx@linutronix.de>,
"H. Peter Anvin" <hpa@zytor.com>,
lkml <linux-kernel@vger.kernel.org>
Subject: Re: [RFC 1/2 -tip/master] x86, x2apic: minimize IPI register writes using cluster groups
Date: Mon, 14 Feb 2011 18:10:00 +0300 [thread overview]
Message-ID: <4D5945C8.4080108@gmail.com> (raw)
In-Reply-To: <20110214114515.GA9867@elte.hu>
On 02/14/2011 02:45 PM, Ingo Molnar wrote:
>
> * Cyrill Gorcunov<gorcunov@gmail.com> wrote:
>
>> In the case of x2apic cluster mode we can group IPI register writes based on the
>> cluster group instead of individual per-cpu destination messages. This reduces the
>> number of APIC register writes and the number of IPI messages (in the best case
>> by a factor of 16).
>>
>> With this change, a microbenchmark measuring the cost of flush_tlb_others(), with
>> the flush tlb IPI being sent from a cpu in socket-1 to all the logical cpus in
>> socket-2 (on a Westmere-EX system that has 20 logical cpus per socket), shows a 3x
>> improvement over the former 'send one-by-one' algorithm.
>
> Pretty nice!
>
> I have a few structural and nitpicking comments:
Thanks a lot for the review, Ingo! I'll address all the nits this week.
...
>
>> +void x2apic_init_cpu_notifier(void)
>> +{
>> + int cpu = smp_processor_id();
>>
>> + zalloc_cpumask_var(&per_cpu(cpus_in_cluster, cpu), GFP_KERNEL);
>> + zalloc_cpumask_var(&per_cpu(ipi_mask, cpu), GFP_KERNEL);
>> + BUG_ON(!per_cpu(cpus_in_cluster, cpu) || !per_cpu(ipi_mask, cpu));
>
> Such a BUG_ON() is not particularly user friendly - and this could trigger during
> CPU hotplug events, i.e. while the system is fully booted up, right?
>
> Thanks,
>
> Ingo
Yup, it's not that friendly, but it's only called during system bootup;
hotplug events are handled by:
+static int __cpuinit
+cluster_setup(struct notifier_block *nfb, unsigned long action, void *hcpu)
+{
+ unsigned int cpu = (unsigned long)hcpu;
+ int err = 0;
+
+ switch (action) {
+ case CPU_UP_PREPARE:
+ zalloc_cpumask_var(&per_cpu(cpus_in_cluster, cpu), GFP_KERNEL);
+ zalloc_cpumask_var(&per_cpu(ipi_mask, cpu), GFP_KERNEL);
+ if (!per_cpu(cpus_in_cluster, cpu) || !per_cpu(ipi_mask, cpu)) {
+ free_cpumask_var(per_cpu(cpus_in_cluster, cpu));
+ free_cpumask_var(per_cpu(ipi_mask, cpu));
+ err = -ENOMEM;
+ }
+ break;
so it returns -ENOMEM on failure. And btw, I just noticed that we forgot to put
x2apic_init_cpu_notifier into the __init section.
Or did I miss something?
--
Cyrill
Thread overview: 6 messages
2011-02-03 21:03 [RFC 1/2 -tip/master] x86, x2apic: minimize IPI register writes using cluster groups Cyrill Gorcunov
2011-02-14 11:45 ` Ingo Molnar
2011-02-14 15:10 ` Cyrill Gorcunov [this message]
2011-02-15 3:22 ` Ingo Molnar
2011-02-15 8:39 ` Cyrill Gorcunov
2011-02-16 9:23 ` Ingo Molnar