From: Peter Zijlstra <a.p.zijlstra@chello.nl>
To: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: mingo@elte.hu, tglx@linutronix.de, hpa@zytor.com,
trenn@novell.com, prarit@redhat.com, tj@kernel.org,
rusty@rustcorp.com.au, akpm@linux-foundation.org,
torvalds@linux-foundation.org, linux-kernel@vger.kernel.org,
youquan.song@intel.com
Subject: Re: [patch 3/4] stop_machine: implement stop_machine_from_offline_cpu()
Date: Thu, 23 Jun 2011 11:25:19 +0200 [thread overview]
Message-ID: <1308821119.1022.84.camel@twins> (raw)
In-Reply-To: <20110622222044.038298780@sbsiddha-MOBL3.sc.intel.com>
On Wed, 2011-06-22 at 15:20 -0700, Suresh Siddha wrote:
> +int stop_machine_from_offline_cpu(int (*fn)(void *), void *data,
> + const struct cpumask *cpus)
> +{
> + struct stop_machine_data smdata = { .fn = fn, .data = data,
> + .active_cpus = cpus };
> + struct cpu_stop_done done;
> + int ret;
> +
> + /* Local CPU must be offline and CPU hotplug in progress. */
> + BUG_ON(cpu_online(raw_smp_processor_id()));
> + smdata.num_threads = num_online_cpus() + 1; /* +1 for local */
> +
> + /* No proper task established and can't sleep - busy wait for lock. */
> + while (!mutex_trylock(&stop_cpus_mutex))
> + cpu_relax();
> +
> + /* Schedule work on other CPUs and execute directly for local CPU */
> + set_state(&smdata, STOPMACHINE_PREPARE);
> + cpu_stop_init_done(&done, num_online_cpus());
> + queue_stop_cpus_work(cpu_online_mask, stop_machine_cpu_stop, &smdata,
> + &done);
> + ret = stop_machine_cpu_stop(&smdata);
> +
> + /* Busy wait for completion. */
> + while (!completion_done(&done.completion))
> + cpu_relax();
> +
> + mutex_unlock(&stop_cpus_mutex);
> + return ret ?: done.ret;
> +}
Damn that's ugly, I sure hope you're going to make those hardware folks
pay for this :-)
In commit d0af9eed5aa91b6b7b5049cae69e5ea956fd85c3 you mention that it's
specific to HT; wouldn't it make sense to limit the stop-machine use in
the next patch to the sibling mask instead of the whole machine?
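For concreteness, the suggestion amounts to narrowing the mask handed to the stop-machine path. This is kernel-API pseudocode, not compiled and not from the posted series; `cpu_sibling_mask()` is assumed from the x86 topology helpers of that era:

```c
/*
 * Sketch only: queue the stop work on the HT siblings of the CPU
 * coming online, rather than on all of cpu_online_mask as the
 * posted stop_machine_from_offline_cpu() does.
 */
queue_stop_cpus_work(cpu_sibling_mask(cpu), stop_machine_cpu_stop,
		     &smdata, &done);
```

The trade-off is that only the siblings (which share the MTRR-affected resources under HT) spin in the rendezvous, instead of stopping every CPU in the machine.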
Thread overview: 15+ messages
2011-06-22 22:20 [patch 0/4] MTRR rendezvous deadlock fix and cleanups using stop_machine() Suresh Siddha
2011-06-22 22:20 ` [patch 1/4] x86, mtrr: lock stop machine during MTRR rendezvous sequence Suresh Siddha
2011-06-23 9:05 ` Peter Zijlstra
2011-06-23 9:33 ` Thomas Gleixner
2011-06-23 9:41 ` Peter Zijlstra
2011-06-23 18:16 ` Suresh Siddha
2011-06-22 22:20 ` [patch 2/4] stop_machine: reorganize stop_cpus() implementation Suresh Siddha
2011-06-22 22:20 ` [patch 3/4] stop_machine: implement stop_machine_from_offline_cpu() Suresh Siddha
2011-06-23 9:25 ` Peter Zijlstra [this message]
2011-06-23 9:28 ` Tejun Heo
2011-06-23 9:31 ` Peter Zijlstra
2011-06-23 18:19 ` Suresh Siddha
2011-06-24 7:45 ` Peter Zijlstra
2011-06-24 17:55 ` Suresh Siddha
2011-06-22 22:20 ` [patch 4/4] x86, mtrr: use stop_machine() for doing MTRR rendezvous Suresh Siddha