From: Tejun Heo <tj@kernel.org>
To: Peter Zijlstra <peterz@infradead.org>
Cc: mingo@elte.hu, linux-kernel@vger.kernel.org, x86@kernel.org,
oleg@redhat.com, rusty@rustcorp.com.au, sivanich@sgi.com,
heiko.carstens@de.ibm.com, dipankar@in.ibm.com,
josh@freedesktop.org, paulmck@linux.vnet.ibm.com,
akpm@linux-foundation.org, arjan@linux.intel.com,
torvalds@linux-foundation.org
Subject: Re: [PATCH 3/4] scheduler: replace migration_thread with cpu_stop
Date: Tue, 04 May 2010 09:17:15 +0200
Message-ID: <4BDFC9FB.4020607@kernel.org>
In-Reply-To: <1272893192.5605.122.camel@twins>
Hello,
On 05/03/2010 03:26 PM, Peter Zijlstra wrote:
> On Thu, 2010-04-22 at 18:09 +0200, Tejun Heo wrote:
>> @@ -2909,7 +2912,9 @@ redo:
>> }
>> raw_spin_unlock_irqrestore(&busiest->lock, flags);
>> if (active_balance)
>> - wake_up_process(busiest->migration_thread);
>> + stop_one_cpu_nowait(cpu_of(busiest),
>> + active_load_balance_cpu_stop, busiest,
>> + &busiest->active_balance_work);
>
> So who guarantees busiest->active_balance_work isn't already enqueued by
> some other cpu's load-balancer run?
>
Hmmm... maybe I'm mistaken, but isn't that guaranteed by
busiest->active_balance, which is protected by the rq lock?
active_load_balance_cpu_stop() is scheduled only when
busiest->active_balance transitions from zero to one, and only
active_load_balance_cpu_stop() clears it, at the end of its execution,
at which point active_balance_work is safe to reuse.
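
IOW, the lifecycle is roughly the following (a simplified sketch of how
I read the patch, with the guts of active_load_balance_cpu_stop()
elided and its exact unlock sequence written from memory):

	/* load_balance() caller, with busiest->lock held */
	if (!busiest->active_balance) {
		busiest->active_balance = 1;	/* claim the work item */
		active_balance = 1;
	}
	raw_spin_unlock_irqrestore(&busiest->lock, flags);

	if (active_balance)
		stop_one_cpu_nowait(cpu_of(busiest),
				    active_load_balance_cpu_stop, busiest,
				    &busiest->active_balance_work);

	/* active_load_balance_cpu_stop(), on busiest's cpu_stop thread */
	raw_spin_lock_irq(&busiest_rq->lock);
	/* ... try to push one task over to the target cpu ... */
	busiest_rq->active_balance = 0;	/* work item may be reused now */
	raw_spin_unlock_irq(&busiest_rq->lock);

Between the claim and the clear, any other cpu's load-balancer run sees
active_balance already set and skips the stop_one_cpu_nowait() call, so
active_balance_work can't be enqueued twice.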
Thanks.
--
tejun