From: Peter Zijlstra <peterz@infradead.org>
To: Byungchul Park <byungchul.park@lge.com>
Cc: mingo@kernel.org, linux-kernel@vger.kernel.org,
yuyang.du@intel.com, pjt@google.com, efault@gmx.de,
tglx@linutronix.de
Subject: Re: [PATCH v4 3/3] sched: optimize migration by forcing rmb() and updating to be called once
Date: Wed, 18 Nov 2015 00:55:10 +0100 [thread overview]
Message-ID: <20151117235510.GC3816@twins.programming.kicks-ass.net> (raw)
In-Reply-To: <20151117233700.GD18234@byungchulpark-X58A-UD3R>
On Wed, Nov 18, 2015 at 08:37:00AM +0900, Byungchul Park wrote:
> Which one do you think should be fixed? The one above migrate_task_rq_fair()?
> I wonder if it would be ok even if it does not hold pi_lock in
> migrate_task_rq_fair(). If you say *no problem*, I will try to fix the
> comment.
The one above migrate_task_rq_fair() is obviously broken, as
demonstrated by the move_queued_task() case.
Also, pretty much none of the runnable-task migration code takes
pi_lock; see also {pull,push}_{rt,dl}_task().
Note that this is done very much by design, task_rq_lock() is the thing
that fully serializes a task's scheduler state. Runnable tasks use
rq->lock, waking tasks use pi_lock.
> > I meant, if you call __set_task_cpu() before
> > sched_class::migrate_task_rq(), in that case task_rq_lock() will no
> > longer fully serialize against set_task_cpu().
> >
> > Because once you've called __set_task_cpu(), task_rq_lock() will acquire
> > the _other_ rq->lock. And we cannot rely on our rq->lock to serialize
> > things.
>
> I agree with you if migrate_task_rq() can be serialized by rq->lock
> without holding pi_lock. (even though I am still wondering..)
move_queued_task() illustrates this.
> But I thought it was no problem if migrate_task_rq() was serialized only
> by pi_lock as the comment above the migrate_task_rq() describes, because
> breaking rq->lock does not affect the serialization by pi_lock.
Right, but per the above, we cannot assume pi_lock is in fact held over
this.
Thread overview: 16+ messages
2015-10-23 16:16 [PATCH v4 0/3] sched: account fair load avg consistently byungchul.park
2015-10-23 16:16 ` [PATCH v4 1/3] sched/fair: make it possible to " byungchul.park
2015-12-04 11:54 ` [tip:sched/core] sched/fair: Make " tip-bot for Byungchul Park
2015-10-23 16:16 ` [PATCH v4 2/3] sched/fair: split the remove_entity_load_avg() into two functions byungchul.park
2015-10-23 16:16 ` [PATCH v4 3/3] sched: optimize migration by forcing rmb() and updating to be called once byungchul.park
2015-11-09 13:29 ` Peter Zijlstra
2015-11-10 1:09 ` Byungchul Park
2015-11-10 12:16 ` Peter Zijlstra
2015-11-10 23:51 ` Byungchul Park
2015-11-11 10:15 ` Byungchul Park
2015-11-16 12:53 ` Peter Zijlstra
2015-11-17 0:44 ` Byungchul Park
2015-11-17 11:21 ` Peter Zijlstra
2015-11-17 23:37 ` Byungchul Park
2015-11-17 23:55 ` Peter Zijlstra [this message]
2015-11-18 0:02 ` Byungchul Park