From: Peter Zijlstra <a.p.zijlstra@chello.nl>
To: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Cc: LKML <linux-kernel@vger.kernel.org>,
cgroups <cgroups@vger.kernel.org>, Ingo Molnar <mingo@elte.hu>
Subject: Re: [PATCH 1/3] sched: fix cgroup movement of newly created process
Date: Tue, 13 Dec 2011 13:41:09 +0100
Message-ID: <1323780069.9082.15.camel@twins>
In-Reply-To: <20111213155758.30d2787e.nishimura@mxp.nes.nec.co.jp>
On Tue, 2011-12-13 at 15:57 +0900, Daisuke Nishimura wrote:
> kernel/sched_fair.c | 4 ++--
> 1 files changed, 2 insertions(+), 2 deletions(-)
you blink, you lose; that file doesn't exist anymore.
> diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
> index 5c9e679..df145a9 100644
> --- a/kernel/sched_fair.c
> +++ b/kernel/sched_fair.c
> @@ -4922,10 +4922,10 @@ static void task_move_group_fair(struct task_struct *p, int on_rq)
> * to another cgroup's rq. This does somewhat interfere with the
> * fair sleeper stuff for the first placement, but who cares.
> */
> - if (!on_rq)
> + if (!on_rq && p->state != TASK_RUNNING)
> p->se.vruntime -= cfs_rq_of(&p->se)->min_vruntime;
> set_task_rq(p, task_cpu(p));
> - if (!on_rq)
> + if (!on_rq && p->state != TASK_RUNNING)
> p->se.vruntime += cfs_rq_of(&p->se)->min_vruntime;
> }
> #endif
The much saner way of writing that is something like:
	/*
	 * Comment explaining stuff..
	 */
	if (!on_rq && p->state == TASK_RUNNING)
		on_rq = 1;

	...
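[Editor's note: as a standalone illustration of the suggested shape, here is a sketch with mock types and made-up numbers; `move_group()` only stands in for the kernel's `task_move_group_fair()` and none of these structures match the real ones.]

```c
#include <assert.h>

/* Mock stand-ins for the kernel's task_struct / cfs_rq; names and
 * values are illustrative only. */
#define TASK_RUNNING 0
#define TASK_OTHER   1	/* any non-running state, for the demo */

struct cfs_rq { long long min_vruntime; };
struct task  { int state; long long vruntime; struct cfs_rq *cfs; };

/* The suggested shape: a task that is !on_rq but still TASK_RUNNING
 * (e.g. newly forked) already carries a vruntime relative to its
 * cfs_rq's min_vruntime, so fold that case into on_rq once and skip
 * the renormalization, instead of repeating the compound test on
 * both adjustments as the quoted hunk does. */
static void move_group(struct task *p, int on_rq,
		       struct cfs_rq *src, struct cfs_rq *dst)
{
	if (!on_rq && p->state == TASK_RUNNING)
		on_rq = 1;

	if (!on_rq)
		p->vruntime -= src->min_vruntime;
	p->cfs = dst;			/* stands in for set_task_rq() */
	if (!on_rq)
		p->vruntime += dst->min_vruntime;
}
```

Behaviorally this matches the quoted patch (the min_vruntime adjustment is skipped for a not-yet-queued TASK_RUNNING task); the point is that the condition lives in one place.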
Thread overview: 22+ messages
2011-12-13 6:57 [PATCH 0/3] sched: some fixes for vruntime calculation related to cgroup movement Daisuke Nishimura
2011-12-13 6:57 ` [PATCH 1/3] sched: fix cgroup movement of newly created process Daisuke Nishimura
2011-12-13 12:01 ` Paul Turner
2011-12-14 1:04 ` Daisuke Nishimura
2011-12-13 12:41 ` Peter Zijlstra [this message]
2011-12-14 1:05 ` Daisuke Nishimura
2011-12-13 6:58 ` [PATCH 2/3] sched: fix cgroup movement of forking process Daisuke Nishimura
2011-12-13 12:22 ` Paul Turner
2011-12-13 6:59 ` [PATCH 3/3] sched: fix cgroup movement of waking process Daisuke Nishimura
2011-12-13 12:24 ` Paul Turner
2011-12-15 5:35 ` [PATCH -tip 0/3] sched: some fixes for vruntime calculation related to cgroup movement(v2) Daisuke Nishimura
2011-12-15 5:36 ` [PATCH 1/3] sched: fix cgroup movement of newly created process Daisuke Nishimura
2011-12-21 11:45 ` [tip:sched/core] sched: Fix " tip-bot for Daisuke Nishimura
2011-12-15 5:36 ` [PATCH 2/3] sched: fix cgroup movement of forking process Daisuke Nishimura
2011-12-21 11:44 ` [tip:sched/core] sched: Fix " tip-bot for Daisuke Nishimura
2011-12-21 17:26 ` Tejun Heo
2011-12-21 17:37 ` Tejun Heo
2011-12-22 1:54 ` Frederic Weisbecker
2011-12-22 2:01 ` Tejun Heo
2011-12-15 5:37 ` [PATCH 3/3] sched: fix cgroup movement of waking process Daisuke Nishimura
2011-12-21 11:45 ` [tip:sched/core] sched: Fix " tip-bot for Daisuke Nishimura
2011-12-19 2:55 ` [PATCH -tip 0/3] sched: some fixes for vruntime calculation related to cgroup movement(v2) Daisuke Nishimura