public inbox for linux-kernel@vger.kernel.org
From: Bharata B Rao <bharata@linux.vnet.ibm.com>
To: Ken Chen <kenchen@google.com>
Cc: linux-kernel@vger.kernel.org, Ingo Molnar <mingo@elte.hu>,
	Peter Zijlstra <a.p.zijlstra@chello.nl>,
	Dhaval Giani <dhaval@linux.vnet.ibm.com>,
	Srivatsa Vaddagiri <vatsa@in.ibm.com>,
	Balbir Singh <balbir@linux.vnet.ibm.com>
Subject: Re: CFS group scheduler fairness broken starting from 2.6.29-rc1
Date: Fri, 24 Jul 2009 10:00:01 +0530	[thread overview]
Message-ID: <20090724043001.GC5304@in.ibm.com> (raw)
In-Reply-To: <b040c32a0907231517l265a9528w628d48fa3625e261@mail.gmail.com>

On Thu, Jul 23, 2009 at 03:17:18PM -0700, Ken Chen wrote:
> On Thu, Jul 23, 2009 at 12:57 AM, Bharata B
> Rao<bharata@linux.vnet.ibm.com> wrote:
> > Hi,
> >
> > Group scheduler fairness is broken since 2.6.29-rc1. git bisect led me
> > to this commit:
> >
> > commit ec4e0e2fe018992d980910db901637c814575914
> > Author: Ken Chen <kenchen@google.com>
> > Date:   Tue Nov 18 22:41:57 2008 -0800
> >
> >    sched: fix inconsistency when redistribute per-cpu tg->cfs_rq shares
> >
> >    Impact: make load-balancing more consistent
> > ....
> >
> > ======================================================================
> >                        % CPU time division b/n groups
> > Group           2.6.29-rc1              2.6.29-rc1 w/o the above patch
> > ======================================================================
> > a with 8 tasks  44                      31
> > b with 5 tasks  32                      34
> > c with 3 tasks  22                      34
> > ======================================================================
> > All groups had equal shares.
> 
> What value did you use for each task_group's share?  For very large
> value of tg->shares, it could be that all of the boost went to one CPU
> and subsequently causes load-balancer to shuffle tasks around.  Do you
> see any unexpected task migration?

Used default 1024 for each group.
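For reference, with equal shares of 1024 the group scheduler should converge on an equal slice per group, regardless of task count; without grouping, CPU would be divided per task. A quick sketch of those two ideal splits for the a/b/c test above (not part of the original test, just the arithmetic):

```python
# Groups a, b, c run 8, 5 and 3 CPU hogs respectively.
tasks = {"a": 8, "b": 5, "c": 3}

# Ideal group fairness: equal tg->shares means an equal slice per group.
group_fair = {g: 100.0 / len(tasks) for g in tasks}

# No grouping: CPU time divided evenly over all 16 tasks.
total = sum(tasks.values())
per_task = {g: 100.0 * n / total for g, n in tasks.items()}

for g in tasks:
    print(f"group {g}: group-fair {group_fair[g]:.1f}%, per-task {per_task[g]:.1f}%")
```

The measured 44/32/22 split on 2.6.29-rc1 falls between the two, leaning toward the per-task division (50.0/31.2/18.8), while the numbers without the patch sit near the ideal 33.3% each.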

Without your patch, each task sees around 165 migrations during a
60s run; with your patch, around 125 (as per se.nr_migrations). I am
using an 8-CPU machine here.
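The counter comes from /proc/<pid>/sched (available with CONFIG_SCHED_DEBUG). A minimal sketch of pulling it out of that file's text; the sample line is illustrative, matching the "name : value" layout of that file:

```python
def nr_migrations(sched_text):
    """Extract the se.nr_migrations counter from /proc/<pid>/sched text."""
    for line in sched_text.splitlines():
        if line.strip().startswith("se.nr_migrations"):
            # Lines look like: "se.nr_migrations    :    165"
            return int(line.split(":")[1].strip())
    return None

# Against a live task on Linux:
#   with open(f"/proc/{pid}/sched") as f:
#       print(nr_migrations(f.read()))
sample = "se.exec_start      :  1234.56\nse.nr_migrations   :  165\n"
print(nr_migrations(sample))  # -> 165
```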

Regards,
Bharata.


Thread overview: 7+ messages
2009-07-23  7:57 CFS group scheduler fairness broken starting from 2.6.29-rc1 Bharata B Rao
2009-07-23 22:17 ` Ken Chen
2009-07-24  4:30   ` Bharata B Rao [this message]
2009-07-27 12:09 ` Peter Zijlstra
2009-07-28  4:14   ` Bharata B Rao
2009-07-28  7:28     ` Peter Zijlstra
2009-08-02 13:12   ` [tip:sched/core] sched: Fix cgroup smp fairness tip-bot for Peter Zijlstra
