From: Nick Piggin <nickpiggin@yahoo.com.au>
To: Rick Lindsley <ricklind@us.ibm.com>
Cc: Ingo Molnar <mingo@redhat.com>, John Hawkes <hawkes@sgi.com>,
linux-kernel <linux-kernel@vger.kernel.org>
Subject: Re: Scheduler balancing statistics
Date: Fri, 02 Apr 2004 17:52:43 +1000 [thread overview]
Message-ID: <406D1BCB.3090304@yahoo.com.au> (raw)
In-Reply-To: <200404020735.i327Zk604510@owlet.beaverton.ibm.com>
Rick Lindsley wrote:
> The important thing, as always, is that collecting the stats not impact
> the action being taken. If you stick with incrementing counters and
> not taking additional locks, then you've probably done what you can to
> minimize any impact.
>
Yes, they're all simple increments without the need for any
locking.
> From an analysis standpoint it would be nice to know which of the major
> features are being activated for a particular load. So imbalance-driven
> moves, power-driven moves, and the number of times each domain tried
> to balance and failed would all be useful. I think your output covered
> those.
>
It doesn't get into the finer points of how the imbalance
is derived, but maybe it should...
> Another useful stat might be how many times the self-adjusting fields
> (min, max) adjusted themselves. That might yield some insights on
> whether that's working well (or correctly).
>
Might be a good idea.
> When I started thinking about these stats, I started thinking about how to
> identify the domains. "domain0" and "domain1" do uniquely identify some
> data structures, but especially as they get hierarchical, can we easily
> tie them to the cpus they manage? Perhaps the stats should include a
> bitmap of what cpus are covered by the domain too.
>
Well, every domain reported here covers the entire system,
because the output simply sums the statistics from all
domains.
It is a good overview, but it probably would be a good idea to
be able to break the view down and zoom in on a single domain.
> Looks very useful for those times when some workload causes the scheduler
> to burp -- between scheduler stats and domain stats we may find it much
> easier to track down issues.
>
> Would you say these would be in addition to the schedstats or would
> these replace them?
It will replace some of them, I think.
For example, all load_balance operations are done within the
context of a sched domain, so those would use the sched domain's
statistics. Other statistics, however, are specific to the
runqueue, and those would stay where they are.
Thanks
Nick
Thread overview: 5+ messages
2004-04-02 7:18 Scheduler balancing statistics Nick Piggin
2004-04-02 7:35 ` Rick Lindsley
2004-04-02 7:52 ` Nick Piggin [this message]
2004-04-02 8:53 ` Rick Lindsley
2004-04-02 9:59 ` Nick Piggin