linux-kernel.vger.kernel.org archive mirror
From: Mike Galbraith <umgwanakikbuti@gmail.com>
To: Kirill Tkhai <ktkhai@odin.com>
Cc: linux-kernel@vger.kernel.org,
	Peter Zijlstra <peterz@infradead.org>,
	Ingo Molnar <mingo@redhat.com>
Subject: Re: [PATCH] sched/fair: Skip wake_affine() for core siblings
Date: Tue, 29 Sep 2015 04:03:34 +0200	[thread overview]
Message-ID: <1443492214.3201.34.camel@gmail.com> (raw)
In-Reply-To: <560992AC.7020909@odin.com>

On Mon, 2015-09-28 at 22:19 +0300, Kirill Tkhai wrote:
> >>  Imagine a situation where we share a mutex
> >> with a task on another NUMA node. When that task releases the mutex
> >> it wakes us, but we definitely won't use affine logic in this case.
> > 
> > Why not?  A wakeup is a wakeup is a wakeup, they all do the same thing.
> > If wake_wide() doesn't NAK an affine wakeup, we ask wake_affine() for
> > its opinion, then look for an idle CPU near the waker's CPU if it says
> > OK, or near wakee's previous CPU if it says go away. 
> 
> But the NUMA sd does not have the SD_WAKE_AFFINE flag, so in this case a new
> cpu won't be chosen from the previous node. Instead, the highest domain of
> smp_processor_id() which has the SD_BALANCE_WAKE flag will be chosen, and the
> cpu will be picked from its idlest group/cpu. So we don't deal with the old
> cache at all. This looks like completely wrong behaviour...

SD_WAKE_AFFINE is enabled globally by default, and SD_BALANCE_WAKE is
disabled globally due to cost and whatnot.

wingenfelder:~/:[0]# tune-sched-domains
{cpu0/domain0:SMT} SD flag: 4783
+   1: SD_LOAD_BALANCE:          Do load balancing on this domain
+   2: SD_BALANCE_NEWIDLE:       Balance when about to become idle
+   4: SD_BALANCE_EXEC:          Balance on exec
+   8: SD_BALANCE_FORK:          Balance on fork, clone
-  16: SD_BALANCE_WAKE:          Wake to idle CPU on task wakeup
+  32: SD_WAKE_AFFINE:           Wake task to waking CPU
-  64:                           [unused]
+ 128: SD_SHARE_CPUCAPACITY:     Domain members share cpu power
- 256: SD_SHARE_POWERDOMAIN:     Domain members share power domain
+ 512: SD_SHARE_PKG_RESOURCES:   Domain members share cpu pkg resources
-1024: SD_SERIALIZE:             Only a single load balancing instance
-2048: SD_ASYM_PACKING:          Place busy groups earlier in the domain
+4096: SD_PREFER_SIBLING:        Prefer to place tasks in a sibling domain
-8192: SD_OVERLAP:               sched_domains of this level overlap
-16384: SD_NUMA:                 cross-node balancing
{cpu0/domain1:MC} SD flag: 4655
+   1: SD_LOAD_BALANCE:          Do load balancing on this domain
+   2: SD_BALANCE_NEWIDLE:       Balance when about to become idle
+   4: SD_BALANCE_EXEC:          Balance on exec
+   8: SD_BALANCE_FORK:          Balance on fork, clone
-  16: SD_BALANCE_WAKE:          Wake to idle CPU on task wakeup
+  32: SD_WAKE_AFFINE:           Wake task to waking CPU
-  64:                           [unused]
- 128: SD_SHARE_CPUCAPACITY:     Domain members share cpu power
- 256: SD_SHARE_POWERDOMAIN:     Domain members share power domain
+ 512: SD_SHARE_PKG_RESOURCES:   Domain members share cpu pkg resources
-1024: SD_SERIALIZE:             Only a single load balancing instance
-2048: SD_ASYM_PACKING:          Place busy groups earlier in the domain
+4096: SD_PREFER_SIBLING:        Prefer to place tasks in a sibling domain
-8192: SD_OVERLAP:               sched_domains of this level overlap
-16384: SD_NUMA:                 cross-node balancing
{cpu0/domain2:NUMA} SD flag: 25647
+   1: SD_LOAD_BALANCE:          Do load balancing on this domain
+   2: SD_BALANCE_NEWIDLE:       Balance when about to become idle
+   4: SD_BALANCE_EXEC:          Balance on exec
+   8: SD_BALANCE_FORK:          Balance on fork, clone
-  16: SD_BALANCE_WAKE:          Wake to idle CPU on task wakeup
+  32: SD_WAKE_AFFINE:           Wake task to waking CPU
-  64:                           [unused]
- 128: SD_SHARE_CPUCAPACITY:     Domain members share cpu power
- 256: SD_SHARE_POWERDOMAIN:     Domain members share power domain
- 512: SD_SHARE_PKG_RESOURCES:   Domain members share cpu pkg resources
+1024: SD_SERIALIZE:             Only a single load balancing instance
-2048: SD_ASYM_PACKING:          Place busy groups earlier in the domain
-4096: SD_PREFER_SIBLING:        Prefer to place tasks in a sibling domain
+8192: SD_OVERLAP:               sched_domains of this level overlap
+16384: SD_NUMA:                 cross-node balancing

	-Mike



Thread overview: 14+ messages
2015-09-25 17:54 [PATCH] sched/fair: Skip wake_affine() for core siblings Kirill Tkhai
2015-09-26 15:25 ` Mike Galbraith
2015-09-28 10:28   ` Kirill Tkhai
2015-09-28 13:12     ` Mike Galbraith
2015-09-28 15:36       ` Kirill Tkhai
2015-09-28 15:49         ` Kirill Tkhai
2015-09-28 18:22         ` Mike Galbraith
2015-09-28 19:19           ` Kirill Tkhai
2015-09-29  2:03             ` Mike Galbraith [this message]
2015-09-29 14:55         ` Mike Galbraith
2015-09-29 16:00           ` Kirill Tkhai
2015-09-29 16:03             ` Kirill Tkhai
2015-09-29 17:29             ` Mike Galbraith
2015-09-30 19:16               ` Kirill Tkhai
