From: Mike Galbraith <efault@gmx.de>
To: Peter Zijlstra <peterz@infradead.org>,
Atish Patra <atish.patra@oracle.com>
Cc: linux-kernel@vger.kernel.org, joelaf@google.com,
brendan.jackman@arm.com, jbacik@fb.com, mingo@redhat.com
Subject: Re: [PATCH RFC 1/2] sched: Minimize the idle cpu selection race window.
Date: Tue, 31 Oct 2017 09:48:25 +0100
Message-ID: <1509439705.14765.16.camel@gmx.de>
In-Reply-To: <20171031082009.rxxa57goto6q5xld@hirez.programming.kicks-ass.net>
On Tue, 2017-10-31 at 09:20 +0100, Peter Zijlstra wrote:
> On Tue, Oct 31, 2017 at 12:27:41AM -0500, Atish Patra wrote:
> > Currently, multiple tasks can wake up on the same cpu via the
> > select_idle_sibling() path if they wake up simultaneously and
> > last ran on the same llc. This happens because a cpu's idle
> > state is not updated until its idle task is scheduled out, so
> > any task waking during that window may select that cpu as its
> > wakeup candidate.
> >
> > Introduce a per-cpu variable that is set as soon as a cpu is
> > selected as the wakeup target for any task. This prevents other
> > tasks from selecting the same cpu again. Note: this does not
> > close the race window, but narrows it to the access of the
> > per-cpu variable itself. If two wakee tasks read the per-cpu
> > variable at the same time, they may still select the same cpu,
> > but the window shrinks considerably.
>
> The single most important question: does it actually help? What
> benchmarks give what numbers?
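For reference, the changelog's idea boils down to roughly the below
(illustrative userspace sketch with C11 atomics, not the actual patch;
cpu_claimed[] and cpu_is_idle() are stand-ins for the per-cpu variable
and idle_cpu()):

#include <stdatomic.h>
#include <stdbool.h>

#define NR_CPUS 8

static atomic_bool cpu_claimed[NR_CPUS];

/* Stand-in for idle_cpu(); pretend everything is idle in this toy. */
static bool cpu_is_idle(int cpu) { (void)cpu; return true; }

static int select_idle_cpu_sketch(int target)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		if (!cpu_is_idle(cpu))
			continue;
		if (atomic_load(&cpu_claimed[cpu]))
			continue;	/* another waker just took it */
		/*
		 * Separate load and store: two wakers can still slip
		 * in between the check above and this store, so the
		 * window shrinks but never closes, as the changelog
		 * itself notes.
		 */
		atomic_store(&cpu_claimed[cpu], true);
		return cpu;
	}
	return target;	/* nothing idle and unclaimed, keep target */
}
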
I played with something ~similar (a cmpxchg() idle cpu reservation) a
while back in the context of schbench, and it did help that benchmark,
but for generic fast-mover benchmarks the added overhead had the
expected effect: it shaved throughput a wee bit (rob Peter, pay Paul,
repeat).  I still have the patch lying about in my rubbish heap, but I
didn't bother to save any of the test results.
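FWIW, modulo kernel plumbing, that rubbish-heap patch amounted to
roughly the below (same toy declarations as the sketch above; the
cmpxchg() makes the claim itself atomic, so only one waker can win):

static int reserve_idle_cpu_sketch(int target)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		bool expected = false;

		if (!cpu_is_idle(cpu))
			continue;
		/*
		 * Atomically flip claimed false -> true; the loser of
		 * the race moves on to the next candidate.  The claim
		 * race closes, but every wakeup now pays for an extra
		 * atomic, which is where fast movers lose throughput.
		 */
		if (atomic_compare_exchange_strong(&cpu_claimed[cpu],
						   &expected, true))
			return cpu;
	}
	return target;
}
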
-Mike
Thread overview: 25+ messages
2017-10-31 5:27 [PATCH RFC 0/2] Fix race window during idle cpu selection Atish Patra
2017-10-31 5:27 ` [PATCH RFC 1/2] sched: Minimize the idle cpu selection race window Atish Patra
2017-10-31 8:20 ` Peter Zijlstra
2017-10-31 8:48 ` Mike Galbraith [this message]
2017-11-01 6:08 ` Atish Patra
2017-11-01 6:54 ` Mike Galbraith
2017-11-01 7:18 ` Mike Galbraith
2017-11-01 16:36 ` Atish Patra
2017-11-01 20:20 ` Mike Galbraith
2017-11-05 0:58 ` Joel Fernandes
2017-11-22 5:23 ` Atish Patra
2017-11-23 10:52 ` Uladzislau Rezki
2017-11-23 13:13 ` Mike Galbraith
2017-11-23 16:00 ` Josef Bacik
2017-11-23 17:40 ` Mike Galbraith
2017-11-23 21:11 ` Atish Patra
2017-11-24 10:26 ` Uladzislau Rezki
2017-11-24 18:46 ` Mike Galbraith
2017-11-26 20:58 ` Mike Galbraith
2017-11-28 9:34 ` Uladzislau Rezki
2017-11-28 10:49 ` Mike Galbraith
2017-11-29 10:41 ` Uladzislau Rezki
2017-11-29 18:15 ` Mike Galbraith
2017-11-30 12:30 ` Uladzislau Rezki
2017-10-31 5:27 ` [PATCH DEBUG 2/2] sched: Add a stat for " Atish Patra