From: Mel Gorman <mgorman@techsingularity.net>
To: Aubrey Li <aubrey.li@intel.com>
Cc: mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com,
vincent.guittot@linaro.org, dietmar.eggemann@arm.com,
rostedt@goodmis.org, bsegall@google.com, bristot@redhat.com,
linux-kernel@vger.kernel.org, Andi Kleen <ak@linux.intel.com>,
Tim Chen <tim.c.chen@linux.intel.com>,
Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>,
"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
Aubrey Li <aubrey.li@linux.intel.com>
Subject: Re: [RFC PATCH v1] sched/fair: limit load balance redo times at the same sched_domain level
Date: Mon, 25 Jan 2021 09:06:29 +0000 [thread overview]
Message-ID: <20210125090628.GX3592@techsingularity.net> (raw)
In-Reply-To: <1611554578-6464-1-git-send-email-aubrey.li@intel.com>
On Mon, Jan 25, 2021 at 02:02:58PM +0800, Aubrey Li wrote:
> A long-tail load balance cost is observed on the newly idle path.
> It is caused by a race window between the first nr_running check
> of the busiest runqueue and the nr_running recheck in detach_tasks().
>
> Before the busiest runqueue is locked, the tasks on it can be
> pulled by other CPUs, so its nr_running drops to 1. detach_tasks()
> then bails out with the LBF_ALL_PINNED flag set, which triggers a
> load_balance() redo at the same sched_domain level.
>
> To find the new busiest sched_group and CPU, load_balance()
> recomputes and updates the various load statistics, and that
> recomputation is what produces the long-tail load balance cost.
>
> This patch introduces a variable (sched_nr_lb_redo) to limit the
> number of load balance redos. Combined with sysctl_sched_nr_migrate,
> the maximum load balance cost is reduced from 100+ us to 70+ us,
> measured on a 4-socket x86 system with 192 logical CPUs.
>
> Cc: Andi Kleen <ak@linux.intel.com>
> Cc: Tim Chen <tim.c.chen@linux.intel.com>
> Cc: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
> Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> Signed-off-by: Aubrey Li <aubrey.li@linux.intel.com>
If redo_max is a constant, why is it not a #define instead of increasing
the size of lb_env?
--
Mel Gorman
SUSE Labs
Thread overview: 11+ messages
2021-01-25 6:02 [RFC PATCH v1] sched/fair: limit load balance redo times at the same sched_domain level Aubrey Li
2021-01-25 9:06 ` Mel Gorman [this message]
2021-01-25 13:53 ` Li, Aubrey
2021-01-25 14:40 ` Mel Gorman
2021-01-25 10:56 ` Vincent Guittot
2021-01-25 14:00 ` Li, Aubrey
2021-01-25 14:51 ` Vincent Guittot
2021-01-26 1:40 ` Li, Aubrey
2021-02-23 5:41 ` Li, Aubrey
2021-02-23 17:33 ` Vincent Guittot
2021-02-24 2:55 ` Li, Aubrey