From: Peter Zijlstra <peterz@infradead.org>
To: bsegall@google.com
Cc: Dave Chiluk <chiluk+linux@indeed.com>,
Phil Auld <pauld@redhat.com>, Peter Oskolkov <posk@posk.io>,
Ingo Molnar <mingo@redhat.com>,
cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
Brendan Gregg <bgregg@netflix.com>, Kyle Anderson <kwa@yelp.com>,
Gabriel Munoz <gmunoz@netflix.com>,
John Hammond <jhammond@indeed.com>,
Cong Wang <xiyou.wangcong@gmail.com>,
Jonathan Corbet <corbet@lwn.net>,
linux-doc@vger.kernel.org, Paul Turner <pjt@google.com>
Subject: Re: [PATCH v5 1/1] sched/fair: Fix low cpu usage with high throttling by removing expiration of cpu-local slices
Date: Thu, 11 Jul 2019 11:51:02 +0200
Message-ID: <20190711095102.GX3402@hirez.programming.kicks-ass.net>
In-Reply-To: <xm26lfxhwlxr.fsf@bsegall-linux.svl.corp.google.com>

FWIW, good to see progress; still waiting for you guys to agree :-)

On Mon, Jul 01, 2019 at 01:15:44PM -0700, bsegall@google.com wrote:
> - Taking up-to-every rq->lock is bad and expensive and 5ms may be too
> short a delay for this. I haven't tried microbenchmarks on the cost of
> this vs min_cfs_rq_runtime = 0 vs baseline.

Yes, that's tricky; SGI/HPE have definite ideas about that.
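
For context, min_cfs_rq_runtime is the compile-time constant in
kernel/sched/fair.c that decides how much runtime a cfs_rq keeps for
itself on dequeue; setting it to 0 is the alternative Ben mentions,
where every dequeue returns all local slack to the global pool. A
condensed paraphrase of the mainline code of the era:

	static const u64 min_cfs_rq_runtime = 1 * NSEC_PER_MSEC;

	static void __return_cfs_rq_runtime(struct cfs_rq *cfs_rq)
	{
		struct cfs_bandwidth *cfs_b = tg_cfs_bandwidth(cfs_rq->tg);
		s64 slack_runtime = cfs_rq->runtime_remaining - min_cfs_rq_runtime;

		/* With min_cfs_rq_runtime = 0 this never bails early, so all
		 * remaining local runtime goes back on every dequeue. */
		if (slack_runtime <= 0)
			return;

		raw_spin_lock(&cfs_b->lock);
		if (cfs_b->quota != RUNTIME_INF &&
		    cfs_rq->runtime_expires == cfs_b->runtime_expires)
			cfs_b->runtime += slack_runtime;
		raw_spin_unlock(&cfs_b->lock);

		/* even if it's not valid for return we don't want to try again */
		cfs_rq->runtime_remaining -= slack_runtime;
	}
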
> @@ -4781,12 +4790,41 @@ static __always_inline void return_cfs_rq_runtime(struct cfs_rq *cfs_rq)
> */
> static void do_sched_cfs_slack_timer(struct cfs_bandwidth *cfs_b)
> {
> - u64 runtime = 0, slice = sched_cfs_bandwidth_slice();
> + u64 runtime = 0;
> unsigned long flags;
> u64 expires;
> + struct cfs_rq *cfs_rq, *temp;
> + LIST_HEAD(temp_head);
> +
> + local_irq_save(flags);
> +
> + raw_spin_lock(&cfs_b->lock);
> + cfs_b->slack_started = false;
> + list_splice_init(&cfs_b->slack_cfs_rq, &temp_head);
> + raw_spin_unlock(&cfs_b->lock);
> +
> +
> + /* Gather all left over runtime from all rqs */
> + list_for_each_entry_safe(cfs_rq, temp, &temp_head, slack_list) {
> + struct rq *rq = rq_of(cfs_rq);
> + struct rq_flags rf;
> +
> + rq_lock(rq, &rf);
> +
> + raw_spin_lock(&cfs_b->lock);
> + list_del_init(&cfs_rq->slack_list);
> + if (!cfs_rq->nr_running && cfs_rq->runtime_remaining > 0 &&
> + cfs_rq->runtime_expires == cfs_b->runtime_expires) {
> + cfs_b->runtime += cfs_rq->runtime_remaining;
> + cfs_rq->runtime_remaining = 0;
> + }
> + raw_spin_unlock(&cfs_b->lock);
> +
> + rq_unlock(rq, &rf);
> + }

But worse still, you take possibly every rq->lock without ever
re-enabling IRQs.
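
An untested sketch of the same walk with the IRQ-off region bounded to
one rq at a time (this assumes the rq_lock_irqsave()/rq_unlock_irqrestore()
helpers from kernel/sched/sched.h, and that the local_irq_save() around
the whole function is dropped):

	/* Gather all left over runtime from all rqs */
	list_for_each_entry_safe(cfs_rq, temp, &temp_head, slack_list) {
		struct rq *rq = rq_of(cfs_rq);
		struct rq_flags rf;

		rq_lock_irqsave(rq, &rf);	/* IRQs off for this rq only */

		raw_spin_lock(&cfs_b->lock);
		list_del_init(&cfs_rq->slack_list);
		if (!cfs_rq->nr_running && cfs_rq->runtime_remaining > 0 &&
		    cfs_rq->runtime_expires == cfs_b->runtime_expires) {
			cfs_b->runtime += cfs_rq->runtime_remaining;
			cfs_rq->runtime_remaining = 0;
		}
		raw_spin_unlock(&cfs_b->lock);

		rq_unlock_irqrestore(rq, &rf);	/* IRQs back on between rqs */
	}
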
>
> /* confirm we're still not at a refresh boundary */
> - raw_spin_lock_irqsave(&cfs_b->lock, flags);
> + raw_spin_lock(&cfs_b->lock);
> cfs_b->slack_started = false;
> if (cfs_b->distribute_running) {
> raw_spin_unlock_irqrestore(&cfs_b->lock, flags);
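
Note also the asymmetry this leaves in the hunk above: the lock is now
taken with a plain raw_spin_lock(), but the distribute_running path
still unlocks with raw_spin_unlock_irqrestore(), which only balances
because flags comes from the local_irq_save() at the top of the
function. With per-rq IRQ handling as sketched earlier, this could
simply go back to the symmetric form the patch removed:

	/* confirm we're still not at a refresh boundary */
	raw_spin_lock_irqsave(&cfs_b->lock, flags);
	if (cfs_b->distribute_running) {
		raw_spin_unlock_irqrestore(&cfs_b->lock, flags);
		return;
	}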