From: Peter Zijlstra <peterz@infradead.org>
To: Luca Abeni <luca.abeni@unitn.it>
Cc: linux-kernel@vger.kernel.org, Ingo Molnar <mingo@redhat.com>,
	Juri Lelli <juri.lelli@arm.com>
Subject: Re: [RFC 8/8] Do not reclaim the whole CPU bandwidth
Date: Fri, 15 Jan 2016 09:50:04 +0100	[thread overview]
Message-ID: <20160115085004.GE3421@worktop> (raw)
In-Reply-To: <5698ABFD.1040704@unitn.it>

On Fri, Jan 15, 2016 at 09:21:17AM +0100, Luca Abeni wrote:
> On 01/14/2016 08:59 PM, Peter Zijlstra wrote:
> >On Thu, Jan 14, 2016 at 04:24:53PM +0100, Luca Abeni wrote:
> >>Original GRUB tends to reclaim 100% of the CPU time... And this allows a
> >>"CPU hog" (i.e., a busy loop) to starve non-deadline tasks.
> >>To address this issue, allow the scheduler to reclaim only a specified
> >>fraction of CPU time.
> >>NOTE: the fraction of CPU time that cannot be reclaimed is currently
> >>hardcoded as (1 << 20) / 10 (i.e., 10%, so at most 90% of the CPU can be
> >>reclaimed), but it must be made configurable!
> >
> >So the alternative is an explicit SCHED_OTHER server which is
> >configurable.
> Yes, I have thought about something similar (actually, this is the strategy
> I implemented in my first CBS/GRUB scheduler). With the "old" 2.4 scheduler,
> this was easier :).
> But I think the solution I implemented in this patch is much simpler (it
> just requires a very simple modification to grub_reclaim()) and is more
> elegant from the theoretical point of view.

It is certainly simpler, agreed.
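
For anyone reading along without the patch at hand, the rule being discussed
can be sketched in a few lines of user-space C. This is only an illustration
of the idea, not the code from the patch: the 90% cap, the U_MAX name and the
clamping are assumptions of the sketch; the fixed-point bandwidth convention
(1.0 == 1 << 20) is the one used throughout the series.

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

#define BW_SHIFT	20			/* bandwidths are fixed point: 1.0 == 1 << 20 */
#define BW_UNIT		(1ULL << BW_SHIFT)
#define U_MAX		((BW_UNIT * 9) / 10)	/* assumed: at most 90% may be reclaimed */

/*
 * Plain GRUB charges the running task "delta * Uact" instead of "delta",
 * so a lone deadline task with a small utilisation is charged only a small
 * fraction of the time it actually runs and ends up eating ~100% of the CPU.
 * Dividing the charge by U_MAX caps the reclaimed bandwidth at U_MAX and
 * leaves the rest for non-deadline tasks.
 */
static uint64_t grub_reclaim(uint64_t delta, uint64_t running_bw)
{
	uint64_t factor = (running_bw << BW_SHIFT) / U_MAX;	/* Uact / Umax */

	/*
	 * Never charge more than the real elapsed time, so that a task can
	 * always consume at least its reserved runtime.
	 */
	if (factor > BW_UNIT)
		factor = BW_UNIT;

	return (delta * factor) >> BW_SHIFT;
}

int main(void)
{
	uint64_t delta = 10 * 1000 * 1000;	/* the task just ran for 10 ms */
	uint64_t u = BW_UNIT / 20;		/* its utilisation is 5%, and it runs alone */

	printf("plain GRUB charges  %" PRIu64 " ns -> the task can eat ~100%% of the CPU\n",
	       (delta * u) >> BW_SHIFT);
	printf("capped GRUB charges %" PRIu64 " ns -> the task is bounded at ~90%%\n",
	       grub_reclaim(delta, u));
	return 0;
}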

The trouble is with interfaces. Once we expose them, we're stuck with
them. And from that POV I think an explicit SCHED_OTHER server (or a
minimum budget for a slack-time scheme) makes more sense.

It conveys the same information while also providing more benefit, no?
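
Just to make that concrete: with an explicit server, the "fraction that
cannot be reclaimed" stops being a magic constant and becomes the server's
own bandwidth, and SCHED_OTHER gets an actual runtime guarantee from the
same two numbers. A purely hypothetical sketch (the struct and the names
are invented for illustration; nothing like this exists in the series):

#include <stdio.h>
#include <stdint.h>

#define BW_SHIFT	20
#define BW_UNIT		(1ULL << BW_SHIFT)

/*
 * Hypothetical SCHED_OTHER server configuration: runtime_ns of CPU time
 * guaranteed to non-deadline tasks every period_ns.
 */
struct other_server {
	uint64_t runtime_ns;
	uint64_t period_ns;
};

/*
 * Deadline tasks may reclaim whatever is left after the server's own
 * bandwidth: the same information the hardcoded constant encodes, but
 * derived from a configurable (and independently useful) knob.
 */
static uint64_t max_reclaimable_bw(const struct other_server *s)
{
	uint64_t server_bw = (s->runtime_ns << BW_SHIFT) / s->period_ns;

	return BW_UNIT - server_bw;
}

int main(void)
{
	/* 10 ms guaranteed to SCHED_OTHER every 100 ms -> 90% reclaimable */
	struct other_server s = { .runtime_ns = 10000000, .period_ns = 100000000 };

	printf("deadline tasks may reclaim up to %.1f%% of the CPU\n",
	       100.0 * max_reclaimable_bw(&s) / BW_UNIT);
	return 0;
}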

> >That would maybe fit in nicely with the DL based FIFO/RR servers from
> >this other pending project.
> Yes, this reminds me about the half-finished patch for RT throttling using
> SCHED_DEADLINE... But that patch needs much more work IMHO.

IIRC, two years ago at RTLWS there was a presentation claiming that the SMP
issues were 'solved' and that the patches would be posted 'soon'.

Thread overview: 58+ messages
2016-01-14 15:24 [RFC 0/8] CPU reclaiming for SCHED_DEADLINE Luca Abeni
2016-01-14 15:24 ` [RFC 1/8] Track the active utilisation Luca Abeni
2016-01-14 16:49   ` Peter Zijlstra
2016-01-15  6:37     ` Luca Abeni
2016-01-14 19:13   ` Peter Zijlstra
2016-01-15  8:07     ` Luca Abeni
2016-01-14 15:24 ` [RFC 2/8] Correctly track the active utilisation for migrating tasks Luca Abeni
2016-01-14 15:24 ` [RFC 3/8] sched/deadline: add some tracepoints Luca Abeni
2016-01-14 15:24 ` [RFC 4/8] Improve the tracking of active utilisation Luca Abeni
2016-01-14 17:16   ` Peter Zijlstra
2016-01-15  6:48     ` Luca Abeni
2016-01-14 19:43   ` Peter Zijlstra
2016-01-15  9:27     ` Luca Abeni
2016-01-19 12:20     ` Luca Abeni
2016-01-19 13:47       ` Peter Zijlstra
2016-01-27 13:36         ` Luca Abeni
2016-01-27 14:39           ` Peter Zijlstra
2016-01-27 14:45             ` Luca Abeni
2016-01-28 13:08               ` Vincent Guittot
     [not found]               ` <CAKfTPtAt0gTwk9aAZN238NT1O-zJvxVQDTh2QN_KxAnE61xMww@mail.gmail.com>
2016-01-28 13:48                 ` luca abeni
2016-01-28 13:56                   ` Vincent Guittot
2016-01-28 11:14             ` luca abeni
2016-01-28 12:21               ` Peter Zijlstra
2016-01-28 13:41                 ` luca abeni
2016-01-28 14:00                   ` Peter Zijlstra
2016-01-28 21:15                     ` Luca Abeni
2016-01-14 19:47   ` Peter Zijlstra
2016-01-15  8:10     ` Luca Abeni
2016-01-15  8:32       ` Peter Zijlstra
2016-01-14 15:24 ` [RFC 5/8] Track the "total rq utilisation" too Luca Abeni
2016-01-14 19:12   ` Peter Zijlstra
2016-01-15  8:04     ` Luca Abeni
2016-01-14 19:48   ` Peter Zijlstra
2016-01-15  6:50     ` Luca Abeni
2016-01-15  8:34       ` Peter Zijlstra
2016-01-15  9:15         ` Luca Abeni
2016-01-29 15:06           ` Peter Zijlstra
2016-01-29 21:21             ` Luca Abeni
2016-01-14 15:24 ` [RFC 6/8] GRUB accounting Luca Abeni
2016-01-14 19:50   ` Peter Zijlstra
2016-01-15  8:05     ` Luca Abeni
2016-01-14 15:24 ` [RFC 7/8] Make GRUB a task's flag Luca Abeni
2016-01-14 19:56   ` Peter Zijlstra
2016-01-15  8:15     ` Luca Abeni
2016-01-15  8:41       ` Peter Zijlstra
2016-01-15  9:08         ` Luca Abeni
2016-01-14 15:24 ` [RFC 8/8] Do not reclaim the whole CPU bandwidth Luca Abeni
2016-01-14 19:59   ` Peter Zijlstra
2016-01-15  8:21     ` Luca Abeni
2016-01-15  8:50       ` Peter Zijlstra [this message]
2016-01-15  9:49         ` Luca Abeni
2016-01-26 12:52         ` luca abeni
2016-01-27 14:44           ` Peter Zijlstra
2016-02-02 20:53             ` Luca Abeni
2016-02-03 11:30               ` Juri Lelli
2016-02-03 13:28                 ` luca abeni
2016-01-19 10:11 ` [RFC 0/8] CPU reclaiming for SCHED_DEADLINE Juri Lelli
2016-01-19 11:50   ` Luca Abeni
