From: luca abeni <luca.abeni@unitn.it>
To: Tommaso Cucinotta <tommaso.cucinotta@sssup.it>
Cc: Peter Zijlstra <peterz@infradead.org>,
Ingo Molnar <mingo@kernel.org>,
Thomas Gleixner <tglx@linutronix.de>,
Juri Lelli <juri.lelli@gmail.com>,
Steven Rostedt <rostedt@goodmis.org>,
Claudio Scordino <claudio@evidence.eu.com>,
Daniel Bristot de Oliveira <danielbristot@gmail.com>,
Henrik Austad <henrik@austad.us>,
linux-kernel@vger.kernel.org,
"al.biondi@sssup.it" <al.biondi@sssup.it>
Subject: Re: [RFD] sched/deadline: Support single CPU affinity
Date: Thu, 10 Nov 2016 15:34:13 +0100 [thread overview]
Message-ID: <20161110153413.6900aa2b@sweethome> (raw)
In-Reply-To: <03a30c9c-eb42-c5c6-b94d-3a62048d8642@sssup.it>
On Thu, 10 Nov 2016 12:03:47 +0100
Tommaso Cucinotta <tommaso.cucinotta@sssup.it> wrote:
> On 10/11/2016 10:06, luca abeni wrote:
> > is equivalent to the "least laxity first" (LLF) algorithm.
> > Giving precedence to tasks with 0 laxity is a technique that is
> > often used to improve the schedulability on multi-processor
> > systems.
>
> EDZL (EDF / Zero Laxity first), right?
Yes, basically all the "ZL" algorithms (EDZL, but I think I've also
seen something like RMZL or similar).
> AFAICR, there's quite a lot of
> analysis on EDZL for multi-cores... eg, Insik Shin et al....
>
> http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=6374195
Yes, this is why I mentioned the 0-laxity thing... Of course, here the
situation is different (there are tasks that can be migrated, and tasks
that cannot), but maybe the 0-laxity analysis can be adapted to this
case?
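For reference, the zero-laxity rule being discussed can be sketched in a few
lines. This is only an illustrative model (tasks as plain dicts with
hypothetical "deadline" and "remaining" fields, absolute time units), not the
kernel's sched_dl_entity or any published EDZL implementation:

```python
# Sketch of EDZL (EDF until Zero Laxity) task selection.
# Task representation is illustrative, not kernel code.

def laxity(task, now):
    """Slack left before the task must run continuously to meet its deadline."""
    return task["deadline"] - now - task["remaining"]

def edzl_pick(ready, now):
    """Pick the next task: zero-laxity tasks take precedence, then plain EDF."""
    zero_lax = [t for t in ready if laxity(t, now) <= 0]
    pool = zero_lax if zero_lax else ready
    return min(pool, key=lambda t: t["deadline"])
```

The point of the zero-laxity tie-break is visible with two tasks: under plain
EDF the earlier deadline always wins, while here a later-deadline task that has
run out of slack is scheduled first.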
> But, before going the EDZL way, isn't it worthwhile to consider
> just splitting tasks among 2 cpus
>
> https://people.mpi-sws.org/~bbb/papers/pdf/rtss16b.pdf
Yes, there are many possible different strategies that can be tested (I
think somewhere I saw some semi-partitioned algorithm that was even
optimal). I suspect everything depends on the trade-off between
implementation complexity and scheduling efficiency.
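The task-splitting idea can be illustrated with a purely utilization-based
admission check: split one reservation's bandwidth across two CPUs' spare
capacity, rejecting it if the pair cannot host it. This is a deliberately
simplified sketch (integer utilization units, e.g. permille, to avoid
floating-point noise); the cited paper's actual analysis is demand-bound
based, not this naive utilization test:

```python
# Illustrative split of one reservation's bandwidth across two CPUs.
# Utilizations are integers in consistent units (e.g. permille of a CPU).

def split_reservation(util, free0, free1):
    """Split a task's utilization across the spare capacity of two CPUs.

    Returns (share0, share1), or None if the pair cannot host the task.
    Greedy: fill the first CPU's spare room, spill the rest onto the second.
    """
    if util > free0 + free1:
        return None                 # not enough total capacity: reject
    share0 = min(util, free0)       # fill CPU 0 up to its spare room
    return (share0, util - share0)  # remainder lands on CPU 1
```

A 0.6-CPU task, for instance, could be admitted on a pair with 0.4 and 0.5
spare by running a 0.4 share on one CPU and a 0.2 share on the other.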
Luca
>
> ? ... we're working at RETIS on simpler ways to do the AC for
> these split-task cases (cc-ing Alessandro) that don't need
> complex demand-bound analysis...
>
> My2c,
>
> T.