BPF List
From: Joel Fernandes <joel@joelfernandes.org>
To: Tejun Heo <tj@kernel.org>
Cc: David Vernet <void@manifault.com>,
	lsf-pc@lists.linux-foundation.org, bpf@vger.kernel.org,
	schatzberg.dan@gmail.com, andrea.righi@canonical.com,
	davemarchevsky@meta.com, changwoo@igalia.com,
	julia.lawall@inria.fr, himadrispandya@gmail.com
Subject: Re: [LSF/MM/BPF TOPIC] Discuss more features + use cases for sched_ext
Date: Mon, 19 Feb 2024 04:25:40 -0500	[thread overview]
Message-ID: <e06be767-419b-4026-a4e2-fb10c02df9f6@joelfernandes.org> (raw)
In-Reply-To: <ZbhV_NSMUaAknOMW@slm.duckdns.org>

On 1/29/2024 8:50 PM, Tejun Heo wrote:
> On Mon, Jan 29, 2024 at 05:42:54PM -0500, Joel Fernandes wrote:
>>> This is a great topic. I think integrating/merging such mechanism with the NEST
>>> scheduler could be useful too? You mentioned there is sched_ext implementation
>>> of NEST already? One reason that's interesting to me is the task-packing and
>>> less-spreading may have power benefits, this is exactly what EAS on ARM does,
>>> but it also uses an energy model to know when packing is a bad idea. Since we
>>> don't have fine grained control of frequency on Intel, I wonder what else can we
>>> do to know when the scheduler should pack and when to spread. Maybe something
>>> simple which does not need an energy model but packs based on some other
>>> signal/heuristic would be great in the short term.
>>>
>>> Maybe a signal can be the "Quality of service (QoS)" approach where tasks with
>>> lower QoS are packed more aggressively and higher QoS are spread more (?).
> 
> This was done for a different purpose (improving tail latencies on latency
> critical workload) but it uses soft-affinity based packing which maybe can
> translate to power-aware scheduling:
> 
>   https://github.com/sched-ext/scx/blob/case-studies/case-studies/scx_layered.md

Thanks! I am looking more into scx_layered for the latency benefits as well.
David kindly gave me an introduction to it last week. It seems quite similar to
our approach of using RT (round-robin) for the higher tier (that is, a higher
tier of tasks that is fair-scheduled over a lower one). There is the issue of
starvation though (a higher tier/layer starves a lower one), so we're
incorporating the DL server to help with that:
https://lore.kernel.org/all/cover.1699095159.git.bristot@kernel.org/
https://lore.kernel.org/all/20240216183108.1564958-1-joel@joelfernandes.org/

The soft-affinity feature is interesting; yes, that could help save power and
might be a better approach than, say, our usage of RT.
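As a rough illustration of the QoS-based packing idea floated earlier (lower
QoS packed aggressively, higher QoS spread more), one could imagine mapping a
QoS class to an allowed-CPU count. This is only a sketch; the QoS levels, mask
widths, and CPU count below are all made up for illustration and are not from
scx_layered or any existing scheduler:

```c
/* Hypothetical sketch: map a task's QoS class to how many CPUs it may
 * spread over. Low-QoS tasks are confined to few CPUs (packed), while
 * high-QoS tasks get the full mask. All values here are assumptions. */
#include <assert.h>

#define NR_CPUS 8

enum qos { QOS_LOW, QOS_NORMAL, QOS_HIGH };

/* Allowed-CPU count per QoS class: pack low, spread high. */
static int qos_nr_cpus(enum qos q)
{
	switch (q) {
	case QOS_LOW:    return 2;           /* aggressive packing */
	case QOS_NORMAL: return NR_CPUS / 2; /* moderate spread */
	default:         return NR_CPUS;     /* spread freely */
	}
}

/* Build an affinity bitmask covering the first qos_nr_cpus() CPUs. */
static unsigned int qos_cpumask(enum qos q)
{
	return (1u << qos_nr_cpus(q)) - 1;
}
```

In a real scheduler the mask would of course also account for topology
(E vs. P cores, cache domains) rather than just taking the lowest CPUs.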

> I have a raptor lake-H laptop which has E and P cores and by default the
> threads are being spread across all CPUs which probably isn't best for power
> consumption. I was thinking about writing a scheduler which uses a similar
> strategy as scx_layered - pack the cores one by one overflowing to the next
> core from E to P when the average utilization crosses a set threshold. Most
> of the logic is already in scx_layered, so maybe it can just be a part of
> that. I'm curious whether and how much power can be saved with a
> generic approach like that.

Can the scx NEST scheduler be reused for this? AFAIR, it does similar task
packing, though its goal is to keep more cores idle rather than to pack tasks
onto a certain type of core, if I remember Julia's presentation correctly.
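For what it's worth, the overflow-threshold packing you describe (fill cores
one by one, E before P, opening the next core when average utilization crosses
a threshold) could be sketched in plain C, outside any scheduler framework.
The core layout, utilization scale, and threshold below are all assumptions:

```c
/* Illustrative sketch only (not sched_ext API): pick a CPU by packing
 * onto already-"open" cores, E-cores first, and overflow to the next
 * core once the average utilization of the open cores crosses a
 * threshold. Core IDs and the 0..1024 utilization scale are assumed. */
#include <assert.h>

#define NR_CPUS      8
#define UTIL_MAX     1024   /* utilization scale, as in kernel PELT */
#define OVERFLOW_PCT 80     /* open the next core above 80% avg util */

/* Per-CPU utilization, 0..UTIL_MAX, as a scheduler might track it.
 * CPUs 0..3 are taken to be E-cores, 4..7 P-cores. */
static unsigned int cpu_util[NR_CPUS];

/* Return the CPU a new task should be packed onto, given that the
 * first nr_open CPUs are currently in use. */
static int pick_packed_cpu(int nr_open)
{
	unsigned long sum = 0;
	int cpu;

	for (cpu = 0; cpu < nr_open; cpu++)
		sum += cpu_util[cpu];

	/* Average over open cores still below threshold: keep packing
	 * onto the least-loaded open core. */
	if (nr_open > 0 &&
	    sum * 100 <= (unsigned long)nr_open * UTIL_MAX * OVERFLOW_PCT) {
		int best = 0;

		for (cpu = 1; cpu < nr_open; cpu++)
			if (cpu_util[cpu] < cpu_util[best])
				best = cpu;
		return best;
	}

	/* Overflow: open the next core (E first, then P), if any left. */
	return nr_open < NR_CPUS ? nr_open : NR_CPUS - 1;
}
```

A real implementation would presumably hook this into ops.select_cpu() and use
properly decayed utilization signals, but the shape of the heuristic is the
same.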

thanks,

 - Joel


Thread overview: 9+ messages
2024-01-26 21:59 [LSF/MM/BPF TOPIC] Discuss more features + use cases for sched_ext David Vernet
2024-01-29 22:41 ` Joel Fernandes
2024-01-29 22:42   ` Joel Fernandes
2024-01-30  0:15     ` David Vernet
2024-01-30  1:50     ` Tejun Heo
2024-02-19  9:25       ` Joel Fernandes [this message]
2024-02-19  8:48 ` Muhammad Usama Anjum
2024-02-19  9:11   ` Joel Fernandes
2024-02-19  9:14     ` Joel Fernandes
