BPF List
From: Tejun Heo <tj@kernel.org>
To: Joel Fernandes <joel@joelfernandes.org>
Cc: David Vernet <void@manifault.com>,
	lsf-pc@lists.linux-foundation.org, bpf@vger.kernel.org,
	schatzberg.dan@gmail.com, andrea.righi@canonical.com,
	davemarchevsky@meta.com, changwoo@igalia.com,
	julia.lawall@inria.fr, himadrispandya@gmail.com
Subject: Re: [LSF/MM/BPF TOPIC] Discuss more features + use cases for sched_ext
Date: Mon, 29 Jan 2024 15:50:52 -1000
Message-ID: <ZbhV_NSMUaAknOMW@slm.duckdns.org>
In-Reply-To: <47d47cd3-f49c-401e-9f45-b3de5a084b67@joelfernandes.org>

Hello, Joel.

On Mon, Jan 29, 2024 at 05:42:54PM -0500, Joel Fernandes wrote:
> > This is a great topic. I think integrating/merging such mechanism with the NEST
> > scheduler could be useful too? You mentioned there is sched_ext implementation
> > of NEST already? One reason that's interesting to me is the task-packing and
> > less-spreading may have power benefits, this is exactly what EAS on ARM does,
> > but it also uses an energy model to know when packing is a bad idea. Since we
> > don't have fine grained control of frequency on Intel, I wonder what else can we
> > do to know when the scheduler should pack and when to spread. Maybe something
> > simple which does not need an energy model but packs based on some other
> > signal/heuristic would be great in the short term.
> > 
> > Maybe a signal can be the "Quality of service (QoS)" approach where tasks with
> > lower QoS are packed more aggressively and higher QoS are spread more (?).

This was done for a different purpose (improving tail latencies on a
latency-critical workload), but it uses soft-affinity-based packing which may
translate to power-aware scheduling:

  https://github.com/sched-ext/scx/blob/case-studies/case-studies/scx_layered.md

I have a Raptor Lake-H laptop which has E and P cores, and by default the
threads are spread across all CPUs, which probably isn't best for power
consumption. I was thinking about writing a scheduler which uses a similar
strategy to scx_layered - pack the cores one by one, overflowing to the next
core from E to P when the average utilization crosses a set threshold. Most
of the logic is already in scx_layered, so maybe it can just be a part of
that. I'm curious whether and how much power can be saved with a generic
approach like that.
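
The overflow policy described above could be sketched roughly like this
(names, threshold, and structure are illustrative only - this is not the
actual scx_layered implementation, just the shape of the heuristic):

```python
# Sketch of the packing heuristic: fill cores one at a time, E-cores first,
# and only open the next core when the average utilization of the cores
# currently in use crosses a threshold. All names here are hypothetical.

OVERFLOW_THRESHOLD = 0.8  # assumed tunable, as a utilization fraction

def pick_core(core_util, active):
    """core_util: per-core utilization (0.0-1.0), ordered E-cores before
    P-cores.  active: number of cores currently opened for scheduling.
    Returns (chosen core index, new active count)."""
    avg = sum(core_util[:active]) / active
    if avg > OVERFLOW_THRESHOLD and active < len(core_util):
        active += 1  # overflow: open the next core (E before P by ordering)
    # place the task on the least-utilized core among the active set
    idx = min(range(active), key=lambda i: core_util[i])
    return idx, active
```

In a real sched_ext scheduler the equivalent decision would live in the
BPF ops.select_cpu() path and read per-CPU utilization maintained by the
scheduler, but the policy itself is just this handful of comparisons.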

Thanks.

-- 
tejun


Thread overview: 9+ messages
2024-01-26 21:59 [LSF/MM/BPF TOPIC] Discuss more features + use cases for sched_ext David Vernet
2024-01-29 22:41 ` Joel Fernandes
2024-01-29 22:42   ` Joel Fernandes
2024-01-30  0:15     ` David Vernet
2024-01-30  1:50     ` Tejun Heo [this message]
2024-02-19  9:25       ` Joel Fernandes
2024-02-19  8:48 ` Muhammad Usama Anjum
2024-02-19  9:11   ` Joel Fernandes
2024-02-19  9:14     ` Joel Fernandes
