From: Andrea Righi <arighi@nvidia.com>
To: Christian Loehle <christian.loehle@arm.com>
Cc: Qiliang Yuan <realwujing@gmail.com>, Tejun Heo <tj@kernel.org>,
David Vernet <void@manifault.com>,
Changwoo Min <changwoo@igalia.com>,
linux-kernel@vger.kernel.org, sched-ext@lists.linux.dev,
bpf@vger.kernel.org
Subject: Re: [PATCH] sched_ext: Add scx_ai_numa scheduler example for AI workloads
Date: Fri, 8 May 2026 11:37:23 +0200
Message-ID: <af2u03T-pyvoDgYK@gpd4>
In-Reply-To: <26f2d829-1083-4cff-b737-a3701c7ddd85@arm.com>

Hi Christian,

On Fri, May 08, 2026 at 10:29:24AM +0100, Christian Loehle wrote:
> On 5/8/26 08:56, Andrea Righi wrote:
> > Hi Qiliang,
> >
> > On Fri, May 08, 2026 at 03:51:35PM +0800, Qiliang Yuan wrote:
> >> Implement an AI-focused NUMA-aware scheduler that optimizes task dispatch for
> >> GPU-accelerated AI training. The scheduler maintains per-NUMA-node dispatch
> >> queues to preserve L3 cache warmth and minimize remote DRAM accesses that
> >> would stall GPU kernel launches waiting on CPU preprocessing.
> >>
> >> Key features:
> >> - Per-NUMA-node DSQs (dispatch queues) to maintain cache locality
> >> - Idle fast path that bypasses DSQ for minimum latency
> >> - Per-task NUMA affinity tracking to remember task placement
> >> - Work stealing across nodes to prevent starvation during load imbalance
> >>
> >> The BPF component (scx_ai_numa.bpf.c) implements the core scheduler
> >> callbacks, while the userspace loader (scx_ai_numa.c) detects NUMA
> >> topology, installs the BPF program, and reports per-node dispatch
> >> statistics every second.
> >>
> >> This scheduler is suitable for AI training workloads where GPU command
> >> launches depend on rapid CPU preprocessing with minimal scheduling latency.
> >>
> >> Signed-off-by: Qiliang Yuan <realwujing@gmail.com>
> >
> > I think this would be more appropriate for inclusion in
> > https://github.com/sched-ext/scx.
>
> That repo no longer hosts C schedulers though, no?
> I guess it's trivial to convert this particular one to rust.

Correct, it shouldn't be too difficult to convert it to Rust, and the scx
repo seems a better place for this scheduler.

In general, I don't think we want to add too many example schedulers to the
kernel. Those in tools/sched_ext shouldn't be focused on production goals;
they are there to exercise specific sched_ext functionality and show how to
use the framework.

Thanks,
-Andrea

Thread overview: 5+ messages
2026-05-08 7:51 [PATCH] sched_ext: Add scx_ai_numa scheduler example for AI workloads Qiliang Yuan
2026-05-08 7:56 ` Andrea Righi
2026-05-08 9:29 ` Christian Loehle
2026-05-08 9:37 ` Andrea Righi [this message]
2026-05-08 19:08 ` sashiko-bot