From: "Martin J. Bligh" <mbligh@aracnet.com>
To: Andrew Theurer <habanero@us.ibm.com>,
Nick Piggin <piggin@cyberone.com.au>
Cc: Rusty Russell <rusty@rustcorp.com.au>, linux-kernel@vger.kernel.org
Subject: Re: New NUMA scheduler and hotplug CPU
Date: Mon, 26 Jan 2004 16:09:37 -0800 [thread overview]
Message-ID: <35060000.1075162177@flay> (raw)
In-Reply-To: <200401261740.12657.habanero@us.ibm.com>
> Call me crazy, but why not let the topology be determined via userspace at a
> more appropriate time? When you hotplug, you tell it where in the scheduler
> to plug it. Have structures in the scheduler which represent the
> nodes-runqueues-cpus topology (in the past I tried node/rq/cpu structs with
> simple pointers), but let the topology be built based on the user's desires
> through hotplug.
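(For illustration, a minimal plain-C sketch of the kind of node/runqueue/cpu
structures with simple pointers described above; the names are made up for
this sketch and are not real kernel structures:)

struct topo_cpu {
	int id;				/* logical CPU number */
	struct topo_rq *rq;		/* runqueue this CPU is attached to */
	struct topo_cpu *next;		/* next CPU sharing the same runqueue */
};

struct topo_rq {
	struct topo_node *node;		/* NUMA node that owns this runqueue */
	struct topo_cpu *cpus;		/* CPUs feeding this runqueue */
	struct topo_rq *next;		/* next runqueue on the node */
};

struct topo_node {
	int id;				/* NUMA node number */
	struct topo_rq *rqs;		/* runqueues on this node */
	struct topo_node *next;		/* next node in the system */
};

/* Attach a newly plugged CPU to whatever runqueue the caller chose. */
static void topo_attach_cpu(struct topo_cpu *cpu, struct topo_rq *rq)
{
	cpu->rq = rq;
	cpu->next = rq->cpus;
	rq->cpus = cpu;
}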
Well, I agree with the "at a more appropriate time" bit. But there's no
real need to make a bunch of complicated stuff out in userspace for this -
we're trying to lay out the scheduler domains according to the hardware
topology of the machine. It's not a userspace namespace or anything.
Having userspace fishing down way deep in hardware-specific stuff is
silly - the kernel is there as a hardware abstraction layer.
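(As a rough sketch of what that abstraction already buys you: cpu_to_node()
is a real kernel helper, while attach_cpu_to_node_domains() is invented here
just to stand in for whatever builds the sched domains for a node:)

#include <linux/topology.h>

/* Hypothetical placeholder for the code that builds a CPU's domains. */
extern void attach_cpu_to_node_domains(int cpu, int node);

/* Hypothetical bring-up hook: no userspace input needed, because the
 * arch code already reports which node the CPU belongs to. */
static void place_new_cpu(int cpu)
{
	int node = cpu_to_node(cpu);	/* node as reported by the arch code */

	attach_cpu_to_node_domains(cpu, node);
}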
Now if you wanted to use sched domains for workload management or something
and involve userspace, then yes ... that'd be more appropriate.
> For example, you boot on just the boot cpu, which by default is in the first
> node on the first runqueue. For all other cpus, whether being "booted" for
> the first time or hotplugged (maybe now there's really no difference), the
> hotplug operation tells the scheduler where the cpu should go: in what node
> and on what runqueue. HT cpus work even better, because you can hotplug
> siblings, one at a time if you wanted, to the same runqueue. Or you have
> cpus sharing a die, same thing, lots of choices here. This removes any
> per-arch updates to the kernel for things like scheduler topology, and lets
> them go somewhere easier to change, like userspace.
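(A minimal sketch, purely hypothetical, of what such a hotplug-time placement
call could look like if node and runqueue were passed in from outside rather
than discovered by the kernel; nothing like this interface exists:)

/* Hypothetical interface: the hotplug path is told, from outside the
 * kernel, which node and runqueue a new CPU should join. */
struct cpu_placement {
	int cpu;		/* logical CPU number being brought up */
	int node;		/* node the caller wants it filed under */
	int runqueue;		/* runqueue it should share (e.g. with an HT sibling) */
};

int sched_hotplug_add_cpu(const struct cpu_placement *p);

/* Example: plug CPU 3 in as an HT sibling sharing runqueue 1 on node 0. */
static const struct cpu_placement example = { .cpu = 3, .node = 0, .runqueue = 1 };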
Ummm ... but *none* of that is dictated as policy stuff - it's all just
the hardware layout of the machine. You cannot "decide" as the sysadmin
which node a CPU is in, or which HT sibling it has. It's just there ;-)
The only thing you could possibly dictate is the CPU number you want
assigned to the new CPU, which, frankly, I think is pointless - they're
arbitrary tags, and always have been.
M.
Thread overview: 18+ messages
2004-01-25 23:50 New NUMA scheduler and hotplug CPU Rusty Russell
2004-01-26 8:26 ` Nick Piggin
2004-01-26 16:34 ` Martin J. Bligh
2004-01-26 23:01 ` Nick Piggin
2004-01-26 23:24 ` Martin J. Bligh
2004-01-26 23:40 ` Nick Piggin
2004-01-27 2:36 ` Rusty Russell
2004-01-27 4:38 ` Martin J. Bligh
2004-01-27 5:39 ` Nick Piggin
2004-01-27 7:19 ` Martin J. Bligh
2004-01-27 15:27 ` Martin J. Bligh
2004-01-28 0:23 ` Rusty Russell
2004-01-26 23:40 ` Andrew Theurer
2004-01-27 0:07 ` Nick Piggin
2004-01-27 2:21 ` Andrew Theurer
2004-01-27 2:40 ` Nick Piggin
2004-01-27 0:09 ` Martin J. Bligh [this message]
2004-01-27 2:19 ` Andrew Theurer