From: "Siddha, Suresh B" <suresh.b.siddha@intel.com>
To: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: Paul Jackson <pj@sgi.com>, Robin Holt <holt@sgi.com>,
suresh.b.siddha@intel.com, dino@in.ibm.com, menage@google.com,
Simon.Derr@bull.net, linux-kernel@vger.kernel.org,
mbligh@google.com, rohitseth@google.com, dipankar@in.ibm.com
Subject: Re: exclusive cpusets broken with cpu hotplug
Date: Wed, 18 Oct 2006 07:14:47 -0700 [thread overview]
Message-ID: <20061018071447.A25760@unix-os.sc.intel.com> (raw)
In-Reply-To: <45361B32.8040604@yahoo.com.au>; from nickpiggin@yahoo.com.au on Wed, Oct 18, 2006 at 10:16:50PM +1000
On Wed, Oct 18, 2006 at 10:16:50PM +1000, Nick Piggin wrote:
> Paul Jackson wrote:
> > 1) I don't know how to tell what sched domains/groups a system has, nor
Paul, at least for debugging, one can find that out by defining SCHED_DOMAIN_DEBUG.
> > how to tell my customers how to see what sched domains they have, and
>
> I don't know if you want customers do know what domains they have. I think
At first glance, I have to agree with Nick. All the customer wants is a
mechanism to specify that these cpus should be grouped together for scheduling...
But looking at how cpusets interact with sched-domains, especially on
large systems, it will probably be useful if we export the topology through /sys.
> cpusets is the only thing that messes with sched-domains (excluding the
> isolcpus -- that seems to require a small change to partition_sched_domains,
> but forget that for now).
>
> And so you should know what partitioning to build at any point when asked.
> So we could have a call to cpusets at the end of arch_init_sched_domains,
> which asks for the domains to be partitioned, no?
yes.
Robin, right now everyone is calling arch_init_sched_domains() with
cpu_online_map. We can remove this argument and, in the presence of cpusets,
this routine can go through the exclusive cpusets and partition the domains
accordingly. Otherwise we can simply build one domain partition with
cpu_online_map.
thanks,
suresh