From: Max Krasnyansky <maxk@qualcomm.com>
To: Nish Aravamudan <nish.aravamudan@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>,
Gregory Haskins <ghaskins@novell.com>,
Dimitri Sivanich <sivanich@sgi.com>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
Ingo Molnar <mingo@elte.hu>
Subject: Re: Using cpusets for configuration/isolation [Was Re: RT sched: cpupri_vec lock contention with def_root_domain and no load balance]
Date: Tue, 18 Nov 2008 21:14:16 -0800
Message-ID: <4923A0A8.5050009@qualcomm.com>
In-Reply-To: <29495f1d0811181811r1a7476ceyb5cb4a86e11e7651@mail.gmail.com>

Nish Aravamudan wrote:
> On Tue, Nov 18, 2008 at 5:59 PM, Max Krasnyansky <maxk@qualcomm.com> wrote:
>> I do not see how the 'partfs' you described would be different from the
>> 'cpusets' we have now. Just ignore the 'tasks' files in the cpusets and you
>> already have your 'partfs'. You do _not_ have to use cpusets for assigning
>> tasks if you do not want to. Just use them to define sets of cpus and keep
>> all the tasks in the 'root' set. You can then explicitly pin your threads
>> down with pthread_setaffinity_np().
>
> I guess you're right. It still feels a bit kludgy, but that is probably just me.
>
> I have wondered, though, if it makes sense to provide an "isolated"
> file in /sys/devices/system/cpu/cpuX/ that does most of the offline
> sequence, breaks sched_domains and removes a CPU from the load balancer
> (rather than turning the load balancer off), instead of requiring a
> user to do an explicit offline/online.
I do not see any benefit in exposing a special 'isolated' bit and having it
do the same thing that cpu hotplug already does. As I explained in other
threads, cpu hotplug is a _perfect_ fit for isolation purposes. In order to
isolate a CPU dynamically (i.e. at runtime) we need to flush pending work,
flush caches, move tasks and timers, etc., which is _exactly_ what the cpu
hotplug code does when it brings a CPU down. There is no point in
reimplementing it.
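For example, bouncing a cpu through hotplug from userspace is just a couple
of sysfs writes. A minimal sketch in C (cpu 2 and the lack of error recovery
are assumptions for illustration; needs root):

    /* Bounce a cpu through hotplug: taking it down flushes pending
     * work and migrates tasks and timers off of it; bringing it back
     * up returns it empty. */
    #include <stdio.h>
    #include <stdlib.h>

    static void cpu_set_online(int cpu, int online)
    {
            char path[64];
            FILE *f;

            snprintf(path, sizeof(path),
                     "/sys/devices/system/cpu/cpu%d/online", cpu);
            f = fopen(path, "w");
            if (!f) {
                    perror(path);
                    exit(1);
            }
            fprintf(f, "%d\n", online);
            fclose(f);
    }

    int main(void)
    {
            cpu_set_online(2, 0);   /* offline: flush and migrate */
            cpu_set_online(2, 1);   /* online: comes back empty   */
            return 0;
    }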
BTW, it sounds like you misunderstood the meaning of the
cpuset.sched_load_balance flag. It does not really turn the load balancer
off; it simply causes cpus in different cpusets to be put into separate
sched domains. In other words, it already does exactly what you're asking
for.
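Roughly, the setup looks like this (a sketch, assuming the cpuset
filesystem is mounted at /dev/cpuset; under a cgroup mount the files are
prefixed with "cpuset.", and the cpu and memory-node numbers are just
examples):

    /* Carve cpus 2-3 out into their own sched domain. */
    #include <stdio.h>
    #include <sys/stat.h>

    static void put(const char *path, const char *val)
    {
            FILE *f = fopen(path, "w");
            if (!f) { perror(path); return; }
            fputs(val, f);
            fclose(f);
    }

    int main(void)
    {
            /* Stop balancing across the whole machine first. */
            put("/dev/cpuset/sched_load_balance", "0");

            /* A child set with its own cpus then becomes its own
             * sched domain; balancing stays on inside it. */
            mkdir("/dev/cpuset/rt", 0755);
            put("/dev/cpuset/rt/cpus", "2-3");
            put("/dev/cpuset/rt/mems", "0");
            put("/dev/cpuset/rt/sched_load_balance", "1");
            return 0;
    }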
> I guess it can all be rather
> transparently masked via a userspace tool, but we don't have a common
> one yet.
I do :). It's called 'syspart':
http://git.kernel.org/?p=linux/kernel/git/maxk/syspart.git;a=summary
I'll push an updated version in a couple of days.
> I do have a question, though: is your recommendation to just turn the
> load balancer off in the cpuset you create that has the isolated CPUs?
> I guess the conceptual issue I was having was that the root cpuset (I
> think) always contains all CPUs and all memory nodes. So even if you
> put some CPUs in a cpuset under the root one, and isolate them using
> hotplug + disabling the load balancer in that cpuset, those CPUs are
> still available to tasks in the root cpuset? Maybe I'm just missing a
> step in the configuration, but it seems like as long as the global
> (root cpuset) load balancer is on, a CPU can't be guaranteed to stay
> isolated?
Take a look at what 'syspart' does. In short, yes: we need to set the
sched_load_balance flag in the root cpuset to 0, otherwise the root set
keeps all cpus in one global sched domain.
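Once the sets are configured that way, the application pins its threads
explicitly, per the pthread_setaffinity_np() point above. A minimal sketch
(glibc, built with -pthread; cpu 2 as a stand-in for one of the isolated
cpus):

    /* Pin the calling thread onto cpu 2. */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            cpu_set_t set;
            int err;

            CPU_ZERO(&set);
            CPU_SET(2, &set);

            /* pthread_setaffinity_np() returns an errno value
             * rather than setting errno. */
            err = pthread_setaffinity_np(pthread_self(),
                                         sizeof(set), &set);
            if (err) {
                    fprintf(stderr, "pthread_setaffinity_np: %s\n",
                            strerror(err));
                    return 1;
            }

            /* From here on this thread runs only on cpu 2. */
            return 0;
    }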
Max