public inbox for linux-kernel@vger.kernel.org
From: Nick Piggin <nickpiggin@yahoo.com.au>
To: Paul Jackson <pj@sgi.com>
Cc: dino@in.ibm.com, akpm@osdl.org, mbligh@google.com,
	menage@google.com, Simon.Derr@bull.net,
	linux-kernel@vger.kernel.org, rohitseth@google.com, holt@sgi.com,
	dipankar@in.ibm.com, suresh.b.siddha@intel.com
Subject: Re: [RFC] cpuset: add interface to isolated cpus
Date: Mon, 23 Oct 2006 16:17:27 +1000	[thread overview]
Message-ID: <453C5E77.2050905@yahoo.com.au> (raw)
In-Reply-To: <20061022225108.21716614.pj@sgi.com>

Paul Jackson wrote:
> Nick wrote:
> 
>>Did you resend the patch to remove sched-domain partitioning?
>>After clearing up my confusion, IMO that is needed and could probably
>>go into 2.6.19.
> 
> 
> The patch titled
>      cpuset: remove sched domain hooks from cpusets
> went into *-mm on Friday, 20 Oct.
> 
> Is that the patch you mean?

Yes.

> It's just the first step - unplugging the old.
> 
> Now we need the new:
>  1) Ability at runtime to isolate cpus for real-time and such.
>  2) Big systems perform better if we can avoid load balancing across
>     zillions of cpus.

These are both part of the same larger solution, which is to
partition sched domains. Isolated CPUs are just the special case of
one CPU in its own domain (and that's how they are implemented now).

So we need the interface or some driver to do this partitioning.

>>A cool option would be to determine the partitions according to the
>>disjoint set of unions of cpus_allowed masks of all tasks. I see this
>>getting computationally expensive though, probably O(tasks*CPUs)... I
>>guess that isn't too bad.
> 
> 
> Yeah - if that would work, from the practical perspective of providing
> us with a useful partitioning (get those humongous sched domains carved
> down to a reasonable size) then that would be cool.
> 
> I'm guessing that in practice, it would be annoying to use.  One would
> end up with stray tasks that happened to be sitting in one of the bigger
> cpusets and that did not have their cpus_allowed narrowed, stopping us
> from getting a useful partitioning.  Perhaps ancillary tasks associated
> with the batch scheduler, or some paused tasks in an inactive job that,
> were it active, would need load balancing across a big swath of cpus.
> These would be tasks that we really didn't need to load balance, but they
> would appear as if they needed it because of their fat cpus_allowed.

But we simply can't make a partition for them because they have asked
to use all CPUs. We can't know if this is something that should be
partitioned or not, can we?
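
For concreteness, the disjoint-set computation mentioned above could
look something like this -- a rough Python sketch, with a cpus_allowed
mask modelled as a plain int (bit i set = CPU i allowed), not any real
kernel code:

```python
# Merge overlapping per-task cpus_allowed masks into disjoint
# partitions.  Because partitions are kept pairwise disjoint, a new
# mask only needs to be checked against each existing partition once.

def partition(masks):
    parts = []
    for m in masks:
        merged = m
        # absorb every existing partition this mask overlaps
        for p in parts:
            if p & m:
                merged |= p
        parts = [p for p in parts if not (p & m)] + [merged]
    return parts

# Two overlapping tasks plus an isolated CPU -> two partitions:
#   partition([0b0011, 0b0110, 0b1000]) == [0b0111, 0b1000]
# One task allowed everywhere collapses it all into one domain,
# which is exactly the problem with fat cpus_allowed masks:
#   partition([0b0011, 0b0110, 0b1000, 0b1111]) == [0b1111]
```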

> Users (admins) would have to hunt down these tasks that were getting in
> the way of a nice partitioning and whack their cpus_allowed down to
> size.

In the sense that they get what they ask for, yes. The obvious route
for the big SGI systems is to put them in the right cpuset. The job
managers (or admins) on those things are surely up to the task.

This leaves unbound and transient kernel threads like pdflush as a
remaining problem. Not quite sure what to do about that yet. I see
you have a little hack in there...

> So essentially, one would end up with another userspace API, backdoor
> again.  Like those magic doors in the libraries of wealthy protagonists
> in mystery novels, where you have to open a particular book and pull
> the lamp cord to get the door to appear and open.
> 
> Automatic chokes and transmissions are great - if they work.  If not,
> give me a knob and a stick.

But your knob is just going to be some mechanism to say that you don't
care about such and such a task, or you want to put task x into domain
y.

> ===
> 
> Another idea for a cpuset-based API to this ...
> 
> From our internal perspective, it's all about getting the sched domain
> partitions cut down to a reasonable size, for performance reasons.
> 
> But from the user's perspective, the deal we are asking them to
> consider is to trade in fully automatic, all tasks across all cpus,
> load balancing, in turn for better performance.
> 
> Big system admins would often be quite happy to mark the top cpuset
> as "no need to load balance tasks in this cpuset."  They would
> take responsibility for moving any non-trivial, unpinned tasks into
> lower cpusets (or not be upset if something left behind wasn't load
> balancing.)
> 
> And the batch scheduler would be quite happy to mark its top cpuset as
> "no need to load balance".  It could mark any cpusets holding inactive
> jobs the same way.
> 
> This "no need to load balance" flag would be advisory.  The kernel
> might load balance anyway.  For example if the batch scheduler were
> running under a top cpuset that was -not- so marked, we'd still have
> to load balance everyone.  The batch scheduler wouldn't care.  It would
> have done its duty, to mark which of its cpusets didn't need balancing.
> 
> All we need from them is the ok to not load balance certain cpusets,
> and the rest is easy enough.  If they give us such ok on enough of the
> big cpusets, we give back a nice performance improvement.
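
The semantics being proposed could be sketched roughly like so
(hypothetical structures, Python purely for illustration -- the flag
name and tree layout are made up):

```python
# Walk a cpuset tree and derive sched-domain partitions from an
# advisory "no need to load balance" flag.  A cpuset here is just a
# dict: {'cpus': set, 'no_balance': bool, 'children': list}.

def partitions(cs):
    if not cs["no_balance"]:
        # no advisory ok given: the kernel must keep load balancing
        # across this whole cpuset, so it is one partition
        return [cs["cpus"]]
    # "no need to balance" here: recurse; any child cpusets that still
    # want balancing become their own, smaller partitions
    out = []
    for child in cs["children"]:
        out.extend(partitions(child))
    return out
```

With the top cpuset marked "no need to balance" and two active child
cpusets unmarked, you get two small partitions instead of one
machine-wide domain; leave the top cpuset unmarked and everything
stays in one big partition, as the advisory semantics require.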

I think this is much more of an automatic, behind-your-back thing. If
they don't want any load balancing to happen, they could, for example,
pin the tasks in that cpuset to the cpu each is currently on.

It would be trivial to write a script to parse the root cpuset and
do exactly this, wouldn't it?
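
Something along these lines -- Python rather than shell for brevity,
and the /dev/cpuset mount point, running as root, and the helper names
are all assumptions:

```python
# Pin every task listed in a cpuset's tasks file to the CPU it last
# ran on, so the scheduler has nothing left to balance for them.
import os

def current_cpu(pid):
    # comm in /proc/<pid>/stat may contain spaces, so parse from just
    # after the closing paren; 'processor' is field 39 (see proc(5)),
    # i.e. index 36 of the fields following comm
    with open("/proc/%d/stat" % pid) as f:
        data = f.read()
    return int(data[data.rindex(")") + 2:].split()[36])

def pin_cpuset(path="/dev/cpuset"):
    with open(os.path.join(path, "tasks")) as f:
        for line in f:
            pid = int(line)
            try:
                os.sched_setaffinity(pid, {current_cpu(pid)})
            except OSError:
                pass  # task exited, or not ours to touch
```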

-- 
SUSE Labs, Novell Inc.

