From: Paul Jackson <pj@sgi.com>
To: Martin Bligh <mbligh@google.com>
Cc: nickpiggin@yahoo.com.au, akpm@osdl.org, menage@google.com,
Simon.Derr@bull.net, linux-kernel@vger.kernel.org,
dino@in.ibm.com, rohitseth@google.com, holt@sgi.com,
dipankar@in.ibm.com, suresh.b.siddha@intel.com
Subject: Re: [RFC] cpuset: remove sched domain hooks from cpusets
Date: Sun, 22 Oct 2006 03:51:35 -0700 [thread overview]
Message-ID: <20061022035135.2c450147.pj@sgi.com> (raw)
In-Reply-To: <4537D6E8.8020501@google.com>

Martin wrote:
> We (Google) are planning to use it to do some partitioning, albeit on
> much smaller machines. I'd really like to NOT use cpus_allowed from
> previous experience - if we can get it to partition using separated
> sched domains, that would be much better.

Why not use cpus_allowed for this, via cpusets and/or sched_setaffinity?

In the followup to this between Paul M. and myself, I wrote:
> As best as I can tell, the two motivations for explicitly setting
> sched domain partitions are:
>  1) isolating cpus for real time uses very sensitive to any interference,
>  2) handling load balancing on huge CPU counts, where the worse-than-linear
>     algorithms start to hurt.
> ...
> How many CPUs are you juggling?

And Paul M. replied:
> Not many by your standards - less than eight in general.

So ... it would seem you have neither huge CPU counts nor real time
sensitivities.

So why not use cpus_allowed?
--
I won't rest till it's the best ...
Programmer, Linux Scalability
Paul Jackson <pj@sgi.com> 1.925.600.0401
Thread overview: 34+ messages
2006-10-19 9:23 [RFC] cpuset: remove sched domain hooks from cpusets Paul Jackson
2006-10-19 10:24 ` Nick Piggin
2006-10-19 19:03 ` Paul Jackson
2006-10-19 19:21 ` Nick Piggin
2006-10-19 19:50 ` Martin Bligh
2006-10-20 0:14 ` Paul Jackson
2006-10-20 16:03 ` Nick Piggin
2006-10-20 17:29 ` Siddha, Suresh B
2006-10-20 19:19 ` Paul Jackson
2006-10-20 19:00 ` Paul Jackson
2006-10-20 20:30 ` Dinakar Guniguntala
2006-10-20 21:41 ` Paul Jackson
2006-10-20 22:35 ` Dinakar Guniguntala
2006-10-20 23:14 ` Siddha, Suresh B
2006-10-21 5:37 ` Paul Jackson
2006-10-23 4:31 ` Siddha, Suresh B
2006-10-23 5:59 ` Paul Jackson
2006-10-21 23:05 ` Paul Jackson
2006-10-22 12:02 ` Paul Jackson
2006-10-23 3:09 ` Paul Jackson
2006-10-20 21:46 ` Paul Jackson
2006-10-21 18:23 ` Paul Menage
2006-10-21 20:55 ` Paul Jackson
2006-10-21 20:59 ` Paul Menage
2006-10-22 10:51 ` Paul Jackson [this message]
2006-10-23 5:26 ` Siddha, Suresh B
2006-10-23 5:54 ` Paul Jackson
2006-10-23 5:43 ` Siddha, Suresh B
2006-10-23 6:02 ` Nick Piggin
2006-10-23 6:16 ` Paul Jackson
2006-10-23 16:03 ` Christoph Lameter
2006-11-09 10:59 ` Paul Jackson
2006-10-23 16:01 ` Christoph Lameter
-- strict thread matches above, loose matches on Subject: below --
2006-10-30 21:26 [RFC] cpuset: Remove " Dinakar Guniguntala