From: Andrew Morton <akpm@linux-foundation.org>
To: Cliff Wickman <cpw@sgi.com>
Cc: ego@in.ibm.com, mingo@elte.hu, vatsa@in.ibm.com, oleg@tv-sign.ru,
pj@sgi.com, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/1] V3: hotplug cpu: migrate a task within its cpuset
Date: Mon, 27 Aug 2007 09:25:00 -0700
Message-ID: <20070827092500.be762139.akpm@linux-foundation.org>
In-Reply-To: <20070827160703.GA2446@sgi.com>
On Mon, 27 Aug 2007 11:07:03 -0500 Cliff Wickman <cpw@sgi.com> wrote:
>
> Version 3 adds a missing task_rq_lock()/task_rq_unlock() pair (spotted by Oleg).
>
> This patch was discussed by Andrew Morton, Oleg Nesterov, Gautham Shenoy,
> and Rusty Russell.  Other approaches were considered:
> refusing to offline a cpu with tasks pinned to it, or
> providing an administrator the ability to assign such tasks to other cpus.
>
> There is indeed an "assumption" in my patch that the cpuset containing a
> pinned task's cpu is a better choice than any online cpu. I think that is
> a reasonable assumption given the typical construction of a cpuset and the
> reason a task is running in a cpuset.
>
> And there will be cases, at least on some architectures, where a
> cpu is offlined by the kernel in reaction to a hardware error. In that case
> would it not be preferable to re-pin such tasks and let them proceed?
>
>
>
> When a cpu is disabled, move_task_off_dead_cpu() is called for tasks
> that have been running on that cpu.
>
> Currently, such a task is migrated:
> 1) to any cpu on the same node as the disabled cpu, which is both online
> and among that task's cpus_allowed
> 2) to any cpu which is both online and among that task's cpus_allowed
>
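For reference, the current fallback order corresponds to the logic in
move_task_off_dead_cpu() in kernel/sched.c.  A simplified sketch from memory of
the 2.6.2x code (locking and the retry loop omitted, names approximate):

	static void move_task_off_dead_cpu(int dead_cpu, struct task_struct *p)
	{
		cpumask_t mask;
		int dest_cpu;

		/* 1) prefer an online, allowed cpu on the same node */
		mask = node_to_cpumask(cpu_to_node(dead_cpu));
		cpus_and(mask, mask, p->cpus_allowed);
		dest_cpu = any_online_cpu(mask);

		/* 2) otherwise any online cpu in the task's cpus_allowed */
		if (dest_cpu == NR_CPUS)
			dest_cpu = any_online_cpu(p->cpus_allowed);

		/* 3) "No more Mr. Nice Guy.": allow all cpus and pick one */
		if (dest_cpu == NR_CPUS) {
			cpus_setall(p->cpus_allowed);
			dest_cpu = any_online_cpu(p->cpus_allowed);
		}

		__migrate_task(p, dead_cpu, dest_cpu);
	}
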
> It is typical of a multithreaded application running on a large NUMA system
> to have its tasks confined to a cpuset so as to cluster them near the
> memory that they share. Furthermore, it is typical to explicitly place such
> a task on a specific cpu in that cpuset. And in that case the task's
> cpus_allowed includes only a single cpu.
>
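To illustrate how a task ends up with a single cpu in its cpus_allowed: the
application, or a batch scheduler acting on its behalf, attaches the task to a
cpuset and then pins it to one cpu, typically via sched_setaffinity().  A
minimal userspace example (the cpu number is just an illustration):

	#define _GNU_SOURCE
	#include <sched.h>
	#include <stdio.h>

	int main(void)
	{
		cpu_set_t set;

		CPU_ZERO(&set);
		CPU_SET(4, &set);	/* assume cpu 4 lies inside this task's cpuset */
		if (sched_setaffinity(0, sizeof(set), &set) != 0) {
			perror("sched_setaffinity");
			return 1;
		}
		/* ... work close to this cpu's memory ... */
		return 0;
	}

After this, the task's cpus_allowed contains only cpu 4, so offlining cpu 4
leaves the scheduler with no allowed online cpu to migrate it to.
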
> This patch would insert a preference to migrate such a task to some cpu within
> its cpuset (and set its cpus_allowed to its entire cpuset).
>
> With this patch, the task is migrated:
> 1) to any cpu on the same node as the disabled cpu, which is both online
> and among that task's cpus_allowed
> 2) to any online cpu within the task's cpuset
> 3) to any cpu which is both online and among that task's cpus_allowed
>
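Not the patch itself, but the new step 2 amounts to something like the
following, inserted between the two existing fallbacks in
move_task_off_dead_cpu().  A sketch only: it assumes the existing
cpuset_cpus_allowed() helper, and omits the task_rq_lock()/task_rq_unlock()
pair that V3 adds around the cpus_allowed update:

	/* 2) fall back to any online cpu in the task's cpuset, and
	 *    re-pin the task to its whole cpuset */
	if (dest_cpu == NR_CPUS) {
		cpumask_t cpuset_cpus = cpuset_cpus_allowed(p);

		dest_cpu = any_online_cpu(cpuset_cpus);
		if (dest_cpu != NR_CPUS)
			p->cpus_allowed = cpuset_cpus;
	}

Only if that also fails does the "No more Mr. Nice Guy." path widen
cpus_allowed to all cpus.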
<looks at the No more Mr. Nice Guy. code>
OK, so we're no worse than we used to be, really.
> include/linux/cpuset.h | 5 +++++
> kernel/cpuset.c | 15 ++++++++++++++-
> kernel/sched.c | 16 ++++++++++++++++
How do we communicate this new design/feature to our users?
Documentation/cpusets.txt, perhaps? Documentation/cpu-hotplug.txt?
git-log? ;)