From: Paul Jackson <pj@sgi.com>
To: Paul Jackson <pj@sgi.com>
Cc: dino@in.ibm.com, Simon.Derr@bull.net,
lse-tech@lists.sourceforge.net, akpm@osdl.org,
nickpiggin@yahoo.com.au, vatsa@in.ibm.com,
linux-kernel@vger.kernel.org
Subject: Re: [Lse-tech] [PATCH] cpusets+hotplug+preempt broken
Date: Wed, 11 May 2005 12:55:08 -0700
Message-ID: <20050511125508.20bf44ec.pj@sgi.com>
In-Reply-To: <20050511122543.6e9f6097.pj@sgi.com>
pj wrote:
> Could you explain why this is -- what is the deadlock?
On further reading, I see it. You're right.
Deep in the bowels of the hotplug code, when removing a dead cpu, while
holding the runqueue lock (task_rq_lock), the hotplug code might need to
walk up the cpuset hierarchy to find the nearest enclosing cpuset that
still has online cpus, as part of figuring out where to run a task that
is being kicked off the dead cpu. Holding the runqueue lock puts us in
atomic context, but acquiring cpuset_sem (needed to walk up the cpuset
hierarchy) can sleep. So cpuset_sem has to be taken before calling
task_rq_lock, to avoid the "scheduling while atomic" oops that you
reported.

Therefore the hotplug code, and anyone else calling cpuset_cpus_allowed(),
which includes sched_setaffinity(), needs to grab cpuset_sem first, and
only then take whatever atomic locks it needs, not the other way around.
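In other words, the ordering needs to look roughly like this. This is
only a sketch to make the ordering explicit, not the actual hotplug
code; the variables and surrounding calls are made up for illustration,
and only cpuset_sem, task_rq_lock and cpuset_cpus_allowed are the real
names discussed above:

	/*
	 * Sketch only: take the sleeping lock first, then the atomic
	 * runqueue lock, and release in the reverse order.
	 */
	down(&cpuset_sem);              /* may sleep -- no atomic lock held yet */
	cpus = cpuset_cpus_allowed(p);  /* walk up the cpuset hierarchy safely  */

	rq = task_rq_lock(p, &flags);   /* now we are in atomic context         */
	/* ... pick a destination cpu from 'cpus' and move the task there ... */
	task_rq_unlock(rq, &flags);

	up(&cpuset_sem);

Doing it the other way around (task_rq_lock first, then trying to take
cpuset_sem) is exactly what triggers the "scheduling while atomic" oops.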
Acked-by: Paul Jackson <pj@sgi.com>
--
I won't rest till it's the best ...
Programmer, Linux Scalability
Paul Jackson <pj@engr.sgi.com> 1.650.933.1373, 1.925.600.0401
Thread overview: 38+ messages
2005-05-11 19:16 [PATCH] cpusets+hotplug+preempt broken Dinakar Guniguntala
2005-05-11 19:25 ` [Lse-tech] " Paul Jackson
2005-05-11 19:55 ` Paul Jackson [this message]
2005-05-11 20:26 ` Nathan Lynch
2005-05-11 20:45 ` Paul Jackson
2005-05-11 19:51 ` Nathan Lynch
2005-05-11 20:42 ` Paul Jackson
2005-05-11 20:58 ` Paul Jackson
2005-05-14 2:23 ` Paul Jackson
2005-05-14 12:14 ` Nathan Lynch
2005-05-14 17:04 ` Paul Jackson
2005-05-14 17:44 ` Paul Jackson
2005-05-18 4:14 ` Paul Jackson
2005-05-18 9:29 ` [Lse-tech] " Dinakar Guniguntala
2005-05-18 14:48 ` Nathan Lynch
2005-05-18 21:16 ` Paul Jackson
2005-05-14 16:28 ` Srivatsa Vaddagiri
2005-05-12 15:10 ` Dinakar Guniguntala
2005-05-13 12:15 ` [Lse-tech] " Dinakar Guniguntala
2005-05-13 0:34 ` Nathan Lynch
2005-05-13 12:32 ` [Lse-tech] " Dinakar Guniguntala
2005-05-13 17:25 ` Srivatsa Vaddagiri
2005-05-13 19:59 ` Paul Jackson
2005-05-13 20:20 ` Dipankar Sarma
2005-05-13 20:46 ` Nathan Lynch
2005-05-13 21:05 ` Paul Jackson
2005-05-13 21:06 ` Dipankar Sarma
2005-05-13 20:52 ` Paul Jackson
2005-05-13 21:02 ` Dipankar Sarma
2005-05-14 2:58 ` Paul Jackson
2005-05-14 16:09 ` Srivatsa Vaddagiri
2005-05-14 17:50 ` Paul Jackson
2005-05-14 17:57 ` Paul Jackson
2005-05-14 16:30 ` Nathan Lynch
2005-05-14 17:23 ` Srivatsa Vaddagiri
2005-05-14 23:17 ` Nathan Lynch
2005-05-13 19:59 ` Paul Jackson
2005-05-13 21:27 ` Nathan Lynch