From: "David S. Miller" <davem@redhat.com>
To: mingo@elte.hu
Cc: kravetz@us.ibm.com, rml@tech9.net, linux-kernel@vger.kernel.org
Subject: Re: Scheduler Bug (set_cpus_allowed)
Date: Wed, 12 Jun 2002 04:54:14 -0700 (PDT)
Message-ID: <20020612.045414.105100110.davem@redhat.com>
In-Reply-To: <Pine.LNX.4.44.0206102130060.29999-100000@elte.hu>
   From: Ingo Molnar <mingo@elte.hu>
   Date: Mon, 10 Jun 2002 22:02:28 +0200 (CEST)

> David, would it be possible to somehow not recurse back into the scheduler
> code (like wakeup) from within the port-specific switch_to() code?
It's a locking problem more than anything else. It's not that
we call back into it, it's that we hold locks in switch_mm.
To recap, from changeset 1.369.49.1:
Fix scheduler deadlock on some platforms.
Some platforms need to grab mm->page_table_lock during switch_mm().
On the other hand code like swap_out() in mm/vmscan.c needs to hold
mm->page_table_lock during wakeups which needs to grab the runqueue
lock. This creates a conflict and the resolution chosen here is to
not hold the runqueue lock during context_switch().
The implementation is specifically a "frozen" state implemented as a
spinlock, which is held around the context_switch() call. This allows
the runqueue lock to be dropped during this time yet prevents another
CPU from running the "not switched away from yet" task.
So if you're going to delete the frozen code, please replace it with
something that handles the above.
There is nothing weird about holding page_table_lock during
switch_mm(); I can imagine many other ports doing it, especially those
with TLB PIDs that want to optimize SMP TLB/cache flushes.
I think changing vmscan.c to not call wakeup is the wrong way to
go about this. I thought about doing that originally, but it looks
to be 100 times more difficult to implement (and verify) than the
scheduler side fix.