From: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>,
linux-rt-users <linux-rt-users@vger.kernel.org>,
Ingo Molnar <mingo@elte.hu>, Thomas Gleixner <tglx@linutronix.de>,
LKML <linux-kernel@vger.kernel.org>
Subject: [PATCH -rt] avoid deadlock related with PG_nonewrefs and swap_lock
Date: Mon, 24 Mar 2008 11:24:17 -0700 [thread overview]
Message-ID: <47E7F1D1.6010407@ct.jp.nec.com> (raw)
In-Reply-To: <1205833218.8514.338.camel@twins>
Hi Peter,
I've updated the patch. Could you please review it?
I also think this could go into mainline, since it shortens the period
during which swap_lock is held. Is that correct?
---
From: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com>
There is a deadlock scenario between remove_mapping() and
free_swap_and_cache().
remove_mapping() sets the PG_nonewrefs bit, then takes swap_lock.
free_swap_and_cache() takes swap_lock, then waits in find_get_page()
for the PG_nonewrefs bit to be cleared.
swap_lock can be released before calling find_get_page().
In remove_exclusive_swap_page() there is a similar lock sequence:
swap_lock, then the PG_nonewrefs bit. swap_lock can be released before
setting the PG_nonewrefs bit.
Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com>
---
mm/swapfile.c | 10 ++++++----
1 files changed, 6 insertions(+), 4 deletions(-)
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 5036b70..6fbc77e 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -366,6 +366,7 @@ int remove_exclusive_swap_page(struct page *page)
/* Is the only swap cache user the cache itself? */
retval = 0;
if (p->swap_map[swp_offset(entry)] == 1) {
+ spin_unlock(&swap_lock);
/* Recheck the page count with the swapcache lock held.. */
lock_page_ref_irq(page);
if ((page_count(page) == 2) && !PageWriteback(page)) {
@@ -374,8 +375,8 @@ int remove_exclusive_swap_page(struct page *page)
retval = 1;
}
unlock_page_ref_irq(page);
- }
- spin_unlock(&swap_lock);
+ } else
+ spin_unlock(&swap_lock);
if (retval) {
swap_free(entry);
@@ -400,13 +401,14 @@ void free_swap_and_cache(swp_entry_t entry)
p = swap_info_get(entry);
if (p) {
if (swap_entry_free(p, swp_offset(entry)) == 1) {
+ spin_unlock(&swap_lock);
page = find_get_page(&swapper_space, entry.val);
if (page && unlikely(TestSetPageLocked(page))) {
page_cache_release(page);
page = NULL;
}
- }
- spin_unlock(&swap_lock);
+ } else
+ spin_unlock(&swap_lock);
}
if (page) {
int one_user;
--
1.5.4.1
Thread overview: 9+ messages
2008-03-17 18:26 deadlock on 2.6.24.3-rt3 Hiroshi Shimamoto
2008-03-18 0:14 ` Hiroshi Shimamoto
2008-03-18 1:53 ` Steven Rostedt
2008-03-18 9:40 ` Peter Zijlstra
2008-03-18 17:15 ` Hiroshi Shimamoto
2008-03-20 20:31 ` Hiroshi Shimamoto
2008-03-24 18:24 ` Hiroshi Shimamoto [this message]
2008-03-26 9:50 ` [PATCH -rt] avoid deadlock related with PG_nonewrefs and swap_lock Peter Zijlstra
2008-03-26 10:09 ` Peter Zijlstra