From: Dave Jones <davej@redhat.com>
To: Hugh Dickins <hughd@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>,
Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>,
Kyungmin Park <kyungmin.park@samsung.com>,
Marek Szyprowski <m.szyprowski@samsung.com>,
Mel Gorman <mgorman@suse.de>, Minchan Kim <minchan@kernel.org>,
Rik van Riel <riel@redhat.com>,
Andrew Morton <akpm@linux-foundation.org>,
Cong Wang <amwang@redhat.com>,
Markus Trippelsdorf <markus@trippelsdorf.de>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: WARNING: at mm/page-writeback.c:1990 __set_page_dirty_nobuffers+0x13a/0x170()
Date: Sun, 3 Jun 2012 14:15:48 -0400
Message-ID: <20120603181548.GA306@redhat.com>
In-Reply-To: <alpine.LSU.2.00.1206012108430.11308@eggly.anvils>
On Fri, Jun 01, 2012 at 09:40:35PM -0700, Hugh Dickins wrote:
> In which case, yes, much better to follow your suggestion, and hold
> the lock (with irqs disabled) for only half the time.
>
> Similarly untested patch below.
Things aren't happy with that patch at all.
=============================================
[ INFO: possible recursive locking detected ]
3.5.0-rc1+ #50 Not tainted
---------------------------------------------
trinity-child1/31784 is trying to acquire lock:
(&(&zone->lock)->rlock){-.-.-.}, at: [<ffffffff81165c5d>] suitable_migration_target.isra.15+0x19d/0x1e0
but task is already holding lock:
(&(&zone->lock)->rlock){-.-.-.}, at: [<ffffffff811661fb>] compaction_alloc+0x21b/0x2f0
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0
----
lock(&(&zone->lock)->rlock);
lock(&(&zone->lock)->rlock);
*** DEADLOCK ***
May be due to missing lock nesting notation
2 locks held by trinity-child1/31784:
#0: (&mm->mmap_sem){++++++}, at: [<ffffffff8115fc46>] vm_mmap_pgoff+0x66/0xb0
#1: (&(&zone->lock)->rlock){-.-.-.}, at: [<ffffffff811661fb>] compaction_alloc+0x21b/0x2f0
stack backtrace:
Pid: 31784, comm: trinity-child1 Not tainted 3.5.0-rc1+ #50
Call Trace:
[<ffffffff810b6584>] __lock_acquire+0x1584/0x1aa0
[<ffffffff810b19c8>] ? trace_hardirqs_off_caller+0x28/0xc0
[<ffffffff8108cd47>] ? local_clock+0x47/0x60
[<ffffffff810b7162>] lock_acquire+0x92/0x1f0
[<ffffffff81165c5d>] ? suitable_migration_target.isra.15+0x19d/0x1e0
[<ffffffff8164ce05>] ? _raw_spin_lock_irqsave+0x25/0x90
[<ffffffff8164ce32>] _raw_spin_lock_irqsave+0x52/0x90
[<ffffffff81165c5d>] ? suitable_migration_target.isra.15+0x19d/0x1e0
[<ffffffff81165c5d>] suitable_migration_target.isra.15+0x19d/0x1e0
[<ffffffff8116620e>] compaction_alloc+0x22e/0x2f0
[<ffffffff81198547>] migrate_pages+0xc7/0x540
[<ffffffff81165fe0>] ? isolate_freepages_block+0x260/0x260
[<ffffffff81166e86>] compact_zone+0x216/0x480
[<ffffffff810b19c8>] ? trace_hardirqs_off_caller+0x28/0xc0
[<ffffffff811673cd>] compact_zone_order+0x8d/0xd0
[<ffffffff811499e5>] ? get_page_from_freelist+0x565/0x970
[<ffffffff811674d9>] try_to_compact_pages+0xc9/0x140
[<ffffffff81642e01>] __alloc_pages_direct_compact+0xaa/0x1d0
Then a bunch of NMI backtraces, and a hard lockup.
Dave
Thread overview: 44+ messages
2012-05-30 16:33 WARNING: at mm/page-writeback.c:1990 __set_page_dirty_nobuffers+0x13a/0x170() Dave Jones
2012-05-31 0:57 ` Dave Jones
2012-06-01 2:31 ` Dave Jones
2012-06-01 2:43 ` Linus Torvalds
2012-06-01 13:43 ` Dave Jones
2012-06-01 8:44 ` Hugh Dickins
2012-06-01 8:51 ` KOSAKI Motohiro
2012-06-01 9:08 ` Hugh Dickins
2012-06-01 9:12 ` KOSAKI Motohiro
2012-06-01 14:09 ` Dave Jones
2012-06-01 14:14 ` Dave Jones
2012-06-01 16:12 ` Dave Jones
2012-06-01 17:16 ` Dave Jones
2012-06-01 22:17 ` Hugh Dickins
2012-06-02 1:45 ` Linus Torvalds
2012-06-02 4:40 ` Hugh Dickins
2012-06-02 4:58 ` Linus Torvalds
2012-06-02 7:20 ` Hugh Dickins
2012-06-02 7:17 ` Markus Trippelsdorf
2012-06-02 7:22 ` Hugh Dickins
2012-06-02 7:27 ` [PATCH] mm: fix warning in __set_page_dirty_nobuffers Hugh Dickins
2012-06-03 18:15 ` Dave Jones [this message]
2012-06-03 18:23 ` WARNING: at mm/page-writeback.c:1990 __set_page_dirty_nobuffers+0x13a/0x170() Linus Torvalds
2012-06-03 18:31 ` Dave Jones
2012-06-03 20:53 ` Dave Jones
2012-06-03 21:59 ` Linus Torvalds
2012-06-03 22:13 ` Dave Jones
2012-06-03 22:29 ` Hugh Dickins
2012-06-03 22:17 ` Hugh Dickins
2012-06-03 23:13 ` Linus Torvalds
2012-06-04 0:46 ` KOSAKI Motohiro
2012-06-04 1:18 ` Hugh Dickins
2012-06-04 1:21 ` Minchan Kim
2012-06-04 1:26 ` KOSAKI Motohiro
2012-06-04 2:30 ` Minchan Kim
2012-06-04 1:10 ` Minchan Kim
2012-06-04 1:41 ` Hugh Dickins
2012-06-04 1:47 ` KOSAKI Motohiro
2012-06-04 2:28 ` Minchan Kim
2012-06-04 4:21 ` KOSAKI Motohiro
2012-06-04 13:37 ` Bartlomiej Zolnierkiewicz
2012-06-01 16:16 ` Markus Trippelsdorf
2012-06-01 16:28 ` Linus Torvalds
2012-06-01 16:39 ` Markus Trippelsdorf