From: Laurent Dufour <ldufour@linux.vnet.ibm.com>
To: paulmck@linux.vnet.ibm.com, peterz@infradead.org,
akpm@linux-foundation.org, kirill@shutemov.name,
ak@linux.intel.com, mhocko@kernel.org, dave@stgolabs.net,
jack@suse.cz, Matthew Wilcox <willy@infradead.org>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
haren@linux.vnet.ibm.com, khandual@linux.vnet.ibm.com,
npiggin@gmail.com, bsingharora@gmail.com
Subject: [RFC v4 10/20] mm/spf: fix lock dependency against mapping->i_mmap_rwsem
Date: Fri, 9 Jun 2017 16:20:59 +0200
Message-ID: <1497018069-17790-11-git-send-email-ldufour@linux.vnet.ibm.com>
In-Reply-To: <1497018069-17790-1-git-send-email-ldufour@linux.vnet.ibm.com>
Lockdep reports the following circular locking dependency:

kworker/32:1/819 is trying to acquire lock:
 (&vma->vm_sequence){+.+...}, at: [<c0000000002f20e0>] zap_page_range_single+0xd0/0x1a0

but task is already holding lock:
 (&mapping->i_mmap_rwsem){++++..}, at: [<c0000000002f229c>] unmap_mapping_range+0x7c/0x160

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #2 (&mapping->i_mmap_rwsem){++++..}:
       down_write+0x84/0x130
       __vma_adjust+0x1f4/0xa80
       __split_vma.isra.2+0x174/0x290
       do_munmap+0x13c/0x4e0
       vm_munmap+0x64/0xb0
       elf_map+0x11c/0x130
       load_elf_binary+0x6f0/0x15f0
       search_binary_handler+0xe0/0x2a0
       do_execveat_common.isra.14+0x7fc/0xbe0
       call_usermodehelper_exec_async+0x14c/0x1d0
       ret_from_kernel_thread+0x5c/0x68

-> #1 (&vma->vm_sequence/1){+.+...}:
       __vma_adjust+0x124/0xa80
       __split_vma.isra.2+0x174/0x290
       do_munmap+0x13c/0x4e0
       vm_munmap+0x64/0xb0
       elf_map+0x11c/0x130
       load_elf_binary+0x6f0/0x15f0
       search_binary_handler+0xe0/0x2a0
       do_execveat_common.isra.14+0x7fc/0xbe0
       call_usermodehelper_exec_async+0x14c/0x1d0
       ret_from_kernel_thread+0x5c/0x68

-> #0 (&vma->vm_sequence){+.+...}:
       lock_acquire+0xf4/0x310
       unmap_page_range+0xcc/0xfa0
       zap_page_range_single+0xd0/0x1a0
       unmap_mapping_range+0x138/0x160
       truncate_pagecache+0x50/0xa0
       put_aio_ring_file+0x48/0xb0
       aio_free_ring+0x40/0x1b0
       free_ioctx+0x38/0xc0
       process_one_work+0x2cc/0x8a0
       worker_thread+0xac/0x580
       kthread+0x164/0x1b0
       ret_from_kernel_thread+0x5c/0x68

other info that might help us debug this:

Chain exists of:
  &vma->vm_sequence --> &vma->vm_sequence/1 --> &mapping->i_mmap_rwsem

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&mapping->i_mmap_rwsem);
                               lock(&vma->vm_sequence/1);
                               lock(&mapping->i_mmap_rwsem);
  lock(&vma->vm_sequence);

 *** DEADLOCK ***
To fix this, __vma_adjust() must take the vm_sequence lock after the
mapping's i_mmap_rwsem lock, never before, so that every code path
acquires the two locks in the same order.
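
As an illustration, the ordering rule this patch enforces can be
sketched with a small, self-contained userspace analogue. This is
illustrative only, not kernel code: the pthread locks below stand in
for i_mmap_rwsem and for vm_sequence (whose write side lockdep treats
like a lock), and all function names are made up for this sketch.

/*
 * Minimal userspace sketch of the lock-ordering fix. Running
 * vma_adjust_old_order() concurrently with unmap_order() could
 * deadlock, because the two paths acquire the locks in opposite
 * orders; vma_adjust_new_order() agrees with unmap_order() and
 * cannot. Build with: cc -pthread sketch.c
 */
#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t i_mmap_rwsem = PTHREAD_RWLOCK_INITIALIZER;
static pthread_mutex_t vm_sequence = PTHREAD_MUTEX_INITIALIZER;

/* Before the patch: __vma_adjust() took vm_sequence first. */
static void vma_adjust_old_order(void)
{
	pthread_mutex_lock(&vm_sequence);      /* vm_sequence ...       */
	pthread_rwlock_wrlock(&i_mmap_rwsem);  /* ... then i_mmap_rwsem */
	pthread_rwlock_unlock(&i_mmap_rwsem);
	pthread_mutex_unlock(&vm_sequence);
}

/* The unmap path (the -> #0 trace above) goes the other way. */
static void unmap_order(void)
{
	pthread_rwlock_rdlock(&i_mmap_rwsem);  /* i_mmap_rwsem ...      */
	pthread_mutex_lock(&vm_sequence);      /* ... then vm_sequence  */
	pthread_mutex_unlock(&vm_sequence);
	pthread_rwlock_unlock(&i_mmap_rwsem);
}

/* After the patch: both paths take the mapping lock first. */
static void vma_adjust_new_order(void)
{
	pthread_rwlock_wrlock(&i_mmap_rwsem);
	pthread_mutex_lock(&vm_sequence);
	pthread_mutex_unlock(&vm_sequence);
	pthread_rwlock_unlock(&i_mmap_rwsem);
}

int main(void)
{
	vma_adjust_new_order();
	unmap_order();
	printf("consistent order: i_mmap_rwsem -> vm_sequence\n");
	return 0;
}

The patch below applies exactly this idea: the write_seqcount_begin()
calls are moved from the top of __vma_adjust() to after the
i_mmap_lock_write() call, and the retry path releases the sequence
count before jumping back to "again".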
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
mm/mmap.c | 22 ++++++++++++----------
1 file changed, 12 insertions(+), 10 deletions(-)
diff --git a/mm/mmap.c b/mm/mmap.c
index 008fc35aa75e..0cad4d9b71d8 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -707,10 +707,6 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
 	long adjust_next = 0;
 	int remove_next = 0;
 
-	write_seqcount_begin(&vma->vm_sequence);
-	if (next)
-		write_seqcount_begin_nested(&next->vm_sequence, SINGLE_DEPTH_NESTING);
-
 	if (next && !insert) {
 		struct vm_area_struct *exporter = NULL, *importer = NULL;
@@ -818,6 +814,11 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
 		}
 	}
 
+	write_seqcount_begin(&vma->vm_sequence);
+	if (next)
+		write_seqcount_begin_nested(&next->vm_sequence,
+					    SINGLE_DEPTH_NESTING);
+
 	anon_vma = vma->anon_vma;
 	if (!anon_vma && adjust_next)
 		anon_vma = next->anon_vma;
@@ -934,8 +935,6 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
 			 * "vma->vm_next" gap must be updated.
 			 */
 			next = vma->vm_next;
-			if (next)
-				write_seqcount_begin_nested(&next->vm_sequence, SINGLE_DEPTH_NESTING);
 		} else {
 			/*
 			 * For the scope of the comment "next" and
@@ -952,11 +951,14 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
 		if (remove_next == 2) {
 			remove_next = 1;
 			end = next->vm_end;
+			write_seqcount_end(&vma->vm_sequence);
 			goto again;
-		}
-		else if (next)
+		} else if (next) {
+			if (next != vma)
+				write_seqcount_begin_nested(&next->vm_sequence,
+							    SINGLE_DEPTH_NESTING);
 			vma_gap_update(next);
-		else {
+		} else {
 			/*
 			 * If remove_next == 2 we obviously can't
 			 * reach this path.
@@ -982,7 +984,7 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
 	if (insert && file)
 		uprobe_mmap(insert);
 
-	if (next)
+	if (next && next != vma)
 		write_seqcount_end(&next->vm_sequence);
 	write_seqcount_end(&vma->vm_sequence);
 
--
2.7.4