From: Laurent Dufour <ldufour@linux.vnet.ibm.com>
To: paulmck@linux.vnet.ibm.com, peterz@infradead.org,
akpm@linux-foundation.org, kirill@shutemov.name,
ak@linux.intel.com, mhocko@kernel.org, dave@stgolabs.net,
jack@suse.cz, Matthew Wilcox <willy@infradead.org>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
haren@linux.vnet.ibm.com, khandual@linux.vnet.ibm.com,
npiggin@gmail.com, bsingharora@gmail.com,
Tim Chen <tim.c.chen@linux.intel.com>
Subject: [RFC v5 05/11] mm: fix lock dependency against mapping->i_mmap_rwsem
Date: Fri, 16 Jun 2017 19:52:29 +0200 [thread overview]
Message-ID: <1497635555-25679-6-git-send-email-ldufour@linux.vnet.ibm.com> (raw)
In-Reply-To: <1497635555-25679-1-git-send-email-ldufour@linux.vnet.ibm.com>
Lockdep reports the following circular locking dependency:
kworker/32:1/819 is trying to acquire lock:
(&vma->vm_sequence){+.+...}, at: [<c0000000002f20e0>]
zap_page_range_single+0xd0/0x1a0
but task is already holding lock:
(&mapping->i_mmap_rwsem){++++..}, at: [<c0000000002f229c>]
unmap_mapping_range+0x7c/0x160
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #2 (&mapping->i_mmap_rwsem){++++..}:
       down_write+0x84/0x130
       __vma_adjust+0x1f4/0xa80
       __split_vma.isra.2+0x174/0x290
       do_munmap+0x13c/0x4e0
       vm_munmap+0x64/0xb0
       elf_map+0x11c/0x130
       load_elf_binary+0x6f0/0x15f0
       search_binary_handler+0xe0/0x2a0
       do_execveat_common.isra.14+0x7fc/0xbe0
       call_usermodehelper_exec_async+0x14c/0x1d0
       ret_from_kernel_thread+0x5c/0x68
-> #1 (&vma->vm_sequence/1){+.+...}:
       __vma_adjust+0x124/0xa80
       __split_vma.isra.2+0x174/0x290
       do_munmap+0x13c/0x4e0
       vm_munmap+0x64/0xb0
       elf_map+0x11c/0x130
       load_elf_binary+0x6f0/0x15f0
       search_binary_handler+0xe0/0x2a0
       do_execveat_common.isra.14+0x7fc/0xbe0
       call_usermodehelper_exec_async+0x14c/0x1d0
       ret_from_kernel_thread+0x5c/0x68
-> #0 (&vma->vm_sequence){+.+...}:
       lock_acquire+0xf4/0x310
       unmap_page_range+0xcc/0xfa0
       zap_page_range_single+0xd0/0x1a0
       unmap_mapping_range+0x138/0x160
       truncate_pagecache+0x50/0xa0
       put_aio_ring_file+0x48/0xb0
       aio_free_ring+0x40/0x1b0
       free_ioctx+0x38/0xc0
       process_one_work+0x2cc/0x8a0
       worker_thread+0xac/0x580
       kthread+0x164/0x1b0
       ret_from_kernel_thread+0x5c/0x68
other info that might help us debug this:
Chain exists of:
&vma->vm_sequence --> &vma->vm_sequence/1 --> &mapping->i_mmap_rwsem
Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&mapping->i_mmap_rwsem);
                               lock(&vma->vm_sequence/1);
                               lock(&mapping->i_mmap_rwsem);
  lock(&vma->vm_sequence);

 *** DEADLOCK ***
To fix this, __vma_adjust() must grab the vm_sequence lock only after any
mapping lock has been taken, so that every path acquires the two locks in
the same order.
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
mm/mmap.c | 22 ++++++++++++----------
1 file changed, 12 insertions(+), 10 deletions(-)
diff --git a/mm/mmap.c b/mm/mmap.c
index 9f86356d0012..ad85f210a92c 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -679,10 +679,6 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
long adjust_next = 0;
int remove_next = 0;
- write_seqcount_begin(&vma->vm_sequence);
- if (next)
- write_seqcount_begin_nested(&next->vm_sequence, SINGLE_DEPTH_NESTING);
-
if (next && !insert) {
struct vm_area_struct *exporter = NULL, *importer = NULL;
@@ -790,6 +786,11 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
}
}
+ write_seqcount_begin(&vma->vm_sequence);
+ if (next)
+ write_seqcount_begin_nested(&next->vm_sequence,
+ SINGLE_DEPTH_NESTING);
+
anon_vma = vma->anon_vma;
if (!anon_vma && adjust_next)
anon_vma = next->anon_vma;
@@ -908,8 +909,6 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
* "vma->vm_next" gap must be updated.
*/
next = vma->vm_next;
- if (next)
- write_seqcount_begin_nested(&next->vm_sequence, SINGLE_DEPTH_NESTING);
} else {
/*
* For the scope of the comment "next" and
@@ -926,11 +925,14 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
if (remove_next == 2) {
remove_next = 1;
end = next->vm_end;
+ write_seqcount_end(&vma->vm_sequence);
goto again;
- }
- else if (next)
+ } else if (next) {
+ if (next != vma)
+ write_seqcount_begin_nested(&next->vm_sequence,
+ SINGLE_DEPTH_NESTING);
vma_gap_update(next);
- else {
+ } else {
/*
* If remove_next == 2 we obviously can't
* reach this path.
@@ -956,7 +958,7 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
if (insert && file)
uprobe_mmap(insert);
- if (next)
+ if (next && next != vma)
write_seqcount_end(&next->vm_sequence);
write_seqcount_end(&vma->vm_sequence);
--
2.7.4