From: Laurent Dufour <ldufour@linux.vnet.ibm.com>
To: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
Cc: paulmck@linux.vnet.ibm.com, peterz@infradead.org,
akpm@linux-foundation.org, kirill@shutemov.name,
ak@linux.intel.com, mhocko@kernel.org, dave@stgolabs.net,
jack@suse.cz, Matthew Wilcox <willy@infradead.org>,
benh@kernel.crashing.org, mpe@ellerman.id.au, paulus@samba.org,
Thomas Gleixner <tglx@linutronix.de>,
Ingo Molnar <mingo@redhat.com>,
hpa@zytor.com, Will Deacon <will.deacon@arm.com>,
Sergey Senozhatsky <sergey.senozhatsky@gmail.com>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
haren@linux.vnet.ibm.com, khandual@linux.vnet.ibm.com,
npiggin@gmail.com, bsingharora@gmail.com,
Tim Chen <tim.c.chen@linux.intel.com>,
linuxppc-dev@lists.ozlabs.org, x86@kernel.org
Subject: Re: [PATCH v3 04/20] mm: VMA sequence count
Date: Thu, 14 Sep 2017 09:55:13 +0200 [thread overview]
Message-ID: <441ff1c6-72a7-5d96-02c8-063578affb62@linux.vnet.ibm.com> (raw)
In-Reply-To: <20170914003116.GA599@jagdpanzerIV.localdomain>
Hi,
On 14/09/2017 02:31, Sergey Senozhatsky wrote:
> Hi,
>
> On (09/13/17 18:56), Laurent Dufour wrote:
>> Hi Sergey,
>>
>> On 13/09/2017 13:53, Sergey Senozhatsky wrote:
>>> Hi,
>>>
>>> On (09/08/17 20:06), Laurent Dufour wrote:
> [..]
>>> ok, so what I got on my box is:
>>>
>>> vm_munmap() -> down_write_killable(&mm->mmap_sem)
>>> do_munmap()
>>> __split_vma()
>>> __vma_adjust() -> write_seqcount_begin(&vma->vm_sequence)
>>> -> write_seqcount_begin_nested(&next->vm_sequence, SINGLE_DEPTH_NESTING)
>>>
>>> so this gives 3 dependencies:
>>>     ->mmap_sem -> ->vm_seq
>>>     ->vm_seq   -> ->vm_seq/1
>>>     ->mmap_sem -> ->vm_seq/1
>>>
>>>
>>> SyS_mremap() -> down_write_killable(&current->mm->mmap_sem)
>>> move_vma() -> write_seqcount_begin(&vma->vm_sequence)
>>> -> write_seqcount_begin_nested(&new_vma->vm_sequence, SINGLE_DEPTH_NESTING);
>>> move_page_tables()
>>> __pte_alloc()
>>> pte_alloc_one()
>>> __alloc_pages_nodemask()
>>> fs_reclaim_acquire()
>>>
>>>
>>> I think we have a prepare_alloc_pages() call here, which does
>>>
>>> -> fs_reclaim_acquire(gfp_mask)
>>> -> fs_reclaim_release(gfp_mask)
>>>
>>> so that adds two more dependencies:
>>>     ->mmap_sem -> ->vm_seq   -> fs_reclaim
>>>     ->mmap_sem -> ->vm_seq/1 -> fs_reclaim
>>>
>>>
>>> now, under memory pressure we hit the slow path and perform direct
>>> reclaim. direct reclaim is done under fs_reclaim lock, so we end up
>>> with the following call chain
>>>
>>> __alloc_pages_nodemask()
>>> __alloc_pages_slowpath()
>>> __perform_reclaim() -> fs_reclaim_acquire(gfp_mask);
>>> try_to_free_pages()
>>> shrink_node()
>>> shrink_active_list()
>>> rmap_walk_file() -> i_mmap_lock_read(mapping);
>>>
>>>
>>> and this breaks the existing dependency, since we now take the leaf lock
>>> (fs_reclaim) first and then the root lock (->mmap_sem).
>>
>> Thanks for looking at this.
>> I'm sorry, I must have missed something.
>
> no prob :)
>
>
>> My understanding is that there are 3 chains of locks:
>>  1. from __vma_adjust():           mmap_sem -> i_mmap_rwsem -> vm_seq
>>  2. from move_vma():               mmap_sem -> vm_seq -> fs_reclaim
>>  3. from __alloc_pages_nodemask(): fs_reclaim -> i_mmap_rwsem
>
> yes, as far as lockdep warning suggests.
>
>> So the solution would be to have in __vma_adjust()
>> mmap_sem -> vm_seq -> i_mmap_rwsem
>>
>> But this would raise the following dependency from unmap_mapping_range():
>> unmap_mapping_range() -> i_mmap_rwsem
>>   unmap_mapping_range_tree()
>>     unmap_mapping_range_vma()
>>       zap_page_range_single()
>>         unmap_single_vma()
>>           unmap_page_range() -> vm_seq
>>
>> And there is no way to get rid of it easily as in unmap_mapping_range()
>> there is no VMA identified yet.
>>
>> That being said, I can't see any clear way to clean up the lock
>> dependencies here.
>> Furthermore, it is not clear to me how a deadlock could happen, as
>> vm_seq is a sequence count and there is no way to block on it here.
>
> as far as I understand, seq locks can technically deadlock, not on
> the write side, but on the read side:
>
> read_seqcount_begin()
>   raw_read_seqcount_begin()
>     __read_seqcount_begin()
>
> and __read_seqcount_begin() spins forever:
>
> static inline unsigned __read_seqcount_begin(const seqcount_t *s)
> {
> 	unsigned ret;
>
> repeat:
> 	ret = READ_ONCE(s->sequence);
> 	if (unlikely(ret & 1)) {
> 		cpu_relax();
> 		goto repeat;
> 	}
> 	return ret;
> }
>
>
> so if there are two CPUs, one doing write_seqcount() and the other one
> doing read_seqcount() then what can happen is something like this
>
> CPU0                          CPU1
>
>                               fs_reclaim_acquire()
> write_seqcount_begin()
> fs_reclaim_acquire()          read_seqcount_begin()
> write_seqcount_end()
>
> CPU0 can't reach write_seqcount_end() because of the fs_reclaim_acquire()
> held by CPU1; CPU1 can't complete read_seqcount_begin() because CPU0
> did write_seqcount_begin() and now waits on fs_reclaim_acquire().
> makes sense?
Yes, this makes sense.

But in the case of this series, there is no call to
__read_seqcount_begin(): the reader (the speculative page fault
handler) just checks (vm_seq & 1) and, if that is true, simply exits
the speculative path without waiting. So there is no deadlock
possibility.

The bad case would be two concurrent threads calling
write_seqcount_begin() on the same VMA, leaving the sequence count
corrupted, but this can't happen because mmap_sem is held for write in
such a case.
Cheers,
Laurent.
Thread overview: 33+ messages
2017-09-08 18:06 [PATCH v3 00/20] Speculative page faults Laurent Dufour
2017-09-08 18:06 ` [PATCH v3 01/20] mm: Dont assume page-table invariance during faults Laurent Dufour
2017-09-08 18:06 ` [PATCH v3 02/20] mm: Prepare for FAULT_FLAG_SPECULATIVE Laurent Dufour
2017-09-08 18:06 ` [PATCH v3 03/20] mm: Introduce pte_spinlock " Laurent Dufour
2017-09-08 18:06 ` [PATCH v3 04/20] mm: VMA sequence count Laurent Dufour
2017-09-13 11:53 ` Sergey Senozhatsky
2017-09-13 16:56 ` Laurent Dufour
2017-09-14 0:31 ` Sergey Senozhatsky
2017-09-14 7:55 ` Laurent Dufour [this message]
2017-09-14 8:13 ` Sergey Senozhatsky
2017-09-14 8:58 ` Laurent Dufour
2017-09-14 9:11 ` Sergey Senozhatsky
2017-09-14 9:15 ` Laurent Dufour
2017-09-14 9:40 ` Sergey Senozhatsky
2017-09-15 12:38 ` Laurent Dufour
2017-09-25 12:22 ` Peter Zijlstra
2017-09-08 18:06 ` [PATCH v3 05/20] mm: Protect VMA modifications using " Laurent Dufour
2017-09-08 18:06 ` [PATCH v3 06/20] mm: RCU free VMAs Laurent Dufour
2017-09-08 18:06 ` [PATCH v3 07/20] mm: Cache some VMA fields in the vm_fault structure Laurent Dufour
2017-09-08 18:06 ` [PATCH v3 08/20] mm: Protect SPF handler against anon_vma changes Laurent Dufour
2017-09-08 18:06 ` [PATCH v3 09/20] mm/migrate: Pass vm_fault pointer to migrate_misplaced_page() Laurent Dufour
2017-09-08 18:06 ` [PATCH v3 10/20] mm: Introduce __lru_cache_add_active_or_unevictable Laurent Dufour
2017-09-08 18:06 ` [PATCH v3 11/20] mm: Introduce __maybe_mkwrite() Laurent Dufour
2017-09-08 18:06 ` [PATCH v3 12/20] mm: Introduce __vm_normal_page() Laurent Dufour
2017-09-08 18:06 ` [PATCH v3 13/20] mm: Introduce __page_add_new_anon_rmap() Laurent Dufour
2017-09-08 18:06 ` [PATCH v3 14/20] mm: Provide speculative fault infrastructure Laurent Dufour
2017-09-08 18:06 ` [PATCH v3 15/20] mm: Try spin lock in speculative path Laurent Dufour
2017-09-08 18:07 ` [PATCH v3 16/20] mm: Adding speculative page fault failure trace events Laurent Dufour
2017-09-08 18:07 ` [PATCH v3 17/20] perf: Add a speculative page fault sw event Laurent Dufour
2017-09-08 18:07 ` [PATCH v3 18/20] perf tools: Add support for the SPF perf event Laurent Dufour
2017-09-08 18:07 ` [PATCH v3 19/20] x86/mm: Add speculative pagefault handling Laurent Dufour
2017-09-08 18:07 ` [PATCH v3 20/20] powerpc/mm: Add speculative page fault Laurent Dufour
2017-09-18 7:15 ` [PATCH v3 00/20] Speculative page faults Laurent Dufour