From: Peter Zijlstra <peterz@infradead.org>
To: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
Cc: akpm@linux-foundation.org, mm-commits@vger.kernel.org,
will.deacon@arm.com, stable@vger.kernel.org, npiggin@gmail.com,
namit@vmware.com, minchan@kernel.org, mgorman@suse.de,
jstancek@redhat.com, yang.shi@linux.alibaba.com
Subject: Re: + mm-mmu_gather-remove-__tlb_reset_range-for-force-flush.patch added to -mm tree
Date: Mon, 27 May 2019 16:25:52 +0200
Message-ID: <20190527142552.GD2623@hirez.programming.kicks-ass.net>
In-Reply-To: <335de44e-02f5-ce92-c026-e8ac4a34a766@linux.ibm.com>
On Mon, May 27, 2019 at 06:59:08PM +0530, Aneesh Kumar K.V wrote:
> On 5/27/19 4:31 PM, Peter Zijlstra wrote:
> > On Tue, May 21, 2019 at 04:18:33PM -0700, akpm@linux-foundation.org wrote:
> > > --- a/mm/mmu_gather.c~mm-mmu_gather-remove-__tlb_reset_range-for-force-flush
> > > +++ a/mm/mmu_gather.c
> > > @@ -245,14 +245,28 @@ void tlb_finish_mmu(struct mmu_gather *t
> > >  {
> > >  	/*
> > >  	 * If there are parallel threads are doing PTE changes on same range
> > > -	 * under non-exclusive lock(e.g., mmap_sem read-side) but defer TLB
> > > -	 * flush by batching, a thread has stable TLB entry can fail to flush
> > > -	 * the TLB by observing pte_none|!pte_dirty, for example so flush TLB
> > > -	 * forcefully if we detect parallel PTE batching threads.
> > > +	 * under non-exclusive lock (e.g., mmap_sem read-side) but defer TLB
> > > +	 * flush by batching, one thread may end up seeing inconsistent PTEs
> > > +	 * and result in having stale TLB entries. So flush TLB forcefully
> > > +	 * if we detect parallel PTE batching threads.
> > > +	 *
> > > +	 * However, some syscalls, e.g. munmap(), may free page tables, this
> > > +	 * needs force flush everything in the given range. Otherwise this
> > > +	 * may result in having stale TLB entries for some architectures,
> > > +	 * e.g. aarch64, that could specify flush what level TLB.
> > >  	 */
> > >  	if (mm_tlb_flush_nested(tlb->mm)) {
> > > +		/*
> > > +		 * The aarch64 yields better performance with fullmm by
> > > +		 * avoiding multiple CPUs spamming TLBI messages at the
> > > +		 * same time.
> > > +		 *
> > > +		 * On x86 non-fullmm doesn't yield significant difference
> > > +		 * against fullmm.
> > > +		 */
> > > +		tlb->fullmm = 1;
> > >  		__tlb_reset_range(tlb);
> > > -		__tlb_adjust_range(tlb, start, end - start);
> > > +		tlb->freed_tables = 1;
> > >  	}
> > >  	tlb_flush_mmu(tlb);
> >
> > Nick, Aneesh, can we now do this?
> >
> > ---
> >
> > diff --git a/arch/powerpc/mm/book3s64/radix_tlb.c b/arch/powerpc/mm/book3s64/radix_tlb.c
> > index 4d841369399f..8d28b83914cb 100644
> > --- a/arch/powerpc/mm/book3s64/radix_tlb.c
> > +++ b/arch/powerpc/mm/book3s64/radix_tlb.c
> > @@ -881,39 +881,6 @@ void radix__tlb_flush(struct mmu_gather *tlb)
> >  	 */
> >  	if (tlb->fullmm) {
> >  		__flush_all_mm(mm, true);
> > -#if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_HUGETLB_PAGE)
> > -	} else if (mm_tlb_flush_nested(mm)) {
> > -		/*
> > -		 * If there is a concurrent invalidation that is clearing ptes,
> > -		 * then it's possible this invalidation will miss one of those
> > -		 * cleared ptes and miss flushing the TLB. If this invalidate
> > -		 * returns before the other one flushes TLBs, that can result
> > -		 * in it returning while there are still valid TLBs inside the
> > -		 * range to be invalidated.
> > -		 *
> > -		 * See mm/memory.c:tlb_finish_mmu() for more details.
> > -		 *
> > -		 * The solution to this is ensure the entire range is always
> > -		 * flushed here. The problem for powerpc is that the flushes
> > -		 * are page size specific, so this "forced flush" would not
> > -		 * do the right thing if there are a mix of page sizes in
> > -		 * the range to be invalidated. So use __flush_tlb_range
> > -		 * which invalidates all possible page sizes in the range.
> > -		 *
> > -		 * PWC flush probably is not be required because the core code
> > -		 * shouldn't free page tables in this path, but accounting
> > -		 * for the possibility makes us a bit more robust.
> > -		 *
> > -		 * need_flush_all is an uncommon case because page table
> > -		 * teardown should be done with exclusive locks held (but
> > -		 * after locks are dropped another invalidate could come
> > -		 * in), it could be optimized further if necessary.
> > -		 */
> > -		if (!tlb->need_flush_all)
> > -			__radix__flush_tlb_range(mm, start, end, true);
> > -		else
> > -			radix__flush_all_mm(mm);
> > -#endif
> >  	} else if ( (psize = radix_get_mmu_psize(page_size)) == -1) {
> >  		if (!tlb->need_flush_all)
> >  			radix__flush_tlb_mm(mm);
> >
>
>
> I guess we can revert most of commit
> 02390f66bd2362df114a0a0770d80ec33061f6d1. That is the only place we flush
> multiple page sizes? But should we evaluate the performance impact of that
> fullmm flush on ppc64?
Maybe, but given the patch that went into -mm, PPC will never hit that
branch I killed anymore -- and that really shouldn't be in architecture
code anyway.
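For reference, the nestedness check everything here keys off is just the
mm's pending-flush count; a simplified sketch (based on
mm_tlb_flush_nested() in include/linux/mm_types.h, not verbatim):

	static inline bool mm_tlb_flush_nested(struct mm_struct *mm)
	{
		/*
		 * Batched PTE updaters bump mm->tlb_flush_pending around
		 * their updates; a count above one means another thread
		 * is concurrently batching changes on this mm.
		 */
		return atomic_read(&mm->tlb_flush_pending) > 1;
	}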
Also, as I noted last time: __radix__flush_tlb_range() and
__radix__flush_tlb_range_psize() look similar enough that they might
want to be a single function (and instead of @flush_all_sizes, have it
take @gflush, @hflush, @flush and @pwc).
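
Roughly the signature I'm thinking of -- only a sketch; the
flag-to-size mapping (gflush for 1G, hflush for 2M) is my reading of
the current code, and the body is elided:

	static void __radix__flush_tlb_range(struct mm_struct *mm,
					     unsigned long start,
					     unsigned long end,
					     bool gflush,  /* 1G entries */
					     bool hflush,  /* 2M entries */
					     bool flush,   /* base page size */
					     bool pwc)     /* page-walk cache */
	{
		/*
		 * One walk over [start, end), issuing invalidations for
		 * each page size selected by the flags, plus an optional
		 * PWC flush.
		 */
	}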