Date: Wed, 2 Aug 2017 09:43:50 +0100
From: Will Deacon
To: Peter Zijlstra
Cc: Benjamin Herrenschmidt, torvalds@linux-foundation.org, oleg@redhat.com,
	paulmck@linux.vnet.ibm.com, mpe@ellerman.id.au, npiggin@gmail.com,
	linux-kernel@vger.kernel.org, mingo@kernel.org, stern@rowland.harvard.edu,
	Mel Gorman, Rik van Riel
Subject: Re: [RFC][PATCH 1/5] mm: Rework {set,clear,mm}_tlb_flush_pending()
Message-ID: <20170802084350.GC15219@arm.com>
In-Reply-To: <20170802081523.GB15219@arm.com>

On Wed, Aug 02, 2017 at 09:15:23AM +0100, Will Deacon wrote:
> On Wed, Aug 02, 2017 at 10:11:06AM +0200, Peter Zijlstra wrote:
> > On Wed, Aug 02, 2017 at 11:23:12AM +1000, Benjamin Herrenschmidt wrote:
> > > On Wed, 2017-08-02 at 00:59 +0200, Peter Zijlstra wrote:
> > > > > PowerPC for example uses PTESYNC before the TLBIE, so does a SYNC after
> > > > > work? Ben?
> > > > > From what I gather it is not. You have TLBSYNC for it. So the good news
> > >
> > > tlbsync is pretty much a nop these days. ptesync is a strict superset
> > > of sync and we have it after every tlbie.
> >
> > In the radix code, yes. I got lost going through the hash code, and I
> > didn't look at the 32-bit code at all.
> >
> > So the radix code does:
> >
> >	PTESYNC
> >	TLBIE
> >	EIEIO; TLBSYNC; PTESYNC
> >
> > which should be completely ordered against anything prior and anything
> > following, and is, I think, the behaviour we want from TLB flushes in
> > general, but is very much not provided by a number of architectures
> > afaict.
> >
> > Ah, found the hash-64 code; yes, that's good too. The hash32 code lives
> > in asm and confuses me: it has a bunch of SYNC, SYNC_601 and isync in it.
> > The nohash variant seems to do an isync after tlbwe, but again, no clue.
> >
> > > > Now, do I go and attempt fixing all that needs fixing?
> >
> > x86 is good, our CR3 writes or INVLPG stuff is fully serializing.
> >
> > arm is good, it does DSB ISH before and after.
> >
> > arm64 looks good too, although it plays silly games with the first
> > barrier, but I trust that to be sufficient.
>
> The first barrier only orders prior stores for us, because page-table
> updates are made using stores. A prior load could be reordered past the
> invalidation, but can't make it past the second barrier.
>
> I really think we should avoid defining TLB invalidation in terms of
> smp_mb() because it's a lot more subtle than that.

Another worry I have here is with architectures that can optimise the
"only need to flush the local TLB" case. For example, consider this
version of 'R':

P0:
WRITE_ONCE(x, 1);
smp_mb();
WRITE_ONCE(y, 1);

P1:
WRITE_ONCE(y, 2);
flush_tlb_range(...);	// Only needs to flush the local TLB
r0 = READ_ONCE(x);

It doesn't seem unreasonable to me for y == 2 && r0 == 0 if the
flush_tlb_range(...) ends up only doing local invalidation.
As a concrete example, imagine a CPU with a page-table walker that can
snoop the local store buffer. Then the local flush_tlb_range(...) in P1
only needs to propagate the write to y as far as the store buffer before
it can invalidate the local TLB. Once the TLB is invalidated, it can
read x knowing that the translation is up-to-date wrt the page table,
but that read doesn't need to wait for the write to y to become visible
to other CPUs.

So flush_tlb_range(...) is actually weaker than smp_mb() in some
respects, yet the flush_tlb_pending stuff will still work correctly.

Will