From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752052AbdHCCe6 (ORCPT );
	Wed, 2 Aug 2017 22:34:58 -0400
Received: from gate.crashing.org ([63.228.1.57]:46063 "EHLO gate.crashing.org"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1751188AbdHCCe4 (ORCPT );
	Wed, 2 Aug 2017 22:34:56 -0400
Message-ID: <1501714480.2664.10.camel@kernel.crashing.org>
Subject: Re: [RFC][PATCH 1/5] mm: Rework {set,clear,mm}_tlb_flush_pending()
From: Benjamin Herrenschmidt
To: Will Deacon, Peter Zijlstra
Cc: torvalds@linux-foundation.org, oleg@redhat.com,
	paulmck@linux.vnet.ibm.com, mpe@ellerman.id.au, npiggin@gmail.com,
	linux-kernel@vger.kernel.org, mingo@kernel.org,
	stern@rowland.harvard.edu, Mel Gorman, Rik van Riel
Date: Thu, 03 Aug 2017 08:54:40 +1000
In-Reply-To: <20170802090220.GE15219@arm.com>
References: <20170801121419.a365inyyk5hghb6w@hirez.programming.kicks-ass.net>
	<20170801163903.wuwrk6ysyd52dwxm@hirez.programming.kicks-ass.net>
	<20170801164414.GB12027@arm.com>
	<20170801164820.s46g2325kjjrymom@hirez.programming.kicks-ass.net>
	<20170801225912.c23e6xave7qy5kzt@hirez.programming.kicks-ass.net>
	<1501636992.2792.139.camel@kernel.crashing.org>
	<20170802081106.kdl4grcb6sicqa3v@hirez.programming.kicks-ass.net>
	<20170802081523.GB15219@arm.com>
	<20170802084350.GC15219@arm.com>
	<20170802085111.iupsx6s3hw42a52b@hirez.programming.kicks-ass.net>
	<20170802090220.GE15219@arm.com>
Content-Type: text/plain; charset="UTF-8"
X-Mailer: Evolution 3.24.4 (3.24.4-1.fc26)
Mime-Version: 1.0
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, 2017-08-02 at 10:02 +0100, Will Deacon wrote:
> > > So flush_tlb_range is actually weaker than smp_mb in some respects, yet the
> > > flush_tlb_pending stuff will still work correctly.
> > 
> > So while I think you're right, and we could live with this, after all,
> > if we know the mm is CPU local, there shouldn't be any SMP concerns wrt
> > its page tables. Do you really want to make this more complicated?
> 
> It gives us a nice performance lift on arm64 and I have a patch...[1]

We do that on powerpc too, though there are ongoing questions as to
whether an smp_mb() after setting the mask bit in switch_mm is
sufficient vs. prefetch bringing entries into the TLB after the context
is switched. But that's a powerpc-specific issue. Nick Piggin is working
on sorting that out.

Cheers,
Ben.