Subject: Re: [PATCH 4/8] membarrier: Make the post-switch-mm barrier explicit
From: Andy Lutomirski
To: Nicholas Piggin, "Peter Zijlstra (Intel)", Rik van Riel
Cc: Andrew Morton, Dave Hansen, Linux Kernel Mailing List, linux-mm@kvack.org, Mathieu Desnoyers, "Paul E. McKenney", the arch/x86 maintainers
Date: Thu, 17 Jun 2021 16:49:29 -0700
Message-ID: <5efaca70-35a0-1ce5-98ff-651a5f153a0a@kernel.org>
In-Reply-To: <1623911501.q97zemobmw.astroid@bobo.none>
References: <1623816595.myt8wbkcar.astroid@bobo.none> <617cb897-58b1-8266-ecec-ef210832e927@kernel.org> <1623893358.bbty474jyy.astroid@bobo.none> <58b949fb-663e-4675-8592-25933a3e361c@www.fastmail.com> <1623911501.q97zemobmw.astroid@bobo.none>
On 6/16/21 11:51 PM, Nicholas Piggin wrote:
> Excerpts from Andy Lutomirski's message of June 17, 2021 3:32 pm:
>> On Wed, Jun 16, 2021, at 7:57 PM, Andy Lutomirski wrote:
>>>
>>> On Wed, Jun 16, 2021, at 6:37 PM, Nicholas Piggin wrote:
>>>> Excerpts from Andy Lutomirski's message of June 17, 2021 4:41 am:
>>>>> On 6/16/21 12:35 AM, Peter Zijlstra wrote:
>>>>>> On Wed, Jun 16, 2021 at 02:19:49PM +1000, Nicholas Piggin wrote:
>>>>>>> Excerpts from Andy Lutomirski's message of June 16, 2021 1:21 pm:
>>>>>>>> membarrier() needs a barrier after any CPU changes mm. There is currently
>>>>>>>> a comment explaining why this barrier probably exists in all cases. This
>>>>>>>> is very fragile -- any change to the relevant parts of the scheduler
>>>>>>>> might get rid of these barriers, and it's not really clear to me that
>>>>>>>> the barrier actually exists in all necessary cases.
>>>>>>>
>>>>>>> The comments and barriers in the mmdrop() hunks? I don't see what is
>>>>>>> fragile or maybe-buggy about this. The barrier definitely exists.
>>>>>>>
>>>>>>> And any change can change anything, that doesn't make it fragile. My
>>>>>>> lazy tlb refcounting change avoids the mmdrop in some cases, but it
>>>>>>> replaces it with smp_mb for example.
>>>>>>
>>>>>> I'm with Nick again, on this. You're adding extra barriers for no
>>>>>> discernible reason, that's not generally encouraged, seeing how extra
>>>>>> barriers is extra slow.
>>>>>>
>>>>>> Both mmdrop() itself, as well as the callsite have comments saying how
>>>>>> membarrier relies on the implied barrier, what's fragile about that?
>>>>>>
>>>>>
>>>>> My real motivation is that mmgrab() and mmdrop() don't actually need to
>>>>> be full barriers. The current implementation has them being full
>>>>> barriers, and the current implementation is quite slow. So let's try
>>>>> that commit message again:
>>>>>
>>>>> membarrier() needs a barrier after any CPU changes mm. There is currently
>>>>> a comment explaining why this barrier probably exists in all cases. The
>>>>> logic is based on ensuring that the barrier exists on every control flow
>>>>> path through the scheduler. It also relies on mmgrab() and mmdrop() being
>>>>> full barriers.
>>>>>
>>>>> mmgrab() and mmdrop() would be better if they were not full barriers. As a
>>>>> trivial optimization, mmgrab() could use a relaxed atomic and mmdrop()
>>>>> could use a release on architectures that have these operations.
>>>>
>>>> I'm not against the idea, I've looked at something similar before (not
>>>> for mmdrop but a different primitive). Also my lazy tlb shootdown series
>>>> could possibly take advantage of this, I might cherry pick it and test
>>>> performance :)
>>>>
>>>> I don't think it belongs in this series though. Should go together with
>>>> something that takes advantage of it.
>>>
>>> I'm going to see if I can get hazard pointers into shape quickly.
>>
>> Here it is. Not even boot tested!
>>
>> https://git.kernel.org/pub/scm/linux/kernel/git/luto/linux.git/commit/?h=sched/lazymm&id=ecc3992c36cb88087df9c537e2326efb51c95e31
>>
>> Nick, I think you can accomplish much the same thing as your patch by:
>>
>> #define for_each_possible_lazymm_cpu while (false)
>
> I'm not sure what you mean? For powerpc, other CPUs can be using the mm
> as lazy at this point. I must be missing something.

What I mean is: if you want to shoot down lazies instead of doing the
hazard pointer trick to track them, you could do:

#define for_each_possible_lazymm_cpu while (false)

which would promise to the core code that you don't have any lazies left
by the time exit_mmap() is done. You might need a new hook in
exit_mmap() depending on exactly how you implement the lazy shootdown.
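[Editor's illustration, not part of the original message.] The stub above works because the core code only touches lazy-mm bookkeeping inside that iterator; a zero-trip definition makes the whole walk compile away. A minimal userspace sketch, assuming an arch that has already shot down all lazy users before exit_mmap() finishes (for_each_possible_lazymm_cpu comes from the work-in-progress branch linked above; the counter and walk function here are made up for the demo):

```c
#include <stdbool.h>

/* Arch override sketch: a zero-trip "loop" that promises the core code
 * there are no lazy users left to visit. The real macro in the linked
 * branch would iterate over CPUs that might still hold the mm lazily. */
#define for_each_possible_lazymm_cpu while (false)

static int lazy_visits; /* how many lazy CPUs the walk touched */

/* Stand-in for the tail of exit_mmap(): with the stub above, the loop
 * body never executes, so the hazard-pointer bookkeeping vanishes. */
static void exit_mmap_lazy_walk(void)
{
    for_each_possible_lazymm_cpu
        lazy_visits++;
}
```

An arch that instead keeps real lazy references alive would define the macro as an actual CPU iteration, and the same core code would then visit each of them.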
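[Editor's illustration, not part of the original message.] For the other optimization discussed upthread -- mmgrab() using a relaxed atomic and mmdrop() a release -- the classic pattern is a relaxed increment, a release decrement, and an acquire fence before teardown. A userspace C11 sketch under those assumptions (mm_like, mm_grab, mm_drop, and destroy_count are invented names; the kernel uses its own atomics, not <stdatomic.h>):

```c
#include <stdatomic.h>
#include <stdlib.h>

struct mm_like {
    atomic_int refcount;
};

static int destroy_count; /* times teardown ran (for the demo only) */

static void mm_grab(struct mm_like *mm)
{
    /* The caller already holds a reference, so the object cannot go
     * away underneath us: a relaxed increment is enough. */
    atomic_fetch_add_explicit(&mm->refcount, 1, memory_order_relaxed);
}

static void mm_drop(struct mm_like *mm)
{
    /* Release orders this thread's prior accesses to *mm before the
     * decrement is visible to whichever thread does the teardown... */
    if (atomic_fetch_sub_explicit(&mm->refcount, 1,
                                  memory_order_release) == 1) {
        /* ...and this acquire fence pairs with those release
         * decrements, so teardown sees every earlier access. */
        atomic_thread_fence(memory_order_acquire);
        destroy_count++;
        free(mm);
    }
}
```

Note this gives ordering only around the object's lifetime; it is deliberately weaker than the full barrier the current mmdrop() provides, which is exactly why membarrier() would then need its own explicit barrier after an mm switch.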